11,400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Tutorial
Numpy is a computational library for Python that is optimized for operations on multi-dimensional arrays. In this notebook we will use numpy to work with 1-d arrays (often called vectors) and 2-d arrays (often called matrices).
For the full user guide and reference for numpy, see
Step1: Creating Numpy Arrays
New arrays can be made in several ways. We can take an existing list and convert it to a numpy array
Step2: You can initialize an array (of any dimension) of all ones or all zeroes with the ones() and zeros() functions
Step3: You can also initialize an empty array which will be filled with values. This is the fastest way to initialize a fixed-size numpy array; however, you must ensure that you replace all of the values.
Step4: Accessing array elements
Accessing an array is straightforward. For vectors, you access an element by referring to its index inside square brackets. Recall that indices in Python start with 0.
Step5: 2D arrays are accessed similarly by referring to the row and column index separated by a comma
Step6: Sequences of indices can be accessed using ':'
Step7: You can also pass a list of indices.
Step8: You can also use true/false values to select values
Step9: For 2D arrays you can select specific columns and specific rows. Passing ':' selects all rows/columns.
Step10: Operations on Arrays
You can use the operations '*', '**', '/', '+' and '-' on numpy arrays and they operate elementwise.
Step11: You can compute the sum with np.sum() and the average with np.average()
Step12: The dot product
An important mathematical operation in linear algebra is the dot product.
When we compute the dot product between two vectors we are simply multiplying them elementwise and adding them up. In numpy you can do this with np.dot()
Step13: Recall that the Euclidean length (or magnitude) of a vector is the square root of the sum of the squares of the components. This is just the square root of the dot product of the vector with itself
Step14: We can also use the dot product when we have a 2D array (or matrix). When you have a vector with the same number of elements as the matrix (2D array) has columns, you can right-multiply the matrix by the vector to get another vector with the same number of elements as the matrix has rows. For example, this is how you compute the predicted values given a matrix of features and an array of weights.
Step15: Similarly, if you have a vector with the same number of elements as the matrix has rows, you can left-multiply them.
Step16: Multiplying Matrices
If we have two 2D arrays (matrices) matrix_1 and matrix_2, where the number of columns of matrix_1 is the same as the number of rows of matrix_2, then we can use np.dot() to perform matrix multiplication. | Python Code:
import numpy as np # importing this way allows us to refer to numpy as np
Explanation: Numpy Tutorial
Numpy is a computational library for Python that is optimized for operations on multi-dimensional arrays. In this notebook we will use numpy to work with 1-d arrays (often called vectors) and 2-d arrays (often called matrices).
For the full user guide and reference for numpy, see: http://docs.scipy.org/doc/numpy/
End of explanation
mylist = [1., 2., 3., 4.]
mynparray = np.array(mylist)
mynparray
Explanation: Creating Numpy Arrays
New arrays can be made in several ways. We can take an existing list and convert it to a numpy array:
End of explanation
one_vector = np.ones(4)
print(one_vector) # using print removes the array() portion
one2Darray = np.ones((2, 4)) # a 2D array with 2 "rows" and 4 "columns"
print(one2Darray)
zero_vector = np.zeros(4)
print(zero_vector)
Explanation: You can initialize an array (of any dimension) of all ones or all zeroes with the ones() and zeros() functions:
End of explanation
empty_vector = np.empty(5)
print(empty_vector)
Explanation: You can also initialize an empty array which will be filled with values. This is the fastest way to initialize a fixed-size numpy array; however, you must ensure that you replace all of the values.
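As a small illustration (a sketch added here, not part of the original notebook), one safe pattern is to overwrite every element right after creating the array:
filled_vector = np.empty(3)  # hypothetical example: np.empty returns uninitialized memory
filled_vector[:] = [1., 2., 3.]  # every value is now explicitly replaced
print(filled_vector)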
End of explanation
mynparray[2]
Explanation: Accessing array elements
Accessing an array is straightforward. For vectors, you access an element by referring to its index inside square brackets. Recall that indices in Python start with 0.
End of explanation
my_matrix = np.array([[1, 2, 3], [4, 5, 6]])
print(my_matrix)
print(my_matrix[1, 2])
Explanation: 2D arrays are accessed similarly by referring to the row and column index separated by a comma:
End of explanation
print(my_matrix[0:2, 2]) # recall 0:2 = [0, 1]
print(my_matrix[0, 0:3])
Explanation: Sequences of indices can be accessed using ':' for example
End of explanation
fib_indices = np.array([1, 1, 2, 3])
random_vector = np.random.random(10) # 10 random numbers between 0 and 1
print(random_vector)
print(random_vector[fib_indices])
Explanation: You can also pass a list of indices.
End of explanation
my_vector = np.array([1, 2, 3, 4])
select_index = np.array([True, False, True, False])
print(my_vector[select_index])
Explanation: You can also use true/false values to select values
End of explanation
select_cols = np.array([True, False, True]) # 1st and 3rd column
select_rows = np.array([False, True]) # 2nd row
print(my_matrix[select_rows, :]) # just 2nd row but all columns
print(my_matrix[:, select_cols]) # all rows and just the 1st and 3rd column
print(my_matrix[select_rows, select_cols])
Explanation: For 2D arrays you can select specific columns and specific rows. Passing ':' selects all rows/columns
End of explanation
my_array = np.array([1., 2., 3., 4.])
print(my_array * my_array)
print(my_array ** 2)
print(my_array - np.ones(4))
print(my_array + np.ones(4))
print(my_array / 3)
print(my_array / np.array([2., 3., 4., 5.])) # = [1.0/2.0, 2.0/3.0, 3.0/4.0, 4.0/5.0]
Explanation: Operations on Arrays
You can use the operations '*', '**', '/', '+' and '-' on numpy arrays and they operate elementwise.
End of explanation
print(np.sum(my_array))
print(np.average(my_array))
print(np.sum(my_array) / len(my_array))
Explanation: You can compute the sum with np.sum() and the average with np.average()
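As an aside (added here, not in the original notebook): np.mean gives the same result as np.average when no weights are passed, while np.average can additionally take a weights argument.
print(np.mean(my_array))  # same as np.average(my_array) without weights
print(np.average(my_array, weights=[1, 1, 1, 1]))  # unit weights reproduce the plain mean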
End of explanation
array1 = np.array([1., 2., 3., 4.])
array2 = np.array([2., 3., 4., 5.])
print(np.dot(array1, array2))
print(np.sum(array1 * array2))
Explanation: The dot product
An important mathematical operation in linear algebra is the dot product.
When we compute the dot product between two vectors we are simply multiplying them elementwise and adding them up. In numpy you can do this with np.dot()
End of explanation
array1_mag = np.sqrt(np.dot(array1, array1))
print(array1_mag)
print(np.sqrt(np.sum(array1 * array1)))
Explanation: Recall that the Euclidean length (or magnitude) of a vector is the square root of the sum of the squares of the components. This is just the square root of the dot product of the vector with itself:
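As an aside (added here, not in the original notebook), numpy also provides a helper that computes the Euclidean length directly:
print(np.linalg.norm(array1))  # same value as np.sqrt(np.dot(array1, array1))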
End of explanation
my_features = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
print(my_features)
my_weights = np.array([0.4, 0.5])
print(my_weights)
my_predictions = np.dot(my_features, my_weights) # note that the weights are on the right
print(my_predictions) # which has 4 elements since my_features has 4 rows
Explanation: We can also use the dot product when we have a 2D array (or matrix). When you have a vector with the same number of elements as the matrix (2D array) has columns, you can right-multiply the matrix by the vector to get another vector with the same number of elements as the matrix has rows. For example, this is how you compute the predicted values given a matrix of features and an array of weights.
End of explanation
my_matrix = my_features
my_array = np.array([0.3, 0.4, 0.5, 0.6])
print(np.dot(my_array, my_matrix)) # which has 2 elements because my_matrix has 2 columns
Explanation: Similarly, if you have a vector with the same number of elements as the matrix has rows, you can left-multiply them.
End of explanation
matrix_1 = np.array([[1., 2., 3.],[4., 5., 6.]])
print(matrix_1)
matrix_2 = np.array([[1., 2.], [3., 4.], [5., 6.]])
print(matrix_2)
print(np.dot(matrix_1, matrix_2))
Explanation: Multiplying Matrices
If we have two 2D arrays (matrices) matrix_1 and matrix_2, where the number of columns of matrix_1 is the same as the number of rows of matrix_2, then we can use np.dot() to perform matrix multiplication.
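As an aside (added here, not in the original notebook), modern numpy code usually writes matrix multiplication with np.matmul or the @ operator, which for 2D arrays gives the same result as np.dot():
print(matrix_1 @ matrix_2)  # equivalent to np.dot(matrix_1, matrix_2) for 2D arrays
print(np.matmul(matrix_1, matrix_2))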
End of explanation |
11,401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Response functions
This notebook provides an overview of the response functions that are available in Pastas. Response functions describe the response of the dependent variable (e.g., groundwater levels) to an independent variable (e.g., groundwater pumping) and form a fundamental part of the transfer function noise models implemented in Pastas. Depending on the problem under investigation, a more or less complex response function may be required, where the complexity is quantified by the number of parameters. Response functions are generally used in combination with a stressmodel, but in this notebook the response functions are studied independently to provide an overview of the different response functions and what they represent.
Step1: The use of response functions
Depending on the stress type (e.g., recharge, river levels or groundwater pumping), different response functions may be used. All response functions that are tested and supported in Pastas are summarized in the table below for reference. The equation in the third column is the formula for the impulse response function ($\theta(t)$).
|Name|Parameters|Formula|Description|
|----|----------|:------|-----------|
Step2: Scaling of the step response functions
An important characteristic is the so-called "gain" of a response function. The gain is the final increase or decrease that results from a unit increase or decrease in a stress that continues infinitely in time (e.g., pumping at a constant rate forever). This can be visually inspected by the value of the step response function for large values of $t$ but can also be inferred from the parameters as follows
Step3: Parameter settings
up
Step4: Comparison to classical analytical response functions
Polder step function compared to classic polder function
The classic polder function is (Eq. 123.32 in Bruggeman, 1999)
$$
h(t) = \Delta h \text{P}\left(\frac{x}{2\lambda}, \sqrt{\frac{t}{cS}}\right)
$$
where P is the polder function.
Step5: Hantush step function compared to classic Hantush function
The classic Hantush function is
$$
h(r, t) = \frac{-Q}{4\pi T}\int_u ^\infty \exp\left(-y - \frac{r^2}{4 \lambda^2 y} \right) \frac{\text{d}y}{y}
$$
where
$$
u=\frac{r^2 S}{4 T t}
$$
The parameters in Pastas are
$$
A = \frac{1}{4\pi T}
$$
$$
a = cS
$$
$$
b = \frac{r^2}{4\lambda^2}
$$
where $\lambda^2=cT$. | Python Code:
import numpy as np
import pandas as pd
import pastas as ps
import matplotlib.pyplot as plt
ps.show_versions()
Explanation: Response functions
This notebook provides an overview of the response functions that are available in Pastas. Response functions describe the response of the dependent variable (e.g., groundwater levels) to an independent variable (e.g., groundwater pumping) and form a fundamental part of the transfer function noise models implemented in Pastas. Depending on the problem under investigation, a more or less complex response function may be required, where the complexity is quantified by the number of parameters. Response functions are generally used in combination with a stressmodel, but in this notebook the response functions are studied independently to provide an overview of the different response functions and what they represent.
End of explanation
# Default Settings
cutoff = 0.999
meanstress = 1
up = True
responses = {}
exp = ps.Exponential(up=up, meanstress=meanstress, cutoff=cutoff)
responses["Exponential"] = exp
gamma = ps.Gamma(up=up, meanstress=meanstress, cutoff=cutoff)
responses["Gamma"] = gamma
hantush = ps.Hantush(up=up, meanstress=meanstress, cutoff=cutoff)
responses["Hantush"] = hantush
polder = ps.Polder(up=up, meanstress=meanstress, cutoff=cutoff)
responses["Polder"] = polder
fourp = ps.FourParam(up=up, meanstress=meanstress, cutoff=cutoff)
responses["FourParam"] = fourp
DoubleExp = ps.DoubleExponential(up=up, meanstress=meanstress, cutoff=cutoff)
responses["DoubleExponential"] = DoubleExp
parameters = pd.DataFrame()
fig, [ax1, ax2] = plt.subplots(1,2, sharex=True, figsize=(10,3))
for name, response in responses.items():
p = response.get_init_parameters(name)
parameters = pd.concat([parameters, p])  # DataFrame.append was removed in newer pandas versions
ax1.plot(response.block(p.initial), label=name)
ax2.plot(response.step(p.initial), label=name)
ax1.set_title("Block response")
ax2.set_title("Step responses")
ax1.set_xlabel("Time [days]")
ax2.set_xlabel("Time [days]")
ax1.legend()
plt.xlim(1e-1, 500)
plt.show()
Explanation: The use of response functions
Depending on the stress type (e.g., recharge, river levels or groundwater pumping), different response functions may be used. All response functions that are tested and supported in Pastas are summarized in the table below for reference. The equation in the third column is the formula for the impulse response function ($\theta(t)$).
|Name|Parameters|Formula|Description|
|----|----------|:------|-----------|
| FourParam |4 - A, n, a, b| $$ \theta(t) = A \frac{t^{n-1}}{a^n \Gamma(n)} e^{-t/a- ab/t} $$ | Response function with four parameters that may be used for many purposes. Many other response function are a simplification of this function. |
| Gamma |3 - A, a, n | $$ \theta(t) = A \frac{t^{n-1}}{a^n \Gamma(n)} e^{-t/a} $$ | Three parameter version of FourParam, used for all sorts of stresses ($b=0$) |
| Exponential |2 - A, a | $$ \theta(t) = \frac{A}{a} e^{-t/a} $$ | Response function that can be used for stresses that have an (almost) instant effect. ($n=1$ and $b=0$)|
| Hantush |3 - A, a, b | $$ \theta(t) = At^{-1} e^{-t/a - ab/t} $$ | Response function commonly used for groundwater abstraction wells ($n=0$) |
| Polder |3 - a, b, c | $$ \theta(t) = At^{-3/2} e^{-t/a -b/t} $$ | Response function commonly used to simulate the effects of (river) water levels on the groundwater levels ($n=-1/2$) |
| DoubleExponential |4 - A, $\alpha$, $a_1$,$a_2$| $$ \theta(t) = A (1 - \alpha) e^{-t/a_1} + A \alpha e^{-t/a_2} $$ | Response Function with a double exponential, simulating a fast and slow response. |
| Edelman | 1 - $\beta$ | $$ \theta(t) = \text{?} $$ | The function of Edelman, describing the propagation of an instantaneous water level change into an adjacent half-infinite aquifer. |
| HantushWellModel | 3 - A, a, b| $$ \theta(t) = \text{?} $$ | A special implementation of the Hantush well function for multiple wells. |
Below the different response functions are plotted.
End of explanation
A = 1
a = 50
b = 0.4
plt.figure(figsize=(16, 8))
for i, n in enumerate([-0.5, 1e-6, 0.5, 1, 1.5]):
plt.subplot(2, 3, i + 1)
plt.title(f'n={n:0.1f}')
fp = fourp.step([A, n, a, b], dt=1, cutoff=0.95)
plt.plot(np.arange(1, len(fp) + 1), fp, 'C0', label='4-param')
e = exp.step([A, a], dt=1, cutoff=0.95)
plt.plot(np.arange(1, len(e) + 1), e, 'C1', label='exp')
if n > 0:
g = gamma.step([A, n, a], dt=1, cutoff=0.95)
plt.plot(np.arange(1, len(g) + 1), g, 'C2', label='gamma')
h = hantush.step([A, a, b], dt=1, cutoff=0.95) / hantush.gain([A, a, b])
plt.plot(np.arange(1, len(h) + 1), h, 'C3', label='hantush')
p = polder.step([A, a, b], dt=1, cutoff=0.95) / polder.gain([A, a, b])
plt.plot(np.arange(1, len(p) + 1), p, 'C4', label='polder')
plt.xlim(0, 200)
plt.legend()
if n > 0:
print('fp, e, g, h, p:', fp[-1], e[-1], g[-1], h[-1], p[-1])
else:
print('fp, e, h, p:', fp[-1], e[-1], h[-1], p[-1])
plt.axhline(0.95, linestyle=':')
Explanation: Scaling of the step response functions
An important characteristic is the so-called "gain" of a response function. The gain is the final increase or decrease that results from a unit increase or decrease in a stress that continues infinitely in time (e.g., pumping at a constant rate forever). This can be visually inspected by the value of the step response function for large values of $t$ but can also be inferred from the parameters as follows:
The FourParam, Gamma, and Exponential step functions are scaled such that the gain equals $A$
The Hantush step function is scaled such that the gain equals $A\,K_0(\sqrt{4b})$ (see the short numerical check after this list)
The Polder function is scaled such that the gain equals $\exp\left(-2\sqrt{b}\right)$
The gain of the Edelman function always equals 1, but this will take an infinite amount of time.
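As a quick numerical check of these scalings (a sketch added here, not part of the original notebook; it assumes scipy is available and reuses the hantush object and its gain() method from the code above, with k0 the modified Bessel function $K_0$):
from scipy.special import k0
A, a, b = 1.0, 50.0, 0.4  # hypothetical parameter values
print(hantush.gain([A, a, b]))  # gain as computed by Pastas
print(A * k0(np.sqrt(4 * b)))   # closed-form A*K0(sqrt(4b)); the two should match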
Comparison of the different response functions
The Gamma, Exponential, Polder, and Hantush response functions can all be derived from the more general FourParam response function by fixing the parameters $n$ and/or $b$ to a specific value. The DoubleExponential, Edelman, and HantushWellModel cannot be written as some form of the FourParam function. Below, the response functions that are special forms of the four-parameter function are shown for different values of $n$ and $b$.
End of explanation
parameters
Explanation: Parameter settings
up : This parameter determines whether the influence of the stress goes up or down, hence a positive or a negative response function. For example, when groundwater pumping is defined as a positive flux, up=False because we want the groundwater levels to decrease as a result of pumping.
meanstress : This parameter is used to estimate the initial value of the stationary effect of a stress, i.e., the effect when a stress stays at a unit level for an infinite amount of time. This parameter is usually inferred from the stress time series and does not have to be provided by the user.
cutoff : This parameter determines for how many time steps the response is calculated. This reduces calculation times as it reduces the length of the array the stress is convolved with. The default value is 0.999, meaning that the response is cut off after 99.9% of the effect of the stress impulse has occurred. A minimum length of three times the simulation time step is applied.
The default parameter values for each of the response functions are as follows:
End of explanation
from scipy.special import erfc
def polder_classic(t, x, T, S, c):
X = x / (2 * np.sqrt(T * c))
Y = np.sqrt(t / (c * S))
rv = 0.5 * np.exp(2 * X) * erfc(X / Y + Y) + \
0.5 * np.exp(-2 * X) * erfc(X / Y - Y)
return rv
delh = 2
T = 20
c = 5000
S = 0.01
x = 400
x / np.sqrt(c * T)
t = np.arange(1, 121)
h_polder_classic = np.zeros(len(t))
for i in range(len(t)):
h_polder_classic[i] = delh * polder_classic(t[i], x=x, T=T, S=S, c=c)
#
A = delh
a = c * S
b = x ** 2 / (4 * T * c)
pd = polder.step([A, a, b], dt=1, cutoff=0.95)
#
plt.plot(t, h_polder_classic, label='Polder classic')
plt.plot(np.arange(1, len(pd) + 1), pd, label='Polder Pastas', linestyle="--")
plt.legend()
Explanation: Comparison to classical analytical response functions
Polder step function compared to classic polder function
The classic polder function is (Eq. 123.32 in Bruggeman, 1999)
$$
h(t) = \Delta h \text{P}\left(\frac{x}{2\lambda}, \sqrt{\frac{t}{cS}}\right)
$$
where P is the polder function.
End of explanation
from scipy.integrate import quad
def integrand_hantush(y, r, lab):
return np.exp(-y - r ** 2 / (4 * lab ** 2 * y)) / y
def hantush_classic(t=1, r=1, Q=1, T=100, S=1e-4, c=1000):
lab = np.sqrt(T * c)
u = r ** 2 * S / (4 * T * t)
F = quad(integrand_hantush, u, np.inf, args=(r, lab))[0]
return -Q / (4 * np.pi * T) * F
c = 1000 # d
S = 0.01 # -
T = 100 # m^2/d
r = 500 # m
Q = 20 # m^3/d
#
t = np.arange(1, 45)
h_hantush_classic = np.zeros(len(t))
for i in range(len(t)):
h_hantush_classic[i] = hantush_classic(t[i], r=r, Q=20, T=T, S=S, c=c)
#
a = c * S
b = r ** 2 / (4 * T * c)
ht = hantush.step([1, a, b], dt=1, cutoff=0.99) * (-Q / (2 * np.pi * T))
#
plt.plot(t, h_hantush_classic, label='Hantush classic')
plt.plot(np.arange(1, len(ht) + 1), ht, '--', label='Hantush Pastas')
plt.legend();
Explanation: Hantush step function compared to classic Hantush function
The classic Hantush function is
$$
h(r, t) = \frac{-Q}{4\pi T}\int_u ^\infty \exp\left(-y - \frac{r^2}{4 \lambda^2 y} \right) \frac{\text{d}y}{y}
$$
where
$$
u=\frac{r^2 S}{4 T t}
$$
The parameters in Pastas are
$$
A = \frac{1}{4\pi T}
$$
$$
a = cS
$$
$$
b = \frac{r^2}{4\lambda^2}
$$
where $\lambda^2=cT$.
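As a quick arithmetic check of this mapping (added here for illustration, using the values of c, S, T and r assigned in this example): $a = cS = 1000 \times 0.01 = 10$ days and $b = r^2/(4Tc) = 500^2/(4 \times 100 \times 1000) = 0.625$, which is exactly what the code computes.
print(c * S)                 # a = 10
print(r ** 2 / (4 * T * c))  # b = 0.625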
End of explanation |
11,402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Asynchronous I/O
The Berkeley Socket
It is hard to imagine how many Socket objects have been instantiated since their introduction in 1983 at the University of California, Berkeley.
The Socket is the most popular programming interface for networking.
It is so popular that every operating system offers it and every student is introduced to network programming with Sockets.
Example of a client Socket connecting
Step1: Example of a client Socket sending data
Step2: Example of a Socket receiving data
Step3: Although the versions of the Socket interface have evolved over the years, especially on object-oriented platforms, the essence of the 1983 interface remains very present in modern implementations.
2.3.1.5. Making connections
connect(s, name, namelen);
2.3.1.6. Sending and receiving data
cc = sendto(s, buf, len, flags, to, tolen);
msglen = recvfrom(s, buf, len, flags, from, fromlenaddr);
Excerpt from the BSD 4.2 system manual [1983]
Synchronous Sockets
The 1983 Berkeley Socket is synchronous
This implies that when a function such as connect, sendto, recvfrom... is invoked, the process blocks until the response is obtained.
Note the presence, in the same document, of an asynchronous I/O function.
Fail Whale
The synchronous Socket is not efficient under heavy load
The deployment of high-speed networks combined with the explosion in popularity of social networks demonstrates this well
Social networking sites do not know how to handle this impasse.
The situation is such that Twitter's error pages become famous.
The Problem
Step4: Creating an Asynchronous Socket
Step5: Registering the Connector | Python Code:
from IPython.display import Image
from IPython.display import display
import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("etsmtl.ca" , 80))
Explanation: Introduction to Asynchronous I/O
The Berkeley Socket
It is hard to imagine how many Socket objects have been instantiated since their introduction in 1983 at the University of California, Berkeley.
The Socket is the most popular programming interface for networking.
It is so popular that every operating system offers it and every student is introduced to network programming with Sockets.
Example of a client Socket connecting
End of explanation
msg = b'GET /ETS/media/Prive/logo/ETS-rouge-devise-ecran.jpg HTTP/1.1\r\nHost:etsmtl.ca\r\n\r\n'
sock.sendall(msg)
Explanation: Exemple d'un Socket client qui envoi des données
End of explanation
recvd = b''
while True:
data = sock.recv(1024)
if not data:
break
recvd += data
sock.shutdown(1)
sock.close()
response = recvd.split(b'\r\n\r\n', 1)
Image(data=response[1])
Explanation: Example of a Socket receiving data
End of explanation
import selectors
import socket
import errno
sel = selectors.DefaultSelector()
def connector(sock, mask):
msg = b'GET /ETS/media/Prive/logo/ETS-rouge-devise-ecran.jpg HTTP/1.1\r\nHost:etsmtl.ca\r\n\r\n'
sock.sendall(msg)
# The connector's responsibility is
# to instantiate a new Handler
# and add it to the Selector
h = HTTPHandler()
sel.modify(sock, selectors.EVENT_READ, h.handle)
class HTTPHandler:
recvd = b''
def handle(self, sock, mask):
data = sock.recv(1024)
if not data:
# The Handler unregisters itself
# when it is done.
sel.unregister(sock)
response = self.recvd.split(b'\r\n\r\n', 1)
display(Image(data=response[1]))
else:
self.recvd += data
Explanation: Although the versions of the Socket interface have evolved over the years, especially on object-oriented platforms, the essence of the 1983 interface remains very present in modern implementations.
2.3.1.5. Making connections
connect(s, name, namelen);
2.3.1.6. Sending and receiving data
cc = sendto(s, buf, len, flags, to, tolen);
msglen = recvfrom(s, buf, len, flags, from, fromlenaddr);
Excerpt from the BSD 4.2 system manual [1983]
Synchronous Sockets
The 1983 Berkeley Socket is synchronous
This implies that when a function such as connect, sendto, recvfrom... is invoked, the process blocks until the response is obtained.
Note the presence, in the same document, of an asynchronous I/O function.
Fail Whale
The synchronous Socket is not efficient under heavy load
The deployment of high-speed networks combined with the explosion in popularity of social networks demonstrates this well
Social networking sites do not know how to handle this impasse.
The situation is such that Twitter's error pages become famous.
The Problem:
During a blocking call, the process and its resources are suspended. As the load increases, the amount of suspended resources becomes unmanageable for the operating system
The Solution:
Do not block
The Asynchronous Socket
As early as 1983, the Berkeley socket offered an asynchronous mode. However, it was not widely used, because it is much more complex and error-prone.
The Reactor Pattern
In 1995, the Reactor pattern was discovered
this pattern greatly simplifies asynchronous I/O
http://www.dre.vanderbilt.edu/~schmidt/PDF/reactor-siemens.pdf
One influence on the Reactor pattern is the Select function
Select is the asynchronous function presented in the same document as the Socket
Example of an asynchronous client Socket connecting
(With the Reactor Pattern)
End of explanation
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setblocking(False)
try:
sock.connect(("etsmtl.ca" , 80))
except socket.error:
pass # The exception is always raised!
# This is normal: the OS wants to warn us that
# we are not connected yet
Explanation: Creating an Asynchronous Socket
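A side note added here (not part of the original notebook): once the selector reports this socket as writable, the outcome of the non-blocking connect can be confirmed with getsockopt before sending data:
err = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)  # 0 means the connection succeeded
if err != 0:
    raise OSError(err, 'non-blocking connect failed')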
End of explanation
# The application registers the Connector
sel.register(sock, selectors.EVENT_WRITE, connector)
# The Reactor (event loop)
while len(sel.get_map()):
events = sel.select()
for key, mask in events:
handleEvent = key.data
handleEvent(key.fileobj, mask)
Explanation: Registering the Connector
End of explanation |
11,403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
14 - Advanced topics - Cement Pavers albedo example
This journal creates a paver underneath the single-axis trackers, and evaluates the improvement for one day -- June 17th with and without the pavers for a location in Davis, CA.
Measurements
Step1: Simulation without Pavers
Step2: Looping on the day
Step3: Simulation With Pavers
Step4: You can view the geometry generated in the terminal with
Step5: LOOP WITH PAVERS
Step6: RESULTS ANALYSIS NOON
Step7: Improvement in Rear Irradiance
Step8: RESULT ANALYSIS DAY | Python Code:
import os
from pathlib import Path
import pandas as pd
testfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_14')
if not os.path.exists(testfolder):
os.makedirs(testfolder)
print ("Your simulation will be stored in %s" % testfolder)
from bifacial_radiance import *
import numpy as np
simulationname = 'tutorial_14'
#Location:
lat = 38.5449 # Davis, CA
lon = -121.7405 # Davis, CA
# MakeModule Parameters
moduletype='test-module'
numpanels = 1 # AgriPV site has 3 modules along the y direction (N-S since we are facing it to the south) .
x = 0.95
y = 1.838
xgap = 0.02# Leaving 2 centimeters between modules on x direction
ygap = 0.0 # 1 - up
zgap = 0.06 # gap between modules and torquetube.
# Other default values:
# TorqueTube Parameters
axisofrotationTorqueTube=True
torqueTube = False
cellLevelModule = True
numcellsx = 6
numcellsy = 10
xcell = 0.156
ycell = 0.158
xcellgap = 0.015
ycellgap = 0.015
sensorsy = numcellsy # one sensor per cell
cellLevelModuleParams = {'numcellsx': numcellsx, 'numcellsy':numcellsy,
'xcell': xcell, 'ycell': ycell, 'xcellgap': xcellgap, 'ycellgap': ycellgap}
# SceneDict Parameters
gcr = 0.33 # m
albedo = 0.2 #'grass' # ground albedo
hub_height = 1.237 # m
nMods = 20 # twenty modules per row.
nRows = 3 # 3 rows
azimuth_ang = 90 # Facing east
demo = RadianceObj(simulationname,path = testfolder) # Create a RadianceObj 'object'
demo.setGround(albedo) #
epwfile = demo.getEPW(lat, lon)
metdata = demo.readWeatherFile(epwfile, coerce_year=2021) # read in the EPW weather data from above
mymodule=demo.makeModule(name=moduletype,x=x,y=y,numpanels = numpanels, xgap=xgap, ygap=ygap)
mymodule.addCellModule(numcellsx=numcellsx, numcellsy=numcellsy,
xcell=xcell, ycell=ycell, xcellgap=xcellgap, ycellgap=ycellgap)
description = 'Sherman Williams "Chantilly White" acrylic paint'
materialpav = 'sw_chantillywhite'
Rrefl = 0.5
Grefl = 0.5
Brefl = 0.5
demo.addMaterial(material=materialpav, Rrefl=Rrefl, Grefl=Grefl, Brefl=Brefl, comment=description)
Explanation: 14 - Advanced topics - Cement Pavers albedo example
This journal creates a paver underneath the single-axis trackers, and evaluates the improvement for one day -- June 17th with and without the pavers for a location in Davis, CA.
Measurements:
End of explanation
timeindex = metdata.datetime.index(pd.to_datetime('2021-06-17 12:0:0 -8')) # Davis, CA is TZ -8
demo.gendaylit(timeindex)
tilt = demo.getSingleTimestampTrackerAngle(metdata, timeindex=timeindex, gcr=gcr,
azimuth=180, axis_tilt=0,
limit_angle=60, backtrack=True)
# create a scene with all the variables
sceneDict = {'tilt':tilt,'gcr': gcr,'hub_height':hub_height,'azimuth':azimuth_ang, 'module_type':moduletype, 'nMods': nMods, 'nRows': nRows}
scene = demo.makeScene(module=mymodule, sceneDict=sceneDict) #makeScene creates a .rad file with 20 modules per row, 7 rows.
octfile = demo.makeOct(demo.getfilelist()) # makeOct combines all of the ground, sky and object files into a .oct file.
analysis = AnalysisObj(octfile, demo.name) # return an analysis object including the scan dimensions for back irradiance
frontscan, backscan = analysis.moduleAnalysis(scene, sensorsy=sensorsy)
analysis.analysis(octfile, simulationname+"_noPavers", frontscan, backscan) # compare the back vs front irradiance
print("Simulation without Pavers Finished")
Explanation: Simulation without Pavers
End of explanation
j=0
starttimeindex = metdata.datetime.index(pd.to_datetime('2021-06-17 7:0:0 -8'))
endtimeindex = metdata.datetime.index(pd.to_datetime('2021-06-17 19:0:0 -8'))
for timess in range (starttimeindex, endtimeindex):
j+=1
demo.gendaylit(timess)
tilt = demo.getSingleTimestampTrackerAngle(metdata, timeindex=timess, gcr=gcr,
azimuth=180, axis_tilt=0,
limit_angle=60, backtrack=True)
# create a scene with all the variables
sceneDict = {'tilt':tilt,'gcr': gcr,'hub_height':hub_height,'azimuth':azimuth_ang, 'module_type':moduletype, 'nMods': nMods, 'nRows': nRows}
scene = demo.makeScene(module=mymodule, sceneDict=sceneDict) #makeScene creates a .rad file with 20 modules per row, 7 rows.
octfile = demo.makeOct(demo.getfilelist()) # makeOct combines all of the ground, sky and object files into a .oct file
frontscan, backscan = analysis.moduleAnalysis(scene, sensorsy=sensorsy)
analysis.analysis(octfile, simulationname+"_noPavers_"+str(j), frontscan, backscan) # compare the back vs front irradiance
Explanation: Looping on the day
End of explanation
demo.gendaylit(timeindex)
tilt = demo.getSingleTimestampTrackerAngle(metdata, timeindex=timeindex, gcr=gcr,
azimuth=180, axis_tilt=0,
limit_angle=60, backtrack=True)
# create a scene with all the variables
sceneDict = {'tilt':tilt,'gcr': gcr,'hub_height':hub_height,'azimuth':azimuth_ang, 'module_type':moduletype, 'nMods': nMods, 'nRows': nRows}
scene = demo.makeScene(module=mymodule, sceneDict=sceneDict) #makeScene creates a .rad file with 20 modules per row, 7 rows.
torquetubelength = demo.module.scenex*(nMods)
pitch = demo.module.sceney/gcr
startpitch = -pitch * (nRows-1)/2
p_w = 0.947 # m
p_h = 0.092 # m
p_w2 = 0.187 # m
p_h2 = 0.184 # m
offset_w1y = -(p_w/2)+(p_w2/2)
offset_w2y = (p_w/2)-(p_w2/2)
customObjects = []
for i in range (0, nRows):
name='PAVER'+str(i)
text='! genbox {} paver{} {} {} {} | xform -t {} {} 0 | xform -t {} 0 0'.format(materialpav, i,
p_w, torquetubelength, p_h,
-p_w/2, (-torquetubelength+demo.module.sceney)/2.0,
startpitch+pitch*i)
text += '\r\n! genbox {} paverS1{} {} {} {} | xform -t {} {} 0 | xform -t {} 0 0'.format(materialpav, i,
p_w2, torquetubelength, p_h2,
-p_w2/2+offset_w1y, (-torquetubelength+demo.module.sceney)/2.0,
startpitch+pitch*i)
text += '\r\n! genbox {} paverS2{} {} {} {} | xform -t {} {} 0 | xform -t {} 0 0'.format(materialpav, i,
p_w2, torquetubelength, p_h2,
-p_w2/2+offset_w2y, (-torquetubelength+demo.module.sceney)/2.0,
startpitch+pitch*i)
customObject = demo.makeCustomObject(name,text)
customObjects.append(customObject)
demo.appendtoScene(radfile=scene.radfiles, customObject=customObject, text="!xform -rz 0")
demo.makeOct()
Explanation: Simulation With Pavers
End of explanation
## Comment the ! line below to run rvu from the Jupyter notebook instead of your terminal.
## Simulation will stop until you close the rvu window
#!rvu -vf views\front.vp -e .01 -pe 0.01 -vp -5 -14 1 -vd 0 0.9946 -0.1040 tutorial_14.oct
analysis = AnalysisObj(octfile, demo.name) # return an analysis object including the scan dimensions for back irradiance
frontscan, backscan = analysis.moduleAnalysis(scene, sensorsy=sensorsy)
analysis.analysis(octfile, simulationname+"_WITHPavers", frontscan, backscan) # compare the back vs front irradiance
print("Simulation WITH Pavers Finished")
Explanation: You can view the geometry generated in the terminal with:
rvu -vf views\front.vp -e .01 -pe 0.01 -vp -5 -14 1 -vd 0 0.9946 -0.1040 tutorial_14.oct
End of explanation
j=0
for timess in range (starttimeindex, endtimeindex):
j+=1
demo.gendaylit(timess)
tilt = demo.getSingleTimestampTrackerAngle(metdata, timeindex=timess, gcr=gcr,
azimuth=180, axis_tilt=0,
limit_angle=60, backtrack=True)
# create a scene with all the variables
sceneDict = {'tilt':tilt,'gcr': gcr,'hub_height':hub_height,'azimuth':azimuth_ang, 'module_type':moduletype, 'nMods': nMods, 'nRows': nRows}
scene = demo.makeScene(mymodule, sceneDict=sceneDict) #makeScene creates a .rad file with 20 modules per row, 7 rows.
# Appending Pavers here
demo.appendtoScene(radfile=scene.radfiles, customObject=customObjects[0], text="!xform -rz 0")
demo.appendtoScene(radfile=scene.radfiles, customObject=customObjects[1], text="!xform -rz 0")
demo.appendtoScene(radfile=scene.radfiles, customObject=customObjects[2], text="!xform -rz 0")
octfile = demo.makeOct(demo.getfilelist()) # makeOct combines all of the ground, sky and object files into a .oct file
frontscan, backscan = analysis.moduleAnalysis(scene, sensorsy=sensorsy)
analysis.analysis(octfile, simulationname+"_WITHPavers_"+str(j), frontscan, backscan) # compare the back vs front irradiance
Explanation: LOOP WITH PAVERS
End of explanation
df_0 = load.read1Result(os.path.join(testfolder, 'results', 'irr_tutorial_14_noPavers.csv'))
df_w = load.read1Result(os.path.join(testfolder, 'results', 'irr_tutorial_14_WITHPavers.csv'))
df_0
df_w
Explanation: RESULTS ANALYSIS NOON
End of explanation
round((df_w['Wm2Back'].mean()-df_0['Wm2Back'].mean())*100/df_0['Wm2Back'].mean(),1)
Explanation: Improvement in Rear Irradiance
End of explanation
df_0 = load.read1Result(os.path.join(testfolder, 'results', 'irr_tutorial_14_noPavers_1.csv'))
df_w = load.read1Result(os.path.join(testfolder, 'results', 'irr_tutorial_14_WITHPavers_1.csv'))
df_w
df_0
round((df_w['Wm2Back'].mean()-df_0['Wm2Back'].mean())*100/df_0['Wm2Back'].mean(),1)
average_back_d0=[]
average_back_dw=[]
average_front = []
hourly_rearirradiance_comparison = []
timessimulated = endtimeindex-starttimeindex
for i in range (1, timessimulated+1):
df_0 = load.read1Result(os.path.join(testfolder, 'results', 'irr_tutorial_14_noPavers_'+str(i)+'.csv'))
df_w = load.read1Result(os.path.join(testfolder, 'results', 'irr_tutorial_14_WITHPavers_'+str(i)+'.csv'))
print(round((df_w['Wm2Back'].mean()-df_0['Wm2Back'].mean())*100/df_0['Wm2Back'].mean(),1))
hourly_rearirradiance_comparison.append(round((df_w['Wm2Back'].mean()-df_0['Wm2Back'].mean())*100/df_0['Wm2Back'].mean(),1))
average_back_d0.append(df_0['Wm2Back'].mean())
average_back_dw.append(df_w['Wm2Back'].mean())
average_front.append(df_0['Wm2Front'].mean())
print("Increase in rear irradiance: ", round((sum(average_back_dw)-sum(average_back_d0))*100/sum(average_back_d0),1))
print("BG no Pavers: ", round(sum(average_back_d0)*100/sum(average_front),1))
print("BG with Pavers: ", round(sum(average_back_dw)*100/sum(average_front),1))
import matplotlib.pyplot as plt
#metdata.datetime[starttime].hour # 7
#metdata.datetime[endtimeindex].hour # 17
xax= [7, 8, 9, 10, 11, 12,13,14,15,16,17,18] # Lazy way to get the x axis...
plt.plot(xax,hourly_rearirradiance_comparison)
plt.ylabel('$\Delta$ in G$_{rear}$ [%] \n(G$_{rear-with}$ - G$_{rear-without}$ / G$_{rear-without}$)')
plt.xlabel('Hour')
Explanation: RESULT ANALYSIS DAY
End of explanation |
11,404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional Neural Network in TensorFlow
In this notebook, we convert our LeNet-5-inspired, MNIST-classifying, deep convolutional network from Keras to TensorFlow (compare them side by side) following Aymeric Damien's style.
Load dependencies
Step1: Load data
Step2: Set neural network hyperparameters
Step3: Set parameters for each layer
Step4: Define placeholder Tensors for inputs and labels
Step5: Define types of layers
Step6: Design neural network architecture
Step7: Define dictionaries for storing weights and biases for each layer -- and initialize
Step8: Build model
Step9: Define model's loss and its optimizer
Step10: Define evaluation metrics
Step11: Create op for variable initialization
Step12: Train the network in a session (identical to intermediate_net_in_tensorflow.ipynb except addition of display_progress) | Python Code:
import numpy as np
np.random.seed(42)
import tensorflow as tf
tf.set_random_seed(42)
Explanation: Deep Convolutional Neural Network in TensorFlow
In this notebook, we convert our LeNet-5-inspired, MNIST-classifying, deep convolutional network from Keras to TensorFlow (compare them side by side) following Aymeric Damien's style.
Load dependencies
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: Load data
End of explanation
epochs = 20
batch_size = 128
display_progress = 40 # after this many batches, output progress to screen
wt_init = tf.contrib.layers.xavier_initializer() # weight initializer
Explanation: Set neural network hyperparameters
End of explanation
# input layer:
n_input = 784
# first convolutional layer:
n_conv_1 = 32
k_conv_1 = 3 # k_size
# second convolutional layer:
n_conv_2 = 64
k_conv_2 = 3
# max pooling layer:
pool_size = 2
mp_layer_dropout = 0.25
# dense layer:
n_dense = 128
dense_layer_dropout = 0.5
# output layer:
n_classes = 10
Explanation: Set parameters for each layer
End of explanation
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
Explanation: Define placeholder Tensors for inputs and labels
End of explanation
# dense layer with ReLU activation:
def dense(x, W, b):
z = tf.add(tf.matmul(x, W), b)
a = tf.nn.relu(z)
return a
# convolutional layer with ReLU activation:
def conv2d(x, W, b, stride_length=1):
xW = tf.nn.conv2d(x, W, strides=[1, stride_length, stride_length, 1], padding='SAME')
z = tf.nn.bias_add(xW, b)
a = tf.nn.relu(z)
return a
# max-pooling layer:
def maxpooling2d(x, p_size):
return tf.nn.max_pool(x,
ksize=[1, p_size, p_size, 1],
strides=[1, p_size, p_size, 1],
padding='SAME')
Explanation: Define types of layers
End of explanation
def network(x, weights, biases, n_in, mp_psize, mp_dropout, dense_dropout):
# reshape linear MNIST pixel input into square image:
square_dimensions = int(np.sqrt(n_in))
square_x = tf.reshape(x, shape=[-1, square_dimensions, square_dimensions, 1])
# convolutional and max-pooling layers:
conv_1 = conv2d(square_x, weights['W_c1'], biases['b_c1'])
conv_2 = conv2d(conv_1, weights['W_c2'], biases['b_c2'])
pool_1 = maxpooling2d(conv_2, mp_psize)
pool_1 = tf.nn.dropout(pool_1, 1-mp_dropout)
# dense layer:
flat = tf.reshape(pool_1, [-1, weights['W_d1'].get_shape().as_list()[0]])
dense_1 = dense(flat, weights['W_d1'], biases['b_d1'])
dense_1 = tf.nn.dropout(dense_1, 1-dense_dropout)
# output layer:
out_layer_z = tf.add(tf.matmul(dense_1, weights['W_out']), biases['b_out'])
return out_layer_z
Explanation: Design neural network architecture
End of explanation
bias_dict = {
'b_c1': tf.Variable(tf.zeros([n_conv_1])),
'b_c2': tf.Variable(tf.zeros([n_conv_2])),
'b_d1': tf.Variable(tf.zeros([n_dense])),
'b_out': tf.Variable(tf.zeros([n_classes]))
}
# calculate number of inputs to dense layer:
full_square_length = np.sqrt(n_input)
pooled_square_length = int(full_square_length / pool_size)
dense_inputs = pooled_square_length**2 * n_conv_2
weight_dict = {
'W_c1': tf.get_variable('W_c1',
[k_conv_1, k_conv_1, 1, n_conv_1], initializer=wt_init),
'W_c2': tf.get_variable('W_c2',
[k_conv_2, k_conv_2, n_conv_1, n_conv_2], initializer=wt_init),
'W_d1': tf.get_variable('W_d1',
[dense_inputs, n_dense], initializer=wt_init),
'W_out': tf.get_variable('W_out',
[n_dense, n_classes], initializer=wt_init)
}
Explanation: Define dictionaries for storing weights and biases for each layer -- and initialize
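As a worked check of the sizing above (added here for illustration): the 784-pixel MNIST input is reshaped to 28x28, the single 2x2 max-pool halves it to 14x14, and the second convolution has 64 feature maps, so the dense layer receives 14 * 14 * 64 = 12544 inputs.
# Illustrative sanity check of the arithmetic used for dense_inputs above.
assert pooled_square_length == 14
assert dense_inputs == 14 * 14 * n_conv_2  # 12544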
End of explanation
predictions = network(x, weight_dict, bias_dict, n_input,
pool_size, mp_layer_dropout, dense_layer_dropout)
Explanation: Build model
End of explanation
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=predictions, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Define model's loss and its optimizer
End of explanation
correct_prediction = tf.equal(tf.argmax(predictions, 1), tf.argmax(y, 1))
accuracy_pct = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) * 100
Explanation: Define evaluation metrics
End of explanation
initializer_op = tf.global_variables_initializer()
Explanation: Create op for variable initialization
End of explanation
with tf.Session() as session:
session.run(initializer_op)
print("Training for", epochs, "epochs.")
# loop over epochs:
for epoch in range(epochs):
avg_cost = 0.0 # track cost to monitor performance during training
avg_accuracy_pct = 0.0
# loop over all batches of the epoch:
n_batches = int(mnist.train.num_examples / batch_size)
for i in range(n_batches):
# to reassure you something's happening!
if i % display_progress == 0:
print("Step ", i+1, " of ", n_batches, " in epoch ", epoch+1, ".", sep='')
batch_x, batch_y = mnist.train.next_batch(batch_size)
# feed batch data to run optimization and fetching cost and accuracy:
_, batch_cost, batch_acc = session.run([optimizer, cost, accuracy_pct],
feed_dict={x: batch_x, y: batch_y})
# accumulate mean loss and accuracy over epoch:
avg_cost += batch_cost / n_batches
avg_accuracy_pct += batch_acc / n_batches
# output logs at end of each epoch of training:
print("Epoch ", '%03d' % (epoch+1),
": cost = ", '{:.3f}'.format(avg_cost),
", accuracy = ", '{:.2f}'.format(avg_accuracy_pct), "%",
sep='')
print("Training Complete. Testing Model.\n")
test_cost = cost.eval({x: mnist.test.images, y: mnist.test.labels})
test_accuracy_pct = accuracy_pct.eval({x: mnist.test.images, y: mnist.test.labels})
print("Test Cost:", '{:.3f}'.format(test_cost))
print("Test Accuracy: ", '{:.2f}'.format(test_accuracy_pct), "%", sep='')
Explanation: Train the network in a session (identical to intermediate_net_in_tensorflow.ipynb except addition of display_progress)
End of explanation |
11,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Non-Linear Time History Analysis (NLTHA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis (NLTHA) using a suite of ground motion records. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates a fragility model developed using this method.
<img src="../../../../figures/NLTHA_SDOF.png" width="400" align="middle">
Note
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Step2: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are
Step4: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix
Step5: Find an adequate intensity measure
This sections allows users to find an intensity measure (PGA or Spectral Acceleration) that correlates well with damage. To do so, it is necessary to establish a range of periods of vibration and step (minT, maxT and stepT).
Step6: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step7: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step8: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step9: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step10: Plot vulnerability function
Step11: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
import NLTHA_on_SDOF
from rmtk.vulnerability.common import utils
%matplotlib inline
Explanation: Non-Linear Time History Analysis (NLTHA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis (NLTHA) using a suite of ground motion records. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates a fragility model developed using this method.
<img src="../../../../figures/NLTHA_SDOF.png" width="400" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
capacity_curves_file = "../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
sdof_hysteresis = "Default"
#sdof_hysteresis = "../../../../../rmtk_data/pinching_parameters.csv"
from read_pinching_parameters import read_parameters
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
gmrs_folder = '../../../../../rmtk_data/accelerograms'
minT, maxT = 0.0, 2.0
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../rmtk_data/damage_model.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift the user can provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements, otherwise a linear relationship is assumed.
End of explanation
damping_ratio = 0.05
degradation = True
PDM, Sds = NLTHA_on_SDOF.calculate_fragility(capacity_curves, hysteresis, gmrs, damage_model, damping_ratio, degradation)
utils.save_result(PDM,'../../../../../rmtk_data/PDM.csv')
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_ratio: This parameter defines the damping ratio for the structure.
2. degradation: This boolean parameter should be set to True or False to specify whether structural degradation should be considered in the analysis or not.
End of explanation
minT, maxT,stepT = 0.0, 2.0, 0.1
regression_method = 'least squares'
utils.evaluate_optimal_IM(gmrs,PDM,minT,maxT,stepT,damage_model,damping_ratio,regression_method)
Explanation: Find an adequate intensity measure
This section allows users to find an intensity measure (PGA or Spectral Acceleration) that correlates well with damage. To do so, it is necessary to establish a range of periods of vibration and a step (minT, maxT and stepT).
End of explanation
IMT = "Sa"
T = 0.7
utils.export_IMLs_PDM(gmrs,T,PDM,damping_ratio,damage_model,'../../../../../rmtk_data/IMLs_PDM.csv')
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, T, damping_ratio,IMT, damage_model, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa" and "Sd".
2. T: This parameter defines the time period of the fundamental mode of vibration of the structure.
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
minIML, maxIML = 0.0, 1.2
utils.plot_fragility_model(fragility_model, minIML, maxIML)
utils.plot_fragility_scatter(fragility_model, minIML, maxIML, PDM, gmrs, IMT, T, damping_ratio)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00,
2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Plot vulnerability function
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
11,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AutoML for Text Classification
Learning Objectives
Learn how to create a text classification dataset for AutoML using BigQuery
Learn how to train AutoML to build a text classification model
Learn how to evaluate a model trained with AutoML
Learn how to predict on new test data with AutoML
Introduction
In this notebook, we will use AutoML for Text Classification to train a text model to recognize the source of article titles
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Here is a sample of the dataset
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
Step9: Let's make sure we have roughly the same number of examples for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Step12: Let's write the sample datatset to disk. | Python Code:
import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
Explanation: AutoML for Text Classification
Learning Objectives
Learn how to create a text classification dataset for AutoML using BigQuery
Learn how to train AutoML to build a text classification model
Learn how to evaluate a model trained with AutoML
Learn how to predict on new test data with AutoML
Introduction
In this notebook, we will use AutoML for Text Classification to train a text model to recognize the source of article titles: New York Times, TechCrunch or GitHub.
In a first step, we will query a public dataset on BigQuery taken from Hacker News (an aggregator that displays tech-related headlines from various sources) to create our training set.
In a second step, we will use the AutoML UI to upload our dataset, train a text model on it, and evaluate the model we have just trained.
End of explanation
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
SEED = 0
%%bash
gsutil mb gs://$BUCKET
Explanation: Replace the variable values in the cell below:
End of explanation
%%bigquery --project $PROJECT
SELECT
url, title, score
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
LENGTH(title) > 10
AND score > 10
AND LENGTH(url) > 0
LIMIT 10
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Here is a sample of the dataset:
End of explanation
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
COUNT(title) AS num_articles
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
AND LENGTH(title) > 10
GROUP BY
source
ORDER BY num_articles DESC
LIMIT 100
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
End of explanation
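For intuition, here is a small Python sketch (not part of the notebook's BigQuery pipeline) of what the regular expression and the reversed-split trick are doing; the example URL is made up.
import re
url = "http://mobile.nytimes.com/2015/10/some-article.html"
host = re.search(r".*://(.[^/]+)/", url).group(1)  # 'mobile.nytimes.com'
# Reverse the dot-separated parts and take index 1, i.e. the name just left of the TLD
source = host.split(".")[::-1][1]
print(source)  # nytimes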
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
    title,
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
    AND LENGTH(title) > 10
""".format(regex)
query = """
SELECT
    LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
    source
FROM
    ({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
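As a quick sanity check (an optional sketch, not required by the notebook), you can confirm that the dataframe puts the text first and the label last before exporting it for AutoML:
# The first column should be the title text and the last column the label
assert list(title_dataset.columns)[0] == "title"
assert list(title_dataset.columns)[-1] == "source"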
title_dataset.source.value_counts()
Explanation: Let's make sure we have roughly the same number of examples for each of our three labels:
End of explanation
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend using the sample dataset for the purpose of learning the tool.
End of explanation
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
End of explanation
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
%%bash
gsutil cp data/titles_sample.csv gs://$BUCKET
Explanation: Let's write the sample dataset to disk.
End of explanation |
11,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 1
Step1: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of GraphLab's built-in functions we will review some important ones. In particular
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Rembember that we train on train_data!
Step8: Predicting Values
Now that we have the model parameters
Step9: Now that we can calculate a prediction given the slop and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estiamted above.
Quiz Question
Step10: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question
Step13: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that coses $800,000 to be.
Quiz Question
Step15: New Model
Step16: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complete is optional and is there to assist you with solving the problems but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
Explanation: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of GraphLab's built-in functions we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
# if we want to multiply every price by 0.5 it's a simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
Explanation: As we see we get the same answer both ways
End of explanation
def simple_linear_regression(input_feature, output):
    # compute the mean of input_feature and output
    x_mean = input_feature.mean()
    y_mean = output.mean()
    # compute the mean of the product of the output and the input_feature
    xy_mean = (input_feature * output).mean()
    # compute the mean of the squared value of the input_feature
    xx_mean = (input_feature * input_feature).mean()
    # use the closed-form formula for the slope
    slope = (xy_mean - x_mean * y_mean) / (xx_mean - x_mean * x_mean)
    # use the formula for the intercept
    intercept = y_mean - slope * x_mean
    return (intercept, slope)
Explanation: Aside: The python notation x.xxe+yy means x.xx * 10^(yy). e.g 100 = 10^2 = 1*10^2 = 1e2
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
End of explanation
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1
End of explanation
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
def get_regression_predictions(input_feature, intercept, slope):
    # calculate the predicted values from the line y = intercept + slope * x
    predicted_values = intercept + slope * input_feature
    return predicted_values
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
Explanation: Now that we can calculate a prediction given the slope and intercept, let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 square feet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), what is the predicted price for a house with 2650 sqft?
End of explanation
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
    # First get the predictions
    predictions = get_regression_predictions(input_feature, intercept, slope)
    # then compute the residuals (since we are squaring it doesn't matter which order you subtract)
    residuals = output - predictions
    # square the residuals and add them up
    RSS = (residuals * residuals).sum()
    return(RSS)
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
End of explanation
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
def inverse_regression_predictions(output, intercept, slope):
    # solve output = intercept + slope*input_feature for input_feature to compute the inverse prediction
    estimated_feature = (output - intercept) / slope
    return estimated_feature
Explanation: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
End of explanation
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model, let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
bedrooms_intercept, bedrooms_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data!
End of explanation
# Compute RSS when using bedrooms on TEST data:
rss_bedrooms_test = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], bedrooms_intercept, bedrooms_slope)
# Compute RSS when using squarefeet on TEST data:
rss_sqft_test = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
End of explanation |
11,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Birthday problem simulated
Let's say we can't figure out how to formally calculate the probability that 2 people out of N have the same birthday (ignoring leap years). Not to worry
Step1: That doesn't give us an empirical probability though; we need to run some trials to do that.
Step2: Note that because the likelihood is high, we need 1000 trials to suss out that it's not 100%; here's what we get if we use 10 trials a couple of times
Step3: Just to round out the "fun", let's see a bar chart of % of time we expect people to have the same birthday with groups ranging from 5 to 50 (counting by 5).
Step4: And now let's see a histogram of the number of people with the same birthday among 60 people out of 1000 trials | Python Code:
import random
from collections import defaultdict
def num_people_same_birthday(n):
bdays = defaultdict(int)
for i in range(n):
bdays[random.randrange(0, 365)] += 1
return len([k for (k, v) in bdays.items() if v > 1])
num_people_same_birthday(60)
Explanation: Birthday problem simulated
Let's say we can't figure out how to formally calculate the probability that 2 people out of N have the same birthday (ignoring leap years). Not to worry: we can simulate it!
First let's write a function to simulate the number of people who have the same birthday out of N:
End of explanation
def prob_n_people_have_same_birthday(n, *, num_trials):
num_positive_trials = len([1 for i in range(num_trials) if num_people_same_birthday(n) > 0])
return float(num_positive_trials) / num_trials
prob_n_people_have_same_birthday(60, num_trials=1000)
Explanation: That doesn't give us an empirical probability though; we need to run some trials to do that.
End of explanation
prob_n_people_have_same_birthday(60, num_trials=10)
prob_n_people_have_same_birthday(60, num_trials=10)
Explanation: Note that because the likelihood is high, we need 1000 trials to suss out that it's not 100%; here's what we get if we use 10 trials a couple of times:
End of explanation
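For reference, the probability can also be computed in closed form, which makes it clear why 10 trials almost always report 100% for a group of 60: the chance that at least two of n people share a birthday is 1 minus the chance that all birthdays are distinct. A quick sketch (not part of the original simulation):
def prob_shared_birthday_exact(n):
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365.0
    return 1.0 - p_all_distinct

prob_shared_birthday_exact(60)  # roughly 0.994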
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
group_sizes = list(range(5, 55, 5))
probs = [prob_n_people_have_same_birthday(n, num_trials=1000) for n in group_sizes]
plt.xlabel("Group Size")
plt.ylabel("Prob")
plt.title("Probability that among given group size at least 2 people have the same birthday")
plt.bar(group_sizes, probs, width=2.5)
plt.show()
Explanation: Just to round out the "fun", let's see a bar chart of % of time we expect people to have the same birthday with groups ranging from 5 to 50 (counting by 5).
End of explanation
plt.hist([num_people_same_birthday(60) for i in range(1000)])
None
Explanation: And now let's see a histogram of the number of people with the same birthday among 60 people out of 1000 trials
End of explanation |
11,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Commonly used advanced features in Python
The ternary operator
Most programming languages, such as Javascript or PHP, provide a ternary operator as a shortcut for conditional assignment.
However, the Python developers felt this did not fit Python's goal of being concise and simple, so Python does not have the usual ?
Step1: List comprehensions
A list comprehension is a shorthand form for generating a regular list.
Step2: Dict comprehensions
Dict comprehensions are syntactically similar to list comprehensions. They are a shorthand form for generating a regular dictionary.
Step3: Lambda expressions
A lambda expression is what other languages call an anonymous function, although it differs somewhat from the usual anonymous function.
A lambda expression may only contain the returned value directly; no other statements are allowed
Syntax
Step4: Context managers
For a file or network operation, we need to close the resource after using it to avoid leaking memory.
Being able to release the resource conveniently on exit is therefore very important.
Step5: In Python this is implemented through the reserved __enter__ and __exit__ methods of an object. We can also define these two methods in our own classes to implement our own context manager
Step6: Generators
In an ordinary function we use the yield keyword to temporarily "return" some values to the caller
This is what we call a generator in Python.
A generator produces values one at a time (possibly several) according to some algorithm. This avoids generating a large number of unneeded values and saves memory.
Step7: We can use next to obtain each temporarily returned value
Step8: We can also send values into the generator with the send method at the point where it temporarily returns a value
The first value sent must be None
Step9: With the above understood, let's look at how to temporarily return the contents of another generator from inside a generator
To achieve this we use the yield from keyword to yield values from another generator. Think about why plain yield is not used here?
Step10: Combining generators with context managers
If every context manager required writing a class with __enter__ and __exit__ methods, that would not fit Python's principle of simplicity above all.
我们结合生成器的特效, 临时返回一个值, 下次next的时候继续执行 的特性, 我们是不是可以使用这个特性来自动生成一个上下文管理器呢? | Python Code:
# Assign a variable the absolute value of a given integer
number1 = -11
value1 = number1
if value1 < 0:
    value1 = -value1
print(value1)
# Here we can use Python's special form of the ternary operator
number2 = -22
value2 = number2 if number2 > 0 else -number2
print(value2)
Explanation: Commonly used advanced features in Python
The ternary operator
Most programming languages, such as Javascript or PHP, provide a ternary operator as a shortcut for conditional assignment.
However, the Python developers felt this did not fit Python's goal of being concise and simple, so Python does not have the usual ?: ternary operator.
Instead, Python uses the form true_exp if cond else false_exp.
Syntax: value_when_true if condition else value_when_false
End of explanation
# Example 1: build the list 1*1, 2*2, 3*3, ..., 9*9 in two different ways
# Loop form
list1 = []
for n in range(10):
    list1.append(n * n)
print(list1)
# Using a list comprehension
print([n * n for n in range(10)])
# Example 2: build a multiplication-table list
# Loop form
list2 = []
for i in range(1, 10):
    for j in range(1, i + 1):
        list2.append(i * j)
print(list2)
# List comprehension form
print([i * j for i in range(1, 10) for j in range(1, i + 1)])
Explanation: List comprehensions
A list comprehension is a shorthand form for generating a regular list.
End of explanation
# Build a dict whose keys and values are the same numbers
print({n: n for n in range(10)})
# Build a dict mapping letters to their ASCII codes
print({chr(ord('A') + i): (ord('A') + i) for i in range(26)})
Explanation: Dict comprehensions
Dict comprehensions are syntactically similar to list comprehensions. They are a shorthand form for generating a regular dictionary.
End of explanation
# For a given sequence, return a copy whose values are the squares of the corresponding elements
list(map(lambda x: x * x, range(10)))
# Sum the elements of a sequence with reduce and a lambda
from functools import reduce
reduce(lambda x, s: x + s, range(10), 0)
Explanation: Lambda expressions
A lambda expression is what other languages call an anonymous function, although it differs somewhat from the usual anonymous function.
A lambda expression may only contain the returned value directly; no other statements are allowed.
Syntax: lambda arguments: return_value
End of explanation
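A lambda is also the usual way to pass a custom sort key; a small illustrative example (not in the original notebook):
# Sort points by their squared distance from the origin
points = [(3, 4), (1, 1), (0, 5)]
print(sorted(points, key=lambda p: p[0] ** 2 + p[1] ** 2))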
# First, close the file the way it is commonly done in other languages
def proc_file1(filename):
    fp = open(filename, 'r')
    for line in fp.readlines():
        print(line)
        if "5" in line:
            print("err")
            fp.close()
            return False
        if line == "xxx":
            fp.close()
            return True
    fp.close()
proc_file1('data.txt')
# Read the file safely using a context manager
def proc_file2(filename):
    with open(filename, 'r') as fp:
        for line in fp.readlines():
            print(line)
            if "5" in line:
                print("err")
                return False
            if line == "xxx":
                return True
proc_file2('data.txt')
Explanation: Context managers
For a file or network operation, we need to close the resource after using it to avoid leaking memory.
Being able to release the resource conveniently on exit is therefore very important.
End of explanation
class Resource(object):
    # The return value is handed back as the acquired resource
    def __enter__(self):
        print('acquiring the resource')
        return 'resource handler'
    # Clean-up performed on exit
    def __exit__(self, exc_type, exc_val, exc_tb):
        print('releasing the resource', exc_type, exc_val, exc_tb)
# The case without an exception
with Resource() as r:
    print(r)
# With an exception, __exit__ receives the error information
with Resource() as r:
    print(r)
    raise Exception("error on process")
# Think about whether the resource is still released in this case
with Resource() as r:
    print(r)
    exit()
Explanation: In Python this is implemented through the reserved __enter__ and __exit__ methods of an object. We can also define these two methods in our own classes to implement our own context manager
End of explanation
# For the simplest generator, just replace the `[]` of a list comprehension with `()` to get a generator object
g1 = (n * n for n in range(3))
print(g1, type(g1))
Explanation: Generators
In an ordinary function we use the yield keyword to temporarily "return" some values to the caller.
This is what we call a generator in Python.
A generator produces values one at a time (possibly several) according to some algorithm. This avoids generating a large number of unneeded values and saves memory.
End of explanation
# We can use next to obtain each temporarily returned value
print(next(g1))
print(next(g1))
print(next(g1))
# When no more values are produced, calling next again raises StopIteration, which means the generator is exhausted
next(g1)
# A hand-written generator
def generator2():
    print(1)
    yield 'a'
    print(2)
    yield 'b'
    print(3)
    yield 'c'
    return "value returned by return"
g2 = generator2()
# Observe the output to follow the execution flow
next(g2)
# We can also consume the values with a for loop
g3 = generator2()
for v in g3:
    print("got value: ", v)
# Note that in this case we can no longer obtain the generator's return value
# Alternatively, use list() or tuple() to collect the generated values into the corresponding container
print(list(generator2()))
print(tuple(generator2()))
Explanation: We can use next to obtain each temporarily returned value
End of explanation
def generator3():
    print("inside the generator: 1")
    v1 = yield 'a'
    print("inside the generator: 2")
    print(v1)
    print("inside the generator: 3")
    v2 = yield 'b'
    print("inside the generator: 4")
    print(v2)
    print("inside the generator: 5")
    v3 = yield 'c'
    print("inside the generator: 6")
    print(v3)
    print("inside the generator: 7")
    return "value returned by return"
g4 = generator3()
print("value yielded by the generator:", g4.send(None))
Explanation: We can also send values into the generator with the send method at the point where it temporarily returns a value
The first value sent must be None
End of explanation
# We start with two simple generators
# Odd-number generator
def odd():
    for i in range(1, 10, 2):
        yield i
# Even-number generator
def even():
    for i in range(0, 10, 2):
        yield i
print(list(odd()))
print(list(even()))
# Define a new generator that produces its values from these two generators
def numbers():
    yield from odd()
    yield from even()
print(list(numbers()))
Explanation: With the above understood, let's look at how to temporarily return the contents of another generator from inside a generator.
To achieve this we use the yield from keyword to yield values from another generator. Think about why plain yield is not used here.
End of explanation
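To answer the question above, here is a small sketch showing what plain yield would do in numbers(): it hands back the generator objects themselves rather than the values they produce (illustrative code, not in the original notebook):
def numbers_wrong():
    # yields the two generator objects, not their values
    yield odd()
    yield even()

print(list(numbers_wrong()))   # two generator objects
print(list(numbers()))         # the flattened values, thanks to yield from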
import contextlib
# Use the decorator to build a context manager from a generator
@contextlib.contextmanager
def resource():
    print("initializing...")
    yield "the resource"
    print("cleaning up the resource...")
with resource() as r:
    print("acquired: ", r)
Explanation: Combining generators with context managers
If every context manager required writing a class with __enter__ and __exit__ methods, that would not fit Python's principle of simplicity above all.
Using the generator behaviour of temporarily returning a value and resuming on the next next() call, can we use this feature to generate a context manager automatically?
End of explanation |
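One detail worth noting: if the body of the with block raises, the code after yield never runs unless it is protected. A common pattern (a sketch, reusing the resource() example above) is to wrap the yield in try/finally so the clean-up always executes:
@contextlib.contextmanager
def safe_resource():
    print("initializing...")
    try:
        yield "the resource"
    finally:
        print("cleaning up the resource...")

with safe_resource() as r:
    print("acquired: ", r)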
11,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Preprocessing using Dataflow </h1>
This notebook illustrates
Step1: Run the command again if you are getting oauth2client error.
Note
Step2: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
Step4: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
Step6: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options
Step7: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step. | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
pip install --user apache-beam[gcp]==2.16.0
Explanation: <h1> Preprocessing using Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
End of explanation
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
import apache_beam as beam
print(beam.__version__)
Explanation: Run the command again if you are getting oauth2client error.
Note: You may ignore the following responses in the cell output above:
ERROR (in Red text) related to: witwidget-gpu, fairing
WARNING (in Yellow text) related to: hdfscli, hdfscli-avro, pbr, fastavro, gen_client
<b>Restart</b> the kernel before proceeding further.
Make sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number.
End of explanation
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
End of explanation
# Create SQL query using natality data after the year 2000
query = """
SELECT
  weight_pounds,
  is_male,
  mother_age,
  plurality,
  gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
  publicdata.samples.natality
WHERE year > 2000
"""
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
Explanation: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
End of explanation
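The hashmonth column is what later lets us carve out training and evaluation sets repeatably: every row from the same year-month gets the same hash, so taking the hash modulo 4 sends whole months to one side or the other. A small illustration of the idea on the dataframe we just pulled (a sketch; the real split is done below with the same MOD logic in SQL inside the Beam pipeline):
# Roughly 75% of hash buckets go to training, the rest to evaluation
is_train = df["hashmonth"].abs() % 4 < 3
print(is_train.value_counts())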
import datetime, os
def to_csv(rowdict):
# Pull columns from BQ and create a line
import hashlib
import copy
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
no_ultrasound['is_male'] = 'Unknown'
if rowdict['plurality'] > 1:
no_ultrasound['plurality'] = 'Multiple(2+)'
else:
no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'num_workers': 4,
'max_num_workers': 5
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
    query = """
    SELECT
      weight_pounds,
      is_male,
      mother_age,
      plurality,
      gestation_weeks,
      FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
    FROM
      publicdata.samples.natality
    WHERE year > 2000
      AND weight_pounds > 0
      AND mother_age > 0
      AND plurality > 0
      AND gestation_weeks > 0
      AND month > 0
    """
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
Explanation: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options:
Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!
Read from BigQuery directly using TensorFlow.
Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage.
<p>
However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.
Note that after you launch this, the actual processing is happening on the cloud. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 20 minutes for me.
<p>
If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/
</pre>
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
Explanation: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
End of explanation |
11,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3/ Exercises solutions
Step1: RREF exercises
E3.1
Step2: Verify the solution geometrically
Step3: E3.2
Step4: E3.3
Step5: Matrix equations
Matrix product
E3.5
Compute the following matrix products | Python Code:
# helper code needed for running in colab
if 'google.colab' in str(get_ipython()):
print('Downloading plot_helpers.py to util/ (only needed for colab)')
!mkdir util; wget https://raw.githubusercontent.com/minireference/noBSLAnotebooks/master/util/plot_helpers.py -P util
from sympy import *
init_printing()
%matplotlib inline
import matplotlib.pyplot as mpl
from util.plot_helpers import plot_augmat # helper function
Explanation: 3/ Exercises solutions
End of explanation
AUG = Matrix([
[3, 3, 6],
[2, S(3)/2, 5]])
AUG
AUG.rref()
# solution is x=4, y=-2
Explanation: RREF exercises
E3.1
End of explanation
AUG = Matrix([
[3, 3, 6],
[2, S(3)/2, 5]])
fig = mpl.figure()
plot_augmat(AUG)
# the solution (4,-2) is where the two lines intersect
Explanation: Verify the solution geometrically
End of explanation
A = Matrix([
[3, 3, 6],
[2, S(3)/2, 5]])
A
# First row operation: R1 <-- 1/3*R1
A[0,:] = A[0,:]/3
A
# Second row operation: R2 <-- R2 - 2*R1
A[1,:] = A[1,:] - 2*A[0,:]
A
# Third row operation: R2 <-- -2*R2
A[1,:] = -2*A[1,:]
A
# Fourth row operation: R1 <-- R1 - R2
A[0,:] = A[0,:] - A[1,:]
A
Explanation: E3.2
End of explanation
AUGA = Matrix([
[3, 3, 6],
[1, 1, 5]])
AUGA.rref()
# no solutions since second row 0x+0y == 1 is impossible
# verify geometrically
AUGA = Matrix([
[3, 3, 6],
[1, 1, 5]])
fig = mpl.figure()
plot_augmat(AUGA)
# no solution since the two lines are parallel
AUGB = Matrix([
[3, 3, 6],
[2, S(3)/2, 3]])
AUGB.rref()
# one solution x=0, y=2
# verify geometrically
AUGB = Matrix([
[3, 3, 6],
[2, S(3)/2, 3]])
fig = mpl.figure()
plot_augmat(AUGB)
# the solution (0,2) is where the two lines intersect
AUGC = Matrix([
[3, 3, 6],
[1, 1, 2]])
AUGC.rref()
# infinitely many soln's of the form point + s*dir, for s in \mathbb{R}
# to complete the solution,
# observe the second column is a free varible y = s
# and thus we have these equations:
# x + s = 2
# 0x + 0s = 0 (trivial eqn.)
# y = s (def'n of free variable)
# thus solution is:
# [x,y] = [2-s,s] = [2,0] + s*[-1,1] for s in \mathbb{R}
# verify geometrically
AUGC = Matrix([
[3, 3, 6],
[1, 1, 2]])
fig = mpl.figure()
plot_augmat(AUGC)
# the solution is infinite since two lines are the same
Explanation: E3.3
End of explanation
# define matrices
P1 = Matrix([
[1, 2],
[3, 4]])
P2 = Matrix([
[5, 6],
[7, 8]])
# compute product
P = P1*P2
P
# define matrices
Q1 = Matrix([[3, 1, 2, 2],
[0, 2, -2, 1]])
Q2 = Matrix([[-2, 3],
[ 1, 0],
[-2, -2],
[ 2, 2]])
# compute product
Q = Q1*Q2
Q
Explanation: Matrix equations
Matrix product
E3.5
Compute the following matrix products:
$$
P =
\begin{bmatrix} 1&2 \\ 3&4 \end{bmatrix}
\begin{bmatrix} 5&6 \\ 7&8 \end{bmatrix}
\quad
\textrm{and}
\quad
Q =
\begin{bmatrix}
3 & 1 & 2 & 2 \\ 0 & 2 & -2 & 1
\end{bmatrix}\!\! %
\begin{bmatrix}
-2 & 3 \\
\ \ 1 & 0 \\
-2 & \!\!-2 \\
\ \ 2 & 2
\end{bmatrix}\!.
$$
End of explanation |
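As a quick hand check of the first product (worked out by hand, matching what the code above computes):
$$
P = \begin{bmatrix} 1\cdot 5 + 2\cdot 7 & 1\cdot 6 + 2\cdot 8 \\ 3\cdot 5 + 4\cdot 7 & 3\cdot 6 + 4\cdot 8 \end{bmatrix}
  = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}.
$$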
11,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predict Concentrations of Metabolites
The objective is to see whether continuous vector embedding can help in the prediction of concentrations of metabolites.
Step1: Load Standards Data
Step2: Try ordinary least squares regression on all the chemical properties
Step3: Performs 10-folds cross-validation on the regression model.
cross_val_predict returns an array of the same size as y where each entry is a prediction obtained by cross validation
Step4: Now try the embedding stuff
Load model
Step5: Get the SMILES strings of our standards molecules, and convert them to canonical SMILES strings using rdkit.
Step6: Pre-process the SMILES strings.
Step7: Try to auto-encode a few SMILES for sanity check.
Step8: Extract the latent vectors
Step9: Visualise the latent vectors
Step10: Concatenate the latent + chemical features for regression
Step11: Make new predictions using X_new
Step12: Try other regressions
Try ridge regression
Step13: Try gaussian process regression with this kernel | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import display, HTML
from scipy import stats
from sklearn import linear_model
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_predict
from sklearn.model_selection import cross_val_score
import pandas as pd
from rdkit import Chem
from keras.models import Sequential, Model, load_model
import sys
sys.path.append('/Users/joewandy/git/keras-molecules')
from molecules.model import MoleculeVAE
from molecules.utils import load_dataset
from embedding import to_one_hot_array, get_input_arr, autoencode, encode
from embedding import visualize_latent_rep, get_classifyre, get_scatter_colours
plt.style.use('seaborn-notebook')
Explanation: Predict Concentrations of Metabolites
The objective is to see whether continuous vector embedding can help in the prediction of concentrations of metabolites.
End of explanation
df = pd.read_csv('data_all_standards.csv')
df.head()
df.shape
def plot_graph(df, conc_label, feature_label):
x = np.array(df[feature_label], dtype=float)
y = np.array(df[conc_label], dtype=float)
plt.scatter(x, y, s=10)
slope, intercept, r_value, p_value, std_err=stats.linregress(x, y)
z = np.polyfit(x, y, 1)
p = np.poly1d(z)
plt.plot(x, p(x), "r--")
print ("r_value = " + str(r_value))
plt.title(feature_label + ' v intensity')
plt.xlabel(feature_label)
plt.ylabel('Intensity (@20uM)')
plt.text((np.min(x)+1), (np.max(y)-1), "y=%.6fx+(%.6f)"%(z[0], z[1]))
return y
y = plot_graph(df, '4.321928095', 'ASA_P')
Explanation: Load Standards Data
End of explanation
X = df[['molP', 'TPSA9', 'VDWSA',
'ASA', 'ASA+', 'ASA-',
'ASA_H', 'ASA_P', 'MW',
'NC', 'PC', 'HLB',
'Sol', 'Neg', 'Pos',
'Ref', 'logP', 'logD',
'TPSA', 'H-donors', 'H-acceptors']].values.astype(float)
print X.shape
print y.shape
Explanation: Try ordinary least squares regression on all the chemical properties
End of explanation
k = 10
lr = LinearRegression()
predicted = cross_val_predict(lr, X, y, cv=k)
fig, ax = plt.subplots()
ax.scatter(y, predicted, s=10)
slope, intercept, r_value, p_value, std_err=stats.linregress(y, predicted)
z = np.polyfit(y, predicted, 1)
p = np.poly1d(z)
ax.plot(y, p(y), "r--")
print ("r_value = " + str(r_value))
ax.set_xlim((10, 35))
ax.set_ylim((10, 35))
ax.plot(ax.get_xlim(), ax.get_ylim(), ls="--", c=".3")
ax.set_xlabel('Measured Intensity')
ax.set_ylabel('Predicted Intensity')
plt.title('Predicted vs. Measured Intensity @20uM (10-folds CV)', size=14)
Explanation: Performs 10-folds cross-validation on the regression model.
cross_val_predict returns an array of the same size as y where each entry is a prediction obtained by cross validation
End of explanation
base_dir = '/Users/joewandy/git/keras-molecules/'
data_file = base_dir + 'data/smiles_500k_processed.h5'
model_file = base_dir + 'data/smiles_500k_model_292.h5'
latent_dim = 292
_, charset = load_dataset(data_file, split=False)
print charset
model = MoleculeVAE()
model.load(charset, model_file, latent_rep_size=latent_dim)
Explanation: Now try the embedding stuff
Load model
End of explanation
smiles_list = df['Smiles'].values.tolist()
smiles_list = [Chem.MolToSmiles(Chem.MolFromSmiles(x)) for x in smiles_list]
Explanation: Get the SMILES strings of our standards molecules, and convert them to canonical SMILES strings using rdkit.
End of explanation
input_array = get_input_arr(smiles_list, charset)
Explanation: Pre-process the SMILES strings.
End of explanation
autoencode(model, charset, input_array, N=10)
Explanation: Try to auto-encode a few SMILES for sanity check.
End of explanation
X_latent = encode(model, input_array)
print X_latent.shape
Explanation: Extract the latent vectors
End of explanation
visualize_latent_rep(input_array, model, latent_dim)
Explanation: Visualise the latent vectors
End of explanation
X_new = np.concatenate((X, X_latent), axis=1)
print X.shape
print X_latent.shape
print X_new.shape
Explanation: Concatenate the latent + chemical features for regression
End of explanation
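One practical caveat (an optional sketch, not part of the original analysis): the chemical descriptors and the 292 latent dimensions live on very different scales, so for the regularised and kernel models used below it may be worth standardising the concatenated features first:
from sklearn.preprocessing import StandardScaler
X_new_scaled = StandardScaler().fit_transform(X_new)
print(X_new_scaled.shape)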
predicted_new = cross_val_predict(lr, X_new, y, cv=k)
def make_plot(predicted, predicted_new, y):
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(14, 7))
ax1.scatter(y, predicted, s=10)
slope, intercept, r_value, p_value, std_err=stats.linregress(y, predicted)
z = np.polyfit(y, predicted, 1)
p = np.poly1d(z)
ax1.plot(y, p(y), "r--")
ax1.set_title('Chemical Features')
ax1.set_ylabel('Predicted Intensity')
ax1.set_xlabel('Measured Intensity')
ax2.scatter(y, predicted_new, s=10)
slope, intercept, r_value, p_value, std_err=stats.linregress(y, predicted_new)
z = np.polyfit(y, predicted_new, 1)
p = np.poly1d(z)
ax2.plot(y, p(y), "r--")
ax2.set_title('Chemical + Embedding Features')
ax2.set_xlabel('Measured Intensity')
ax1.set_xlim((10, 40))
ax2.set_xlim((10, 40))
ax1.set_ylim((10, 40))
ax1.plot(ax1.get_xlim(), ax1.get_ylim(), ls="--", c=".3")
ax2.plot(ax2.get_xlim(), ax2.get_ylim(), ls="--", c=".3")
plt.suptitle('Predicted vs. Measured Intensity @20uM (10-folds CV)', size=14)
make_plot(predicted, predicted_new, y)
Explanation: Make new predictions using X_new
End of explanation
reg = linear_model.Ridge()
predicted = cross_val_predict(reg, X, y, cv=k)
predicted_new = cross_val_predict(reg, X_new, y, cv=k)
make_plot(predicted, predicted_new, y)
Explanation: Try other regressions
Try ridge regression
End of explanation
kernel = 1.0 * RBF(length_scale=100.0, length_scale_bounds=(1e-2, 1e3)) \
+ WhiteKernel(noise_level=1, noise_level_bounds=(1e-10, 1e+1))
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.0)
predicted = cross_val_predict(gp, X, y, cv=k)
predicted_new = cross_val_predict(gp, X_new, y, cv=k)
make_plot(predicted, predicted_new, y)
Explanation: Try gaussian process regression with this kernel:
http://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gpr_noisy.html#sphx-glr-auto-examples-gaussian-process-plot-gpr-noisy-py
End of explanation |
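If you want to see which length scale and noise level the optimiser actually settles on, you can fit the GP once outside of cross-validation and inspect the learned kernel (a quick sketch using the gp object defined above):
gp.fit(X_new, y)
print(gp.kernel_)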
11,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Biblioteca Shapely e Objetos geométricos
Fonte
Step1: Vamos ver como a variável do tipo Point é mostrada no jupyter
Step2: Também podemos imprimir os pontos para ver a sua definição
Step3: Pontos tridimensionais podem ser reconhecidos pela letra maiúscula Z.
Vamos verificar o tipo de dados da variável point
Step4: Podemos observar que o tipo do objeto ponto é um Point do módulo Shapely que é especificado em um formato baseado na biblioteca GEOS do C++, que é um biblioteca padrão de GIS. Uma das bibliotecas utilizadas para construir por exemplo o QGIS.
1.1 Objeto Point - Atributos e funções
Os objetos do tipo Point já possuem atributos e funções internas para realizar operações básicas. Uma das funções mais úteis é a capacidade de extrair as coordenadas e a possibilidade de calcular a distância euclidiana entre dois pontos.
Step5: Como pode ser observado o tipo de dado da variável point_coords é um Shapely CoordinateSequence.
Vamos ver como recuperar as coordenadas deste objeto
Step6: Como podemos ver acima a variável xy contém uma tupla em que as coordendas x e y são armazendas em um array numpy.
Usando os atributos point1.x e point1.y é possível obter as coordendas diretamente como números decimais.
Também é possível calcular a distância entre pontos, que é muito útil em diversas aplicações. A distância retornada é baseada na projeção dos pontos (ex. graus em WGS84, metros em UTM)
Step7: 2. Objeto LineString (Linha)
Criar um objeto LineString é relativamente similar de como o objeto Point foi criado.
Agora em vez de usar uma única tupla de coordenadas, nós podemos contruir uma linha usando uma lista de Point ou um vetor de tuplas com as respectivas coordenadas
Step8: Vamos ver como a variável do tipo LineString é mostrada no jupyter
Step9: Como podemos ver acima, a variável line constitue um multiplo par de coordenadas.
Vamos verificar o tipo de dados da variável line
Step10: 2.1 Objeto LineString - Atributos e funções
O objeto LineString possui diversos atributos e funções internas. Com ele é possível extrair as coordenadas ou o tamanho da linha, calcular o centróide, criar pontos ao longo da linha em distâncias específicas, calcular a menor distância da linha para um ponto específico e etc. A lista completa de funcionalidades pode ser acessada na documentação
Step11: Como podemos observar, as coordenadas novamente são armazenadas em arrays numpy, em que o primeiro array inclui todas as coordenadas x e o segundo array todas as coordendas y.
Podemos extrair somente as coordendas x e y das seguintes formas.
Step12: É possível recuperar atributos específicos, como o tamanho da linha e o ponto central (centróide) diretamente do objeto
Step13: Com estas informações, já podemos realizar muitas tarefas construindo aplicações em mapas, e ainda não calculamos nada ainda. Estes atributos estão embutidos em todos os objetos LineString que são criados.
É importante perceber que o centróide retornado no exemplo acima é um objeto Point, que possui suas próprias funções já mencionadas anteriormente.
3. Objeto Polygon (Polígono)
Para criar um objeto Polygon usaremos a mesma lógica do Point e LineString, porém na criação do objeto Polygon só podemos utilizar uma sequência de coordendas.
Para criar um polígono são necessários pelo menos três coordendas (que basicamente formam um triângulo).
Step14: Vamos ver como a variável do tipo Polygon é mostrada no jupyter
Step15: Vamos verificar o tipo de dados da variável polygon
Step16: Perceba que a representação do Polygon possui dois parentesis ao redor das coordendas (ex.
Step17: ```
Help on Polygon in module shapely.geometry.polygon object
Step18: Vamos ver como a variável o nosso Polygon é mostrado no jupyter
Step19: Como podemos observar, o Polygon agora possui duas tuplas diferentes de coordendas. A primeira representa o exterior e a segunda representa o buraco presente no interior do polígono.
3.1 Objeto Polygon - Atributos e funções
Podemos novamente acessar diferentes atributos diretamente do objeto Polygon, que podem ser bastante úteis para muitas análises, como
Step20: Como podemos ver acima, de maneira direta podemos acessar diferentes atributos do objeto Polygon. Porém perceba que o tamanho do perímetro exterior foi obtido em graus decimais, porque estamos utilizando coordendas de latitude e longitude na entrada do nosso polígono. Posteriormente vamos aprender a mudar a projeção desses dados e ser capaz de obter as mesmas informações em metros.
4. Coleção de geometrias (opcional)
Em algumas ocasiões é necessário armazenar multiplas linhas ou polígonos em um único objeto (ex
Step21: Vamos ver como as nossas coleções de geometrias são mostradas no jupyter
Step22: Podemos observar que as saídas são similares as geometrias básicas que foram criadas anteriormente, más agora esses objetos são extensíveis para multiplos pontos, linhas e polígonos.
4.1 Coleção de geometrias - Atributos e funções
Podemos também utilizar muitas funções internas especificas destas coleções, como o Convex Hull
Utilizando a nossa estrutura anterior de três pontos
Step23: Vamos chamar a função interna Convex Hull
Step24: Outros atributos internos da coleção de geometria
Step25: Também podemos acessar diferentes itens dentro de nossas coleções de geometria. Podemos, por exemplo, acessar um único polígono de nosso objeto MultiPolygon, referindo-se ao índice
Step26: Como mostrado acima podemos ver que o objeto MultiPolygon possui os mesmos atributos que o objeto Polygon, mas agora métodos como cálculo da área retorna a soma das áreas de todos os polígonos presentes no objeto.
Também existem algumas funções extras disponíveis apenas para estas coleções, como is_valid que retorna se o polígono ou linhas possuem interseção um com o outro. | Python Code:
# Import necessary geometric objects from shapely module
from shapely.geometry import Point, LineString, Polygon
# Create Point geometric object(s) with coordinates
point1 = Point(2.2, 4.2)
point2 = Point(7.2, -25.1)
point3 = Point(9.26, -2.456)
point3D = Point(9.26, -2.456, 0.57)
Explanation: The Shapely library and geometric objects
Source:
this material is a translation and adaptation of the notebook: <br/> https://github.com/Automating-GIS-processes/site/blob/master/source/notebooks/L1/geometric-objects.ipynb
Spatial data model
Fundamental objects that can be used in Python with the Shapely library
The fundamental geometric objects for working with georeferenced data are Points, Lines and Polygons. In Python we can use the Shapely module to work with these Geometric Objects. Among its features we can mention:
Creating a Line or Polygon from a Collection of Point geometries;
Computing the area, length, bounds, etc. of geometric objects;
Performing geometric operations such as Union, Difference, Distance, etc.;
Performing spatial queries between geometries, such as Intersects, Touches, Crosses, Within, etc.
Geometric objects consist of coordinate tuples, where:
Point: represents a point in space. It can be two-dimensional (x, y) or three-dimensional (x, y, z);
LineString: represents a sequence of points that form a line. A line consists of at least two points.
Polygon: represents a filled polygon, formed by a list of at least three points that define the exterior ring. Polygons can also have interior openings (holes).
It is also possible to build collections of geometric objects, such as:
MultiPoint: represents a collection of Points;
MultiLineString: represents a collection of LineStrings;
MultiPolygon: represents a collection of Polygons.
The Shapely module can be installed in our environment with the command:
conda install shapely
1. Point object
Creating a Point object is easy: just pass the x and y coordinates to Point() (a z coordinate can also be included):
End of explanation
point1
Explanation: Let's see how a Point variable is displayed in Jupyter:
End of explanation
print(point1)
print(point3D)
Explanation: We can also print the points to see their definition:
End of explanation
# What is the type of the point?
print(type(point1))
Explanation: Three-dimensional points can be recognized by the capital letter Z.
Let's check the data type of the point variable:
End of explanation
# Get the coordinates
point_coords = point1.coords
# What is the type of this?
type(point_coords)
Explanation: We can see that the point object's type is a Point from the Shapely module, specified in a format based on the GEOS C++ library, a standard GIS library used to build, for example, QGIS.
1.1 Point object - attributes and functions
Point objects already have built-in attributes and functions for basic operations. One of the most useful is the ability to extract the coordinates and to compute the Euclidean distance between two points.
End of explanation
# Get x and y coordinates
xy = point_coords.xy
# Get only x coordinates of Point1
x = point1.x
# What about the y coordinate?
y = point1.y
# Print out
print("xy:", xy, "\n")
print("x:", x, "\n")
print("y:", y)
Explanation: As we can see, the data type of the point_coords variable is a Shapely CoordinateSequence.
Let's see how to retrieve the coordinates of this object:
End of explanation
# Calculate the distance between point1 and point2
point_dist = point1.distance(point2)
print("Distance between the points is {0:.2f} decimal degrees".format(point_dist))
Explanation: As we can see above, the xy variable contains a tuple in which the x and y coordinates are stored in numpy arrays.
Using the point1.x and point1.y attributes it is possible to get the coordinates directly as decimal numbers.
It is also possible to compute the distance between points, which is very useful in many applications. The returned distance is based on the projection of the points (e.g. degrees in WGS84, meters in UTM). Let's compute the distance between point 1 and point 2:
Vamos calcular a distância entre o ponto 1 e o ponto 2:
End of explanation
# Create a LineString from our Point objects
line = LineString([point1, point2, point3])
# It is also possible to produce the same outcome using coordinate tuples
line2 = LineString([(2.2, 4.2), (7.2, -25.1), (9.26, -2.456)])
Explanation: 2. LineString object
Creating a LineString object is fairly similar to how the Point object was created.
Now, instead of a single coordinate tuple, we construct the line from a list of Point objects or from an array of tuples with the respective coordinates:
End of explanation
# Visualize the line
line
print("line: \n", line, "\n")
print("line2: \n", line2, "\n")
Explanation: Let's see how a LineString variable is displayed in Jupyter:
End of explanation
print("Object data type:", type(line))
print("Geometry type as text:", line.geom_type)
Explanation: As we can see above, the line variable is made up of multiple coordinate pairs.
Let's check the data type of the line variable:
End of explanation
# Get x and y coordinates of the line
lxy = line.xy
print(lxy)
Explanation: 2.1 LineString object - Attributes and functions
The LineString object has several built-in attributes and functions. With it we can extract the coordinates or the length of the line, calculate the centroid, create points along the line at specific distances, calculate the shortest distance from the line to a given point, and more. The full list of functionality is available in the Shapely documentation. Let's use a few of them.
We can extract the coordinates of a LineString similarly to the Point object:
End of explanation
# Extract x coordinates
line_x = lxy[0]
# Extract y coordinates straight from the LineString object by referring to the array at index 1
line_y = line.xy[1]
print('line_x:\n', line_x, '\n')
print('line_y:\n', line_y)
Explanation: As we can see, the coordinates are again stored as NumPy arrays, where the first array contains all the x coordinates and the second array all the y coordinates.
We can extract only the x or the y coordinates in the following ways.
End of explanation
# Get the length of the line
l_length = line.length
# Get the centroid of the line
l_centroid = line.centroid
# What type is the centroid?
centroid_type = type(l_centroid)
# Print the outputs
print("Length of our line: {0:.2f}".format(l_length))
print("Centroid of our line: ", l_centroid)
print("Type of the centroid:", centroid_type)
Explanation: Specific attributes, such as the length of the line and its center point (centroid), can be retrieved directly from the object:
End of explanation
# Create a Polygon from the coordinates
poly = Polygon([(2.2, 4.2), (7.2, -25.1), (9.26, -2.456)])
# We can also use our previously created Point objects (same outcome)
# --> notice that Polygon object requires x,y coordinates as input
poly2 = Polygon([[p.x, p.y] for p in [point1, point2, point3]])
Explanation: With this information we can already carry out many tasks when building map applications, and we have not computed anything ourselves yet. These attributes are built into every LineString object that is created.
Note that the centroid returned in the example above is a Point object, which has its own functions, as mentioned earlier.
3. Polygon object
To create a Polygon object we use the same logic as for Point and LineString, but a Polygon can only be constructed from a sequence of coordinates.
A polygon needs at least three coordinates (which basically form a triangle).
End of explanation
poly2
print('poly:', poly)
print('poly2:', poly2)
Explanation: Let's see how a Polygon variable is displayed in Jupyter:
End of explanation
print("Object data type:", type(poly))
print("Geometry type as text:", poly.geom_type)
Explanation: Let's check the data type of the polygon variable:
End of explanation
# # Help function to show the documentation of Shapely's Polygon
# help(Polygon)
Explanation: Notice that the Polygon representation has double parentheses around the coordinates (e.g. POLYGON ((<values>)) ). This is because a Polygon object can have openings (holes) inside it.
As shown in the Polygon documentation (using Python's help function), a polygon can be constructed from exterior coordinates and optional interior coordinates, where the interior coordinates create a hole inside the polygon.
End of explanation
# First we define our exterior
poly_exterior = [(-180, 90), (-180, -90), (180, -90), (180, 90)]
# Let's create a single big hole where we leave ten decimal degrees at the boundaries of the polygon
# Notice: there could be multiple holes, thus we need to provide a list of holes
hole = [[(-100, 50), (-100, -50), (100, -50), (100, 50)]]
# Polygon without a hole
poly = Polygon(shell=poly_exterior)
# Now we can construct our Polygon with the hole inside
poly_has_a_hole = Polygon(shell=poly_exterior, holes=hole)
Explanation: ```
Help on Polygon in module shapely.geometry.polygon object:
class Polygon(shapely.geometry.base.BaseGeometry)
| A two-dimensional figure bounded by a linear ring
|
| A polygon has a non-zero area. It may have one or more negative-space
| "holes" which are also bounded by linear rings. If any rings cross each
| other, the feature is invalid and operations on it may fail.
|
| Attributes
| ----------
| exterior : LinearRing
| The ring which bounds the positive space of the polygon.
| interiors : sequence
| A sequence of rings which bound all existing holes.
```
Let's see how to create a Polygon with an internal hole. First we define a bounding box, and then we create a hole in its interior.
End of explanation
poly_has_a_hole
print('poly:', poly)
print('poly_has_a_hole:', poly_has_a_hole)
print('type:', type(poly_has_a_hole))
Explanation: Let's see how our Polygon variable is displayed in Jupyter:
End of explanation
# Get the centroid of the Polygon
poly_centroid = poly.centroid
# Get the area of the Polygon
poly_area = poly.area
# Get the bounds of the Polygon (i.e. bounding box)
poly_bbox = poly.bounds
# Get the exterior of the Polygon
poly_ext = poly.exterior
# Get the length of the exterior
poly_ext_length = poly_ext.length
# Print the outputs
print("Poly centroid: ", poly_centroid)
print("Poly Area: ", poly_area)
print("Poly Bounding Box: ", poly_bbox)
print("Poly Exterior: ", poly_ext)
print("Poly Exterior Length: ", poly_ext_length)
Explanation: As we can see, the Polygon now has two different coordinate tuples. The first represents the exterior, and the second represents the hole inside the polygon.
3.1 Polygon object - Attributes and functions
We can again access different attributes directly from the Polygon object; these can be very useful for many analyses, such as getting the area, centroid, bounding box, exterior and perimeter (exterior length).
Here we can see some of the available attributes and how to access them:
End of explanation
# Import collections of geometric objects + bounding box
from shapely.geometry import MultiPoint, MultiLineString, MultiPolygon, box
# Create a MultiPoint object of our points 1,2 and 3
multi_point = MultiPoint([point1, point2, point3])
# It is also possible to pass coordinate tuples inside
multi_point2 = MultiPoint([(2.2, 4.2), (7.2, -25.1), (9.26, -2.456)])
# We can also create a MultiLineString with two lines
line1 = LineString([point1, point2])
line2 = LineString([point2, point3])
multi_line = MultiLineString([line1, line2])
# MultiPolygon can be done in a similar manner
# Let's divide our world into western and eastern hemispheres with a hole on the western hemisphere
# --------------------------------------------------------------------------------------------------
# Let's create the exterior of the western part of the world
west_exterior = [(-180, 90), (-180, -90), (0, -90), (0, 90)]
# Let's create a hole --> remember there can be multiple holes, thus we need to have a list of hole(s).
# Here we have just one.
west_hole = [[(-170, 80), (-170, -80), (-10, -80), (-10, 80)]]
# Create the Polygon
west_poly = Polygon(shell=west_exterior, holes=west_hole)
# Let's create the Polygon of our Eastern hemisphere polygon using bounding box
# For bounding box we need to specify the lower-left corner coordinates and upper-right coordinates
min_x, min_y = 0, -90
max_x, max_y = 180, 90
# Create the polygon using box() function
east_poly_box = box(minx=min_x, miny=min_y, maxx=max_x, maxy=max_y)
# Let's create our MultiPolygon. We can pass multiple Polygon -objects into our MultiPolygon as a list
multi_poly = MultiPolygon([west_poly, east_poly_box])
# Print outputs
print("MultiPoint:", multi_point)
print("MultiLine: ", multi_line)
print("Bounding box: ", east_poly_box)
print("MultiPoly: ", multi_poly)
Explanation: As we can see above, we can access different attributes of the Polygon object directly. Note, however, that the exterior length was returned in decimal degrees, because we used latitude and longitude coordinates as input for our polygon. Later we will learn how to change the projection of these data so we can get the same information in meters.
4. Geometry collections (optional)
On some occasions it is necessary to store multiple lines or polygons in a single object (e.g. a geometry that represents several polygons). These collections are implemented through the objects:
MultiPoint: represents a collection of Point objects;
MultiLineString: represents a collection of LineString objects;
MultiPolygon: represents a collection of Polygon objects.
These collections are not computationally significant, but they are useful for modeling certain kinds of features, for example a Y-shaped street with a MultiLineString, or the islands of an archipelago with a MultiPolygon.
Creating and visualizing a minimum bounding box around your point data is useful for many purposes (e.g. trying to understand the extent of your data); next we will see how this can be done.
End of explanation
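As a small sketch of that minimum-bounding-box idea (using only Shapely attributes; the coordinates are the three points used above):
```python
from shapely.geometry import MultiPoint

points = MultiPoint([(2.2, 4.2), (7.2, -25.1), (9.26, -2.456)])
print(points.bounds)    # (minx, miny, maxx, maxy) of the point set
print(points.envelope)  # the same bounding box as a Polygon geometry
```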
multi_point
multi_line
west_poly
east_poly_box
multi_poly
Explanation: Let's see how our geometry collections are displayed in Jupyter:
End of explanation
multi_point
Explanation: We can see that the outputs are similar to the basic geometries created earlier, but now these objects extend to multiple points, lines and polygons.
4.1 Geometry collections - Attributes and functions
We can also use many built-in functions specific to these collections, such as the Convex Hull.
Using our previous structure of three points:
End of explanation
# Convex Hull of our MultiPoint --> https://en.wikipedia.org/wiki/Convex_hull
convex = multi_point.convex_hull
print("Convex hull of the points: ", convex)
convex
Explanation: Let's call the built-in Convex Hull function:
End of explanation
# How many lines do we have inside our MultiLineString?
lines_count = len(multi_line)
# Print output:
print("Number of lines in MultiLineString:", lines_count)
# Let's calculate the area of our MultiPolygon
multi_poly_area = multi_poly.area
Explanation: Other built-in attributes of the geometry collections:
End of explanation
# Let's calculate the area of our Western hemisphere (with a hole) which is at index 0
west_area = multi_poly[0].area
# Print outputs:
print("Area of our MultiPolygon:", multi_poly_area)
print("Area of our Western Hemisphere polygon:", west_area)
Explanation: We can also access individual items inside our geometry collections. We can, for example, access a single polygon in our MultiPolygon object by referring to its index:
End of explanation
valid = multi_poly.is_valid
print("Is polygon valid?: ", valid)
Explanation: As shown above, the MultiPolygon object has the same attributes as the Polygon object, but now methods such as the area calculation return the sum of the areas of all polygons present in the object.
There are also some extra functions available only for these collections, such as is_valid, which returns whether the polygons or lines intersect with one another.
End of explanation |
11,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Analysis
If you want to do your own analysis of the data in db.sqlite3 and are going to use Python, you can take advantage of some Django code. This Jupyter Notebook will help you to enable the Django code.
Setup and run
To set up your environment to run this Jupyter notebook you need to install some packages. Our suggestion is to run
~~~
$ python -m pip install -r requirements.txt
$ python -m pip install -r requirements-jupyter.txt
~~~
from your terminal.
To start Jupyter server, run
~~~
$ python manage.py shell_plus --notebook
~~~
Basic (Django Part)
You can use all the power of Django in the notebook. For example, to gain access to the models you can use
Step1: To select all the fellows you can use
Step2: Remember that the Claimant table can have entries that aren't fellows, which is why we need to use .filter(fellow=True).
Basic (Pandas Part)
You can use Pandas with Django.
Step3: When converting a Django QuerySet into a Pandas DataFrame you will need to use read_frame as in the previous example, because so far Pandas can't process Django QuerySets directly.
Step4: Pandas table as CSV and as Data URIs
For the report, we need the Pandas table as CSV encoded inside a data URI so users can download the CSV file without querying the server.
Step5: The output of b64encode can be included in
<a download="fellows.csv" href="data
Step6: Get a list of all tags
Step7: You can loop over each tag
Step8: Filter for a specific tag
Step9: You can query for part of the name of the tag | Python Code:
import lowfat.models as models
Explanation: Data Analysis
If you want to do your own analysis of the data in db.sqlite3 and are going to use Python, you can take advantage of some Django code. This Jupyter Notebook will help you to enable the Django code.
Setup and run
To set up your environment to run this Jupyter notebook you need to install some packages. Our suggestion is to run
~~~
$ python -m pip install -r requirements.txt
$ python -m pip install -r requirements-jupyter.txt
~~~
from your terminal.
To start Jupyter server, run
~~~
$ python manage.py shell_plus --notebook
~~~
Basic (Django Part)
You can use all the power of Django in the notebook. For example, to gain access to the models you can use
End of explanation
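For instance, a couple of quick ORM queries (a sketch; Claimant and its fellow field are the model and field used in the cells below):
```python
print(models.Claimant.objects.count())                      # total number of claimants
print(models.Claimant.objects.filter(fellow=True).count())  # how many of them are fellows
```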
fellows = models.Claimant.objects.filter(fellow=True)
fellows
Explanation: To select all the fellows you can use
End of explanation
from django_pandas.io import read_frame
fellows = read_frame(fellows.values())
fellows
Explanation: Remember that the Claimant table can have entries that aren't fellows, which is why we need to use .filter(fellow=True).
Basic (Pandas Part)
You can use Pandas with Django.
End of explanation
expenses = read_frame(Expense.objects.all())
expenses
expenses.sum()
expenses["amount_authorized_for_payment"].sum()
Explanation: When converting a Django QuerySet into a Pandas DataFrame you will need to use read_frame as in the previous example, because so far Pandas can't process Django QuerySets directly.
End of explanation
from base64 import b64encode
csv = fellows.to_csv(
header=True,
index=False
)
b64encode(csv.encode())
Explanation: Pandas table as CSV and as Data URIs
For the report, we need the Pandas table as CSV encoded inside a data URI so users can download the CSV file without querying the server.
End of explanation
funds = models.Fund.objects.all()
read_frame(funds)
Explanation: The output of b64encode can be included in
<a download="fellows.csv" href="data:application/octet-stream;charset=utf-16le;base64,{{ b64encode_output | safe }}">Download the data as CSV.</a>
so that the user can download the data (a sketch of wiring this into a view follows after this cell).
Basic (Tagulous)
We use Tagulous as a tag library.
End of explanation
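A minimal sketch of how the encoded CSV could be handed to that template (the view and template names are hypothetical, not part of lowfat):
```python
from base64 import b64encode
from django.shortcuts import render
from django_pandas.io import read_frame
import lowfat.models as models

def fellows_report(request):  # hypothetical view name, for illustration only
    fellows = read_frame(models.Claimant.objects.filter(fellow=True).values())
    csv = fellows.to_csv(header=True, index=False)
    context = {"b64encode_output": b64encode(csv.encode()).decode()}
    # report.html (hypothetical) embeds it as
    # <a download="fellows.csv" href="data:application/octet-stream;base64,{{ b64encode_output | safe }}">...</a>
    return render(request, "report.html", context)
```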
funds[0].activity.all()
Explanation: Get a list of all tags:
End of explanation
for tag in funds[0].activity.all():
print(tag.name)
Explanation: You can loop over each tag:
End of explanation
models.Fund.objects.filter(activity="ssi2/fellowship")
Explanation: Filter for a specific tag:
End of explanation
models.Fund.objects.filter(activity__name__contains="fellowship")
for fund in models.Fund.objects.filter(activity__name__contains="fellowship"):
print("{} - {}".format(fund, fund.activity.all()))
Explanation: You can query for part of the name of the tag:
End of explanation |
11,415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table align="left">
<td>
<a href="https
Step1: Restart the Kernel
Once you've installed the {packages}, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type
Follow before you begin in Guide
{ add link to any online before you begin tutorial on the product }
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AI Platform APIs and Compute Engine APIs.
If you are running this notebook locally, you will need to install Google Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project id here.
Step4: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step5: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You may
not use a Multi-Regional Storage bucket for training with AI Platform.
Step6: Only if your bucket doesn't already exist
Step7: Finally, validate access to your Cloud Storage bucket by examining its contents
Step8: Import libraries and define constants
{Put all your imports and installs up into a setup section.}
Step9: Notes
The tips below are specific to notebooks for Tensorflow/Scikit-Learn/PyTorch/XGBoost code.
General
Include the collapsed license at the top (this uses Colab's "Form" mode to hide the cells).
Only include a single H1 title.
Include the button-bar immediately under the H1.
Include an overview section before any code.
Put all your installs and imports in a setup section.
Always include the three future imports.
Save the notebook with the Table of Contents open.
Write python3 compatible code.
Keep cells small (~max 20 lines).
Python Style guide
As Guido van Rossum said, “Code is read much more often than it is written”. Please make sure you are following
the guidelines to write Python code from the Python style guide.
Writing readable code here is critical. Specially when working with Notebooks
Step10: Keep examples quick. Use small datasets, or small slices of datasets. You don't need to train to convergence, train until it's obvious it's making progress.
For a large example, don't try to fit all the code in the notebook. Add python files to tensorflow examples, and in the notebook run | Python Code:
%pip install -U missing_or_updating_package --user
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/templates/ai_platform_notebooks_template_hybrid.ipynb"">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/templates/ai_platform_notebooks_template_hybrid.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
{Include a paragraph or two explaining what this example demonstrates, who should be interested in it, and what you need to know before you get started.}
Dataset
{Include a paragraph with Dataset information and where to obtain it}
Objective
In this notebook, you will learn how to {Complete the sentence explaining briefly what you will learn from the notebook, example
ML Training, HP tuning, Serving} The steps performed include:
* { add high level bullets for the steps of what you will perform in the notebook }
Costs
Example:
This tutorial uses billable components of Google Cloud Platform (GCP):
Cloud AI Platform
Cloud Storage
Learn about Cloud AI Platform
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or AI Platform Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3.
Activate that environment and run pip install jupyter in a shell to install
Jupyter.
Run jupyter notebook in a shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install additional dependencies not installed in Notebook environment
(e.g. XGBoost, adanet, tf-hub)
Use the latest major GA version of the framework.
End of explanation
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
Once you've installed the {packages}, you need to restart the notebook kernel so it can find the packages.
End of explanation
# Get your GCP project id from gcloud
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID=shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type
Follow before you begin in Guide
{ add link to any online before you begin tutorial on the product }
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AI Platform APIs and Compute Engine APIs.
If you are running this notebook locally, you will need to install Google Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Project ID
If you don't know your project ID, you may be able to get your PROJECT_ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
Explanation: Otherwise, set your project id here.
End of explanation
import sys, os
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AI Platform, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key
page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
Machine Learning Engine > AI Platform Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
REGION = "us-central1" #@param {type:"string"}
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You may
not use a Multi-Regional Storage bucket for training with AI Platform.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import sys, os
Explanation: Import libraries and define constants
{Put all your imports and installs up into a setup section.}
End of explanation
#Build the model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu', input_shape=(None, 5)),
tf.keras.layers.Dense(3)
])
# Run the model on a single batch of data, and inspect the output.
result = model(tf.constant(np.random.randn(10,5), dtype = tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
# Compile the model for training
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.categorical_crossentropy)
Explanation: Notes
The tips below are specific to notebooks for Tensorflow/Scikit-Learn/PyTorch/XGBoost code.
General
Include the collapsed license at the top (this uses Colab's "Form" mode to hide the cells).
Only include a single H1 title.
Include the button-bar immediately under the H1.
Include an overview section before any code.
Put all your installs and imports in a setup section.
Always include the three future imports.
Save the notebook with the Table of Contents open.
Write python3 compatible code.
Keep cells small (~max 20 lines).
Python Style guide
As Guido van Rossum said, “Code is read much more often than it is written”. Please make sure you are following
the guidelines to write Python code from the Python style guide.
Writing readable code here is critical, especially when working with notebooks: this will help other people read and understand your code. Having guidelines that you follow and recognize will make it easier for others to read your code.
Google Python Style guide
Code content
Use the highest level API that gets the job done (unless the goal is to demonstrate the low level API). For example, when using Tensorflow:
Use tf.keras.Sequential > Keras functional API > Keras model subclassing > ...
Use model.fit > model.train_on_batch > manual GradientTapes.
Use eager-style code.
Use tensorflow_datasets and tf.data where possible.
Text
Use an imperative style. "Run a batch of images through the model."
Use sentence case in titles/headings.
Use short titles/headings: "Download the data", "Build the Model", "Train the model".
Code Style
Notebooks are for people. Write code optimized for clarity.
Demonstrate small parts before combining them into something more complex. Like below:
End of explanation
# Delete model version resource
! gcloud ai-platform versions delete $MODEL_VERSION --quiet --model $MODEL_NAME
# Delete model resource
! gcloud ai-platform models delete $MODEL_NAME --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r $JOB_DIR
# If training job is still running, cancel it
! gcloud ai-platform jobs cancel $JOB_NAME --quiet
Explanation: Keep examples quick. Use small datasets, or small slices of datasets. You don't need to train to convergence, train until it's obvious it's making progress.
For a large example, don't try to fit all the code in the notebook. Add python files to tensorflow examples, and in the notebook run:
! pip install git+https://github.com/tensorflow/examples
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
{Include commands to delete individual resources below}
End of explanation |
11,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow Addons Optimizers: ConditionalGradient
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Build the model
Step3: Prepare the data
Step5: Define a custom callback function
Step6: Train and evaluate: using CG as the optimizer
Simply replace a typical Keras optimizer with the new TFA optimizer
Step7: Train and evaluate: using SGD as the optimizer
Step8: Frobenius norm of the weights: CG vs. SGD
The current implementation of the CG optimizer is based on the Frobenius norm, and treats the Frobenius norm as a regularizer in the objective function. We therefore compare the regularization effect of CG with that of the SGD optimizer, which does not use a Frobenius-norm regularizer.
Step9: Training and validation accuracy: CG vs. SGD | Python Code:
#@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
from matplotlib import pyplot as plt
# Hyperparameters
batch_size=64
epochs=10
Explanation: TensorFlow Addons Optimizers: ConditionalGradient
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/addons/tutorials/optimizers_conditionalgradient"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/addons/tutorials/optimizers_conditionalgradient.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
</table>
Overview
This notebook demonstrates how to use the Conditional Gradient optimizer from the Addons package.
ConditionalGradient
Constraining the parameters of a neural network has been shown to be beneficial in training because of the underlying regularization effects. Often, parameters are constrained via a soft penalty (which never guarantees that the constraint is satisfied) or via a projection operation (which is computationally expensive). Conditional gradient (CG) optimizers, on the other hand, enforce the constraints strictly without the need for an expensive projection step. They work by minimizing a linear approximation of the objective within the constraint set. In this notebook, we demonstrate the application of a Frobenius norm constraint via the CG optimizer on the MNIST dataset. CG is now available as a TensorFlow API. More details of the optimizer can be found at https://arxiv.org/pdf/1803.06453.pdf
Setup
End of explanation
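A sketch of the update behind this optimizer, written as the classic conditional-gradient (Frank-Wolfe) step over a Frobenius-norm ball of radius $\lambda$ (the exact parameterization inside tfa.optimizers.ConditionalGradient may differ slightly):
$$s_t = \arg\min_{\|s\|_F \le \lambda} \langle s, \nabla f(w_t)\rangle = -\lambda \, \frac{\nabla f(w_t)}{\|\nabla f(w_t)\|_F}, \qquad w_{t+1} = \gamma \, w_t + (1-\gamma)\, s_t$$
Here $\gamma$ plays the role of the learning_rate argument and $\lambda$ of lambda_ in the code below.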
model_1 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
Explanation: Build the model
End of explanation
# Load MNIST dataset as NumPy arrays
dataset = {}
num_validation = 10000
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Preprocess the data
x_train = x_train.reshape(-1, 784).astype('float32') / 255
x_test = x_test.reshape(-1, 784).astype('float32') / 255
Explanation: Prepare the data
End of explanation
def frobenius_norm(m):
    """Calculate the Frobenius norm over the weights of all layers.

    Args:
        m: a list of weight tensors, one or more per layer.
    """
total_reduce_sum = 0
for i in range(len(m)):
total_reduce_sum = total_reduce_sum + tf.math.reduce_sum(m[i]**2)
norm = total_reduce_sum**0.5
return norm
CG_frobenius_norm_of_weight = []
CG_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: CG_frobenius_norm_of_weight.append(
frobenius_norm(model_1.trainable_weights).numpy()))
Explanation: Define a custom callback function
End of explanation
# Compile the model
model_1.compile(
optimizer=tfa.optimizers.ConditionalGradient(
learning_rate=0.99949, lambda_=203), # Utilize TFA optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_cg = model_1.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[CG_get_weight_norm])
Explanation: Train and evaluate: using CG as the optimizer
Simply replace a typical Keras optimizer with the new TFA optimizer
End of explanation
model_2 = tf.keras.Sequential([
tf.keras.layers.Dense(64, input_shape=(784,), activation='relu', name='dense_1'),
tf.keras.layers.Dense(64, activation='relu', name='dense_2'),
tf.keras.layers.Dense(10, activation='softmax', name='predictions'),
])
SGD_frobenius_norm_of_weight = []
SGD_get_weight_norm = tf.keras.callbacks.LambdaCallback(
on_epoch_end=lambda batch, logs: SGD_frobenius_norm_of_weight.append(
frobenius_norm(model_2.trainable_weights).numpy()))
# Compile the model
model_2.compile(
optimizer=tf.keras.optimizers.SGD(0.01), # Utilize SGD optimizer
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy'])
history_sgd = model_2.fit(
x_train,
y_train,
batch_size=batch_size,
validation_data=(x_test, y_test),
epochs=epochs,
callbacks=[SGD_get_weight_norm])
Explanation: Train and evaluate: using SGD as the optimizer
End of explanation
plt.plot(
CG_frobenius_norm_of_weight,
color='r',
label='CG_frobenius_norm_of_weights')
plt.plot(
SGD_frobenius_norm_of_weight,
color='b',
label='SGD_frobenius_norm_of_weights')
plt.xlabel('Epoch')
plt.ylabel('Frobenius norm of weights')
plt.legend(loc=1)
Explanation: Frobenius norm of the weights: CG vs. SGD
The current implementation of the CG optimizer is based on the Frobenius norm, and treats the Frobenius norm as a regularizer in the objective function. We therefore compare the regularization effect of CG with that of the SGD optimizer, which does not use a Frobenius-norm regularizer.
End of explanation
plt.plot(history_cg.history['accuracy'], color='r', label='CG_train')
plt.plot(history_cg.history['val_accuracy'], color='g', label='CG_test')
plt.plot(history_sgd.history['accuracy'], color='pink', label='SGD_train')
plt.plot(history_sgd.history['val_accuracy'], color='b', label='SGD_test')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(loc=4)
Explanation: Training and validation accuracy: CG vs. SGD
End of explanation |
11,417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
STA 208
Step1: Load the following medical dataset with 750 patients. The response variable is survival dates (Y), the predictors are 104 measurements measured at a specific time (numerical variables have been standardized).
Step2: The response variable is Y.
Step3: Exercise 2.1 (10 pts) Perform ridge regression on the method and cross-validate to find the best ridge parameter.
Step4: The natural conclusion is that ridge regularization does not improve the performance based on leave-one-out classification.
Exercise 2.2 (10 pts) Plot the lasso and lars path for each of the coefficients. All coefficients for a given method should be on the same plot, you should get 2 plots. What are the major differences, if any? Are there any 'leaving' events in the lasso path?
Step5: The Lars and Lasso paths look identical, which is due to the lack of any leaving events. Recall that leaving events were the lasso modification to the lars path.
Exercise 2.3 (10 pts) Cross-validate the Lasso and compare the results to the answer to 2.1.
Step6: The optimal cross-validated score for the Lasso is 3.4e6 and that for ridge regression is 3.65e6. Hence, the lasso outperforms ridge regression and OLS.
Exercise 2.4 (15 pts) Obtain the 'best' active set from 2.3, and create a new design matrix with only these variables. Use this to predict the categorical variable $z$ with logistic regression. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge, RidgeCV, Lasso, LassoCV, lars_path, LogisticRegression
from sklearn.preprocessing import scale
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
# dataset path
data_dir = "."
Explanation: STA 208: Homework 2
This is based on the material in Chapters 3, 4.4 of 'Elements of Statistical Learning' (ESL), in addition to lectures 4-6. Chunzhe Zhang came up with the dataset and the analysis in the second section.
Instructions
We use a script that extracts your answers by looking for cells in between the cells containing the exercise statements (beginning with Exercise X.X). So you
MUST add cells in between the exercise statements and add answers within them and
MUST NOT modify the existing cells, particularly not the problem statement
To make markdown, please switch the cell type to markdown (from code) - you can hit 'm' when you are in command mode - and use the markdown language. For a brief tutorial see: https://daringfireball.net/projects/markdown/syntax
In the conceptual exercises you should provide an explanation, with math when necessary, for any answers. When answering with math you should use basic LaTeX, as in
$$E(Y|X=x) = \int_{\mathcal{Y}} f_{Y|X}(y|x) dy = \int_{\mathcal{Y}} \frac{f_{Y,X}(y,x)}{f_{X}(x)} dy$$
for displayed equations, and $R_{i,j} = 2^{-|i-j|}$ for inline equations. (To see the contents of this cell in markdown, double click on it or hit Enter in escape mode.) To see a list of latex math symbols see here: http://web.ift.uib.no/Teori/KURS/WRK/TeX/symALL.html
1. Conceptual Exercises
Exercise 1.1. (5 pts) Ex. 3.29 in ESL
We will assume that X is centered. The intercept is
$$\bar y = \arg \min_{\beta_0} \| y - \beta_0 - X \beta \|_2^2 + \lambda \| \beta \|_2^2$$
and
$$\hat \beta = \arg \min_\beta \| (y - \bar y) - X \beta \|_2^2 + \lambda \| \beta \|_2^2$$
satisfies the Ridge normal equations:
$$(X^\top X + \lambda I) \hat \beta = X^\top y$$
Furthermore, $X^\top y$ has identical values in each coordinate, call it $x^\top y$. Also, $X^\top X$ has the same value $\|x\|_2^2$ throughout.
The solution for a single $x$ is $\hat \beta = x^\top y / (\| x \|_2^2 + \lambda)$.
Let's guess that by symmetry $$\hat \beta_j = \frac{1}{p\|x\|_2^2 + \lambda} x^\top y, \quad \forall j$$.
Then we see that $(X^\top X + \lambda I)_k \hat \beta = \| x \|_2^2 \sum_j \hat \beta_j + \lambda \hat \beta_k = (p \| x \|_2^2 + \lambda) \hat \beta_k = x^\top y$.
Check!
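As a quick numerical sanity check of this closed form (a sketch; the dimensions, seed and $\lambda$ are arbitrary):
```python
import numpy as np

rng = np.random.RandomState(0)
n, p, lam = 50, 4, 2.0
x = rng.randn(n)             # a single feature
X = np.tile(x[:, None], p)   # p identical copies of that feature
y = rng.randn(n)

beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
closed_form = x @ y / (p * (x @ x) + lam)
print(beta_hat, closed_form)  # every coordinate of beta_hat equals closed_form
```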
Exercise 1.2 (5 pts) Ex. 3.30 in ESL
Consider the elastic net,
$$\min_\beta \| y - X \beta \|_2^2 + \lambda (\alpha \|\beta \|_2^2 + (1 - \alpha) \|\beta\|_1)$$
Let's try to absorb the $\ell_2$ penalty into the loss function.
$$y^\top y - 2 y^\top X \beta + \beta^\top X^\top X \beta + \lambda \alpha \beta^\top \beta$$
which is (up to a constant addition)
$$\beta^\top (X^\top X + \lambda \alpha I) \beta - 2 y^\top X \beta.$$
So we can re-write the objective as
$$\min_\beta \beta^\top (X^\top X + \lambda \alpha I) \beta - 2 y^\top X \beta + \lambda (1 - \alpha) \|\beta\|_1$$
which is a lasso problem.
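One standard way to make the lasso form explicit (a sketch of the usual augmentation trick): define
$$\tilde X = \begin{pmatrix} X \\ \sqrt{\lambda\alpha}\, I_p \end{pmatrix}, \qquad \tilde y = \begin{pmatrix} y \\ 0 \end{pmatrix},$$
so that $\|\tilde y - \tilde X \beta\|_2^2 = \|y - X\beta\|_2^2 + \lambda\alpha\|\beta\|_2^2$, and the elastic net is exactly the lasso problem $\min_\beta \|\tilde y - \tilde X \beta\|_2^2 + \lambda(1-\alpha)\|\beta\|_1$.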
Exercise 1.3 (5 pts) $Y \in {0,1}$ follows an exponential family model with natural parameter $\eta$ if
$$P(Y=y) = \exp\left( y \eta - \psi(\eta) \right).$$
Show that when $\eta = x^\top \beta$ then $Y$ follows a logistic regression model.
$$P(Y=1) + P(Y=0) = (1 + \exp(\eta)) \exp(- \psi (\eta)) = 1$$
So $\psi(\eta) = \log(1 + \exp(\eta))$, then one could recognize
$$P(Y=y) = \exp \left(y x^\top \beta - \log (1 + \exp(x^\top \beta))\right)$$
as the logistic model. Or see that
$$\log \frac{P(Y=1)}{P(Y=0)} = \eta = x^\top \beta.$$
2. Data Analysis
End of explanation
sample_data = pd.read_csv(data_dir+"/hw2.csv", delimiter=',')
sample_data.head()
sample_data.V1 = sample_data.V1.eq('Yes').mul(1)
Explanation: Load the following medical dataset with 750 patients. The response variable is survival dates (Y), the predictors are 104 measurements measured at a specific time (numerical variables have been standardized).
End of explanation
X = np.array(sample_data.iloc[:,range(2,104)])
y = np.array(sample_data.iloc[:,0])
z = np.array(sample_data.iloc[:,1])
Explanation: The response variable is Y.
End of explanation
X = scale(X)
alphas = 2.**np.arange(20) / 2.**10
ridgecv = RidgeCV(alphas=alphas, cv=None, store_cv_values=True)
ridgecv.fit(X,y)
print(ridgecv.alpha_)
plt.plot(np.log(alphas),ridgecv.cv_values_.mean(axis=0))
plt.xlabel('log(reg-param)')
plt.ylabel('Leave-1-out score')
_ = plt.title('Ridge regression cross-validation')
np.min(ridgecv.cv_values_.mean(axis=0))
Explanation: Exercise 2.1 (10 pts) Perform ridge regression on the method and cross-validate to find the best ridge parameter.
End of explanation
def plot_lars(coefs, lines=False, title="Lars Path"):
xx = np.sum(np.abs(coefs.T), axis=1)
xx /= xx[-1]
plt.plot(xx, coefs.T)
ymin, ymax = plt.ylim()
if lines:
plt.vlines(xx, ymin, ymax, linestyle='dashed')
plt.xlabel('|coef| / max|coef|')
plt.ylabel('Coefficients')
plt.title(title)
plt.axis('tight')
alphas_lars, _, coefs_lars = lars_path(X,y)
plot_lars(coefs_lars)
alphas_lasso, _, coefs_lasso = lars_path(X,y,method='lasso')
plot_lars(coefs_lasso,title="Lasso Path")
Explanation: The natural conclusion is that ridge regularization does not improve the performance based on leave-one-out classification.
Exercise 2.2 (10 pts) Plot the lasso and lars path for each of the coefficients. All coefficients for a given method should be on the same plot, you should get 2 plots. What are the major differences, if any? Are there any 'leaving' events in the lasso path?
End of explanation
lassocv = LassoCV(cv=None)
lassocv.fit(X,y)
alpha = lassocv.alpha_
cv_scores = lassocv.mse_path_.mean(axis=1)
_ = plt.plot(np.log(lassocv.alphas_),cv_scores)
np.min(cv_scores)
Explanation: The Lars and Lasso paths look identical, which is due to the lack of any leaving events. Recall that leaving events were the lasso modification to the lars path.
Exercise 2.3 (10 pts) Cross-validate the Lasso and compare the results to the answer to 2.1.
End of explanation
Xred = X[:, lassocv.coef_ != 0]  # active set: features with nonzero lasso coefficients
lr = LogisticRegression(C=99.)
lr.fit(Xred,z)
zhat = lr.predict(Xred)
confusion_matrix(z,zhat)
Explanation: The optimal cross-validated score for the Lasso is 3.4e6 and that for ridge regression is 3.65e6. Hence, the lasso outperforms ridge regression and OLS.
Exercise 2.4 (15 pts) Obtain the 'best' active set from 2.3, and create a new design matrix with only these variables. Use this to predict the categorical variable $z$ with logistic regression.
End of explanation |
11,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of Stacking BOSS Spectra using Speclite
Examples of using the speclite package to perform basic operations on spectral data accessed with the bossdata package. To keep the examples small, we use data from a single BOSS plate (6641 observed on MJD 56383) and show how to work with both the individual spec-lite files and the combined spPlate file (see here for details on the different data products).
Package Initialization
Step1: Stacked Sky
Get a list of sky spectra on plate 6641
Step2: Plot a stacked spectrum
Step3: Stack individual Spec-lite files
Loop over all sky spectra on the plate. The necessary spec-lite files will be automatically downloaded, if necessary, which will take several minutes.
Step4: Stack Spectra from one Plate file
Accumulate the sky spectra from a Plate file, which will be automatically downloaded if necessary.
Step5: Stacked Quasars
Get a list of quasar spectra on plate 6641, observed on MJD 56383
Step6: Plot the redshift distribution of the selected quasars
Step7: Stack spectra from individual Spec-lite files
Loop over all quasar spectra on the plate. The necessary spec-lite files will be automatically downloaded, if necessary, which will take several minutes.
Step8: Stack spectra from one Plate file
Step9: Transform each spectrum to its quasar rest frame. We perform this operation in place (re-using the memory of the input array) and in parallel on all spectra.
Step10: Resample each spectrum to a uniform rest wavelength grid and stack them together to calculate the mean rest-frame quasar spectrum. The resample() and accumulate() operations re-use the same memory for each input spectrum, so this loop has fixed (small) memory requirements, independent of the number of spectra being stacked. | Python Code:
%pylab inline
import speclite
print(speclite.version.version)
import bossdata
print(bossdata.__version__)
finder = bossdata.path.Finder()
mirror = bossdata.remote.Manager()
Explanation: Examples of Stacking BOSS Spectra using Speclite
Examples of using the speclite package to perform basic operations on spectral data accessed with the bossdata package. To keep the examples small, we use data from a single BOSS plate (6641 observed on MJD 56383) and show how to work with both the individual spec-lite files and the combined spPlate file (see here for details on the different data products).
Package Initialization
End of explanation
spAll = bossdata.meta.Database(lite=True)
sky_table = spAll.select_all(where='PLATE=6641 and OBJTYPE="SKY"')
print('Found {0} sky fibers for plate 6641.'.format(len(sky_table)))
Explanation: Stacked Sky
Get a list of sky spectra on plate 6641:
End of explanation
def plot_stack(data, truncate_percentile):
valid = data['ivar'] > 0
wlen = data['wavelength'][valid]
flux = data['flux'][valid]
dflux = data['ivar'][valid]**(-0.5)
plt.figure(figsize=(12,5))
plt.fill_between(wlen, flux, lw=0, color='red')
plt.errorbar(wlen, flux, dflux, color='black', alpha=0.5, ls='None', capthick=0)
plt.xlim(np.min(wlen), np.max(wlen))
plt.ylim(0, np.percentile(flux + dflux, truncate_percentile))
plt.xlabel('Wavelength ($\AA$)')
plt.ylabel('Flux $10^{-17}$ erg/(s cm$^2 \AA$)')
plt.tight_layout();
Explanation: Plot a stacked spectrum:
End of explanation
spec_sky = None
for row in sky_table:
filename = finder.get_spec_path(plate=row['PLATE'], mjd=row['MJD'], fiber=row['FIBER'], lite=True)
spectrum = bossdata.spec.SpecFile(mirror.get(filename))
data = spectrum.get_valid_data(include_sky=True, use_ivar=True, fiducial_grid=True)
spec_sky = speclite.accumulate(spec_sky, data, data_out=spec_sky, join='wavelength',
add=('flux', 'sky'), weight='ivar')
spec_sky['flux'] += spec_sky['sky']
plot_stack(spec_sky, truncate_percentile=97.5)
Explanation: Stack individual Spec-lite files
Loop over all sky spectra on the plate. The necessary spec-lite files will be automatically downloaded, if necessary, which will take several minutes.
End of explanation
plate_sky = None
filename = finder.get_plate_spec_path(plate=6641, mjd=56383)
plate = bossdata.plate.PlateFile(mirror.get(filename))
plate_data = plate.get_valid_data(sky_table['FIBER'], include_sky=True, use_ivar=True, fiducial_grid=True)
for data in plate_data:
plate_sky = speclite.accumulate(plate_sky, data, data_out=plate_sky, join='wavelength',
add=('flux', 'sky'), weight='ivar')
plate_sky['flux'] += plate_sky['sky']
plot_stack(plate_sky, truncate_percentile=97.5)
Explanation: Stack Spectra from one Plate file
Accumulate the sky spectra from a Plate file, which will be automatically downloaded if necessary.
End of explanation
DR12Q = bossdata.meta.Database(finder, mirror, quasar_catalog=True)
qso_table = DR12Q.select_all(where='PLATE=6641 and ZWARNING=0', what='PLATE,MJD,FIBER,Z_VI')
print('Found {0} QSO targets for plate 6641.'.format(len(qso_table)))
Explanation: Stacked Quasars
Get a list of quasar spectra on plate 6641, observed on MJD 56383:
End of explanation
plt.hist(qso_table['Z_VI'], bins=25);
plt.xlabel('Redshift z')
plt.ylabel('Quasars')
plt.tight_layout();
Explanation: Plot the redshift distribution of the selected quasars:
End of explanation
fiducial_grid = np.arange(1000.,3000.)
rest_frame, resampled, spec_qso = None, None, None
for row in qso_table:
filename = finder.get_spec_path(plate=row['PLATE'], mjd=row['MJD'], fiber=row['FIBER'], lite=True)
spectrum = bossdata.spec.SpecFile(mirror.get(filename))
data = spectrum.get_valid_data(use_ivar=True, fiducial_grid=True)
rest_frame = speclite.redshift(z_in=row['Z_VI'], z_out=0, data_in=data, data_out=rest_frame, rules=[
dict(name='wavelength', exponent=+1),
dict(name='flux', exponent=-1),
dict(name='ivar', exponent=+2)])
resampled = speclite.resample(rest_frame, x_in='wavelength', x_out=fiducial_grid, y=('flux', 'ivar'),
data_out=resampled)
spec_qso = speclite.accumulate(spec_qso, resampled, data_out=spec_qso, join='wavelength',
add='flux', weight='ivar')
plot_stack(spec_qso, truncate_percentile=99.5)
Explanation: Stack spectra from individual Spec-lite files
Loop over all quasar spectra on the plate. The necessary spec-lite files will be automatically downloaded, if necessary, which will take several minutes.
End of explanation
filename = finder.get_plate_spec_path(plate=6641, mjd=56383)
plate = bossdata.plate.PlateFile(mirror.get(filename))
plate_data = plate.get_valid_data(qso_table['FIBER'], use_ivar=True, fiducial_grid=True)
zorder = np.argsort(qso_table['Z_VI'])
Explanation: Stack spectra from one Plate file
End of explanation
z_in = qso_table['Z_VI'][:,np.newaxis]
plate_data = speclite.redshift(z_in=z_in, z_out=0, data_in=plate_data, data_out=plate_data, rules=[
dict(name='wavelength', exponent=+1),
dict(name='flux', exponent=-1),
dict(name='ivar', exponent=+2)
])
Explanation: Transform each spectrum to its quasar rest frame. We perform this operation in place (re-using the memory of the input array) and in parallel on all spectra.
End of explanation
resampled, plate_qso = None, None
for data in plate_data:
resampled = speclite.resample(data, x_in='wavelength', x_out=fiducial_grid, y=('flux', 'ivar'), data_out=resampled)
    plate_qso = speclite.accumulate(plate_qso, resampled, data_out=plate_qso, join='wavelength',
add='flux', weight='ivar')
plot_stack(plate_qso, truncate_percentile=99.5)
Explanation: Resample each spectrum to a uniform rest wavelength grid and stack them together to calculate the mean rest-frame quasar spectrum. The resample() and accumulate() operations re-use the same memory for each input spectrum, so this loop has fixed (small) memory requirements, independent of the number of spectra being stacked.
End of explanation |
11,419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 5
Step1: 1. Using Pandas to download closing price data
Now, in the form of a function
Step2: Once the packages are loaded, it is necessary to define the tickers of the stocks to be used, the download source (Yahoo in this case, although it is also possible to download from Google) and the dates of interest. With this, the DataReader function of the pandas_datareader package will download the requested prices.
Note
Step3: Note
Step4: 3. Asset selection
Step5: 4. Portfolio optimization | Python Code:
# import the packages that will be used
import pandas as pd
import pandas_datareader.data as web
import numpy as np
from sklearn.cluster import KMeans
import datetime
from datetime import datetime
import scipy.stats as stats
import scipy as sp
import scipy.optimize as optimize
import scipy.cluster.hierarchy as hac
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# some display options for Python
pd.set_option('display.notebook_repr_html', True)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)
Explanation: Class 5: Portfolios and risk - Selection
Juan Diego Sánchez Torres,
Professor, MAF ITESO
Department of Mathematics and Physics
[email protected]
Tel. 3669-34-34 Ext. 3069
Office: Cubicle 4, Building J, 2nd floor
1. Motivation
First of all, in order to download prices and option information from Yahoo, we need to load some Python packages. In this case, the main package will be Pandas. We will also use SciPy and NumPy for the necessary mathematics, and Matplotlib and Seaborn to plot the data series.
End of explanation
def get_historical_closes(ticker, start_date, end_date):
p = web.DataReader(ticker, "yahoo", start_date, end_date).sort_index('major_axis')
d = p.to_frame()['Adj Close'].reset_index()
d.rename(columns={'minor': 'Ticker', 'Adj Close': 'Close'}, inplace=True)
pivoted = d.pivot(index='Date', columns='Ticker')
pivoted.columns = pivoted.columns.droplevel(0)
return pivoted
Explanation: 1. Using Pandas to download closing price data
Now, in the form of a function:
End of explanation
data=get_historical_closes(['AA','AAPL','AMZN','MSFT','KO','NVDA', '^GSPC'], '2011-01-01', '2016-12-31')
closes=data[['AA','AAPL','AMZN','MSFT','KO','NVDA']]
sp=data[['^GSPC']]
closes.plot(figsize=(8,6));
Explanation: Once the packages are loaded, it is necessary to define the tickers of the stocks to be used, the download source (Yahoo in this case, although it is also possible to download from Google) and the dates of interest. With this, the DataReader function of the pandas_datareader package will download the requested prices.
Note: Usually, Python distributions do not include the pandas_datareader package by default, so it must be installed separately. The following command installs the package in Anaconda:
*conda install -c conda-forge pandas-datareader *
End of explanation
def calc_daily_returns(closes):
return np.log(closes/closes.shift(1))[1:]
daily_returns=calc_daily_returns(closes)
daily_returns.plot(figsize=(8,6));
daily_returns.corr()
def calc_annual_returns(daily_returns):
grouped = np.exp(daily_returns.groupby(lambda date: date.year).sum())-1
return grouped
annual_returns = calc_annual_returns(daily_returns)
annual_returns
def calc_portfolio_var(returns, weights=None):
if (weights is None):
weights = np.ones(returns.columns.size)/returns.columns.size
sigma = np.cov(returns.T,ddof=0)
    var = np.dot(weights, np.dot(sigma, weights))  # portfolio variance w' * Sigma * w
return var
calc_portfolio_var(annual_returns)
def sharpe_ratio(returns, weights = None, risk_free_rate = 0.015):
n = returns.columns.size
if weights is None: weights = np.ones(n)/n
var = calc_portfolio_var(returns, weights)
means = returns.mean()
return (means.dot(weights) - risk_free_rate)/np.sqrt(var)
sharpe_ratio(annual_returns)
Explanation: Note: To download data from the Mexican stock exchange (BMV), the ticker must carry the MX suffix.
For example: MEXCHEM.MX, LABB.MX, GFINBURO.MX and GFNORTEO.MX.
2. Formulating portfolio risk
End of explanation
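For reference, the two helper functions above compute the standard portfolio-risk quantities; a sketch of the definitions they implement (with $w$ the weight vector, $\Sigma$ the covariance matrix of the annual returns, $\mu$ the mean-return vector and $r_f$ the risk-free rate):
$$\sigma_p^2 = w^\top \Sigma\, w, \qquad SR = \frac{w^\top \mu - r_f}{\sqrt{w^\top \Sigma\, w}}$$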
daily_returns_mean=daily_returns.mean()
daily_returns_mean
daily_returns_std=daily_returns.std()
daily_returns_std
daily_returns_ms=pd.concat([daily_returns_mean, daily_returns_std], axis=1)
daily_returns_ms
random_state = 10
y_pred = KMeans(n_clusters=3, random_state=random_state).fit_predict(daily_returns_ms)
plt.scatter(daily_returns_mean, daily_returns_std, c=y_pred);
plt.axis([-0.01, 0.01, 0, 0.05]);
corr_mat=daily_returns.corr(method='spearman')
corr_mat
Z = hac.linkage(corr_mat, 'single')
# Plot the dendrogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=8., # font size for the x axis labels
)
plt.show()
selected=closes[['AAPL', 'AMZN']]
selected.plot(figsize=(8,6));
daily_returns_sel=calc_daily_returns(selected)
daily_returns_sel.plot(figsize=(8,6));
annual_returns_sel = calc_annual_returns(daily_returns_sel)
annual_returns_sel
Explanation: 3. Asset selection
End of explanation
def target_func(x, cov_matrix, mean_vector, r):
f = float(-(x.dot(mean_vector) - r) / np.sqrt(x.dot(cov_matrix).dot(x.T)))
return f
def optimal_portfolio(profits, r, allow_short=True):
x = np.ones(len(profits.T))
mean_vector = np.mean(profits)
cov_matrix = np.cov(profits.T)
cons = ({'type': 'eq','fun': lambda x: np.sum(x) - 1})
if not allow_short:
bounds = [(0, None,) for i in range(len(x))]
else:
bounds = None
minimize = optimize.minimize(target_func, x, args=(cov_matrix, mean_vector, r), bounds=bounds,
constraints=cons)
return minimize
opt=optimal_portfolio(annual_returns_sel, 0.015)
opt
annual_returns_sel.dot(opt.x)
asp=calc_annual_returns(calc_daily_returns(sp))
asp
def objfun(W, R, target_ret):
stock_mean = np.mean(R,axis=0)
port_mean = np.dot(W,stock_mean)
cov=np.cov(R.T)
port_var = np.dot(np.dot(W,cov),W.T)
penalty = 2000*abs(port_mean-target_ret)
return np.sqrt(port_var) + penalty
def calc_efficient_frontier(returns):
result_means = []
result_stds = []
result_weights = []
means = returns.mean()
min_mean, max_mean = means.min(), means.max()
nstocks = returns.columns.size
for r in np.linspace(min_mean, max_mean, 150):
weights = np.ones(nstocks)/nstocks
bounds = [(0,1) for i in np.arange(nstocks)]
constraints = ({'type': 'eq', 'fun': lambda W: np.sum(W) - 1})
results = optimize.minimize(objfun, weights, (returns, r), method='SLSQP', constraints = constraints, bounds = bounds)
if not results.success: # handle error
raise Exception(results.message)
result_means.append(np.round(r,4)) # 4 decimal places
std_=np.round(np.std(np.sum(returns*results.x,axis=1)),6)
result_stds.append(std_)
result_weights.append(np.round(results.x, 5))
return {'Means': result_means, 'Stds': result_stds, 'Weights': result_weights}
frontier_data = calc_efficient_frontier(annual_returns_sel)
def plot_efficient_frontier(ef_data):
plt.figure(figsize=(12,8))
plt.title('Efficient Frontier')
plt.xlabel('Standard Deviation of the portfolio (Risk)')
plt.ylabel('Return of the portfolio')
plt.plot(ef_data['Stds'], ef_data['Means'], '--');
plot_efficient_frontier(frontier_data)
Explanation: 4. Portfolio optimization
End of explanation |
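For reference, the two optimizations above can be summarized as follows (a sketch in the same notation as before; the penalty weight 2000 and the bounds are taken from objfun and calc_efficient_frontier). optimal_portfolio maximizes the Sharpe ratio,
$$\max_{w}\; \frac{w^\top \mu - r}{\sqrt{w^\top \Sigma\, w}} \quad \text{s.t.}\quad \sum_i w_i = 1,$$
while the efficient frontier is traced by solving, for each target return $r$,
$$\min_{w}\; \sqrt{w^\top \Sigma\, w} + 2000\,\lvert w^\top \mu - r\rvert \quad \text{s.t.}\quad \sum_i w_i = 1,\; 0 \le w_i \le 1.$$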
11,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chem 30324, Spring 2019, Homework 5
Due February 25, 2020
Real-world particle-in-a-box.
A one-dimensional particle-in-a-box is a simple but plausible model for the π electrons of a conjugated alkene, like butadiene ($C_4H_6$, shown here). Suppose all the C–C bonds in a polyene are 1.4 Å long and the polyenes are perfectly linear.
<img src="https
Step1: 2. Plot out the normalized $n = 2$ particle-in-a-box wavefunction for an electron in butadiene and the normalized $n = 2$ probability distribution. Indicate on the plots the most probable location(s) of the electron, the average location of the electron, and the positions of any nodes.
Step2: 3. Butadiene has 4 π electrons, and we will learn later that in its lowest energy state, two of these are in the $n = 1$ and two in the $n = 2$ levels. Compare the wavelength of light (in nm) necessary to promote (“excite”) one electron from either of these levels to the empty $n = 3$ level.
Step3: 4. The probability of an electron jumping between two energy states by emitting or absorbing light is proportional to the square of the “transition dipole,” given by the integral $\lvert\langle\psi_{initial}\lvert \hat{x}\rvert\psi_{final}\rangle\rvert^2$. Contrast the relative probabilities of an electron jumping from $n = 1$ to $n = 3$ and from $n = 2$ to $n = 3$ levels. Can you propose any general rules about “allowed” and "forbidden" jumps?
$|\langle\psi_1|\hat{x}|\psi_3\rangle|^2 = \left(\int_{0}^{L} \sqrt{\tfrac{2}{L}}\sin\left(\tfrac{\pi x}{L}\right)\, x\, \sqrt{\tfrac{2}{L}}\sin\left(\tfrac{3\pi x}{L}\right)\, dx\right)^2$
If we assume L = 1 m and integrate, this integral is equal to zero. Therefore, the n = 1 to n = 3 jump is "forbidden".
$|\langle\psi_2|\hat{x}|\psi_3\rangle|^2 = \left(\int_{0}^{L} \sqrt{\tfrac{2}{L}}\sin\left(\tfrac{2\pi x}{L}\right)\, x\, \sqrt{\tfrac{2}{L}}\sin\left(\tfrac{3\pi x}{L}\right)\, dx\right)^2$
If we assume L = 1 m and integrate, this integral is not equal to zero. Therefore, the n = 2 to n = 3 jump is "allowed".
5. Consider the reaction of two ethylene molecules to form butadiene
Step4: 6. This particle-in-a-box model has many flaws, not the least of which is that the ends of the polyene “box” are not infinitely high potential walls. In a somewhat better model the π electrons would travel in a finite-depth potential well. State two things that would change from the infinite depth to the finite depth model.
When the wall potential drops from infinity to a finite value:
The number of bound states/levels drops from infinite to finite (a particle with high enough energy can escape the box).
It becomes possible for the electrons to tunnel into the formerly forbidden region outside the box.
The energies of the bound states decrease.
Quantum mechanics of vibrating NO.
The diatomic nitric oxide (NO) is an unusual and important molecule. It has an odd number of electrons, which is a rarity for stable molecule. It acts as a signaling molecule in the body, helping to regulate blood pressure, is a primary pollutant from combustion, and is a key constituent of smog. It exists in several isotopic forms, but the most common, ${}^{14}$N= ${}^{16}$O, has a bond length of 1.15077 Å and vibrational force constant of 1594.8 N/m.
7. Compute the reduced mass $\mu$ (amu), harmonic vibrational frequency (cm$^{-1}$), and zero point vibrational energy (kJ/mol) of ${}^{14}$N= ${}^{16}$O. Recall $1/\mu=1/M_\text{N} + 1/M_\text{O}$.
Step5: 8. Calculate the classical minimum and maximum values of the $^{14}$N=$^{16}$O bond length for a molecule in the ground vibrational state. Hint
Step6: 9. The normalized ground vibrational wavefunction of N=O can be written
$$\Psi_{\upsilon=0}(x) = \left ({\frac{1}{\alpha\sqrt{\pi}}}\right )^{1/2}e^{-x^2/2\alpha^2}, \quad x = R-R_{eq}, \quad \alpha = \left ({\frac{\hbar^2}{\mu k}}\right )^{1/4}$$
where $x = R-R_{eq}$. Calculate the probability for an ${}^{14}N={}^{16}O$ molecule to have a bond length outside the classical limits. This is an example of quantum mechanical tunneling.
Step7: 10. The gross selection rule for whether light can excite a vibration of a molecule is that the dipole moment of the molecule must change as it vibrates. Based on this criterion, do you expect NO to exhibit an absorption vibrational spectrum?
NO will exhibit an infrared spectrum. Because the molecule is heteronuclear (two ends are not the same), it has a dipole moment. Stretching the bond will change the dipole moment, so the molecule satisfies the gross selection rule.
11. The specific selection rule for whether light can excite a vibration of a molecule is that $\Delta v = \pm 1$. At ambient temperature, what initial and final vibrational states would contribute most significantly to an NO vibrational spectrum? Justify your answer. (Hint
Step8: 12. Based on your answers to questions 10 and 11, what do you expect the vibrational spectrum of an ${}^{14}N={}^{16}O$ molecule to look like? If it has a spectrum, in what region of the spectrum does it absorb (e.g., ultraviolet, x-ray, ...)?
There is only one peak that corresponds to the v0 to v=1 transition. Vibrational frequency is 1904 cm^-1, which is in the IR region.
Two-dimensional harmonic oscillator
Imagine an H atom embedded in a two-dimensional sheet of MoS$_2$. The H atom vibrates like a two-dimensional harmonic oscillator with mass 1 amu and force constants $k_x$ and $k_y$ in the two directions.
13. Write down the Schrödinger equation for the vibrating H atom. Remember to include any boundary conditions on the solutions.
$-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,y)}{\partial x^2}-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,y)}{\partial y^2}+\frac{1}{2}k_xx^2\psi(x,y)+\frac{1}{2}k_yy^2\psi(x,y)=E\psi(x,y)$
$\lim_{x\rightarrow\pm\infty} \psi(x,y)=0\qquad \lim_{y\rightarrow\pm\infty} \psi(x,y)=0$
14. The Schrödinger equation is separable, so the wavefunctions are products of one-dimensional wavefunctions and the eigenenergies are sums of corresponding one-dimensional energies. Derive an expression for the H atom vibrational energy states, assuming $k_x = k_y/4 = k$.
Because it is separable, energies in $x$ and $y$ are additive.
$E = E_x + E_y = (v_x+\frac{1}{2})h\nu_x + (v_y+\frac{1}{2})h\nu_y$
$\nu_x= \frac{1}{2\pi}\sqrt{\frac{k}{m}}\qquad \nu_y= \frac{1}{2\pi}\sqrt{\frac{4k}{m}} = 2\nu_x$
$E = (v_x+\frac{1}{2})hν+ 2(v_y+\frac{1}{2})hν=(v_x+2v_y+\frac{3}{2})hν,ν=\frac{1}{2\pi}\sqrt{\frac{k}{\mu}}$
15. A spectroscopic experiment reveals that the spacing between the first and second energy levels is 0.05 eV. What is $k$, in N/m? | Python Code:
import numpy as np
import matplotlib.pyplot as plt
E = []
l = 1.4e-10 #m
hbar = 1.05457e-34 #J*s
m = 9.109e-31 #kg
N = [1,3,5,7,9] #N = number of C-C bonds
for n in range (1,7):
for i in N:
e = (n**2*np.pi**2*hbar**2*6.2415e18)/(2*m*(i*l)**2)
E.append(e)
plt.scatter(N,E[0:5], label = "n=1")
plt.scatter(N,E[5:10], label = "n=2")
plt.scatter(N,E[10:15], label = "n=3")
plt.scatter(N,E[15:20], label = "n=4")
plt.scatter(N,E[20:25], label = "n=5")
plt.scatter(N,E[25:30], label = "n=6")
plt.xticks(N, ["Ethylene", "Butadiene", "Hexatriene", "Octatetraene", "Decapentaene"], rotation = 'vertical')
plt.ylabel('Energy (eV)')
plt.title('Energies of n = 1-6 Particle in a Box')
plt.legend()
plt.show()
Explanation: Chem 30324, Spring 2019, Homework 5
Due February 25, 2020
Real-world particle-in-a-box.
A one-dimensional particle-in-a-box is a simple but plausible model for the π electrons of a conjugated alkene, like butadiene ($C_4H_6$, shown here). Suppose all the C–C bonds in a polyene are 1.4 Å long and the polyenes are perfectly linear.
<img src="https://github.com/wmfschneider/CHE30324/blob/master/Homework/imgs/HW5-1.png?raw=1" width="360">
1. Plot out the energies of the $n = 1 – 6$ particle-in-a-box states for ethylene (2 carbon chain), butadiene (4 carbon chain), hexatriene (6 carbon chain), octatetraene (8 carbon chain), and decapentaene (10 carbon chain). What happens to the spacing between energy levels as the molecule gets longer?
End of explanation
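For reference, the energies plotted in the first code cell follow the standard particle-in-a-box result (with $m$ the electron mass and $L = N \times 1.4$ Å the box length used in the code):
$$E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}, \qquad n = 1, 2, 3, \ldots$$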
import numpy as np
import matplotlib.pyplot as plt
l = 1.4*3 #Angstrom
x = np.linspace(0,l,100)
psi = (2/l)**.5*np.sin(2*np.pi*x/l) #normalized wave function
plt.plot(x,psi,label = '$\Psi_2(x)$')
plt.plot(x,psi**2,label = '$|\Psi_2(x)|^2$')
plt.xlim(0,l)
plt.xlabel('$x(\AA)$')
plt.ylabel('$\Psi_2(x)(\AA^{-1/2})$, $|\Psi_2(x)|^2(\AA^{-1})$')
plt.title('Wavefunction and Probability Distribution of n=2 Butadiene in a Box')
plt.axhline(y=0, color = 'k', linestyle = '--')
plt.axvline(x=l/4, color = 'k', linestyle = '--')
plt.axvline(x=3*l/4, color = 'k', linestyle = '--')
plt.annotate('Most Probable\n Location',xy=(l/4,0),xytext=(1.1,.25), arrowprops = dict(facecolor = 'black'))
plt.annotate('Most Probable Location',xy=(3*l/4,0),xytext=(1.5,.65), arrowprops = dict(facecolor = 'black'))
plt.annotate('Node Location and\nAverage Location',xy=(l/2,0),xytext=(1.5,-.3), arrowprops = dict(facecolor = 'black'))
plt.legend()
plt.show()
Explanation: 2. Plot out the normalized $n = 2$ particle-in-a-box wavefunction for an electron in butadiene and the normalized $n = 2$ probability distribution. Indicate on the plots the most probable location(s) of the electron, the average location of the electron, and the positions of any nodes.
End of explanation
import numpy as np
l = 3*1.4e-10 #m, length of the box
hbar = 1.05457e-34 #J*s
me = 9.109e-31 #kg
E13 = ((3**2-1**2)*np.pi**2*hbar**2/2/me/l**2)*6.2415e18
E23 = ((3**2-2**2)*np.pi**2*hbar**2/2/me/l**2)*6.2415e18
lambda13 = 1240/E13 #nm
lambda23 = 1240/E23 #nm
print('From n=1 to n=3, light must have wavelength = {0:.2f}nm. \nFrom n=2 to n=3, light must have wavelength = {1:.2f}nm.'.format(lambda13,lambda23))
Explanation: 3. Butadiene has 4 π electrons, and we will learn later that in its lowest energy state, two of these are in the $n = 1$ and two in the $n = 2$ levels. Compare the wavelength of light (in nm) necessary to promote (“excite”) one electron from either of these levels to the empty $n = 3$ level.
End of explanation
import numpy as np
# Heat of Formation data from NIST
ethylene = 52.4 #kJ/mol
butadiene = 108.8 #kJ/mol
print('According to NIST, the energy of reaction =', butadiene - 2*ethylene, 'kJ/mol.')
l = 1.4e-10 #m, length of C-C bond
hbar = 1.05457e-34 #J*s
me = 9.109e-31 #kg
ethyl_n = 4*1**2 #4 n1 electrons
buta_n = 2*(1**2+2**2) #2 n1 + 1 n2 electrons
Erxn = (buta_n*np.pi**2*hbar**2)/(2*me*(3*l)**2) - (ethyl_n*np.pi**2*hbar**2)/(2*me*l**2) #J/molecule
E_rxn = Erxn*6.022e23/1000 #kJ/mol
print('Using the particle in a box method, the energy of reaction =',round(E_rxn,1),'kJ/mol. ')
print('This model isn\'t perfect because the potential is not zero or infinite in real life, \n and the model ignores interaction between nucleus and electrons.')
Explanation: 4. The probability of an electron jumping between two energy states by emitting or absorbing light is proportional to the square of the “transition dipole,” given by the integral $\lvert\langle\psi_{initial}\lvert \hat{x}\rvert\psi_{final}\rangle\rvert^2$. Contrast the relative probabilities of an electron jumping from $n = 1$ to $n = 3$ and from $n = 2$ to $n = 3$ levels. Can you propose any general rules about “allowed” and "forbidden" jumps?
$|\langle\psi_1|\hat{x}|\psi_3\rangle|^2 = \left(\int_{0}^{L} \sqrt{\tfrac{2}{L}}\sin\left(\tfrac{\pi x}{L}\right)\, x\, \sqrt{\tfrac{2}{L}}\sin\left(\tfrac{3\pi x}{L}\right)\, dx\right)^2$
If we assume L = 1 m and integrate, this integral is equal to zero. Therefore, the n = 1 to n = 3 jump is "forbidden".
$|\langle\psi_2|\hat{x}|\psi_3\rangle|^2 = \left(\int_{0}^{L} \sqrt{\tfrac{2}{L}}\sin\left(\tfrac{2\pi x}{L}\right)\, x\, \sqrt{\tfrac{2}{L}}\sin\left(\tfrac{3\pi x}{L}\right)\, dx\right)^2$
If we assume L = 1 m and integrate, this integral is not equal to zero. Therefore, the n = 2 to n = 3 jump is "allowed".
5. Consider the reaction of two ethylene molecules to form butadiene:
<img src="https://github.com/wmfschneider/CHE30324/blob/master/Homework/imgs/HW5-2.png?raw=1" width="360">
As a very simple estimate, you could take the energy of each molecule as the sum of the energies of its π electrons, allowing only two electrons per energy level. Again taking each C—C bond as 1.4 Å long and treating the π electrons as particles in a box, calculate the total energy of an ethylene and a butadiene molecule within this model (in kJ/mol), and from these calculate the net reaction energy. Compare your results to the experimental reaction enthalpy. How well did the model do?
End of explanation
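The claims above about the transition-dipole integrals can be checked symbolically. A minimal sketch (my addition, using the normalized box wavefunctions and taking L = 1 as in the discussion above):
import sympy
x = sympy.Symbol('x', positive=True)
L = 1 # same box-length choice as in the discussion above
def dipole_integral(n_i, n_f):
    # <psi_i | x | psi_f> for particle-in-a-box states
    psi_i = sympy.sqrt(sympy.Rational(2, L))*sympy.sin(n_i*sympy.pi*x/L)
    psi_f = sympy.sqrt(sympy.Rational(2, L))*sympy.sin(n_f*sympy.pi*x/L)
    return sympy.integrate(psi_i*x*psi_f, (x, 0, L))
print(dipole_integral(1, 3)) # 0, so the n = 1 to n = 3 jump is "forbidden"
print(dipole_integral(2, 3)) # nonzero, so the n = 2 to n = 3 jump is "allowed"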
import numpy as np
MN = 14 #amu
MO = 16 #amu
k = 1594.8 #N/m
l = 1.15077 #Angstroms
conv = 6.022e26 #amu to kg
h = 6.626e-34 # m^2 kg/s
c = 299792458 #m/s, Speed of Light
N = 6.022e23 #molecules/mole
Mred = 1/(1/MN+1/MO) #amu
print('The reduced mass is',round(Mred,2), 'amu.')
vibfreq = 1/(2*np.pi)*np.sqrt(k/Mred*conv/(c**2)/(100**2)) #cm^-1
print('The harmonic vibrational frequency is',round(vibfreq,2), 'cm^-1.')
E0 = .5*h*vibfreq*c*100*N/1000
print('The zero point vibrational energy is', round(E0,2),'kJ/mol.')
Explanation: 6. This particle-in-a-box model has many flaws, not the least of which is that the ends of the polyene “box” are not infinitely high potential walls. In a somewhat better model the π electrons would travel in a finite-depth potential well. State two things that would change from the infinite depth to the finite depth model.
When the wall potential drops from infinity to a finite value:
The number of bound states/levels drops from infinite to finite (a particle with high enough energy can escape the box).
It becomes possible for the electrons to tunnel into the formerly forbidden region outside the box.
The energies of the bound states decrease.
Quantum mechanics of vibrating NO.
The diatomic nitric oxide (NO) is an unusual and important molecule. It has an odd number of electrons, which is a rarity for stable molecule. It acts as a signaling molecule in the body, helping to regulate blood pressure, is a primary pollutant from combustion, and is a key constituent of smog. It exists in several isotopic forms, but the most common, ${}^{14}$N= ${}^{16}$O, has a bond length of 1.15077 Å and vibrational force constant of 1594.8 N/m.
7. Compute the reduced mass $\mu$ (amu), harmonic vibrational frequency (cm$^{-1}$), and zero point vibrational energy (kJ/mol) of ${}^{14}$N= ${}^{16}$O. Recall $1/\mu=1/M_\text{N} + 1/M_\text{O}$.
End of explanation
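For reference, the quantities computed in the previous cell follow the standard harmonic-oscillator relations ($\mu$ is the reduced mass, $k$ the force constant and $c$ the speed of light):
$$\tilde{\nu} = \frac{1}{2\pi c}\sqrt{\frac{k}{\mu}}, \qquad E_{ZPE} = \frac{1}{2}hc\tilde{\nu}$$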
MN = 14 #amu
MO = 16 #amu
k = 1594.8 #N/m
hbar = 1.05457e-34 #J*s
conv = 6.022e26 #amu to kg
l = 1.15077e-10 #m
alpha = (hbar**2/Mred/k*conv)**0.25 #m
rmax = l+alpha #m
rmin = l-alpha #m
print('Classical bond length maximum is %e m.'%(rmax))
print('Classical bond length minimum is %e m.'%(rmin))
Explanation: 8. Calculate the classical minimum and maximum values of the $^{14}$N=$^{16}$O bond length for a molecule in the ground vibrational state. Hint: Calculate the classical limits on $x$, the value of $x$ at which the kinetic energy is 0 and thus the total energy equals the potential energy.
End of explanation
from sympy import *
a = 1 # in this case, a can be any number
x = Symbol('x')
p_inside = integrate(1/a/sqrt(pi)*exp(-x**2/a**2),(x,-a,a)) # renamed so sympy's pi is not shadowed
print('The probability of being inside the classical limits is:')
pprint(p_inside)
print('Therefore, the probability of being outside the classical limits is')
pprint(1-p_inside)
print('This is equal to 0.1573.')
Explanation: 9. The normalized ground vibrational wavefunction of N=O can be written
$$\Psi_{\upsilon=0}(x) = \left ({\frac{1}{\alpha\sqrt{\pi}}}\right )^{1/2}e^{-x^2/2\alpha^2}, \quad x = R-R_{eq}, \quad \alpha = \left ({\frac{\hbar^2}{\mu k}}\right )^{1/4}$$
where $x = R-R_{eq}$. Calculate the probability for an ${}^{14}N={}^{16}O$ molecule to have a bond length outside the classical limits. This is an example of quantum mechanical tunneling.
End of explanation
import numpy as np
h = 6.626e-34 # J*s
c = 299792458 #m/s, Speed of Light
T = 273 #K
k = 1.38e-23 #J/K
P = []
for v in [0,1,2,3]:
E = (v+0.5)*h*c*vibfreq*100
P.append(np.exp(-E/k/T))
print('The population of v = [0,1,2,3] is [%.2f,%.2e,%.2e,%.2e]'%(P[0]/sum(P),P[1]/sum(P),P[2]/sum(P),P[3]/sum(P)))
Explanation: 10. The gross selection rule for whether light can excite a vibration of a molecule is that the dipole moment of the molecule must change as it vibrates. Based on this criterion, do you expect NO to exhibit an absorption vibrational spectrum?
NO will exhibit an infrared spectrum. Because the molecule is heteronuclear (two ends are not the same), it has a dipole moment. Stretching the bond will change the dipole moment, so the molecule satisfies the gross selection rule.
11. The specific selection rule for whether light can excite a vibration of a molecule is that $\Delta v = \pm 1$. At ambient temperature, what initial and final vibrational states would contribute most significantly to an NO vibrational spectrum? Justify your answer. (Hint: What does the Boltzmann distribution say about the probability to be in each $\nu$ state?)
At 273 K, the most occupied vibrational state is v = 0. Therefore, it will contribute most significantly to the NO spectrum.
Quantitatively, we can prove this using the Boltzmann distribution:
End of explanation
import numpy as np
del_E = 0.05*1.60218e-19 #J
h = 6.626e-34 #Planck constant in m^2 kg / s
m = 1.66054e-27 #kg
freq = del_E/h #/s
k = (2*np.pi*freq)**2*m #kg/m/s^2
print('Force constant k is ',round(k,2),'N/m')
Explanation: 12. Based on your answers to questions 10 and 11, what do you expect the vibrational spectrum of an ${}^{14}N={}^{16}O$ molecule to look like? If it has a spectrum, in what region of the spectrum does it absorb (e.g., ultraviolet, x-ray, ...)?
There is only one peak that corresponds to the v0 to v=1 transition. Vibrational frequency is 1904 cm^-1, which is in the IR region.
Two-dimensional harmonic oscillator
Imagine an H atom embedded in a two-dimensional sheet of MoS$_2$. The H atom vibrates like a two-dimensional harmonic oscillator with mass 1 amu and force constants $k_x$ and $k_y$ in the two directions.
13. Write down the Schrödinger equation for the vibrating H atom. Remember to include any boundary conditions on the solutions.
$-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,y)}{\partial x^2}-\frac{\hbar^2}{2m}\frac{\partial^2\psi(x,y)}{\partial y^2}+\frac{1}{2}k_xx^2\psi(x,y)+\frac{1}{2}k_yy^2\psi(x,y)=E\psi(x,y)$
$\lim_{x\rightarrow\pm\infty} \psi(x,y)=0\qquad \lim_{y\rightarrow\pm\infty} \psi(x,y)=0$
14. The Schrödinger equation is separable, so the wavefunctions are products of one-dimensional wavefunctions and the eigenenergies are sums of corresponding one-dimensional energies. Derive an expression for the H atom vibrational energy states, assuming $k_x = k_y/4 = k$.
Because it is separable, energies in $x$ and $y$ are additive.
$E = E_x + E_y = (v_x+\frac{1}{2})h\nu_x + (v_y+\frac{1}{2})h\nu_y$
$\nu_x= \frac{1}{2\pi}\sqrt{\frac{k}{m}}\qquad \nu_y= \frac{1}{2\pi}\sqrt{\frac{4k}{m}} = 2\nu_x$
$E = (v_x+\frac{1}{2})hν+ 2(v_y+\frac{1}{2})hν=(v_x+2v_y+\frac{3}{2})hν,ν=\frac{1}{2\pi}\sqrt{\frac{k}{\mu}}$
15. A spectroscopic experiment reveals that the spacing between the first and second energy levels is 0.05 eV. What is $k$, in N/m?
End of explanation |
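For reference, the force constant in the last code cell follows from equating the observed spacing between the first and second levels to one quantum of the x-vibration (a sketch of the rearrangement, with m the 1 amu hydrogen mass):
$$\Delta E = h\nu \;\Rightarrow\; k = m(2\pi\nu)^2 = m\left(\frac{2\pi\Delta E}{h}\right)^2$$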
11,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 2 of 3
Step1: Let's generate a cubic network again, but with a different connectivity
Step2: This Network has pores distributed in a cubic lattice, but connected to diagonal neighbors due to the connectivity being set to 8 (the default is 6 which is orthogonal neighbors). The various options are outlined in the Cubic class's documentation which can be viewed with the Object Inspector in Spyder.
OpenPNM includes several other classes for generating networks including random topology based on Delaunay tessellations (Delaunay).
It is also possible to import networks <data_io>_ from external sources, such as networks extracted from tomographic images, or networks generated by external code.
Initialize and Build Multiple Geometry Objects
One of the main functionalities of OpenPNM is the ability to assign drastically different geometrical properties to different regions of the domain to create heterogeneous materials, such as layered structures. To demonstrate the motivation behind this feature, this tutorial will make a material that has different geometrical properties on the top and bottom surfaces compared to the internal pores. We need to create one Geometry object to manage the top and bottom pores, and a second to manage the remaining internal pores
Step3: The above statements result in two distinct Geometry objects, each applying to different regions of the domain. geom1 applies to only the pores on the top and bottom surfaces (automatically labeled 'top' and 'bottom' during the network generation step), while geom2 applies to the pores 'not' on the top and bottom surfaces.
The assignment of throats is more complicated and illustrates the find_neighbor_throats method, which is one of the more useful topological query methods <topology>_ on the Network class. In both of these calls, all throats connected to the given set of pores (Ps1 or Ps2) are found; however, the mode argument alters which throats are returned. The terms 'union' and 'intersection' are used in the "set theory" sense, such that 'union' returns all throats connected to the pores in the supplied list, while 'intersection' returns the throats that are only connected to the supplied pores. More specifically, if pores 1 and 2 have throats [1, 2] and [2, 3] as neighbors, respectively, then the 'union' mode returns [1, 2, 3] and the 'intersection' mode returns [2]. A detailed description of this behavior is given in
Step4: Each of the above lines produced an array of different length, corresponding to the number of pores assigned to each Geometry object. This is accomplished by the calls to geom1.Np and geom2.Np, which return the number of pores on each object.
Every Core object in OpenPNM possesses the same set of methods for managing their data, such as counting the number of pore and throat values they represent; thus, pn.Np returns 4000 for this 20 x 20 x 10 network, while geom1.Np and geom2.Np return 800 and 3200 respectively.
Accessing Data Distributed Between Geometries
The segmentation of the data between separate Geometry objects is essential to the management of pore-scale models, although it does create a complication
Step5: The following code illustrates the shortcut approach, which accomplishes the same result as above in a single line
Step6: This shortcut works because the pn dictionary does not contain an array called 'pore.seed', so all associated Geometry objects are then checked for the requested array(s). If it is found, then OpenPNM essentially performs the interleaving of the data as demonstrated by the manual approach and returns all the values together in a single full-size array. If it is not found, then a standard KeyError message is received.
This exchange of data between Network and Geometry makes sense if you consider that Network objects act as a sort of master object relative to Geometry objects. Networks apply to all pores and throats in the domain, while Geometries apply to subsets of the domain, so if the Network needs some values from all pores it has direct access.
Add Pore Size Distribution Models to Each Geometry
Pore-scale models are mathematical functions that are applied to each pore (or throat) in the network to produce some local property value. Each of the modules in OpenPNM (Network, Geometry, Phase and Physics) have a "library" of pre-written models located under "models" (i.e. Geometry.models). Below this level, the models are further categorized according to what property they calculate, and there are typical 2-3 models for each. For instance, under Geometry.models.pore_diameter you will see random, normal and weibull among others.
Pore size distribution models are assigned to each Geometry object as follows
Step7: Pore-scale models tend to be the most complex (i.e. confusing) aspects of OpenPNM, so it's worth dwelling on the important points of the above two commands
Step8: Instead of using statistical distribution functions, the above lines use the neighbor model, which determines each throat value based on the values found in 'pore_prop' on its neighboring pores. In this case, each throat is assigned the minimum pore diameter of its two neighboring pores. Other options for mode include 'max' and 'mean'.
We'll also need throat length as well as the cross-sectional area of pores and throats, for calculating the hydraulic conductance model later.
Step9: Create a Phase Object and Assign Thermophysical Property Models
For this tutorial, we will create a generic Phase object for water, then assign some pore-scale models for calculating their properties. Alternatively, we could use the prewritten Water class included in OpenPNM, which comes complete with the necessary pore-scale models, but this would defeat the purpose of the tutorial.
Step10: Note that all Phase objects are automatically assigned standard temperature and pressure conditions when created. This can be adjusted
Step11: A variety of pore-scale models are available for calculating Phase properties, generally taken from correlations in the literature. An empirical correlation specifically for the viscosity of water is available
Step12: Create Physics Objects for Each Geometry
Physics objects are where geometric information and thermophysical properties are combined to produce the pore and throat scale transport parameters. Thus we need to create one Physics object for EACH Phase and EACH Geometry
Step13: Next add the Hagen-Poiseuille model to both
Step14: The same function (mod) was passed as the model argument to both Physics objects. This means that both objects will calculate the hydraulic conductance using the same function. A model must be assigned to both objects in order for the 'throat.hydraulic_conductance' property be defined everywhere in the domain since each Physics applies to a unique selection of pores and throats.
The "pore-scale model" mechanism was specifically designed to allow for users to easily create their own custom models. Creating custom models is outlined in
Step15: Each Physics applies to the same subset of pores and throats as the Geometries, so its values are distributed spatially, but each Physics is also associated with a single Phase object. Consequently, only a Phase object can request all of the values within the domain pertaining to itself.
In other words, a Network object cannot aggregate the Physics data because it doesn't know which Phase is referred to. For instance, when asking for 'throat.hydraulic_conductance' it could refer to water or air conductivity, so it can only be requested by water or air.
Pore-Scale Models
Step16: Now, let's alter the Geometry objects by assigning new random seeds, and adjust the temperature of water.
Step17: So far we have not run the regenerate command on any of these objects, which means that the above changes have not yet been applied to all the dependent properties. Let's do this and examine what occurs at each step
Step18: These two lines trigger the re-calculation of all the size related models on each Geometry object.
Step19: This line causes the viscosity to be recalculated at the new temperature. Let's confirm that the hydraulic conductance has NOT yet changed since we have not yet regenerated the Physics objects' models
Step20: Finally, if we regenerate phys1 and phys2 we can see that the hydraulic conductance will be updated to reflect the new sizes on the Geometries and the new temperature on the Phase
Step21: Determine Permeability Tensor by Changing Inlet and Outlet Boundary Conditions
The
Step22: Set boundary conditions for flow in the X-direction
Step23: The resulting pressure field can be seen using Paraview
Step24: To find K, we need to solve Darcy's law
Step25: The dimensions of the network can be determined manually from the shape and spacing specified during its generation
Step26: The pressure drop was specified as 1 atm when setting boundary conditions, so Kxx can be found as
Step27: We can either create 2 new Algorithm objects to perform the simulations in the other two directions, or reuse alg by adjusting the boundary conditions and re-running it.
Step28: The first call to set_boundary_conditions used the overwrite mode, which replaces all existing boundary conditions on the alg object with the specified values. The second call uses the merge mode which adds new boundary conditions to any already present, which is the default behavior.
A new value for the flow rate must be recalculated, but all other parameters are equal to the X-direction
Step29: The values of Kxx and Kyy should be nearly identical since both of these directions are parallel to the layers of small surface pores. For the Z-direction
Step30: The permeability in the Z-direction is about half that in the other two directions due to the constrictions caused by the small surface pores. | Python Code:
import numpy as np
import scipy as sp
import openpnm as op
np.random.seed(10)
ws = op.Workspace()
ws.settings["loglevel"] = 40
Explanation: Tutorial 2 of 3: Digging Deeper into OpenPNM
This tutorial will follow the same outline as Getting Started, but will dig a little bit deeper at each step to reveal the important features of OpenPNM that were glossed over previously.
Learning Objectives
Explore different network topologies, and learn some handy topological query methods
Create a heterogeneous domain with different geometrical properties in different regions
Learn about data exchange between objects
Utilize pore-scale models for calculating properties of all types
Propagate changing geometrical and thermo-physical properties to all dependent properties
Calculate the permeability tensor for the stratified media
Use the Workspace Manager to save and load a simulation
Building a Cubic Network
As usual, start by importing the OpenPNM and Scipy packages:
End of explanation
pn = op.network.Cubic(shape=[20, 20, 10], spacing=0.0001, connectivity=8)
Explanation: Let's generate a cubic network again, but with a different connectivity:
End of explanation
Ps1 = pn.pores(['top', 'bottom'])
Ts1 = pn.find_neighbor_throats(pores=Ps1, mode='union')
geom1 = op.geometry.GenericGeometry(network=pn, pores=Ps1, throats=Ts1, name='boundaries')
Ps2 = pn.pores(['top', 'bottom'], mode='not')
Ts2 = pn.find_neighbor_throats(pores=Ps2, mode='xnor')
geom2 = op.geometry.GenericGeometry(network=pn, pores=Ps2, throats=Ts2, name='core')
Explanation: This Network has pores distributed in a cubic lattice, but connected to diagonal neighbors due to the connectivity being set to 8 (the default is 6 which is orthogonal neighbors). The various options are outlined in the Cubic class's documentation which can be viewed with the Object Inspector in Spyder.
OpenPNM includes several other classes for generating networks including random topology based on Delaunay tessellations (Delaunay).
It is also possible to import networks <data_io>_ from external sources, such as networks extracted from tomographic images, or networks generated by external code.
Initialize and Build Multiple Geometry Objects
One of the main functionalities of OpenPNM is the ability to assign drastically different geometrical properties to different regions of the domain to create heterogeneous materials, such as layered structures. To demonstrate the motivation behind this feature, this tutorial will make a material that has different geometrical properties on the top and bottom surfaces compared to the internal pores. We need to create one Geometry object to manage the top and bottom pores, and a second to manage the remaining internal pores:
End of explanation
geom1['pore.seed'] = np.random.rand(geom1.Np)*0.5 + 0.2
geom2['pore.seed'] = np.random.rand(geom2.Np)*0.5 + 0.2
Explanation: The above statements result in two distinct Geometry objects, each applying to different regions of the domain. geom1 applies to only the pores on the top and bottom surfaces (automatically labeled 'top' and 'bottom' during the network generation step), while geom2 applies to the pores 'not' on the top and bottom surfaces.
The assignment of throats is more complicated and illustrates the find_neighbor_throats method, which is one of the more useful topological query methods <topology>_ on the Network class. In both of these calls, all throats connected to the given set of pores (Ps1 or Ps2) are found; however, the mode argument alters which throats are returned. The terms 'union' and 'intersection' are used in the "set theory" sense, such that 'union' returns all throats connected to the pores in the supplied list, while 'intersection' returns the throats that are only connected to the supplied pores. More specifically, if pores 1 and 2 have throats [1, 2] and [2, 3] as neighbors, respectively, then the 'union' mode returns [1, 2, 3] and the 'intersection' mode returns [2]. A detailed description of this behavior is given in :ref:topology.
Assign Static Seed values to Each Geometry
In :ref:getting_started we only assigned 'static' values to the Geometry object, which we calculated explicitly. In this tutorial we will use the pore-scale models that are provided with OpenPNM. To get started, however, we'll assign static random seed values between 0 and 1 to each pore on both Geometry objects, by assigning random numbers to each Geometry's 'pore.seed' property:
End of explanation
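To make the 'union' versus 'intersection' behaviour discussed above concrete, here is a small sketch (my addition, reusing the pn object defined earlier; 'xnor' is the mode name used in this OpenPNM version for the intersection-like query):
Ps = pn.pores('top')[:2] # two example pores
Ts_any = pn.find_neighbor_throats(pores=Ps, mode='union') # throats connected to either pore
Ts_only = pn.find_neighbor_throats(pores=Ps, mode='xnor') # throats connected only to the supplied pores
print(len(Ts_any), len(Ts_only))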
seeds = np.zeros_like(pn.Ps, dtype=float)
seeds[pn.pores(geom1.name)] = geom1['pore.seed']
seeds[pn.pores(geom2.name)] = geom2['pore.seed']
print(np.all(seeds > 0)) # Ensure all zeros are overwritten
Explanation: Each of the above lines produced an array of different length, corresponding to the number of pores assigned to each Geometry object. This is accomplished by the calls to geom1.Np and geom2.Np, which return the number of pores on each object.
Every Core object in OpenPNM possesses the same set of methods for managing their data, such as counting the number of pore and throat values they represent; thus, pn.Np returns 4000 for this 20 x 20 x 10 network, while geom1.Np and geom2.Np return 800 and 3200 respectively.
Accessing Data Distributed Between Geometries
The segmentation of the data between separate Geometry objects is essential to the management of pore-scale models, although it does create a complication: it's not easy to obtain a single array containing all the values of a given property for the whole network. It is technically possible to piece this data together manually since we know the locations where each Geometry object applies, but this is tedious so OpenPNM provides a shortcut. First, let's illustrate the manual approach using the 'pore.seed' values we have defined:
End of explanation
seeds = pn['pore.seed']
Explanation: The following code illustrates the shortcut approach, which accomplishes the same result as above in a single line:
End of explanation
geom1.add_model(propname='pore.diameter',
model=op.models.geometry.pore_size.normal,
scale=0.00001, loc=0.00005,
seeds='pore.seed')
geom2.add_model(propname='pore.diameter',
model=op.models.geometry.pore_size.weibull,
shape=1.2, scale=0.00001, loc=0.00005,
seeds='pore.seed')
Explanation: This shortcut works because the pn dictionary does not contain an array called 'pore.seed', so all associated Geometry objects are then checked for the requested array(s). If it is found, then OpenPNM essentially performs the interleaving of the data as demonstrated by the manual approach and returns all the values together in a single full-size array. If it is not found, then a standard KeyError message is received.
This exchange of data between Network and Geometry makes sense if you consider that Network objects act as a sort of master object relative to Geometry objects. Networks apply to all pores and throats in the domain, while Geometries apply to subsets of the domain, so if the Network needs some values from all pores it has direct access.
Add Pore Size Distribution Models to Each Geometry
Pore-scale models are mathematical functions that are applied to each pore (or throat) in the network to produce some local property value. Each of the modules in OpenPNM (Network, Geometry, Phase and Physics) have a "library" of pre-written models located under "models" (i.e. Geometry.models). Below this level, the models are further categorized according to what property they calculate, and there are typical 2-3 models for each. For instance, under Geometry.models.pore_diameter you will see random, normal and weibull among others.
Pore size distribution models are assigned to each Geometry object as follows:
End of explanation
geom1.add_model(propname='throat.diameter',
model=op.models.misc.from_neighbor_pores,
pore_prop='pore.diameter',
mode='min')
geom2.add_model(propname='throat.diameter',
model=op.models.misc.from_neighbor_pores,
mode='min')
pn['pore.diameter'][pn['throat.conns']]
Explanation: Pore-scale models tend to be the most complex (i.e. confusing) aspects of OpenPNM, so it's worth dwelling on the important points of the above two commands:
Both geom1 and geom2 have a models attribute where the parameters specified in the add command are stored for future use if/when needed. The models attribute actually contains a ModelsDict object which is a customized dictionary for storing and managing this type of information.
The propname argument specifies which property the model calculates. This means that the numerical results of the model calculation will be saved in their respective Geometry objects as geom1['pore.diameter'] and geom2['pore.diameter'].
Each model stores it's result under the same propname but these values do not conflict since each Geometry object presides over a unique subset of pores and throats.
The model argument contains a handle to the desired function, which is extracted from the models library of the relevant Module (Geometry in this case). Each Geometry object has been assigned a different statistical model, normal and weibull. This ability to apply different models to different regions of the domain is reason multiple Geometry objects are permitted. The added complexity is well worth the added flexibility.
The remaining arguments are those required by the chosen model. In the above cases, these are the parameters that define the statistical distribution. Note that the mean pore size for geom1 will be 20 um (set by scale) while for geom2 it will be 50 um, thus creating the smaller surface pores as intended. The pore-scale models are well documented regarding what arguments are required and their meaning; as usual these can be viewed with Object Inspector in Spyder.
Now that we've added pore diameter models the each Geometry we can visualize the network in Paraview to confirm that distinctly different pore sizes on the surface regions:
<img src="http://i.imgur.com/5F70ens.png" style="width: 60%" align="left"/>
Add Additional Pore-Scale Models to Each Geometry
In addition to pore diameter, there are several other geometrical properties needed to perform a permeability simulation. Let's start with throat diameter:
End of explanation
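Since each Geometry stores the parameters listed above in its models attribute, a quick way to inspect what has been assigned so far (a minimal sketch using the objects from this tutorial; the exact printout depends on the OpenPNM version):
print(geom1.models) # overview of all models assigned to geom1
print(geom1.models['pore.diameter']) # stored parameters of the pore.diameter model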
geom1.add_model(propname='throat.endpoints',
model=op.models.geometry.throat_endpoints.spherical_pores)
geom2.add_model(propname='throat.endpoints',
model=op.models.geometry.throat_endpoints.spherical_pores)
geom1.add_model(propname='throat.area',
model=op.models.geometry.throat_area.cylinder)
geom2.add_model(propname='throat.area',
model=op.models.geometry.throat_area.cylinder)
geom1.add_model(propname='pore.area',
model=op.models.geometry.pore_area.sphere)
geom2.add_model(propname='pore.area',
model=op.models.geometry.pore_area.sphere)
geom1.add_model(propname='throat.conduit_lengths',
model=op.models.geometry.throat_length.conduit_lengths)
geom2.add_model(propname='throat.conduit_lengths',
model=op.models.geometry.throat_length.conduit_lengths)
Explanation: Instead of using statistical distribution functions, the above lines use the neighbor model, which determines each throat value based on the values found in 'pore_prop' on its neighboring pores. In this case, each throat is assigned the minimum pore diameter of its two neighboring pores. Other options for mode include 'max' and 'mean'.
We'll also need throat length as well as the cross-sectional area of pores and throats, for calculating the hydraulic conductance model later.
End of explanation
water = op.phases.GenericPhase(network=pn)
air = op.phases.GenericPhase(network=pn)
Explanation: Create a Phase Object and Assign Thermophysical Property Models
For this tutorial, we will create a generic Phase object for water, then assign some pore-scale models for calculating their properties. Alternatively, we could use the prewritten Water class included in OpenPNM, which comes complete with the necessary pore-scale models, but this would defeat the purpose of the tutorial.
End of explanation
water['pore.temperature'] = 353 # K
Explanation: Note that all Phase objects are automatically assigned standard temperature and pressure conditions when created. This can be adjusted:
End of explanation
water.add_model(propname='pore.viscosity',
model=op.models.phases.viscosity.water)
Explanation: A variety of pore-scale models are available for calculating Phase properties, generally taken from correlations in the literature. An empirical correlation specifically for the viscosity of water is available:
End of explanation
phys1 = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom1)
phys2 = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom2)
Explanation: Create Physics Objects for Each Geometry
Physics objects are where geometric information and thermophysical properties are combined to produce the pore and throat scale transport parameters. Thus we need to create one Physics object for EACH Phase and EACH Geometry:
End of explanation
mod = op.models.physics.hydraulic_conductance.hagen_poiseuille
phys1.add_model(propname='throat.hydraulic_conductance', model=mod)
phys2.add_model(propname='throat.hydraulic_conductance', model=mod)
Explanation: Next add the Hagen-Poiseuille model to both:
End of explanation
g = water['throat.hydraulic_conductance']
Explanation: The same function (mod) was passed as the model argument to both Physics objects. This means that both objects will calculate the hydraulic conductance using the same function. A model must be assigned to both objects in order for the 'throat.hydraulic_conductance' property be defined everywhere in the domain since each Physics applies to a unique selection of pores and throats.
The "pore-scale model" mechanism was specifically designed to allow for users to easily create their own custom models. Creating custom models is outlined in :ref:advanced_usage.
Accessing Data Distributed Between Multiple Physics Objects
Just as Network objects can retrieve data from separate Geometries as a single array with values in the correct locations, Phase objects can retrieve data from Physics objects as follows:
End of explanation
g1 = phys1['throat.hydraulic_conductance'] # Save this for later
g2 = phys2['throat.hydraulic_conductance'] # Save this for later
Explanation: Each Physics applies to the same subset of pores and throats as the Geometries, so its values are distributed spatially, but each Physics is also associated with a single Phase object. Consequently, only a Phase object can request all of the values within the domain pertaining to itself.
In other words, a Network object cannot aggregate the Physics data because it doesn't know which Phase is referred to. For instance, when asking for 'throat.hydraulic_conductance' it could refer to water or air conductivity, so it can only be requested by water or air.
Pore-Scale Models: The Big Picture
Having created all the necessary objects with pore-scale models, it is now time to demonstrate why the OpenPNM pore-scale model approach is so powerful. First, let's inspect the current value of hydraulic conductance in throat 1 on phys1 and phys2:
End of explanation
geom1['pore.seed'] = np.random.rand(geom1.Np)
geom2['pore.seed'] = np.random.rand(geom2.Np)
water['pore.temperature'] = 370 # K
Explanation: Now, let's alter the Geometry objects by assigning new random seeds, and adjust the temperature of water.
End of explanation
geom1.regenerate_models()
geom2.regenerate_models()
Explanation: So far we have not run the regenerate command on any of these objects, which means that the above changes have not yet been applied to all the dependent properties. Let's do this and examine what occurs at each step:
End of explanation
water.regenerate_models()
Explanation: These two lines trigger the re-calculation of all the size related models on each Geometry object.
End of explanation
print(np.all(phys1['throat.hydraulic_conductance'] == g1)) # g1 was saved above
print(np.all(phys2['throat.hydraulic_conductance'] == g2) ) # g2 was saved above
Explanation: This line causes the viscosity to be recalculated at the new temperature. Let's confirm that the hydraulic conductance has NOT yet changed since we have not yet regenerated the Physics objects' models:
End of explanation
phys1.regenerate_models()
phys2.regenerate_models()
print(np.all(phys1['throat.hydraulic_conductance'] != g1))
print(np.all(phys2['throat.hydraulic_conductance'] != g2))
Explanation: Finally, if we regenerate phys1 and phys2 we can see that the hydraulic conductance will be updated to reflect the new sizes on the Geometries and the new temperature on the Phase:
End of explanation
alg = op.algorithms.StokesFlow(network=pn, phase=water)
Explanation: Determine Permeability Tensor by Changing Inlet and Outlet Boundary Conditions
The :ref:getting started tutorial <getting_started> already demonstrated the process of performing a basic permeability simulation. In this tutorial, we'll perform the simulation in all three perpendicular dimensions to obtain the permeability tensor of our heterogeneous anisotropic material.
End of explanation
alg.set_value_BC(values=202650, pores=pn.pores('right'))
alg.set_value_BC(values=101325, pores=pn.pores('left'))
alg.run()
Explanation: Set boundary conditions for flow in the X-direction:
End of explanation
Q = alg.rate(pores=pn.pores('right'))
Explanation: The resulting pressure field can be seen using Paraview:
<img src="http://i.imgur.com/ugX0LFG.png" style="width: 60%" align="left"/>
To determine the permeability coefficient we must find the flow rate through the network to use in Darcy's law. The StokesFlow class (and all analogous transport algorithms) possess a rate method that calculates the net transport through a given set of pores:
End of explanation
mu = np.mean(water['pore.viscosity'])
Explanation: To find K, we need to solve Darcy's law: Q = KA/(mu*L)(P_in - P_out). This requires knowing the viscosity and macroscopic network dimensions:
End of explanation
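Rearranging the expression above gives the permeability coefficient directly (same symbols as in the text):
$$K = \frac{Q\,\mu\,L}{A\,(P_{in} - P_{out})}$$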
L = 20 * 0.0001
A = 20 * 10 * (0.0001**2)
Explanation: The dimensions of the network can be determined manually from the shape and spacing specified during its generation:
End of explanation
Kxx = Q * mu * L / (A * 101325)
Explanation: The pressure drop was specified as 1 atm when setting boundary conditions, so Kxx can be found as:
End of explanation
alg.set_value_BC(values=202650, pores=pn.pores('front'))
alg.set_value_BC(values=101325, pores=pn.pores('back'))
alg.run()
Explanation: We can either create 2 new Algorithm objects to perform the simulations in the other two directions, or reuse alg by adjusting the boundary conditions and re-running it.
End of explanation
Q = alg.rate(pores=pn.pores('front'))
Kyy = Q * mu * L / (A * 101325)
Explanation: The first call to set_boundary_conditions used the overwrite mode, which replaces all existing boundary conditions on the alg object with the specified values. The second call uses the merge mode which adds new boundary conditions to any already present, which is the default behavior.
A new value for the flow rate must be recalculated, but all other parameters are equal to the X-direction:
End of explanation
alg.set_value_BC(values=202650, pores=pn.pores('top'))
alg.set_value_BC(values=101325, pores=pn.pores('bottom'))
alg.run()
Q = alg.rate(pores=pn.pores('top'))
L = 10 * 0.0001
A = 20 * 20 * (0.0001**2)
Kzz = Q * mu * L / (A * 101325)
Explanation: The values of Kxx and Kyy should be nearly identical since both of these directions are parallel to the layers of small surface pores. For the Z-direction:
End of explanation
print(Kxx, Kyy, Kzz)
Explanation: The permeability in the Z-direction is about half that in the other two directions due to the constrictions caused by the small surface pores.
End of explanation |
11,422 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Sentiment analysis model using deep learning
| Python Code::
import tensorflow as tf
model = tf.keras.models.Sequential()
# embedding turns word indices into dense n_dim-dimensional vectors
model.add(tf.keras.layers.Embedding(n_most_words, n_dim, input_length=X_train.shape[1]))
model.add(tf.keras.layers.Dropout(0.25))
# 1-D convolution extracts local n-gram features
model.add(tf.keras.layers.Conv1D(64, 3, padding='same', activation='relu'))
# LSTM summarizes the sequence into a single vector
model.add(tf.keras.layers.LSTM(64, dropout=0.25, recurrent_dropout=0.25))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Dense(50, activation='relu'))
# 3-class softmax output (e.g. negative / neutral / positive)
model.add(tf.keras.layers.Dense(3, activation='softmax'))
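A possible way to compile and train the model sketched above (my addition; X_train, y_train, n_most_words and n_dim are assumed to be defined elsewhere, and the optimizer, loss and epoch count are only illustrative choices):
# integer class labels pair with sparse_categorical_crossentropy;
# use categorical_crossentropy instead if y_train is one-hot encoded
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.1)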
|
11,423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https
Step2: dm_control
More detailed instructions in this tutorial.
Institutional MuJoCo license.
Step4: Machine-locked MuJoCo license.
Step5: Imports
Step6: Data
Step7: Dataset and environment
Step8: D4PG learner
Step9: Training loop
Step10: Evaluation | Python Code:
!pip install dm-acme
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
!pip install dm-sonnet
!git clone https://github.com/deepmind/deepmind-research.git
%cd deepmind-research
Explanation: Copyright 2020 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
RL Unplugged: Offline D4PG - DM control
Guide to training an Acme D4PG agent on DM control data.
<a href="https://colab.research.google.com/github/deepmind/deepmind-research/blob/master/rl_unplugged/dm_control_suite_d4pg.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Installation
End of explanation
#@title Edit and run
mjkey = """
REPLACE THIS LINE WITH YOUR MUJOCO LICENSE KEY
""".strip()
mujoco_dir = "$HOME/.mujoco"
# Install OpenGL deps
!apt-get update && apt-get install -y --no-install-recommends \
libgl1-mesa-glx libosmesa6 libglew2.0
# Fetch MuJoCo binaries from Roboti
!wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip
!unzip -o -q mujoco.zip -d "$mujoco_dir"
# Copy over MuJoCo license
!echo "$mjkey" > "$mujoco_dir/mjkey.txt"
# Configure dm_control to use the OSMesa rendering backend
%env MUJOCO_GL=osmesa
# Install dm_control
!pip install dm_control
Explanation: dm_control
More detailed instructions in this tutorial.
Institutional MuJoCo license.
End of explanation
#@title Add your MuJoCo License and run
mjkey = """
""".strip()
mujoco_dir = "$HOME/.mujoco"
# Install OpenGL dependencies
!apt-get update && apt-get install -y --no-install-recommends \
libgl1-mesa-glx libosmesa6 libglew2.0
# Get MuJoCo binaries
!wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip
!unzip -o -q mujoco.zip -d "$mujoco_dir"
# Copy over MuJoCo license
!echo "$mjkey" > "$mujoco_dir/mjkey.txt"
# Install dm_control
!pip install dm_control[locomotion_mazes]
# Configure dm_control to use the OSMesa rendering backend
%env MUJOCO_GL=osmesa
Explanation: Machine-locked MuJoCo license.
End of explanation
import collections
import copy
from typing import Mapping, Sequence
import acme
from acme import specs
from acme.agents.tf import actors
from acme.agents.tf import d4pg
from acme.tf import networks
from acme.tf import utils as tf2_utils
from acme.utils import loggers
from acme.wrappers import single_precision
from acme.tf import utils as tf2_utils
import numpy as np
from rl_unplugged import dm_control_suite
import sonnet as snt
import tensorflow as tf
Explanation: Imports
End of explanation
task_name = 'cartpole_swingup' #@param
tmp_path = '/tmp/dm_control_suite'
gs_path = 'gs://rl_unplugged/dm_control_suite'
!mkdir -p {tmp_path}/{task_name}
!gsutil cp {gs_path}/{task_name}/* {tmp_path}/{task_name}
num_shards_str, = !ls {tmp_path}/{task_name}/* | wc -l
num_shards = int(num_shards_str)
Explanation: Data
End of explanation
batch_size = 10 #@param
task = dm_control_suite.ControlSuite(task_name)
environment = task.environment
environment_spec = specs.make_environment_spec(environment)
dataset = dm_control_suite.dataset(
'/tmp',
data_path=task.data_path,
shapes=task.shapes,
uint8_features=task.uint8_features,
num_threads=1,
batch_size=batch_size,
num_shards=num_shards)
def discard_extras(sample):
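  # Keep only the first five elements of each sample's data (any extra fields are dropped).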
return sample._replace(data=sample.data[:5])
dataset = dataset.map(discard_extras).batch(batch_size)
Explanation: Dataset and environment
End of explanation
# Create the networks to optimize.
action_spec = environment_spec.actions
action_size = np.prod(action_spec.shape, dtype=int)
policy_network = snt.Sequential([
tf2_utils.batch_concat,
networks.LayerNormMLP(layer_sizes=(300, 200, action_size)),
networks.TanhToSpec(spec=environment_spec.actions)])
critic_network = snt.Sequential([
networks.CriticMultiplexer(
observation_network=tf2_utils.batch_concat,
action_network=tf.identity,
critic_network=networks.LayerNormMLP(
layer_sizes=(400, 300),
activate_final=True)),
# Value-head gives a 51-atomed delta distribution over state-action values.
networks.DiscreteValuedHead(vmin=-150., vmax=150., num_atoms=51)])
# Create the target networks
target_policy_network = copy.deepcopy(policy_network)
target_critic_network = copy.deepcopy(critic_network)
# Create variables.
tf2_utils.create_variables(network=policy_network,
input_spec=[environment_spec.observations])
tf2_utils.create_variables(network=critic_network,
input_spec=[environment_spec.observations,
environment_spec.actions])
tf2_utils.create_variables(network=target_policy_network,
input_spec=[environment_spec.observations])
tf2_utils.create_variables(network=target_critic_network,
input_spec=[environment_spec.observations,
environment_spec.actions])
# The learner updates the parameters (and initializes them).
learner = d4pg.D4PGLearner(
policy_network=policy_network,
critic_network=critic_network,
target_policy_network=target_policy_network,
target_critic_network=target_critic_network,
dataset=dataset,
discount=0.99,
target_update_period=100)
Explanation: D4PG learner
End of explanation
for _ in range(100):
learner.step()
Explanation: Training loop
End of explanation
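A minimal variation of the same loop (an illustrative sketch only; it uses nothing beyond the learner object created above) that reports progress periodically:
# Illustrative sketch: the same training loop with simple progress reporting.
num_learner_steps = 100
for step in range(num_learner_steps):
  learner.step()
  if (step + 1) % 20 == 0:
    print(f'completed {step + 1}/{num_learner_steps} learner steps')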
# Create a logger.
logger = loggers.TerminalLogger(label='evaluation', time_delta=1.)
# Create an environment loop.
loop = acme.EnvironmentLoop(
environment=environment,
actor=actors.DeprecatedFeedForwardActor(policy_network),
logger=logger)
loop.run(5)
Explanation: Evaluation
End of explanation |
11,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 2
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features
Step4: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows
Step5: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this book we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above
Step6: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
Step7: Test your function by computing the RSS on TEST data for the example model
Step8: Create some new features
We often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), but we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature, so first you should import it from the math library.
Step9: Next create the following 4 new features as columns in both TEST and TRAIN data
Step10: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally nonsensical, but we will do it anyway (you'll see why)
Quiz Question
Step11: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features, the second model will add one more feature, and the third will add a few more
Step12: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients
Step13: Quiz Question
Step14: Quiz Question | Python Code:
import graphlab
graphlab.product_key.set_product_key("C0C2-04B4-D94B-70F6-8771-86F9-C6E1-E122")
Explanation: Regression Week 2: Multiple Regression (Interpretation)
The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions.
In this notebook you will use data on house sales in King County to predict prices using multiple regression. You will:
* Use SFrames to do some feature engineering
* Use built-in graphlab functions to compute the regression weights (coefficients/parameters)
* Given the regression weights, predictors and outcome write a function to compute the Residual Sum of Squares
* Look at coefficients and interpret their meanings
* Evaluate multiple models via RSS
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/kc_house_data.gl')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features,
validation_set = None)
Explanation: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features:
example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code:
(Aside: We set validation_set = None to ensure that the results are always the same)
End of explanation
example_weight_summary = example_model.get("coefficients")
print example_weight_summary
Explanation: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:
End of explanation
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
Explanation: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:
End of explanation
import math
def get_residual_sum_of_squares(model, data, outcome):
# First get the predictions
predict = model.predict(data)
# Then compute the residuals/errors
residuals = []
for i in range(0, len(predict)):
error = outcome[i] - predict[i]
residuals.append(math.pow(error,2))
    # Then add up the squared residuals
RSS = reduce(lambda x,y : x + y, residuals)
return(RSS)
Explanation: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
End of explanation
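As an aside (not part of the original assignment), the same quantity can be computed without an explicit Python loop, assuming the predictions and outcomes can be converted to numpy arrays (e.g. via list()):
import numpy as np
def get_residual_sum_of_squares_vectorized(model, data, outcome):
    # Hypothetical alternative: form the residual vector, then take the sum of squares.
    errors = np.array(list(outcome)) - np.array(list(model.predict(data)))
    return (errors ** 2).sum()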
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_train # should be 2.7376153833e+14
Explanation: Test your function by computing the RSS on TEST data for the example model:
End of explanation
from math import log
Explanation: Create some new features
We often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), but we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature, so first you should import it from the math library.
End of explanation
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)
# create the remaining 3 features in both TEST and TRAIN data
train_data['bed_bath_rooms'] = train_data['bedrooms'] * train_data['bathrooms']
test_data['bed_bath_rooms'] = test_data['bedrooms'] * test_data['bathrooms']
train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x: log(x))
test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x: log(x))
train_data['lat_plus_long'] = train_data['lat'] + train_data['long']
test_data['lat_plus_long'] = test_data['lat'] + test_data['long']
Explanation: Next create the following 4 new features as columns in both TEST and TRAIN data:
* bedrooms_squared = bedrooms*bedrooms
* bed_bath_rooms = bedrooms*bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long
As an example here's the first one:
End of explanation
print sum(test_data['bedrooms_squared'])/len(test_data['bedrooms_squared'])
print sum(test_data['bed_bath_rooms'])/len(test_data['bed_bath_rooms'])
print sum(test_data['log_sqft_living'])/len(test_data['log_sqft_living'])
print sum(test_data['lat_plus_long'])/len(test_data['lat_plus_long'])
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally nonsensical, but we will do it anyway (you'll see why)
Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)
End of explanation
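For intuition (illustrative numbers only, unrelated to the actual data), the log transform compresses large square-footage values far more than small ones:
from math import log
# 1000 vs 4000 square feet differ by a factor of 4, but their logs differ by only log(4) ~ 1.39
print log(1000), log(4000), log(4000) - log(1000)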
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
Explanation: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features, the second model will add one more feature, and the third will add a few more:
* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude
* Model 2: add bedrooms*bathrooms
* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude
End of explanation
# Learn the three models: (don't forget to set validation_set = None)
model_1 = graphlab.linear_regression.create(train_data, target = 'price', features = model_1_features,
validation_set = None)
model_2 = graphlab.linear_regression.create(train_data, target = 'price', features = model_2_features,
validation_set = None)
model_3 = graphlab.linear_regression.create(train_data, target = 'price', features = model_3_features,
validation_set = None)
# Examine/extract each model's coefficients:
model_1_weight_summary = model_1.get("coefficients")
print model_1_weight_summary
model_2_weight_summary = model_2.get("coefficients")
print model_2_weight_summary
model_3_weight_summary = model_3.get("coefficients")
print model_3_weight_summary
Explanation: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:
End of explanation
# Compute the RSS on TRAINING data for each of the three models and record the values:
rss_model_1_train = get_residual_sum_of_squares(model_1, train_data, train_data['price'])
print rss_model_1_train
rss_model_2_train = get_residual_sum_of_squares(model_2, train_data, train_data['price'])
print rss_model_2_train
rss_model_3_train = get_residual_sum_of_squares(model_3, train_data, train_data['price'])
print rss_model_3_train
Explanation: Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?
Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?
Think about what this means.
Comparing multiple models
Now that you've learned three models and extracted the model weights we want to evaluate which model is best.
First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.
End of explanation
# Compute the RSS on TESTING data for each of the three models and record the values:
rss_model_1_test = get_residual_sum_of_squares(model_1, test_data, test_data['price'])
print rss_model_1_test
rss_model_2_test = get_residual_sum_of_squares(model_2, test_data, test_data['price'])
print rss_model_2_test
rss_model_3_test = get_residual_sum_of_squares(model_3, test_data, test_data['price'])
print rss_model_3_test
Explanation: Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected?
Now compute the RSS on TEST data for each of the three models.
End of explanation |
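As a small convenience (illustrative only; it reuses the test-RSS variables computed above), the three test values can be compared directly:
rss_test = {'model_1': rss_model_1_test, 'model_2': rss_model_2_test, 'model_3': rss_model_3_test}
print min(rss_test, key=rss_test.get)  # name of the model with the lowest RSS on TEST data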
11,425 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have two input arrays x and y of the same shape. I need to run each of their elements with matching indices through a function, then store the result at those indices in a third array z. What is the most pythonic way to accomplish this? Right now I have four four loops - I'm sure there is an easier way. | Problem:
import numpy as np
x = [[2, 2, 2],
[2, 2, 2],
[2, 2, 2]]
y = [[3, 3, 3],
[3, 3, 3],
[3, 3, 1]]
x_new = np.array(x)
y_new = np.array(y)
z = x_new + y_new |
11,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-3', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
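For illustration only (a hypothetical choice, not a statement about the actual FIO-RONM configuration), an ENUM property such as this one is filled by passing one of the listed valid choices to DOC.set_value:
# Hypothetical example -- uncomment and pick whichever choice applies to the model being documented:
# DOC.set_value("OGCM")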
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Does the background interior mixing of tracers use a vertical profile (i.e. is it NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Does the background interior mixing of momentum use a vertical profile (i.e. is it NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
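As an illustration only, a model with a linear implicit free surface would record the matching string from the valid choices above (the "Other: [Please specify]" entry is there for schemes not in the list):
DOC.set_value("Linear implicit")  # placeholder choice - pick the entry that matches the model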
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
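A STRING property takes free text; the sentence below is only a placeholder showing the expected level of detail.
DOC.set_value("No specific sill overflow treatment beyond the bottom boundary layer scheme")  # placeholder description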
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
11,427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary
scikit-learn API
X
Step1: Model complexity, overfitting, underfitting
Pipelines
Step2: Scoring metrics
Step3: Data Wrangling | Python Code:
import numpy as np  # needed for np.logspace below (and np.array in a later cell)
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score  # moved to sklearn.model_selection in newer scikit-learn
digits = load_digits()
X, y = digits.data / 16., digits.target
cross_val_score(LogisticRegression(), X, y, cv=5)
from sklearn.grid_search import GridSearchCV            # moved to sklearn.model_selection in newer scikit-learn
from sklearn.cross_validation import train_test_split   # moved to sklearn.model_selection in newer scikit-learn
X_train, X_test, y_train, y_test = train_test_split(X, y)
grid = GridSearchCV(LogisticRegression(), param_grid={'C': np.logspace(-3, 2, 6)})
grid.fit(X_train, y_train)
grid.score(X_test, y_test)
Explanation: Summary
scikit-learn API
X : data, 2d numpy array or scipy sparse matrix of shape (n_samples, n_features)
y : targets, 1d numpy array of shape (n_samples,)
<table>
<tr style="border:None; font-size:20px; padding:10px;"><th colspan=2>``model.fit(X_train, [y_train])``</td></tr>
<tr style="border:None; font-size:20px; padding:10px;"><th>``model.predict(X_test)``</th><th>``model.transform(X_test)``</th></tr>
<tr style="border:None; font-size:20px; padding:10px;"><td>Classification</td><td>Preprocessing</td></tr>
<tr style="border:None; font-size:20px; padding:10px;"><td>Regression</td><td>Dimensionality Reduction</td></tr>
<tr style="border:None; font-size:20px; padding:10px;"><td>Clustering</td><td>Feature Extraction</td></tr>
<tr style="border:None; font-size:20px; padding:10px;"><td> </td><td>Feature selection</td></tr>
</table>
Model evaluation and parameter selection
End of explanation
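For reference, the fitted grid search object also exposes the parameter setting it selected and its cross-validated score (the exact numbers depend on the random train/test split):
print(grid.best_params_)  # e.g. {'C': 1.0} - value varies with the split
print(grid.best_score_)   # mean cross-validation accuracy of the best setting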
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest
pipe = make_pipeline(SelectKBest(k=59), LogisticRegression())
pipe.fit(X_train, y_train)
pipe.score(X_test, y_test)
Explanation: Model complexity, overfitting, underfitting
Pipelines
End of explanation
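A pipeline can also be tuned as a single estimator: with make_pipeline, step parameters are addressed as '<stepname>__<parameter>' (the candidate values below are arbitrary examples):
param_grid = {'selectkbest__k': [10, 30, 59],
              'logisticregression__C': np.logspace(-3, 2, 6)}
grid_pipe = GridSearchCV(make_pipeline(SelectKBest(), LogisticRegression()),
                         param_grid=param_grid)
grid_pipe.fit(X_train, y_train)
grid_pipe.score(X_test, y_test)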
cross_val_score(LogisticRegression(C=.01), X, y == 3, cv=5)
cross_val_score(LogisticRegression(C=.01), X, y == 3, cv=5, scoring="roc_auc")
Explanation: Scoring metrics
End of explanation
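Other built-in scorers can be requested by name in the same way; which metric is appropriate depends on the problem, and these are just further examples for the binary "is it a 3?" task:
cross_val_score(LogisticRegression(C=.01), X, y == 3, cv=5, scoring="f1")
cross_val_score(LogisticRegression(C=.01), X, y == 3, cv=5, scoring="average_precision")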
from sklearn.preprocessing import OneHotEncoder  # note: categorical_features was later removed; newer scikit-learn uses ColumnTransformer for this
X = np.array([[15.9, 1], # from Tokyo
[21.5, 2], # from New York
[31.3, 0], # from Paris
[25.1, 2], # from New York
[63.6, 1], # from Tokyo
[14.4, 1], # from Tokyo
])
y = np.array([0, 1, 1, 1, 0, 0])
encoder = OneHotEncoder(categorical_features=[1], sparse=False)
pipe = make_pipeline(encoder, LogisticRegression())
pipe.fit(X, y)
pipe.score(X, y)
Explanation: Data Wrangling
End of explanation |
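Fitting the encoder on its own shows what the downstream model actually sees: the categorical city column is expanded into one indicator column per city, while the numeric column is passed through unchanged.
encoder.fit_transform(X)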
11,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
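For example (the name and e-mail below are placeholders, not real document authors):
DOC.set_author("Jane Doe", "jane.doe@example.org")  # placeholder author - replace with real details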
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
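An INTEGER property is set with a bare number; the figure below is purely illustrative.
DOC.set_value(60)  # hypothetical tracer count - replace with the model's actual number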
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
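For illustration only, a model whose surface emissions come from, say, vegetation and anthropogenic activity might fill in the cell above along these lines (assuming, as the 0.N cardinality suggests, that DOC.set_value can be called once per selected value):
# DOC.set_value("Vegetation")
# DOC.set_value("Anthropogenic")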
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
11,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Network Traffic Forecasting (using time series data)
In telco, accurate forecast of KPIs (e.g. network traffic, utilizations, user experience, etc.) for communication networks ( 2G/3G/4G/5G/wired) can help predict network failures, allocate resource, or save energy.
In this notebook, we demonstrate a reference use case where we use the network traffic KPI(s) in the past to predict traffic KPI(s) in the future. We demostrate how to do univariate forecasting (predict only 1 series), and multivariate forecasting (predicts more than 1 series at the same time) using Project Chronos.
For demonstration, we use the publicly available network traffic data repository maintained by the WIDE project and in particular, the network traffic traces aggregated every 2 hours (i.e. AverageRate in Mbps/Gbps and Total Bytes) in year 2018 and 2019 at the transit link of WIDE to the upstream ISP (dataset link).
Helper functions
This section defines some helper functions to be used in the following procedures. You can refer to it later when they're used.
Step2: Download raw dataset and load into dataframe
Now we download the dataset and load it into a pandas dataframe. Steps are as below.
First, run the script get_data.sh to download the raw data. It will download the monthly aggregated traffic data in year 2018 and 2019 into data folder. The raw data contains aggregated network traffic (average MBPs and total bytes) as well as other metrics.
Second, run extract_data.sh to extract relavant traffic KPI's from raw data, i.e. AvgRate for average use rate, and total for total bytes. The script will extract the KPI's with timestamps into data/data.csv.
Finally, use pandas to load data/data.csv into a dataframe as shown below
Step3: Below are some example records of the data
Step4: Data pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset.
For the network traffic data we're using, the processing contains 2 parts
Step5: Plot the data to see how the KPI's look like
Step6: Feature Engineering & Data Preperation
For feature engineering, we use year, month, week, day of week and hour as features in addition to the target KPI values.
For data preperation, we impute the data to handle missing data and scale the data.
We generate a built-in TSDataset to complete the whole processing.
Step7: Time series forecasting
Univariate forecasting
For univariate forecasting, we forecast AvgRate only. We need to roll the data on corresponding target column with tsdataset.
Step8: For univariate forcasting, we use LSTMForecaster for forecasting.
Step9: First we initiate a LSTMForecaster.
* feature_dim should match the training data input feature, so we just use the last dimension of train data shape.
* target_dim equals the variate num we want to predict. We set target_dim=1 for univariate forecasting.
Step10: Then we use fit to train the model. Wait sometime for it to finish.
Step11: After training is finished. You can use the forecaster to do prediction and evaluation.
Step12: Since we have used standard scaler to scale the input data (including the target values), we need to inverse the scaling on the predicted values too.
Step13: calculate the symetric mean absolute percentage error.
Step14: multivariate forecasting
For multivariate forecasting, we forecast AvgRate and total at the same time. We need to roll the data on corresponding target column with tsdataset.
Step15: For multivariate forecasting, we use MTNetForecaster for forecasting.
Step16: First, we initialize a mtnet_forecaster according to input data shape. The lookback length is equal to (long_series_num+1)*series_length Details refer to chronos docs.
Step17: MTNet needs to preprocess the X into another format, so we call MTNetForecaster.preprocess_input on train_x and test_x.
Step18: Now we train the model and wait till it finished.
Step19: Use the model for prediction and inverse the scaling of the prediction results
Step20: plot actual and prediction values for AvgRate KPI
Step21: plot actual and prediction values for total bytes KPI | Python Code:
def plot_predict_actual_values(date, y_pred, y_test, ylabel):
"""plot the predicted values and actual values (for the test data)"""
fig, axs = plt.subplots(figsize=(16,6))
axs.plot(date, y_pred, color='red', label='predicted values')
axs.plot(date, y_test, color='blue', label='actual values')
axs.set_title('the predicted values and actual values (for the test data)')
plt.xlabel('test datetime')
plt.ylabel(ylabel)
plt.legend(loc='upper left')
plt.show()
Explanation: Network Traffic Forecasting (using time series data)
In telco, accurate forecast of KPIs (e.g. network traffic, utilizations, user experience, etc.) for communication networks ( 2G/3G/4G/5G/wired) can help predict network failures, allocate resource, or save energy.
In this notebook, we demonstrate a reference use case where we use the network traffic KPI(s) in the past to predict traffic KPI(s) in the future. We demonstrate how to do univariate forecasting (predict only 1 series) and multivariate forecasting (predict more than 1 series at the same time) using Project Chronos.
For demonstration, we use the publicly available network traffic data repository maintained by the WIDE project and in particular, the network traffic traces aggregated every 2 hours (i.e. AverageRate in Mbps/Gbps and Total Bytes) in year 2018 and 2019 at the transit link of WIDE to the upstream ISP (dataset link).
Helper functions
This section defines some helper functions to be used in the following procedures. You can refer to them later when they are used.
End of explanation
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
raw_df = pd.read_csv("data/data.csv")
Explanation: Download raw dataset and load into dataframe
Now we download the dataset and load it into a pandas dataframe. Steps are as below.
First, run the script get_data.sh to download the raw data. It will download the monthly aggregated traffic data in year 2018 and 2019 into data folder. The raw data contains aggregated network traffic (average MBPs and total bytes) as well as other metrics.
Second, run extract_data.sh to extract relevant traffic KPIs from the raw data, i.e. AvgRate for average use rate, and total for total bytes. The script will extract the KPIs with timestamps into data/data.csv.
Finally, use pandas to load data/data.csv into a dataframe as shown below
End of explanation
raw_df.head()
Explanation: Below are some example records of the data
End of explanation
df = pd.DataFrame(pd.to_datetime(raw_df.StartTime))
# we can find 'AvgRate' is of two scales: 'Mbps' and 'Gbps'
raw_df.AvgRate.str[-4:].unique()
# Unify AvgRate value
df['AvgRate'] = raw_df.AvgRate.apply(lambda x:float(x[:-4]) if x.endswith("Mbps") else float(x[:-4])*1000)
df["total"] = raw_df["total"]
df.head()
Explanation: Data pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different dataset.
For the network traffic data we're using, the processing contains 2 parts:
1. Convert string datetime to TimeStamp
2. Unify the measurement scale for AvgRate value - some uses Mbps, some uses Gbps
End of explanation
ax = df.plot(y='AvgRate',figsize=(16,6), title="AvgRate of network traffic data")
ax = df.plot(y='total',figsize=(16,6), title="total bytes of network traffic data")
Explanation: Plot the data to see how the KPI's look like
End of explanation
from zoo.chronos.data import TSDataset
from sklearn.preprocessing import StandardScaler
# we look back one week of data, which is at a frequency of 2h.
look_back = 84
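# i.e. 7 days * 12 two-hourly samples per day = 84 look-back steps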
# specify the number of steps to be predicted, one day is selected by default.
horizon = 1
tsdata_train, _, tsdata_test = TSDataset.from_pandas(df, dt_col="StartTime", target_col=["AvgRate","total"], with_split=True, test_ratio=0.1)
standard_scaler = StandardScaler()
for tsdata in [tsdata_train, tsdata_test]:
tsdata.gen_dt_feature()\
.impute(mode="last")\
.scale(standard_scaler, fit=(tsdata is tsdata_train))
Explanation: Feature Engineering & Data Preparation
For feature engineering, we use year, month, week, day of week and hour as features in addition to the target KPI values.
For data preparation, we impute the data to handle missing data and scale the data.
We generate a built-in TSDataset to complete the whole processing.
End of explanation
for tsdata in [tsdata_train, tsdata_test]:
tsdata.roll(lookback=look_back, horizon=horizon, target_col="AvgRate")
x_train, y_train = tsdata_train.to_numpy()
x_test, y_test = tsdata_test.to_numpy()
x_train.shape, y_train.shape, x_test.shape, y_test.shape
Explanation: Time series forecasting
Univariate forecasting
For univariate forecasting, we forecast AvgRate only. We need to roll the data on corresponding target column with tsdataset.
End of explanation
from zoo.chronos.forecaster.lstm_forecaster import LSTMForecaster
Explanation: For univariate forecasting, we use LSTMForecaster for forecasting.
End of explanation
# build model
forecaster = LSTMForecaster(past_seq_len=x_train.shape[1],
input_feature_num=x_train.shape[-1],
output_feature_num=y_train.shape[-1],
hidden_dim=16,
layer_num=2,
lr=0.001)
Explanation: First we initialize an LSTMForecaster.
* feature_dim should match the training data input feature, so we just use the last dimension of train data shape.
* target_dim equals the variate num we want to predict. We set target_dim=1 for univariate forecasting.
End of explanation
%%time
forecaster.fit(data=(x_train, y_train), batch_size=1024, epochs=50)
Explanation: Then we use fit to train the model. Wait some time for it to finish.
End of explanation
# make prediction
y_pred = forecaster.predict(x_test)
Explanation: After training is finished, you can use the forecaster to do prediction and evaluation.
End of explanation
y_pred_unscale = tsdata_test.unscale_numpy(y_pred)
y_test_unscale = tsdata_test.unscale_numpy(y_test)
Explanation: Since we have used standard scaler to scale the input data (including the target values), we need to inverse the scaling on the predicted values too.
End of explanation
from zoo.orca.automl.metrics import Evaluator
# evaluate with sMAPE
print("sMAPE is", Evaluator.evaluate("smape", y_test_unscale, y_pred_unscale))
# evaluate with mean_squared_error
print("mean_squared error is", Evaluator.evaluate("mse", y_test_unscale, y_pred_unscale))
Explanation: Calculate the symmetric mean absolute percentage error (sMAPE).
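For reference, sMAPE is typically defined as 100% * mean(2 * |y_pred - y_true| / (|y_true| + |y_pred|)).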
End of explanation
for tsdata in [tsdata_train, tsdata_test]:
tsdata.roll(lookback=look_back, horizon=horizon, target_col=["AvgRate","total"])
x_train_m, y_train_m = tsdata_train.to_numpy()
x_test_m, y_test_m = tsdata_test.to_numpy()
y_train_m, y_test_m = y_train_m[:, 0, :], y_test_m[:, 0, :]
x_train_m.shape, y_train_m.shape, x_test_m.shape, y_test_m.shape
Explanation: multivariate forecasting
For multivariate forecasting, we forecast AvgRate and total at the same time. We need to roll the data on corresponding target column with tsdataset.
End of explanation
from zoo.chronos.forecaster.mtnet_forecaster import MTNetForecaster
Explanation: For multivariate forecasting, we use MTNetForecaster for forecasting.
End of explanation
mtnet_forecaster = MTNetForecaster(target_dim=y_train_m.shape[-1],
feature_dim=x_train_m.shape[-1],
long_series_num=6,
series_length=12,
ar_window_size=6,
cnn_height=4
)
Explanation: First, we initialize an mtnet_forecaster according to the input data shape. The lookback length is equal to (long_series_num+1)*series_length. For details, refer to the Chronos docs.
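With the values used above (long_series_num=6, series_length=12), that is (6+1)*12 = 84, which matches the look_back of 84 chosen earlier.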
End of explanation
# mtnet requires reshape of input x before feeding into model.
x_train_mtnet = mtnet_forecaster.preprocess_input(x_train_m)
x_test_mtnet = mtnet_forecaster.preprocess_input(x_test_m)
Explanation: MTNet needs to preprocess the X into another format, so we call MTNetForecaster.preprocess_input on train_x and test_x.
End of explanation
%%time
hist = mtnet_forecaster.fit(x = x_train_mtnet, y = y_train_m, batch_size=1024, epochs=20)
Explanation: Now we train the model and wait till it finishes.
End of explanation
y_pred_m = mtnet_forecaster.predict(x_test_mtnet)
y_pred_m_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_pred_m, axis=1))[:, 0, :]
y_test_m_unscale = tsdata_test.unscale_numpy(np.expand_dims(y_test_m, axis=1))[:, 0, :]
from zoo.orca.automl.metrics import Evaluator
# evaluate with sMAPE
print("sMAPE is", Evaluator.evaluate("smape", y_test_m_unscale, y_pred_m_unscale, multioutput="raw_values"))
# evaluate with mean_squared_error
print("mean_squared error is", Evaluator.evaluate("mse", y_test_m_unscale, y_pred_m_unscale, multioutput="raw_values"))
Explanation: Use the model for prediction and inverse the scaling of the prediction results
End of explanation
multi_target_value = ["AvgRate","total"]
test_date=df[-y_test_m_unscale.shape[0]:].index
plot_predict_actual_values(test_date, y_pred_m_unscale[:,0], y_test_m_unscale[:,0], ylabel=multi_target_value[0])
Explanation: plot actual and prediction values for AvgRate KPI
End of explanation
plot_predict_actual_values(test_date, y_pred_m_unscale[:,1], y_test_m_unscale[:,1], ylabel=multi_target_value[1])
Explanation: plot actual and prediction values for total bytes KPI
End of explanation |
11,430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How rider usage varies with temperature when binning months
Setting up the data
Step1: Getting the data into groupby objects and getting a correlation table
Step2: Comments on above
As we can see from the correlations, ridership goes up with temperature in the winter and goes down with increasing temperature in the summer. Riders do not seem to appreciate especially cold or hot temperatures, except for the month of May, when riders seem to not be especially encouraged or discouraged by temperature.
This is further illustrated by regrouping via season | Python Code:
import numpy as np
import pandas as pd
import datetime
from pandas import Series, DataFrame
stations = pd.read_table('stations.tsv')
usage = pd.read_table('usage_2012.tsv')
weather = pd.read_table('daily_weather.tsv')
def change_seasons():
weather.loc[weather["season_code"] == 1, "season_desc"] = 'Winter'
weather.loc[weather["season_code"] == 2, "season_desc"] = 'Spring'
weather.loc[weather["season_code"] == 3, "season_desc"] = 'Summer'
weather.loc[weather["season_code"] == 4, "season_desc"] = 'Fall'
def convert_dates():
for i in weather.index:
weather.ix[i, 'date'] = datetime.datetime.strptime(
str(weather.ix[i, 'date']), "%Y-%m-%d").date()
def add_months():
for i in weather.index:
weather.ix[i, 'month'] = weather.ix[i, 'date'].month
change_seasons()
convert_dates()
add_months()
Explanation: How rider usage varies with temperature when binning months
Setting up the data
End of explanation
months = weather[['month', 'subjective_temp', 'total_riders']].groupby('month')
corrdf = months.corr()
# Doing some NA val cleanup
del corrdf['month']
corrdf = corrdf.dropna()
# And now done
print corrdf
Explanation: Getting the data into groupby objects and getting a correlation table
End of explanation
seasons = weather[['season_desc', 'subjective_temp', 'total_riders']].groupby('season_desc')
corrdf = seasons.corr()
# Doing some NA val cleanup
# del corrdf['season_desc']
corrdf = corrdf.dropna()
# And now done
print corrdf
Explanation: Comments on above
As we can see from the correlations, ridership goes up with temperature in the winter and goes down with increasing temperature in the summer. Riders do not seem to appreciate especially cold or hot temperatures, except for the month of May, when riders seem to not be especially encouraged or discouraged by temperature.
This is further illustrated by regrouping via season
End of explanation |
11,431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probabilistic Programming in Python using PyMC
Authors
Step1: Here is what the simulated data look like. We use the pylab module from the plotting library matplotlib.
Step2: Model Specification
Specifying this model in PyMC3 is straightforward because the syntax is as close to the statistical notation. For the most part, each line of Python code corresponds to a line in the model notation above.
First, we import the components we will need from PyMC.
Step3: Now we build our model, which we will present in full first, then explain each part line-by-line.
Step4: The first line,
python
basic_model = Model()
creates a new Model object which is a container for the model random variables.
Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement
Step5: Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship
Step6: By default, find_MAP uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP.
Step7: It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together.
Most techniques for finding the MAP estimate also only find a local optimum (which is often good enough), but can fail badly for multimodal posteriors if the different modes are meaningfully different.
Sampling methods
Though finding the MAP is a fast and easy way of obtaining estimates of the unknown model parameters, it is limited because there is no associated estimate of uncertainty produced with the MAP estimates. Instead, a simulation-based approach such as Markov chain Monte Carlo (MCMC) can be used to obtain a Markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the posterior distribution.
To conduct MCMC sampling to generate posterior samples in PyMC3, we specify a step method object that corresponds to a particular MCMC algorithm, such as Metropolis, Slice sampling, or the No-U-Turn Sampler (NUTS). PyMC3's step_methods submodule contains the following samplers
Step8: The sample function runs the step method(s) assigned (or passed) to it for the given number of iterations and returns a Trace object containing the samples collected, in the order they were collected. The trace object can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can see the last 5 values for the alpha variable as follows
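For instance, a minimal sketch of that kind of indexing (using the trace sampled above):
```python
trace['alpha'][-5:]  # the last 5 posterior samples of alpha
```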
Step9: If we wanted to use the slice sampling algorithm to sigma instead of NUTS (which was assigned automatically), we could have specified this as the step argument for sample.
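A minimal sketch of what that could look like (reusing basic_model from earlier; the number of draws is arbitrary):
```python
from pymc3 import Slice, sample

with basic_model:
    step = Slice(vars=[sigma])          # slice sampling for sigma
    trace = sample(5000, step=step)     # remaining variables get their default samplers
```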
Step10: Posterior analysis
PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot.
Step11: The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients.
In addition, the summary function provides a text-based output of common posterior statistics
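For example, a short sketch:
```python
from pymc3 import summary

summary(trace)  # posterior means, standard deviations and quantiles
```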
Step12: Case study 1
Step13: Model Specification
As with the linear regression example, specifying the model in PyMC3 mirrors its statistical specification. This model employs several new distributions
Step14: Notice that we transform the log volatility process s into the volatility process by exp(-2*s). Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.
Also note that we have declared the Model name sp500_model in the first occurrence of the context manager, rather than splitting it into two lines, as we did for the first example.
Fitting
Before we draw samples from the posterior, it is prudent to find a decent starting value by finding a point of relatively high probability. For this model, the full maximum a posteriori (MAP) point over all variables is degenerate and has infinite density. But, if we fix log_sigma and nu it is no longer degenerate, so we find the MAP with respect only to the volatility process s keeping log_sigma and nu constant at their default values (remember that we set testval=.1 for sigma). We use the Limited-memory BFGS (L-BFGS) optimizer, which is provided by the scipy.optimize package, as it is more efficient for high dimensional functions and we have 400 stochastic random variables (mostly from s).
To do the sampling, we do a short initial run to put us in a volume of high probability, then start again at the new starting point. trace[-1] gives us the last point in the sampling trace. NUTS will recalculate the scaling parameters based on the new point, and in this case it leads to faster sampling due to better scaling.
Step15: We can check our samples by looking at the traceplot for nu and sigma.
Step16: Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph. Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly.
Step17: As you can see, the model correctly infers the increase in volatility during the 2008 financial crash. Moreover, note that this model is quite complex because of its high dimensionality and dependency-structure in the random walk distribution. NUTS as implemented in PyMC3, however, correctly infers the posterior distribution with ease.
Case study 2
Step18: Occurrences of disasters in the time series is thought to follow a Poisson process with a large rate parameter in the early part of the time series, and from one with a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations.
In our model,
$$
\begin{aligned}
D_t &\sim \text{Pois}(r_t), r_t= \begin{cases}
l, & \text{if } t \lt s \
r, & \text{if } t \ge s
\end{cases} \
s &\sim \text{Unif}(t_l, t_h)\
e &\sim \text{exp}(1)\
l &\sim \text{exp}(1)
\end{aligned}
$$
the parameters are defined as follows
Step19: The logic for the rate random variable,
python
rate = switch(switchpoint >= year, early_rate, late_rate)
is implemented using switch, a Theano function that works like an if statement. It uses the first argument to switch between the next two arguments.
Missing values are handled transparently by passing a MaskedArray or a pandas.DataFrame with NaN values to the observed argument when creating an observed stochastic random variable. Behind the scenes, another random variable, disasters.missing_values is created to model the missing values. All we need to do to handle the missing values is ensure we sample this random variable as well.
Unfortunately, because they are discrete variables and thus have no meaningful gradient, we cannot use NUTS for sampling switchpoint or the missing disaster observations. Instead, we will sample using a Metropolis step method, which implements adaptive Metropolis-Hastings, because it is designed to handle discrete values.
We sample with both samplers at once by passing them to the sample function in a list. Each new sample is generated by first applying step1 then step2.
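A sketch of that pattern (the variable names follow the model described above, the model object name and draw count are assumptions):
```python
from pymc3 import Metropolis, NUTS, sample

with disaster_model:  # assuming the model object is named disaster_model
    step1 = NUTS([early_rate, late_rate])
    step2 = Metropolis([switchpoint, disasters.missing_values[0]])  # discrete variables
    trace = sample(10000, step=[step1, step2])
```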
Step20: In the trace plot below we can see that there's about a 10 year span that's plausible for a significant change in safety, but a 5 year span that contains most of the probability mass. The distribution is jagged because of the jumpy relationship between the year switchpoint and the likelihood and not due to sampling error.
Step21: Arbitrary deterministics
Due to its reliance on Theano, PyMC3 provides many mathematical functions and operators for transforming random variables into new random variables. However, the library of functions in Theano is not exhaustive, therefore Theano and PyMC3 provide functionality for creating arbitrary Theano functions in pure Python, and including these functions in PyMC models. This is supported with the as_op function decorator.
Theano needs to know the types of the inputs and outputs of a function, which are specified for as_op by itypes for inputs and otypes for outputs. The Theano documentation includes an overview of the available types.
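For instance, a small sketch of the decorator in use (the wrapped function is just a toy example):
```python
import theano.tensor as T
from theano.compile.ops import as_op

@as_op(itypes=[T.lscalar], otypes=[T.lscalar])
def crazy_modulo3(value):
    # arbitrary Python logic wrapped as a Theano op
    if value > 0:
        return value % 3
    else:
        return (-value + 1) % 3
```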
Step22: An important drawback of this approach is that it is not possible for theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op.
Arbitrary distributions
Similarly, the library of statistical distributions in PyMC3 is not exhaustive, but PyMC allows for the creation of user-defined functions for an arbitrary probability distribution. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability $log(p(x))$. This function may employ other random variables in its calculation. Here is an example inspired by a blog post by Jake Vanderplas on which priors to use for a linear regression (Vanderplas, 2014).
```python
import theano.tensor as T
from pymc3 import DensityDist, Uniform
with Model() as model:
    alpha = Uniform('intercept', -100, 100)
    # Custom log-density priors (following the Vanderplas post)
    beta = DensityDist('beta', lambda value: -1.5 * T.log(1 + value**2), testval=0)
    eps = DensityDist('eps', lambda value: -T.log(T.abs_(value)), testval=1)
    # Likelihood built from the custom priors
    like = Normal('y_est', mu=alpha + beta * X, sd=eps, observed=Y)
```
Step23: Generalized Linear Models
Generalized Linear Models (GLMs) are a class of flexible models that are widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of various GLMs with an intuitive R-like syntax that is implemented via the patsy module.
The glm submodule requires data to be included as a pandas DataFrame. Hence, for our linear regression example
Step24: The model can then be very concisely specified in one line of code.
Step25: The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.
Step26: Backends
PyMC3 has support for different ways to store samples during and after sampling, called backends, including in-memory (default), text file, and SQLite. These can be found in pymc.backends
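A rough sketch of selecting a non-default backend (the file name is arbitrary, and model refers to whichever model is being sampled):
```python
from pymc3 import sample
from pymc3.backends import SQLite

with model:
    backend = SQLite('trace.sqlite')          # samples are written to this file
    trace = sample(5000, trace=backend)
```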
Step27: The stored trace can then later be loaded using the load command | Python Code:
import numpy as np
import matplotlib.pyplot as plt
# Initialize random number generator
np.random.seed(123)
# True parameter values
alpha, sigma = 1, 1
beta = [1, 2.5]
# Size of dataset
size = 100
# Predictor variable
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
# Simulate outcome variable
Y = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma
Explanation: Probabilistic Programming in Python using PyMC
Authors: John Salvatier, Thomas V. Wiecki, Christopher Fonnesbeck
Abstract
Probabilistic Programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamiltonian Monte Carlo, requires gradient information which is often not readily available. PyMC3 is a new open source Probabilistic Programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed. Contrary to other Probabilistic Programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.
Introduction
Probabilistic programming (PP) allows flexible specification of Bayesian statistical models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane, 1987). This class of samplers works well on high dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo, which means you usually don't need to have specialized knowledge about how the algorithms work. PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC.
Probabilistic programming in Python confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis.
While most of PyMC3's user-facing features are written in pure Python, it leverages Theano (Bergstra et al., 2010) to transparently transcode models to C and compile them to machine code, thereby boosting performance. Theano is a library that allows expressions to be defined using generalized vector data structures called tensors, which are tightly integrated with the popular NumPy ndarray data structure, and similarly allow for broadcasting and advanced indexing, just as NumPy arrays do. Theano also automatically optimizes the likelihood's computational graph for speed and provides simple GPU integration.
Here, we present a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. We will first see the basics of how to use PyMC3, motivated by a simple example: installation, data creation, model definition, model fitting and posterior analysis. Then we will cover two case studies and use them to show how to define and fit more sophisticated models. Finally we will show how to extend PyMC3 and discuss other useful features: the Generalized Linear Models subpackage, custom distributions, custom transformations and alternative storage backends.
Installation
Running PyMC3 requires a working Python interpreter, either version 2.7 (or more recent) or 3.4 (or more recent); we recommend that new users install version 3.4. A complete Python installation for Mac OSX, Linux and Windows can most easily be obtained by downloading and installing the free Anaconda Python Distribution by ContinuumIO.
PyMC3 can be installed using pip (https://pip.pypa.io/en/latest/installing.html):
pip install git+https://github.com/pymc-devs/pymc3
PyMC3 depends on several third-party Python packages which will be automatically installed when installing via pip. The four required dependencies are: Theano, NumPy, SciPy, and Matplotlib.
To take full advantage of PyMC3, the optional dependencies Pandas and Patsy should also be installed. These are not automatically installed, but can be installed by:
pip install patsy pandas
The source code for PyMC3 is hosted on GitHub at https://github.com/pymc-devs/pymc3 and is distributed under the liberal Apache License 2.0. On the GitHub site, users may also report bugs and other issues, as well as contribute code to the project, which we actively encourage.
A Motivating Example: Linear Regression
To introduce model definition, fitting and posterior analysis, we first consider a simple Bayesian linear regression model with normal priors for the parameters. We are interested in predicting outcomes $Y$ as normally-distributed observations with an expected value $\mu$ that is a linear function of two predictor variables, $X_1$ and $X_2$.
$$\begin{aligned}
Y &\sim \mathcal{N}(\mu, \sigma^2) \
\mu &= \alpha + \beta_1 X_1 + \beta_2 X_2
\end{aligned}$$
where $\alpha$ is the intercept, and $\beta_i$ is the coefficient for covariate $X_i$, while $\sigma$ represents the observation error. Since we are constructing a Bayesian model, the unknown variables in the model must be assigned a prior distribution. We choose zero-mean normal priors with variance of 100 for both regression coefficients, which corresponds to weak information regarding the true parameter values. We choose a half-normal distribution (normal distribution bounded at zero) as the prior for $\sigma$.
$$\begin{aligned}
\alpha &\sim \mathcal{N}(0, 100) \
\beta_i &\sim \mathcal{N}(0, 100) \
\sigma &\sim \lvert\mathcal{N}(0, 1){\rvert}
\end{aligned}$$
Generating data
We can simulate some artificial data from this model using only NumPy's random module, and then use PyMC3 to try to recover the corresponding parameters. We are intentionally generating the data to closely correspond the PyMC3 model structure.
End of explanation
%matplotlib inline
fig, axes = plt.subplots(1, 2, sharex=True, figsize=(10,4))
axes[0].scatter(X1, Y)
axes[1].scatter(X2, Y)
axes[0].set_ylabel('Y'); axes[0].set_xlabel('X1'); axes[1].set_xlabel('X2');
Explanation: Here is what the simulated data look like. We use the pylab module from the plotting library matplotlib.
End of explanation
from pymc3 import Model, Normal, HalfNormal
Explanation: Model Specification
Specifying this model in PyMC3 is straightforward because the syntax is as close as possible to the statistical notation. For the most part, each line of Python code corresponds to a line in the model notation above.
First, we import the components we will need from PyMC.
End of explanation
basic_model = Model()
with basic_model:
# Priors for unknown model parameters
alpha = Normal('alpha', mu=0, sd=10)
beta = Normal('beta', mu=0, sd=10, shape=2)
sigma = HalfNormal('sigma', sd=1)
# Expected value of outcome
mu = alpha + beta[0]*X1 + beta[1]*X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
Explanation: Now we build our model, which we will present in full first, then explain each part line-by-line.
End of explanation
help(Normal) #try help(Model), help(Uniform) or help(basic_model)
Explanation: The first line,
python
basic_model = Model()
creates a new Model object which is a container for the model random variables.
Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement:
python
with basic_model:
This creates a context manager, with our basic_model as the context, that includes all statements until the indented block ends. This means all PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes. Absent this context manager idiom, we would be forced to manually associate each of the variables with basic_model right after we create them. If you try to create a new random variable without a with model: statement, it will raise an error since there is no obvious model for the variable to be added to.
The first three statements in the context manager:
python
alpha = Normal('alpha', mu=0, sd=10)
beta = Normal('beta', mu=0, sd=10, shape=2)
sigma = HalfNormal('sigma', sd=1)
create a stochastic random variables with a Normal prior distributions for the regression coefficients with a mean of 0 and standard deviation of 10 for the regression coefficients, and a half-normal distribution for the standard deviation of the observations, $\sigma$. These are stochastic because their values are partly determined by its parents in the dependency graph of random variables, which for priors are simple constants, and partly random (or stochastic).
We call the Normal constructor to create a random variable to use as a normal prior. The first argument is always the name of the random variable, which should almost always match the name of the Python variable being assigned to, since it sometimes used to retrieve the variable from the model for summarizing output. The remaining required arguments for a stochastic object are the parameters, in this case mu, the mean, and sd, the standard deviation, which we assign hyperparameter values for the model. In general, a distribution's parameters are values that determine the location, shape or scale of the random variable, depending on the parameterization of the distribution. Most commonly used distributions, such as Beta, Exponential, Categorical, Gamma, Binomial and many others, are available in PyMC3.
The beta variable has an additional shape argument to denote it as a vector-valued parameter of size 2. The shape argument is available for all distributions and specifies the length or shape of the random variable; it is optional for scalar variables, since it defaults to a value of one. It can be an integer, to specify an array, or a tuple, to specify a multidimensional array (e.g. shape=(5,7) makes a random variable that takes on 5 by 7 matrix values).
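For illustration, here is a minimal standalone sketch (a throwaway toy model, not part of basic_model; the names are illustrative only) of the shape argument for vector- and matrix-valued variables:
```python
from pymc3 import Model, Normal

with Model() as shape_demo:
    coefs = Normal('coefs', mu=0, sd=1, shape=2)       # vector of length 2, like `beta` above
    grid = Normal('grid', mu=0, sd=1, shape=(5, 7))    # 5 x 7 matrix-valued random variable
```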
Detailed notes about distributions, sampling methods and other PyMC3 functions are available via the help function.
End of explanation
from pymc3 import find_MAP
map_estimate = find_MAP(model=basic_model)
print(map_estimate)
Explanation: Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship:
python
mu = alpha + beta[0]*X1 + beta[1]*X2
This creates a deterministic random variable, which implies that its value is completely determined by its parents' values. That is, there is no uncertainty beyond that which is inherent in the parents' values. Here, mu is just the sum of the intercept alpha and the two products of the coefficients in beta and the predictor variables, whatever their values may be.
PyMC3 random variables and data can be arbitrarily added, subtracted, divided, multiplied together and indexed-into to create new random variables. This allows for great model expressivity. Many common mathematical functions like sum, sin, exp and linear algebra functions like dot (for inner product) and inv (for inverse) are also provided.
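As a sketch of those operations, here is a hypothetical variant of the expected-value line above, assuming the predictors are stacked into a matrix X; wrapping the result in Deterministic is optional but makes the quantity appear in the trace:
```python
import numpy as np
import theano.tensor as tt
from pymc3 import Model, Normal, Deterministic

X = np.column_stack([X1, X2])
with Model() as dot_demo:                       # separate toy model, not basic_model itself
    alpha_d = Normal('alpha_d', mu=0, sd=10)
    beta_d = Normal('beta_d', mu=0, sd=10, shape=2)
    mu_d = Deterministic('mu_d', alpha_d + tt.dot(X, beta_d))
```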
The final line of the model defines Y_obs, the sampling distribution of the outcomes in the dataset.
python
Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
This is a special case of a stochastic variable that we call an observed stochastic, and represents the data likelihood of the model. It is identical to a standard stochastic, except that its observed argument, which passes the data to the variable, indicates that the values for this variable were observed, and should not be changed by any fitting algorithm applied to the model. The data can be passed in the form of either a numpy.ndarray or pandas.DataFrame object.
Notice that, unlike for the priors of the model, the parameters for the normal distribution of Y_obs are not fixed values, but rather are the deterministic object mu and the stochastic sigma. This creates parent-child relationships between the likelihood and these two variables.
Model fitting
Having completely specified our model, the next step is to obtain posterior estimates for the unknown variables in the model. Ideally, we could calculate the posterior estimates analytically, but for most non-trivial models, this is not feasible. We will consider two approaches, whose appropriateness depends on the structure of the model and the goals of the analysis: finding the maximum a posteriori (MAP) point using optimization methods, and computing summaries based on samples drawn from the posterior distribution using Markov Chain Monte Carlo (MCMC) sampling methods.
Maximum a posteriori methods
The maximum a posteriori (MAP) estimate for a model, is the mode of the posterior distribution and is generally found using numerical optimization methods. This is often fast and easy to do, but only gives a point estimate for the parameters and can be biased if the mode isn't representative of the distribution. PyMC3 provides this functionality with the find_MAP function.
Below we find the MAP for our original model. The MAP is returned as a parameter point, which is always represented by a Python dictionary of variable names to NumPy arrays of parameter values.
End of explanation
from scipy import optimize
map_estimate = find_MAP(model=basic_model, fmin=optimize.fmin_powell)
print(map_estimate)
Explanation: By default, find_MAP uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) optimization algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP.
End of explanation
from pymc3 import NUTS, sample
with basic_model:
# obtain starting values via MAP
start = find_MAP(fmin=optimize.fmin_powell)
# draw 2000 posterior samples
trace = sample(2000, start=start)
Explanation: It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together.
Most techniques for finding the MAP estimate also only find a local optimum (which is often good enough), but can fail badly for multimodal posteriors if the different modes are meaningfully different.
Sampling methods
Though finding the MAP is a fast and easy way of obtaining estimates of the unknown model parameters, it is limited because there is no associated estimate of uncertainty produced with the MAP estimates. Instead, a simulation-based approach such as Markov chain Monte Carlo (MCMC) can be used to obtain a Markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the posterior distribution.
To conduct MCMC sampling to generate posterior samples in PyMC3, we specify a step method object that corresponds to a particular MCMC algorithm, such as Metropolis, Slice sampling, or the No-U-Turn Sampler (NUTS). PyMC3's step_methods submodule contains the following samplers: NUTS, Metropolis, Slice, HamiltonianMC, and BinaryMetropolis. These step methods can be assigned manually, or assigned automatically by PyMC3. Auto-assignment is based on the attributes of each variable in the model. In general:
Binary variables will be assigned to BinaryMetropolis
Discrete variables will be assigned to Metropolis
Continuous variables will be assigned to NUTS
Auto-assignment can be overriden for any subset of variables by specifying them manually prior to sampling.
Gradient-based sampling methods
PyMC3 has the standard sampling algorithms like adaptive Metropolis-Hastings and adaptive slice sampling, but PyMC3's most capable step method is the No-U-Turn Sampler. NUTS is especially useful on models that have many continuous parameters, a situation where other MCMC algorithms work very slowly. It takes advantage of information about where regions of higher probability are, based on the gradient of the log posterior-density. This helps it achieve dramatically faster convergence on large problems than traditional sampling methods achieve. PyMC3 relies on Theano to analytically compute model gradients via automatic differentiation of the posterior density. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo. For random variables that are undifferentiable (namely, discrete variables) NUTS cannot be used, but it may still be used on the differentiable variables in a model that contains undifferentiable variables.
NUTS requires a scaling matrix parameter, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although NUTS uses it somewhat differently. The matrix gives the rough shape of the distribution so that NUTS does not make jumps that are too large in some directions and too small in other directions. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions. Poor scaling parameters will slow down NUTS significantly, sometimes almost stopping it completely. A reasonable starting point for sampling can also be important for efficient sampling, but not as often.
Fortunately NUTS can often make good guesses for the scaling parameters. If you pass a point in parameter space (as a dictionary of variable names to parameter values, the same format as returned by find_MAP) to NUTS, it will look at the local curvature of the log posterior-density (the diagonal of the Hessian matrix) at that point to make a guess for a good scaling vector, which often results in a good value. The MAP estimate is often a good point to use to initiate sampling. It is also possible to supply your own vector or scaling matrix to NUTS, though this is a more advanced use. If you wish to modify a Hessian at a specific point to use as your scaling matrix or vector, you can use find_hessian or find_hessian_diag.
For our basic linear regression example in basic_model, we will use NUTS to sample 2000 draws from the posterior using the MAP as the starting point and scaling point. This must also be performed inside the context of the model.
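A minimal sketch of that pattern, instantiating NUTS explicitly so the MAP point is used both as the scaling point and the starting point (assumes basic_model from above; the variable names are illustrative):
```python
from pymc3 import find_MAP, NUTS, sample

with basic_model:
    start = find_MAP()                      # MAP point as a dict of variable names -> values
    step = NUTS(scaling=start)              # local curvature at `start` sets the scaling
    trace_scaled = sample(500, step=step, start=start)
```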
End of explanation
trace['alpha'][-5:]
Explanation: The sample function runs the step method(s) assigned (or passed) to it for the given number of iterations and returns a Trace object containing the samples collected, in the order they were collected. The trace object can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can see the last 5 values for the alpha variable as follows:
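Since beta is vector-valued, its samples carry an extra dimension; a small sketch of slicing them out of the trace from above:
```python
beta_samples = trace['beta']         # shape (2000, 2): one column per coefficient
beta0_last5 = beta_samples[-5:, 0]   # last five draws of the first coefficient
```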
End of explanation
from pymc3 import Slice
with basic_model:
# obtain starting values via MAP
start = find_MAP(fmin=optimize.fmin_powell)
# instantiate sampler
step = Slice(vars=[sigma])
# draw 5000 posterior samples
trace = sample(5000, step=step, start=start)
Explanation: If we wanted to use the slice sampling algorithm for sigma instead of NUTS (which was assigned automatically), we could have specified this as the step argument for sample.
End of explanation
from pymc3 import traceplot
traceplot(trace);
Explanation: Posterior analysis
PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot.
End of explanation
from pymc3 import summary
summary(trace)
Explanation: The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients.
In addition, the summary function provides a text-based output of common posterior statistics:
End of explanation
import pandas as pd
returns = pd.read_csv('data/SP500.csv', index_col=0, parse_dates=True)
print(len(returns))
returns.plot(figsize=(10, 6))
plt.ylabel('daily returns in %');
Explanation: Case study 1: Stochastic volatility
We present a case study of stochastic volatility, time varying stock market volatility, to illustrate PyMC3's use in addressing a more realistic problem. The distribution of market returns is highly non-normal, which makes sampling the volatilities significantly more difficult. This example has 400+ parameters so using common sampling algorithms like Metropolis-Hastings would get bogged down, generating highly autocorrelated samples. Instead, we use NUTS, which is dramatically more efficient.
The Model
Asset prices have time-varying volatility (variance of day over day returns). In some periods, returns are highly variable, while in others they are very stable. Stochastic volatility models address this with a latent volatility variable, which changes over time. The following model is similar to the one described in the NUTS paper (Hoffman 2014, p. 21).
$$\begin{aligned}
\sigma &\sim \mathrm{Exp}(50) \\
\nu &\sim \mathrm{Exp}(0.1) \\
s_i &\sim \mathcal{N}(s_{i-1}, \sigma^{-2}) \\
\log(y_i) &\sim t(\nu, 0, \exp(-2 s_i))
\end{aligned}$$
Here, $y$ is the daily return series which is modeled with a Student-t distribution with an unknown degrees of freedom parameter, and a scale parameter determined by a latent process $s$. The individual $s_i$ are the individual daily log volatilities in the latent log volatility process.
The Data
Our data consist of daily returns of the S&P 500 during the 2008 financial crisis.
End of explanation
from pymc3 import Exponential, StudentT, exp, Deterministic
from pymc3.distributions.timeseries import GaussianRandomWalk
with Model() as sp500_model:
nu = Exponential('nu', 1./10, testval=5.)
sigma = Exponential('sigma', 1./.02, testval=.1)
s = GaussianRandomWalk('s', sigma**-2, shape=len(returns))
volatility_process = Deterministic('volatility_process', exp(-2*s))
r = StudentT('r', nu, lam=1/volatility_process, observed=returns['S&P500'])
Explanation: Model Specification
As with the linear regression example, specifying the model in PyMC3 mirrors its statistical specification. This model employs several new distributions: the Exponential distribution for the $ \nu $ and $\sigma$ priors, the student-t (T) distribution for distribution of returns, and the GaussianRandomWalk for the prior for the latent volatilities.
In PyMC3, variables with purely positive priors like Exponential are transformed with a log transform. This makes sampling more robust. Behind the scenes, a variable in the unconstrained space (named "variableName_log") is added to the model for sampling. In this model this happens behind the scenes for both the degrees of freedom, nu, and the scale parameter for the volatility process, sigma, since they both have exponential priors. Variables with priors that constrain them on two sides, like Beta or Uniform, are also transformed to be unconstrained but with a log odds transform.
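A quick way to see those automatically added transformed variables is to list the model's free variables (a sketch; the exact suffix of the transformed names varies between PyMC3 versions):
```python
print([v.name for v in sp500_model.vars])   # expect entries such as 'nu_log' and 'sigma_log'
```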
Unlike model specification in PyMC2, we do not typically provide starting points for variables at the model specification stage; however, we can provide an initial value for any distribution (called a "test value") using the testval argument. This overrides the default test value for the distribution (usually the mean, median or mode of the distribution), and is most often useful if some values are illegal and we want to ensure we select a legal one. The test values for the distributions are also used as a starting point for sampling and optimization by default, though this is easily overridden.
The vector of latent volatilities s is given a prior distribution by GaussianRandomWalk. As its name suggests GaussianRandomWalk is a vector valued distribution where the values of the vector form a random normal walk of length n, as specified by the shape argument. The scale of the innovations of the random walk, sigma, is specified in terms of the precision of the normally distributed innovations and can be a scalar or vector.
End of explanation
import scipy
with sp500_model:
start = find_MAP(vars=[s], fmin=scipy.optimize.fmin_l_bfgs_b)
step = NUTS(scaling=start)
trace = sample(100, step, progressbar=False)
# Start next run at the last sampled position.
step = NUTS(scaling=trace[-1], gamma=.25)
trace = sample(2000, step, start=trace[-1], progressbar=False)
Explanation: Notice that we transform the log volatility process s into the volatility process by exp(-2*s). Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.
Also note that we have declared the Model name sp500_model in the first occurrence of the context manager, rather than splitting it into two lines, as we did for the first example.
Fitting
Before we draw samples from the posterior, it is prudent to find a decent starting value by finding a point of relatively high probability. For this model, the full maximum a posteriori (MAP) point over all variables is degenerate and has infinite density. But, if we fix log_sigma and nu it is no longer degenerate, so we find the MAP with respect only to the volatility process s keeping log_sigma and nu constant at their default values (remember that we set testval=.1 for sigma). We use the Limited-memory BFGS (L-BFGS) optimizer, which is provided by the scipy.optimize package, as it is more efficient for high dimensional functions and we have 400 stochastic random variables (mostly from s).
To do the sampling, we do a short initial run to put us in a volume of high probability, then start again at the new starting point. trace[-1] gives us the last point in the sampling trace. NUTS will recalculate the scaling parameters based on the new point, and in this case it leads to faster sampling due to better scaling.
End of explanation
traceplot(trace, [nu, sigma]);
Explanation: We can check our samples by looking at the traceplot for nu and sigma.
End of explanation
fig, ax = plt.subplots(figsize=(15, 8))
returns.plot(ax=ax)
ax.plot(returns.index, 1/np.exp(trace['s',::30].T), 'r', alpha=.03);
ax.set(title='volatility_process', xlabel='time', ylabel='volatility');
ax.legend(['S&P500', 'stochastic volatility process'])
Explanation: Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph. Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly.
End of explanation
disaster_data = np.ma.masked_values([4, 5, 4, 0, 1, 4, 3, 4, 0, 6, 3, 3, 4, 0, 2, 6,
3, 3, 5, 4, 5, 3, 1, 4, 4, 1, 5, 5, 3, 4, 2, 5,
2, 2, 3, 4, 2, 1, 3, -999, 2, 1, 1, 1, 1, 3, 0, 0,
1, 0, 1, 1, 0, 0, 3, 1, 0, 3, 2, 2, 0, 1, 1, 1,
0, 1, 0, 1, 0, 0, 0, 2, 1, 0, 0, 0, 1, 1, 0, 2,
3, 3, 1, -999, 2, 1, 1, 1, 1, 2, 4, 2, 0, 0, 1, 4,
0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1], value=-999)
year = np.arange(1851, 1962)
plt.plot(year, disaster_data, 'o', markersize=8);
plt.ylabel("Disaster count")
plt.xlabel("Year")
Explanation: As you can see, the model correctly infers the increase in volatility during the 2008 financial crash. Moreover, note that this model is quite complex because of its high dimensionality and dependency-structure in the random walk distribution. NUTS as implemented in PyMC3, however, correctly infers the posterior distribution with ease.
Case study 2: Coal mining disasters
Consider the following time series of recorded coal mining disasters in the UK from 1851 to 1962 (Jarrett, 1979). The number of disasters is thought to have been affected by changes in safety regulations during this period. Unfortunately, we also have a pair of years with missing data, identified as missing by a NumPy MaskedArray using -999 as the marker value.
Next we will build a model for this series and attempt to estimate when the change occurred. At the same time, we will see how to handle missing data, use multiple samplers and sample from discrete random variables.
End of explanation
from pymc3 import DiscreteUniform, Poisson, switch
with Model() as disaster_model:
switchpoint = DiscreteUniform('switchpoint', lower=year.min(), upper=year.max(), testval=1900)
    # Priors for the pre- and post-switch rates of the number of disasters
early_rate = Exponential('early_rate', 1)
late_rate = Exponential('late_rate', 1)
    # Allocate the appropriate Poisson rate to years before and after the current switchpoint
rate = switch(switchpoint >= year, early_rate, late_rate)
disasters = Poisson('disasters', rate, observed=disaster_data)
Explanation: Occurrences of disasters in the time series are thought to follow a Poisson process with a large rate parameter in the early part of the time series, and one with a smaller rate in the later part. We are interested in locating the change point in the series, which perhaps is related to changes in mining safety regulations.
In our model,
$$
\begin{aligned}
D_t &\sim \text{Pois}(r_t), \quad r_t = \begin{cases}
e, & \text{if } t \le s \\
l, & \text{if } t \gt s
\end{cases} \\
s &\sim \text{Unif}(t_l, t_h) \\
e &\sim \text{Exp}(1) \\
l &\sim \text{Exp}(1)
\end{aligned}
$$
the parameters are defined as follows:
* $D_t$: The number of disasters in year $t$
* $r_t$: The rate parameter of the Poisson distribution of disasters in year $t$.
* $s$: The year in which the rate parameter changes (the switchpoint).
* $e$: The rate parameter before the switchpoint $s$.
* $l$: The rate parameter after the switchpoint $s$.
* $t_l$, $t_h$: The lower and upper boundaries of year $t$.
This model is built much like our previous models. The major differences are the introduction of discrete variables with the Poisson and discrete-uniform priors and the novel form of the deterministic random variable rate.
End of explanation
from pymc3 import Metropolis
with disaster_model:
step1 = NUTS([early_rate, late_rate])
# Use Metropolis for switchpoint, and missing values since it accommodates discrete variables
step2 = Metropolis([switchpoint, disasters.missing_values[0]] )
trace = sample(10000, step=[step1, step2])
Explanation: The logic for the rate random variable,
python
rate = switch(switchpoint >= year, early_rate, late_rate)
is implemented using switch, a Theano function that works like an if statement. It uses the first argument to switch between the next two arguments.
Missing values are handled transparently by passing a MaskedArray or a pandas.DataFrame with NaN values to the observed argument when creating an observed stochastic random variable. Behind the scenes, another random variable, disasters.missing_values is created to model the missing values. All we need to do to handle the missing values is ensure we sample this random variable as well.
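For reference, a hedged sketch of the pandas route mentioned above (the construction is illustrative; the model line is left as a comment since this example actually uses the MaskedArray):
```python
import numpy as np
import pandas as pd

# NaN marks the missing years, mirroring the masked entries above.
disasters_nan = pd.Series(disaster_data.astype(float).filled(np.nan))
# Poisson('disasters', rate, observed=disasters_nan) would likewise create a
# `disasters_missing` random variable behind the scenes.
```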
Unfortunately, because they are discrete variables and thus have no meaningful gradient, we cannot use NUTS for sampling switchpoint or the missing disaster observations. Instead, we will sample using a Metropolis step method, which implements adaptive Metropolis-Hastings, because it is designed to handle discrete values.
We sample with both samplers at once by passing them to the sample function in a list. Each new sample is generated by first applying step1 then step2.
End of explanation
traceplot(trace);
Explanation: In the trace plot below we can see that there's about a 10 year span that's plausible for a significant change in safety, but a 5 year span that contains most of the probability mass. The distribution is jagged because of the jumpy relationship between the year switchpoint and the likelihood and not due to sampling error.
End of explanation
import theano.tensor as T
from theano.compile.ops import as_op
@as_op(itypes=[T.lscalar], otypes=[T.lscalar])
def crazy_modulo3(value):
if value > 0:
return value % 3
else :
return (-value + 1) % 3
with Model() as model_deterministic:
a = Poisson('a', 1)
b = crazy_modulo3(a)
Explanation: Arbitrary deterministics
Due to its reliance on Theano, PyMC3 provides many mathematical functions and operators for transforming random variables into new random variables. However, the library of functions in Theano is not exhaustive, therefore Theano and PyMC3 provide functionality for creating arbitrary Theano functions in pure Python, and including these functions in PyMC models. This is supported with the as_op function decorator.
Theano needs to know the types of the inputs and outputs of a function, which are specified for as_op by itypes for inputs and otypes for outputs. The Theano documentation includes an overview of the available types.
End of explanation
from pymc3.distributions import Continuous
class Beta(Continuous):
def __init__(self, mu, *args, **kwargs):
super(Beta, self).__init__(*args, **kwargs)
self.mu = mu
self.mode = mu
def logp(self, value):
mu = self.mu
return beta_logp(value - mu)
@as_op(itypes=[T.dscalar], otypes=[T.dscalar])
def beta_logp(value):
return -1.5 * np.log(1 + (value)**2)
with Model() as model:
beta = Beta('slope', mu=0, testval=0)
Explanation: An important drawback of this approach is that it is not possible for theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op.
Arbitrary distributions
Similarly, the library of statistical distributions in PyMC3 is not exhaustive, but PyMC allows for the creation of user-defined functions for an arbitrary probability distribution. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability $log(p(x))$. This function may employ other random variables in its calculation. Here is an example inspired by a blog post by Jake Vanderplas on which priors to use for a linear regression (Vanderplas, 2014).
```python
import theano.tensor as T
from pymc3 import DensityDist, Uniform
with Model() as model:
alpha = Uniform('intercept', -100, 100)
# Create custom densities
beta = DensityDist('beta', lambda value: -1.5 * T.log(1 + value**2), testval=0)
eps = DensityDist('eps', lambda value: -T.log(T.abs_(value)), testval=1)
# Create likelihood
like = Normal('y_est', mu=alpha + beta * X, sd=eps, observed=Y)
```
For more complex distributions, one can create a subclass of Continuous or Discrete and provide the custom logp function, as required. This is how the built-in distributions in PyMC are specified. As an example, fields like psychology and astrophysics have complex likelihood functions for a particular process that may require numerical approximation. In these cases, it is impossible to write the function in terms of predefined theano operators and we must use a custom theano operator using as_op or inheriting from theano.Op.
Implementing the beta variable above as a Continuous subclass is shown below, along with a sub-function using the as_op decorator, though this is not strictly necessary.
End of explanation
# Convert X and Y to a pandas DataFrame
import pandas
df = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y})
Explanation: Generalized Linear Models
Generalized Linear Models (GLMs) are a class of flexible models that are widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of various GLMs with an intuitive R-like syntax that is implemented via the patsy module.
The glm submodule requires data to be included as a pandas DataFrame. Hence, for our linear regression example:
End of explanation
from pymc3.glm import glm
with Model() as model_glm:
glm('y ~ x1 + x2', df)
trace = sample(5000)
Explanation: The model can then be very concisely specified in one line of code.
End of explanation
from pymc3.glm.families import Binomial
df_logistic = pandas.DataFrame({'x1': X1, 'y': Y > np.median(Y)})
with Model() as model_glm_logistic:
glm('y ~ x1', df_logistic, family=Binomial())
Explanation: The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.
End of explanation
from pymc3.backends import SQLite
with Model() as model_glm_logistic:
glm('y ~ x1', df_logistic, family=Binomial())
backend = SQLite('trace.sqlite')
start = find_MAP()
step = NUTS(scaling=start)
trace = sample(5000, step=step, start=start, trace=backend)
summary(trace, vars=['x1'])
Explanation: Backends
PyMC3 has support for different ways to store samples during and after sampling, called backends, including in-memory (default), text file, and SQLite. These can be found in pymc3.backends:
By default, an in-memory ndarray is used but if the samples would get too large to be held in memory we could use the sqlite backend:
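Similarly, a sketch of the text-file backend mentioned above (assuming pymc3.backends exposes Text alongside SQLite; the directory name is arbitrary and basic_model is reused from earlier):
```python
from pymc3.backends import Text

with basic_model:
    text_backend = Text('linreg_trace')          # writes CSV files into ./linreg_trace
    trace_txt = sample(500, trace=text_backend)
```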
End of explanation
from pymc3.backends.sqlite import load
with basic_model:
trace_loaded = load('trace.sqlite')
Explanation: The stored trace can then later be loaded using the load command:
End of explanation |
11,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2> Plot a horizontal map of gridded radar data</h2>
<h4>This script loads in a binary file of gridded tail Doppler radar data from the NOAA P-3 produced by the windsyn program (NOAA NSSL) - a software package that performs a quasi-dual-Doppler analysis and outputs a gridded data product.</h4>
Step1: <b>Set variables</b>
Step2: <ul><b>Set up some characteristics for plotting. </b>
<li>Use Cylindrical Equidistant Area map projection.</li>
<li>Set the spacing of the barbs and X-axis time step for labels.</li>
<li>Set the start and end times for subsetting.</li>
<li>Set the lon/lat spacing for the basemap</li>
</ul>
Step3: <b>Create a map plot</b> | Python Code:
# Load the needed packages
from glob import glob
import os
import matplotlib.pyplot as plt
from awot.io import read_p3_radar
from awot.graph.common import create_basemap
from awot.graph import RadarHorizontalPlot
from awot.graph import FlightLevel
%matplotlib inline
Explanation: <h2> Plot a horizontal map of gridded radar data</h2>
<h4>This script loads in a binary file of gridded tail Doppler radar data from the NOAA P-3 produced by the windsyn program (NOAA NSSL) - a software package that performs a quasi-dual-Doppler analysis and outputs a gridded data product.</h4>
End of explanation
# Set some required information
# Choose the date of flight and module name
yymmdd, modn = '030610', '0528hn'
# Set the project name
Project="BAMEX"
# Set Directory path for data
fDir = "/Users/guy/data/bamex/radar"
# Construct the full path name for the windsyn output files
P3Radf = os.path.join(fDir, modn)
Explanation: <b>Set variables</b>
End of explanation
# Set map projection to use
proj = 'cea'
Wbarb_Spacing = 300 # Spacing of wind barbs along flight path (sec)
# Choose the X-axis time step (in seconds) where major labels will be
XlabStride = 3600
start_time = "2003-06-10 05:28:00"
end_time = "2003-06-10 05:38:00"
corners = [-96.5, 40.0, -95., 41.5]
# Set the lon/lat spacing for basemap
dLon = 0.3
dLat = 0.3
Explanation: <ul><b>Set up some characteristics for plotting. </b>
<li>Use Cylindrical Equidistant Area map projection.</li>
<li>Set the spacing of the barbs and X-axis time step for labels.</li>
<li>Set the start and end times for subsetting.</li>
<li>Set the lon/lat spacing for the basemap</li>
</ul>
End of explanation
# Creating axes outside seems to screw up basemap
fig, ax = plt.subplots(1, 1, figsize=(7, 7))
# Get the tail radar data
# When the binary file_format is selected AWOT looks for a .dpw and .hdr file,
# both of which are produced by windsyn and are needed for program to work properly
r = read_p3_radar.read_windsyn_binary(os.path.join(fDir,modn))
# Set the map for plotting
bm = create_basemap(proj=proj, resolution='l', area_thresh=1.,corners=corners,
lat_spacing=dLat, lon_spacing=dLon, ax=ax)
# Create a RadarGrid
rgp = RadarHorizontalPlot(r, basemap=bm)
rgp.plot_cappi('reflectivity', 2., vmin=15., vmax=60.,
color_bar=True, cb_pad="10%")#, ax=fl.basemap.ax)
# Creating axes outside seems to screw up basemap
fig, ax = plt.subplots(1, 1, figsize=(7, 7))
# Get the tail radar data
# When the binary file_format is selected AWOT looks for a .dpw and .hdr file,
# both of which are produced by windsyn and are needed for program to work properly
r = read_p3_radar.read_windsyn_binary(os.path.join(fDir,modn))
# Set the map for plotting
bm = create_basemap(proj=proj, resolution='l', area_thresh=1.,corners=corners,
lat_spacing=dLat, lon_spacing=dLon, ax=ax)
# Create a RadarGrid
rgp = RadarHorizontalPlot(r, basemap=bm)
rgp.plot_cappi('reflectivity', 2., vmin=15., vmax=60., color_bar=True, cb_pad="10%",
save_kmz=True)
Explanation: <b>Create a map plot</b>
End of explanation |
11,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loss Functions
Custom fastai loss functions
Step1: Wrapping a general loss function inside of BaseLoss provides extra functionalities to your loss functions
Step3: Focal Loss is the same as cross entropy except easy-to-classify observations are down-weighted in the loss calculation. The strength of down-weighting is proportional to the size of the gamma parameter. Put another way, the larger gamma the less the easy-to-classify observations contribute to the loss.
Step4: On top of the formula we define
Step5: We present a general Dice loss for segmentation tasks. It is commonly used together with CrossEntropyLoss or FocalLoss in kaggle competitions. This is very similar to the DiceMulti metric, but to be able to differentiate through, we replace the argmax activation by a softmax and compare this with a one-hot encoded target mask. This function also adds a smooth parameter to help with numerical stability in the intersection over union division. If your network has problems learning with this DiceLoss, try to set the square_in_union parameter in the DiceLoss constructor to True.
Step6: As a test case for the dice loss consider satellite image segmentation. Let us say we have three classes
Step7: Nearly everything is background in this example, and we have a thin river at the left of the image as well as a thin road in the middle of the image. If all our data looks similar to this, we say that there is a class imbalance, meaning that some classes (like river and road) appear relatively infrequently. If our model just predicted "background" (i.e. the value 0) for all pixels, it would be correct for most pixels. But this would be a bad model and the dice loss should reflect that.
Step8: Our dice score should be around 1/3 here, because the "background" class is predicted correctly (and that for nearly every pixel), but the other two clases are never predicted correctly. Dice score of 1/3 means dice loss of 1 - 1/3 = 2/3
Step9: If the model would predict everything correctly, the dice loss should be zero
Step10: You could easily combine this loss with FocalLoss defining a CombinedLoss, to balance between global (Dice) and local (Focal) features on the target mask.
Step11: Export - | Python Code:
#|export
class BaseLoss():
"Same as `loss_cls`, but flattens input and target."
activation=decodes=noops
def __init__(self,
loss_cls, # Uninitialized PyTorch-compatible loss
*args,
axis:int=-1, # Class axis
flatten:bool=True, # Flatten `inp` and `targ` before calculating loss
floatify:bool=False, # Convert `targ` to `float`
is_2d:bool=True, # Whether `flatten` keeps one or two channels when applied
**kwargs
):
store_attr("axis,flatten,floatify,is_2d")
self.func = loss_cls(*args,**kwargs)
functools.update_wrapper(self, self.func)
def __repr__(self) -> str: return f"FlattenedLoss of {self.func}"
@property
def reduction(self) -> str: return self.func.reduction
@reduction.setter
def reduction(self, v:str):
"Sets the reduction style (typically 'mean', 'sum', or 'none')"
self.func.reduction = v
def _contiguous(self, x:Tensor) -> TensorBase:
"Move `self.axis` to the last dimension and ensure tensor is contigous for `Tensor` otherwise just return"
return TensorBase(x.transpose(self.axis,-1).contiguous()) if isinstance(x,torch.Tensor) else x
def __call__(self,
inp:(Tensor,list), # Predictions from a `Learner`
targ:(Tensor,list), # Actual y label
**kwargs
) -> TensorBase: # `loss_cls` calculated on `inp` and `targ`
inp,targ = map(self._contiguous, (inp,targ))
if self.floatify and targ.dtype!=torch.float16: targ = targ.float()
if targ.dtype in [torch.int8, torch.int16, torch.int32]: targ = targ.long()
if self.flatten: inp = inp.view(-1,inp.shape[-1]) if self.is_2d else inp.view(-1)
return self.func.__call__(inp, targ.view(-1) if self.flatten else targ, **kwargs)
def to(self, device:torch.device):
"Move the loss function to a specified `device`"
if isinstance(self.func, nn.Module): self.func.to(device)
Explanation: Loss Functions
Custom fastai loss functions
End of explanation
#|export
@delegates()
class CrossEntropyLossFlat(BaseLoss):
"Same as `nn.CrossEntropyLoss`, but flattens input and target."
y_int = True # y interpolation
@use_kwargs_dict(keep=True, weight=None, ignore_index=-100, reduction='mean')
def __init__(self,
*args,
axis:int=-1, # Class axis
**kwargs
):
super().__init__(nn.CrossEntropyLoss, *args, axis=axis, **kwargs)
def decodes(self, x:Tensor) -> Tensor:
"Converts model output to target format"
return x.argmax(dim=self.axis)
def activation(self, x:Tensor) -> Tensor:
"`nn.CrossEntropyLoss`'s fused activation function applied to model output"
return F.softmax(x, dim=self.axis)
tst = CrossEntropyLossFlat(reduction='none')
output = torch.randn(32, 5, 10)
target = torch.randint(0, 10, (32,5))
#nn.CrossEntropy would fail with those two tensors, but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.CrossEntropyLoss()(output,target))
#Associated activation is softmax
test_eq(tst.activation(output), F.softmax(output, dim=-1))
#This loss function has a decodes which is argmax
test_eq(tst.decodes(output), output.argmax(dim=-1))
#In a segmentation task, we want to take the softmax over the channel dimension
tst = CrossEntropyLossFlat(axis=1)
output = torch.randn(32, 5, 128, 128)
target = torch.randint(0, 5, (32, 128, 128))
_ = tst(output, target)
test_eq(tst.activation(output), F.softmax(output, dim=1))
test_eq(tst.decodes(output), output.argmax(dim=1))
#|hide
#cuda
tst = CrossEntropyLossFlat(weight=torch.ones(10))
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tst.to(device)
output = torch.randn(32, 10, device=device)
target = torch.randint(0, 10, (32,), device=device)
_ = tst(output, target)
Explanation: Wrapping a general loss function inside of BaseLoss provides extra functionalities to your loss functions:
- flattens the tensors before trying to take the losses since it's more convenient (with a potential transpose to put axis at the end)
- a potential activation method that tells the library if there is an activation fused in the loss (useful for inference and methods such as Learner.get_preds or Learner.predict)
- a potential decodes method that is used on predictions in inference (for instance, an argmax in classification)
The args and kwargs will be passed to loss_cls during the initialization to instantiate a loss function. axis is put at the end for losses like softmax that are often performed on the last axis. If floatify=True, the targs will be converted to floats (useful for losses that only accept float targets like BCEWithLogitsLoss), and is_2d determines if we flatten while keeping the first dimension (batch size) or completely flatten the input. We want the first for losses like Cross Entropy, and the second for pretty much anything else.
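As a quick illustration, here is a minimal sketch (not from the fastai test suite) of wrapping an arbitrary PyTorch loss that has no flattened variant; the floatify and is_2d choices are assumptions suited to a regression-style loss:
```python
wrapped = BaseLoss(nn.SmoothL1Loss, floatify=True, is_2d=False)
preds = torch.randn(16, 4)
targs = torch.randint(0, 2, (16, 4))
loss_value = wrapped(preds, targs)   # both tensors are flattened to shape (64,) internally
```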
End of explanation
#|export
class FocalLoss(Module):
y_int=True # y interpolation
def __init__(self,
gamma:float=2.0, # Focusing parameter. Higher values down-weight easy examples' contribution to loss
weight:Tensor=None, # Manual rescaling weight given to each class
reduction:str='mean' # PyTorch reduction to apply to the output
):
"Applies Focal Loss: https://arxiv.org/pdf/1708.02002.pdf"
store_attr()
def forward(self, inp:Tensor, targ:Tensor) -> Tensor:
"Applies focal loss based on https://arxiv.org/pdf/1708.02002.pdf"
ce_loss = F.cross_entropy(inp, targ, weight=self.weight, reduction="none")
p_t = torch.exp(-ce_loss)
loss = (1 - p_t)**self.gamma * ce_loss
if self.reduction == "mean":
loss = loss.mean()
elif self.reduction == "sum":
loss = loss.sum()
return loss
class FocalLossFlat(BaseLoss):
    """
    Same as CrossEntropyLossFlat but with a focal parameter, `gamma`. Focal loss is introduced by Lin et al.
    https://arxiv.org/pdf/1708.02002.pdf. Note the class weighting factor in the paper, alpha, can be
    implemented through pytorch `weight` argument passed through to F.cross_entropy.
    """
y_int = True # y interpolation
@use_kwargs_dict(keep=True, weight=None, reduction='mean')
def __init__(self,
*args,
gamma:float=2.0, # Focusing parameter. Higher values down-weight easy examples' contribution to loss
axis:int=-1, # Class axis
**kwargs
):
super().__init__(FocalLoss, *args, gamma=gamma, axis=axis, **kwargs)
def decodes(self, x:Tensor) -> Tensor:
"Converts model output to target format"
return x.argmax(dim=self.axis)
def activation(self, x:Tensor) -> Tensor:
"`F.cross_entropy`'s fused activation function applied to model output"
return F.softmax(x, dim=self.axis)
#Compare focal loss with gamma = 0 to cross entropy
fl = FocalLossFlat(gamma=0)
ce = CrossEntropyLossFlat()
output = torch.randn(32, 5, 10)
target = torch.randint(0, 10, (32,5))
test_close(fl(output, target), ce(output, target))
#Test focal loss with gamma > 0 is different than cross entropy
fl = FocalLossFlat(gamma=2)
test_ne(fl(output, target), ce(output, target))
#In a segmentation task, we want to take the softmax over the channel dimension
fl = FocalLossFlat(gamma=0, axis=1)
ce = CrossEntropyLossFlat(axis=1)
output = torch.randn(32, 5, 128, 128)
target = torch.randint(0, 5, (32, 128, 128))
test_close(fl(output, target), ce(output, target), eps=1e-4)
test_eq(fl.activation(output), F.softmax(output, dim=1))
test_eq(fl.decodes(output), output.argmax(dim=1))
#|export
@delegates()
class BCEWithLogitsLossFlat(BaseLoss):
"Same as `nn.BCEWithLogitsLoss`, but flattens input and target."
@use_kwargs_dict(keep=True, weight=None, reduction='mean', pos_weight=None)
def __init__(self,
*args,
axis:int=-1, # Class axis
floatify:bool=True, # Convert `targ` to `float`
thresh:float=0.5, # The threshold on which to predict
**kwargs
):
if kwargs.get('pos_weight', None) is not None and kwargs.get('flatten', None) is True:
raise ValueError("`flatten` must be False when using `pos_weight` to avoid a RuntimeError due to shape mismatch")
if kwargs.get('pos_weight', None) is not None: kwargs['flatten'] = False
super().__init__(nn.BCEWithLogitsLoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
self.thresh = thresh
def decodes(self, x:Tensor) -> Tensor:
"Converts model output to target format"
return x>self.thresh
def activation(self, x:Tensor) -> Tensor:
"`nn.BCEWithLogitsLoss`'s fused activation function applied to model output"
return torch.sigmoid(x)
tst = BCEWithLogitsLossFlat()
output = torch.randn(32, 5, 10)
target = torch.randn(32, 5, 10)
#nn.BCEWithLogitsLoss would fail with those two tensors, but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))
output = torch.randn(32, 5)
target = torch.randint(0,2,(32, 5))
#nn.BCEWithLogitsLoss would fail with int targets but not our flattened version.
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))
tst = BCEWithLogitsLossFlat(pos_weight=torch.ones(10))
output = torch.randn(32, 5, 10)
target = torch.randn(32, 5, 10)
_ = tst(output, target)
test_fail(lambda x: nn.BCEWithLogitsLoss()(output,target))
#Associated activation is sigmoid
test_eq(tst.activation(output), torch.sigmoid(output))
#|export
@use_kwargs_dict(weight=None, reduction='mean')
def BCELossFlat(
*args,
axis:int=-1, # Class axis
floatify:bool=True, # Convert `targ` to `float`
**kwargs
):
"Same as `nn.BCELoss`, but flattens input and target."
return BaseLoss(nn.BCELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
tst = BCELossFlat()
output = torch.sigmoid(torch.randn(32, 5, 10))
target = torch.randint(0,2,(32, 5, 10))
_ = tst(output, target)
test_fail(lambda x: nn.BCELoss()(output,target))
#|export
@use_kwargs_dict(reduction='mean')
def MSELossFlat(
*args,
axis:int=-1, # Class axis
floatify:bool=True, # Convert `targ` to `float`
**kwargs
):
"Same as `nn.MSELoss`, but flattens input and target."
return BaseLoss(nn.MSELoss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
tst = MSELossFlat()
output = torch.sigmoid(torch.randn(32, 5, 10))
target = torch.randint(0,2,(32, 5, 10))
_ = tst(output, target)
test_fail(lambda x: nn.MSELoss()(output,target))
#|hide
#cuda
#Test losses work in half precision
if torch.cuda.is_available():
output = torch.sigmoid(torch.randn(32, 5, 10)).half().cuda()
target = torch.randint(0,2,(32, 5, 10)).half().cuda()
for tst in [BCELossFlat(), MSELossFlat()]: _ = tst(output, target)
#|export
@use_kwargs_dict(reduction='mean')
def L1LossFlat(
*args,
axis=-1, # Class axis
floatify=True, # Convert `targ` to `float`
**kwargs
):
"Same as `nn.L1Loss`, but flattens input and target."
return BaseLoss(nn.L1Loss, *args, axis=axis, floatify=floatify, is_2d=False, **kwargs)
#|export
class LabelSmoothingCrossEntropy(Module):
y_int = True # y interpolation
def __init__(self,
eps:float=0.1, # The weight for the interpolation formula
weight:Tensor=None, # Manual rescaling weight given to each class passed to `F.nll_loss`
reduction:str='mean' # PyTorch reduction to apply to the output
):
store_attr()
def forward(self, output:Tensor, target:Tensor) -> Tensor:
"Apply `F.log_softmax` on output then blend the loss/num_classes(`c`) with the `F.nll_loss`"
c = output.size()[1]
log_preds = F.log_softmax(output, dim=1)
if self.reduction=='sum': loss = -log_preds.sum()
else:
loss = -log_preds.sum(dim=1) #We divide by that size at the return line so sum and not mean
if self.reduction=='mean': loss = loss.mean()
return loss*self.eps/c + (1-self.eps) * F.nll_loss(log_preds, target.long(), weight=self.weight, reduction=self.reduction)
def activation(self, out:Tensor) -> Tensor:
"`F.log_softmax`'s fused activation function applied to model output"
return F.softmax(out, dim=-1)
def decodes(self, out:Tensor) -> Tensor:
"Converts model output to target format"
return out.argmax(dim=-1)
lmce = LabelSmoothingCrossEntropy()
output = torch.randn(32, 5, 10)
target = torch.randint(0, 10, (32,5))
test_close(lmce(output.flatten(0,1), target.flatten()), lmce(output.transpose(-1,-2), target))
Explanation: Focal Loss is the same as cross entropy except easy-to-classify observations are down-weighted in the loss calculation. The strength of down-weighting is proportional to the size of the gamma parameter. Put another way, the larger gamma the less the easy-to-classify observations contribute to the loss.
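A small hedged illustration of that down-weighting, comparing an "easy" (confidently correct) and a "hard" (barely correct) prediction for two gamma values:
```python
easy = torch.tensor([[4.0, 0.0]])   # confidently correct logits for class 0
hard = torch.tensor([[0.2, 0.0]])   # barely correct logits for class 0
targ = torch.tensor([0])
for gamma in (0.0, 2.0):
    fl = FocalLossFlat(gamma=gamma)
    print(f"gamma={gamma}: easy={fl(easy, targ).item():.4f}, hard={fl(hard, targ).item():.4f}")
```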
End of explanation
#|export
@delegates()
class LabelSmoothingCrossEntropyFlat(BaseLoss):
"Same as `LabelSmoothingCrossEntropy`, but flattens input and target."
y_int = True
@use_kwargs_dict(keep=True, eps=0.1, reduction='mean')
def __init__(self,
*args,
axis:int=-1, # Class axis
**kwargs
):
super().__init__(LabelSmoothingCrossEntropy, *args, axis=axis, **kwargs)
def activation(self, out:Tensor) -> Tensor:
"`LabelSmoothingCrossEntropy`'s fused activation function applied to model output"
return F.softmax(out, dim=-1)
def decodes(self, out:Tensor) -> Tensor:
"Converts model output to target format"
return out.argmax(dim=-1)
#These two should always equal each other since the Flat version is just passing data through
lmce = LabelSmoothingCrossEntropy()
lmce_flat = LabelSmoothingCrossEntropyFlat()
output = torch.randn(32, 5, 10)
target = torch.randint(0, 10, (32,5))
test_close(lmce(output.transpose(-1,-2), target), lmce_flat(output,target))
Explanation: On top of the formula we define:
- a reduction attribute that will be used when we call Learner.get_preds
- a weight attribute to pass to BCE
- an activation function that represents the activation fused in the loss (since we use cross entropy behind the scenes). It will be applied to the output of the model when calling Learner.get_preds or Learner.predict
- a potential decodes function that converts the output of the model to a format similar to the target (here indices). This is used in Learner.predict and Learner.show_results to decode the predictions
End of explanation
#|export
class DiceLoss:
"Dice loss for segmentation"
def __init__(self,
axis:int=1, # Class axis
smooth:float=1e-6, # Helps with numerical stabilities in the IoU division
reduction:str="sum", # PyTorch reduction to apply to the output
square_in_union:bool=False # Squares predictions to increase slope of gradients
):
store_attr()
def __call__(self, pred:Tensor, targ:Tensor) -> Tensor:
"One-hot encodes targ, then runs IoU calculation then takes 1-dice value"
targ = self._one_hot(targ, pred.shape[self.axis])
pred, targ = TensorBase(pred), TensorBase(targ)
assert pred.shape == targ.shape, 'input and target dimensions differ, DiceLoss expects non one-hot targs'
pred = self.activation(pred)
sum_dims = list(range(2, len(pred.shape)))
inter = torch.sum(pred*targ, dim=sum_dims)
union = (torch.sum(pred**2+targ, dim=sum_dims) if self.square_in_union
else torch.sum(pred+targ, dim=sum_dims))
dice_score = (2. * inter + self.smooth)/(union + self.smooth)
loss = 1- dice_score
if self.reduction == 'mean':
loss = loss.mean()
elif self.reduction == 'sum':
loss = loss.sum()
return loss
@staticmethod
def _one_hot(
x:Tensor, # Non one-hot encoded targs
classes:int, # The number of classes
axis:int=1 # The axis to stack for encoding (class dimension)
) -> Tensor:
"Creates one binary mask per class"
return torch.stack([torch.where(x==c, 1, 0) for c in range(classes)], axis=axis)
def activation(self, x:Tensor) -> Tensor:
"Activation function applied to model output"
return F.softmax(x, dim=self.axis)
def decodes(self, x:Tensor) -> Tensor:
"Converts model output to target format"
return x.argmax(dim=self.axis)
dl = DiceLoss()
_x = tensor( [[[1, 0, 2],
[2, 2, 1]]])
_one_hot_x = tensor([[[[0, 1, 0],
[0, 0, 0]],
[[1, 0, 0],
[0, 0, 1]],
[[0, 0, 1],
[1, 1, 0]]]])
test_eq(dl._one_hot(_x, 3), _one_hot_x)
dl = DiceLoss()
model_output = tensor([[[[2., 1.],
[1., 5.]],
[[1, 2.],
[3., 1.]],
[[3., 0],
[4., 3.]]]])
target = tensor([[[2, 1],
[2, 0]]])
dl_out = dl(model_output, target)
test_eq(dl.decodes(model_output), target)
dl = DiceLoss(reduction="mean")
#identical masks
model_output = tensor([[[.1], [.1], [100.]]])
target = tensor([[2]])
test_close(dl(model_output, target), 0)
#50% intersection
model_output = tensor([[[.1, 100.], [.1, .1], [100., .1]]])
target = tensor([[2, 1]])
test_close(dl(model_output, target), .66, eps=0.01)
Explanation: We present a general Dice loss for segmentation tasks. It is commonly used together with CrossEntropyLoss or FocalLoss in kaggle competitions. This is very similar to the DiceMulti metric, but to be able to differentiate through, we replace the argmax activation by a softmax and compare this with a one-hot encoded target mask. This function also adds a smooth parameter to help with numerical stability in the intersection over union division. If your network has problems learning with this DiceLoss, try to set the square_in_union parameter in the DiceLoss constructor to True.
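For reference, a sketch of the per-class quantity the implementation above optimises, with $p$ the softmax predictions, $t$ the one-hot targets and $s$ the smooth term (the default reduction='sum' then sums $1-\mathrm{dice}$ over classes and batch elements):
$$\mathrm{dice} = \frac{2\sum_i p_i\, t_i + s}{\sum_i p_i + \sum_i t_i + s}, \qquad \mathcal{L}_{\mathrm{Dice}} = 1 - \mathrm{dice}$$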
End of explanation
target = torch.zeros(100,100)
target[:,5] = 1
target[:,50] = 2
plt.imshow(target);
Explanation: As a test case for the dice loss consider satellite image segmentation. Let us say we have three classes: Background (0), River (1) and Road (2). Let us look at a specific target
End of explanation
model_output_all_background = torch.zeros(3, 100,100)
# assign probability 1 to class 0 everywhere
# to get probability 1, we just need a high model output before softmax gets applied
model_output_all_background[0,:,:] = 100
# add a batch dimension
model_output_all_background = torch.unsqueeze(model_output_all_background,0)
target = torch.unsqueeze(target,0)
Explanation: Nearly everything is background in this example, and we have a thin river at the left of the image as well as a thin road in the middle of the image. If all our data looks similar to this, we say that there is a class imbalance, meaning that some classes (like river and road) appear relatively infrequently. If our model just predicted "background" (i.e. the value 0) for all pixels, it would be correct for most pixels. But this would be a bad model and the dice loss should reflect that.
End of explanation
test_close(dl(model_output_all_background, target), 0.67, eps=0.01)
Explanation: Our dice score should be around 1/3 here, because the "background" class is predicted correctly (and that for nearly every pixel), but the other two clases are never predicted correctly. Dice score of 1/3 means dice loss of 1 - 1/3 = 2/3:
End of explanation
correct_model_output = torch.zeros(3, 100,100)
correct_model_output[0,:,:] = 100
correct_model_output[0,:,5] = 0
correct_model_output[0,:,50] = 0
correct_model_output[1,:,5] = 100
correct_model_output[2,:,50] = 100
correct_model_output = torch.unsqueeze(correct_model_output, 0)
test_close(dl(correct_model_output, target), 0)
#|hide
#cuda
#Test DicceLoss work in half precision
if torch.cuda.is_available():
output = torch.randn(32, 4, 5, 10).half().cuda()
target = torch.randint(0,2,(32, 5, 10)).half().cuda()
_ = dl(output, target)
Explanation: If the model would predict everything correctly, the dice loss should be zero:
End of explanation
class CombinedLoss:
"Dice and Focal combined"
def __init__(self, axis=1, smooth=1., alpha=1.):
store_attr()
self.focal_loss = FocalLossFlat(axis=axis)
self.dice_loss = DiceLoss(axis, smooth)
def __call__(self, pred, targ):
return self.focal_loss(pred, targ) + self.alpha * self.dice_loss(pred, targ)
def decodes(self, x): return x.argmax(dim=self.axis)
def activation(self, x): return F.softmax(x, dim=self.axis)
cl = CombinedLoss()
output = torch.randn(32, 4, 5, 10)
target = torch.randint(0,2,(32, 5, 10))
_ = cl(output, target)
Explanation: You could easily combine this loss with FocalLoss defining a CombinedLoss, to balance between global (Dice) and local (Focal) features on the target mask.
End of explanation
#|hide
from nbdev.export import *
notebook2script()
Explanation: Export -
End of explanation |
11,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames McClure
Title
Step1: Table 1 - Target Information for Ophiuchus Sources
Step2: Table 2 - Spectral Type Information for the Entire Sample
Step3: Merge the two catalogs
Step4: Save data | Python Code:
import warnings
warnings.filterwarnings("ignore")
from astropy.io import ascii
import pandas as pd
Explanation: ApJdataFrames McClure
Title: THE EVOLUTIONARY STATE OF THE PRE-MAIN SEQUENCE POPULATION IN OPHIUCHUS: A LARGE INFRARED SPECTROGRAPH SURVEY
Authors: McClure et al.
Data is from this paper:
http://iopscience.iop.org/0067-0049/188/1/75/
End of explanation
tbl1 = pd.read_csv("http://iopscience.iop.org/0067-0049/188/1/75/suppdata/apjs330182t1_ascii.txt",
sep="\t", na_values=" ... ", skiprows=[0,1,2], skipfooter=1, usecols=range(9))
tbl1.head()
Explanation: Table 1 - Target Information for Ophiuchus Sources
End of explanation
tbl2 = pd.read_csv("http://iopscience.iop.org/0067-0049/188/1/75/suppdata/apjs330182t2_ascii.txt",
sep="\t", na_values=" ... ", skiprows=[0,1,2,4], skipfooter=4)
del tbl2["Unnamed: 13"]
tbl2.head()
Explanation: Table 2 - Spectral Type Information for the Entire Sample
End of explanation
tbl1_2_merge = pd.merge(tbl1[["Name", "R.A. (J2000)", "Decl. (J2000)"]], tbl2, how="outer")
tbl1_2_merge.tail()
Explanation: Merge the two catalogs
End of explanation
lowAv = nonBinary = tbl1_2_merge['A_V'] < 10.0
nonBinary = tbl1_2_merge['Mult.'] != tbl1_2_merge['Mult.']
classIII = tbl1_2_merge['Class'] == 'III'
wtts = tbl1_2_merge['TT Type'] == 'WTTS'
diskless = tbl1_2_merge['State'] == 'Photosphere'
for val in [lowAv, nonBinary, classIII, wtts, diskless]:
print(val.sum())
sample = nonBinary & diskless
sample.sum()
tbl1_2_merge.to_csv('../data/McClure2010/tbl1_2_merge_all.csv', index=False)
tbl1_2_merge.columns
tbl1_2_merge[wtts]
! mkdir ../data/McClure2010
tbl1_2_merge.to_csv("../data/McClure2010/tbl1_2_merge.csv", index=False, sep='\t')
Explanation: Save data
End of explanation |
11,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 01
Import
Step2: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
Step3: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider over the interval [-8, 8] with step sizes of 2.
Step5: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
Step6: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 01
Import
End of explanation
def print_sum(a, b):
    """Print the sum of the arguments a and b."""
    print(a + b)
Explanation: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
End of explanation
interact(print_sum, a=(-10.0, 10.0, 0.1), b=(-8, 8, 2));
assert True # leave this for grading the print_sum exercise
Explanation: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider over the interval [-8, 8] with step sizes of 2.
End of explanation
def print_string(s, length=False):
    """Print the string s and, if length is True, also print its length."""
    print(s)
    if length:
        print(len(s))
Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
End of explanation
interact(print_string, s="Hello World!", length=True);
assert True # leave this for grading the print_string exercise
Explanation: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True.
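The imports above also bring in fixed and interactive, which the graded cells never exercise. As a small aside (a hypothetical extra cell, not part of the graded exercise), a parameter can be held constant with fixed while the other stays a slider, and interactive builds the widget object for explicit display:
interact(print_sum, a=(-10.0, 10.0, 0.1), b=fixed(4));
w = interactive(print_string, s="Hello World!", length=True)
display(w)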
End of explanation |
11,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CH82 Model
The following tries to reproduce Fig 8 from Hawkes, Jalali, Colquhoun (1992).
First we create the $Q$-matrix for this particular model. Please note that the units are different from other publications.
Step1: We first reproduce the top two panels showing $\mathrm{det} W(s)$ for open and shut times.
These quantities can be accessed using dcprogs.likelihood.DeterminantEq. The plots are done using a standard plotting function from the dcprogs.likelihood package as well.
Step2: Then we want to plot the panels c and d showing the excess shut and open-time probability densities $(\tau = 0.2)$. To do this we need to access each exponential that makes up the approximate survivor function. We could use:
Step3: The list components above contain 2-tuples with the weight (as a matrix) and the exponent (or root) for each exponential component in $^{A}R_{\mathrm{approx}}(t)$. We could then create Python functions pdf(t) for each exponential component, as is done below for the first root
Step4: The initial occupancies, as well as the $Q_{AF}e^{-Q_{FF}\tau}$ factor, are obtained directly from the object implementing the missed event likelihood $^{e}G(t)$.
However, there is a convenience function that does all the above in the package. Since it is generally of little use, it is not currently exported to the dcprogs.likelihood namespace. So we create below a plotting function that uses it.
Step5: Finally, we create the last plot (e), and throw in an (f) for good measure. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from dcprogs.likelihood import QMatrix
tau = 1e-4
qmatrix = QMatrix([[ -3050, 50, 3000, 0, 0 ],
[ 2./3., -1502./3., 0, 500, 0 ],
[ 15, 0, -2065, 50, 2000 ],
[ 0, 15000, 4000, -19000, 0 ],
[ 0, 0, 10, 0, -10 ] ], 2)
qmatrix.matrix /= 1000.0
Explanation: CH82 Model
The following tries to reproduce Fig 8 from Hawkes, Jalali, Colquhoun (1992).
First we create the $Q$-matrix for this particular model. Please note that the units are different from other publications.
End of explanation
from dcprogs.likelihood import plot_roots, DeterminantEq
fig, ax = plt.subplots(1, 2, figsize=(7,5))
plot_roots(DeterminantEq(qmatrix, 0.2), ax=ax[0])
ax[0].set_xlabel('Laplace $s$')
ax[0].set_ylabel('$\\mathrm{det} ^{A}W(s)$')
plot_roots(DeterminantEq(qmatrix, 0.2).transpose(), ax=ax[1])
ax[1].set_xlabel('Laplace $s$')
ax[1].set_ylabel('$\\mathrm{det} ^{F}W(s)$')
ax[1].yaxis.tick_right()
ax[1].yaxis.set_label_position("right")
fig.tight_layout()
Explanation: We first reproduce the top two panels showing $\mathrm{det} W(s)$ for open and shut times.
These quantities can be accessed using dcprogs.likelihood.DeterminantEq. The plots are done using a standard plotting function from the dcprogs.likelihood package as well.
End of explanation
from dcprogs.likelihood import ApproxSurvivor
approx = ApproxSurvivor(qmatrix, tau)
components = approx.af_components
print(components[:1])
Explanation: Then we want to plot the panels c and d showing the excess shut and open-time probability densities $(\tau = 0.2)$. To do this we need to access each exponential that makes up the approximate survivor function. We could use:
End of explanation
from dcprogs.likelihood import MissedEventsG
weight, root = components[1]
eG = MissedEventsG(qmatrix, tau)
# Note: the sum below is equivalent to a scalar product with u_F
coefficient = sum(np.dot(eG.initial_occupancies, np.dot(weight, eG.af_factor)))
pdf = lambda t: coefficient * np.exp(t * root)
Explanation: The list components above contain 2-tuples with the weight (as a matrix) and the exponent (or root) for each exponential component in $^{A}R_{\mathrm{approx}}(t)$. We could then create Python functions pdf(t) for each exponential component, as is done below for the first root:
End of explanation
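A sketch (not from the original notebook) of how the individual components could be accumulated into the full approximate excess open-time density, assuming every entry of af_components yields a (weight, root) pair exactly as above:
def approx_open_pdf(t):
    # sum the exponential contributions of all components of the approximate survivor function
    total = 0.0
    for weight, root in approx.af_components:
        coeff = sum(np.dot(eG.initial_occupancies, np.dot(weight, eG.af_factor)))
        total += coeff * np.exp(t * root)
    return total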
from dcprogs.likelihood._methods import exponential_pdfs
def plot_exponentials(qmatrix, tau, x=None, ax=None, nmax=2, shut=False):
from dcprogs.likelihood import missed_events_pdf
if ax is None:
fig, ax = plt.subplots(1,1)
if x is None: x = np.arange(0, 5*tau, tau/10)
pdf = missed_events_pdf(qmatrix, tau, nmax=nmax, shut=shut)
graphb = [x, pdf(x+tau), '-k']
functions = exponential_pdfs(qmatrix, tau, shut=shut)
plots = ['.r', '.b', '.g']
together = None
for f, p in zip(functions[::-1], plots):
if together is None: together = f(x+tau)
else: together = together + f(x+tau)
graphb.extend([x, together, p])
ax.plot(*graphb)
fig, ax = plt.subplots(1,2, figsize=(7,5))
ax[0].set_xlabel('time $t$ (ms)')
ax[0].set_ylabel('Excess open-time probability density $f_{\\bar{\\tau}=0.2}(t)$')
plot_exponentials(qmatrix, 0.2, shut=False, ax=ax[0])
plot_exponentials(qmatrix, 0.2, shut=True, ax=ax[1])
ax[1].set_xlabel('time $t$ (ms)')
ax[1].set_ylabel('Excess shut-time probability density $f_{\\bar{\\tau}=0.2}(t)$')
ax[1].yaxis.tick_right()
ax[1].yaxis.set_label_position("right")
fig.tight_layout()
Explanation: The initial occupancies, as well as the $Q_{AF}e^{-Q_{FF}\tau}$ factor, are obtained directly from the object implementing the missed event likelihood $^{e}G(t)$.
However, there is a convenience function that does all the above in the package. Since it is generally of little use, it is not currently exported to the dcprogs.likelihood namespace. So we create below a plotting function that uses it.
End of explanation
fig, ax = plt.subplots(1,2, figsize=(7,5))
ax[0].set_xlabel('time $t$ (ms)')
ax[0].set_ylabel('Excess open-time probability density $f_{\\bar{\\tau}=0.5}(t)$')
plot_exponentials(qmatrix, 0.5, shut=False, ax=ax[0])
plot_exponentials(qmatrix, 0.5, shut=True, ax=ax[1])
ax[1].set_xlabel('time $t$ (ms)')
ax[1].set_ylabel('Excess shut-time probability density $f_{\\bar{\\tau}=0.5}(t)$')
ax[1].yaxis.tick_right()
ax[1].yaxis.set_label_position("right")
fig.tight_layout()
from dcprogs.likelihood import QMatrix, MissedEventsG
tau = 1e-4
qmatrix = QMatrix([[ -3050, 50, 3000, 0, 0 ],
[ 2./3., -1502./3., 0, 500, 0 ],
[ 15, 0, -2065, 50, 2000 ],
[ 0, 15000, 4000, -19000, 0 ],
[ 0, 0, 10, 0, -10 ] ], 2)
eG = MissedEventsG(qmatrix, tau, 2, 1e-8, 1e-8)
meG = MissedEventsG(qmatrix, tau)
t = 3.5* tau
print(eG.initial_CHS_occupancies(t) - meG.initial_CHS_occupancies(t))
Explanation: Finally, we create the last plot (e), and throw in an (f) for good measure.
End of explanation |
11,437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spam detection
The main aim of this project is to build a machine learning classifier that is able to automatically detect
spammy articles, based on their content.
Step1: Modeling
We tried out various models and selected the best performing models (with the best performing parameter settings for each model). At the end, we retained 3 models which are
Step2: logistic regression
Step3: random forest
Step4: Combination 1
We decided to try combining these models in order to construct a better and more consistent one.
voting system
Step5: customizing
Step6: Here you can see that we benefited from the good behavior of the logistic regression and the random forest. By contrast,
we couldn't do the same with the Naive Bayes model, because it makes us misclassify a lot of OK articles, which leads to
a low precision.
Step7: Combination 2
Now, we would like to capture more of the not-OK articles. To this end, we decided to include a few false positives
in the training dataset. In order to do so in an intelligent way and to select some representative samples, we first
analyzed these false positives.
Step8: This means that we have two big clusters of false positives (green and red). We have chosen to pick
10 samples of each cluster at random.
Step9: Now we do the prediction again
random forest
Step10: logistic regression
Step11: Naive Bayes
Step12: Voting
Step13: Customizing | Python Code:
! sh bootstrap.sh
from sklearn.cluster import KMeans
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
from sklearn.utils import shuffle
from sklearn.metrics import f1_score
from sklearn.cross_validation import KFold
from sklearn.metrics import recall_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
#Load train dataset
df = pd.read_csv("enwiki.draft_quality.201608-201701.feature_labels.tsv", sep="\t")
#shuffle the dataframe so that samples will be randomly distributed in Cross Validation Folds
df = shuffle(df)
#Replace strings with integers : 1 for OK and 0 for Not OK
df["draft_quality"] = df["draft_quality"].replace({"OK" : 1, "vandalism" : 0, "spam" : 0, "attack" : 0})
#df["Markup_chars_thresholded"]=(df['feature.(wikitext.revision.markup_chars / max(wikitext.revision.chars, 1))']>0.35)*1
#Put features and labels on different dataframes
X=df.drop(["draft_quality"], 1)
Y=df["draft_quality"]
df2 = pd.read_csv("enwiki.draft_quality.50k_stratified.feature_labels.tsv", sep="\t")
df2["draft_quality"] = df2["draft_quality"].replace({"OK" : 1, "vandalism" : 0, "spam" : 0, "attack" : 0})
#df2["Markup_chars_thresholded"]=(df2['feature.(wikitext.revision.markup_chars / max(wikitext.revision.chars, 1))']>0.35)*1
X2=df2.drop(["draft_quality"], 1)
Y2=df2["draft_quality"]
X2=np.array(X2)
Y2=np.array(Y2)
X=np.array(X)
Y=np.array(Y)
#lenghts of boths datasets
print(len(X))
print(len(X2))
Explanation: Spam detection
The main aim of this project is to build a machine learning classifier that is able to automatically detect
spammy articles, based on their content.
End of explanation
weights=np.array([0.7,1-0.7])
clf = BernoulliNB(alpha=22, class_prior=weights)
clf.fit(X2, Y2)
prediction_nb=clf.predict(X)
confusion=confusion_matrix(Y, prediction_nb, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
Explanation: Modeling
We tried out various models and selected the best performing models (with the best performing parameter settings for each model). At the end, we retained 3 models which are:
1. Bernoulli Naïve Bayes
2. Random forest
3. Logistic regression
Bernoulli Naïve Bayes
End of explanation
clf2 = LogisticRegression(penalty='l1', random_state=0, class_weight={1:0.1, 0: 0.9})
clf2.fit(X2, Y2)
prediction_lr=clf2.predict(X)
confusion=confusion_matrix(Y, prediction_lr, labels=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
Explanation: logistic regression
End of explanation
clf3 = RandomForestClassifier(n_estimators=2, min_samples_leaf=1, random_state=25, class_weight={1:0.9, 0: 0.1})
clf3.fit(X2, Y2)
prediction_rf=clf3.predict(X)
confusion=confusion_matrix(Y, prediction_rf, labels=None, sample_weight=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
Explanation: random forest
End of explanation
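KFold and f1_score are imported at the top of the notebook but never used afterwards. As a sketch (not part of the original analysis) of how they could back a quick cross-validated check of the random forest on the training set, assuming the legacy sklearn.cross_validation API imported above:
rf_cv = RandomForestClassifier(n_estimators=2, min_samples_leaf=1, random_state=25, class_weight={1: 0.9, 0: 0.1})
kf = KFold(len(X2), n_folds=5, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in kf:
    # fit on the training fold and score the F1 of the not-OK class (label 0) on the held-out fold
    rf_cv.fit(X2[train_idx], Y2[train_idx])
    scores.append(f1_score(Y2[test_idx], rf_cv.predict(X2[test_idx]), pos_label=0))
print("mean cross-validated F1 on the not-OK class:", np.mean(scores))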
#Here we construct our voting function
def voting(pred1, pred2, pred3):
final_prediction=np.zeros_like(pred1)
for i in range(len(pred1)):
if pred1[i]==pred2[i]:
final_prediction[i]=pred1[i]
elif pred1[i]==pred3[i]:
final_prediction[i]=pred1[i]
elif pred2[i]==pred3[i]:
final_prediction[i]=pred2[i]
return final_prediction
#Here we make the prediction using voting function (with the three models defined above)
prediction= voting(prediction_lr, prediction_nb, prediction_rf)
from sklearn.metrics import confusion_matrix
confusion=confusion_matrix(Y, prediction, labels=None, sample_weight=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
Explanation: Combination 1
We decided to try combining these models in order to construct a better and more consistent one.
voting system
End of explanation
#Since we are interested in negatives (not-OK) we will analyze how many times a model detects a not-OK article while
#the others don't
def get_missclasified_indexes(pred1, Y_true, Class):
index_list=[]
a=0
b=1
if Class=="negative":
a=1
b=0
for i in range(len(pred1)):
if pred1[i]==a and Y_true[i]==b:
index_list.append(i)
return index_list
false_negative_indexes=get_missclasified_indexes(prediction, Y, "negative")
print(len(prediction[false_negative_indexes]))
print(np.sum(prediction_nb[false_negative_indexes]!=prediction[false_negative_indexes]))
print(np.sum(prediction_rf[false_negative_indexes]!=prediction[false_negative_indexes]))
print(np.sum(prediction_lr[false_negative_indexes]!=prediction[false_negative_indexes]))
##Here we define our function based on the results above
def voting_customized(pred1, pred2, pred3):
final_prediction=np.zeros_like(pred1)
for i in range(len(pred1)):
if pred1[i]==0:
final_prediction[i]=0
else:
final_prediction[i]=pred3[i]
return final_prediction
#making a prediction with our new function
prediction= voting_customized(prediction_lr, prediction_nb, prediction_rf)
confusion=confusion_matrix(Y, prediction, labels=None, sample_weight=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
false_negative_indexes=get_missclasified_indexes(prediction, Y, "negative")
print(len(prediction[false_negative_indexes]))
print(np.sum(prediction_nb[false_negative_indexes]!=prediction[false_negative_indexes]))
print(np.sum(prediction_rf[false_negative_indexes]!=prediction[false_negative_indexes]))
print(np.sum(prediction_lr[false_negative_indexes]!=prediction[false_negative_indexes]))
Explanation: customizing
End of explanation
from sklearn.metrics import roc_curve, auc
# The original cell referenced undefined y_test / y_score; as an illustrative choice we score
# the logistic regression's probability for the OK class (label 1) against the true labels Y.
y_score = clf2.predict_proba(X)[:, 1]
fpr, tpr, _ = roc_curve(Y, y_score)
roc_auc = auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc='lower right')
plt.show()
Explanation: Here you can see that we benefited from the good behavior of the logistic regression and the random forest. By contrast,
we couldn't do the same with the Naive Bayes model, because it makes us misclassify a lot of OK articles, which leads to
a low precision.
End of explanation
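The recall and precision for the not-OK class are computed by hand from the confusion matrix throughout this notebook; as a sanity check (a sketch, not part of the original pipeline), sklearn can compute the same quantities directly by treating 0 (not-OK) as the positive label:
from sklearn.metrics import precision_score, recall_score
# pos_label=0 because the not-OK class is encoded as 0 in this notebook
print("recall (not-OK):", recall_score(Y, prediction, pos_label=0))
print("precision (not-OK):", precision_score(Y, prediction, pos_label=0))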
from scipy.cluster.hierarchy import dendrogram, linkage
Z = linkage(X[false_negative_indexes], 'ward')
plt.figure(figsize=(25, 25))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
dendrogram(
Z,
leaf_rotation=90.,
leaf_font_size=11.,
)
plt.show()
Explanation: Combination 2
Now, we would like to capture more of the not-OK articles. To this end, we decided to include a few false positives
in the training dataset. In order to do so in an intelligent way and to select some representative samples, we first
analyzed these false positives.
End of explanation
#we perform a kmeans clustering with 2 clusters
kmeans = KMeans(n_clusters=2, random_state=0).fit(X[false_negative_indexes])
cluster_labels=kmeans.labels_
print(cluster_labels)
print(np.unique(cluster_labels))
#Picking up the samples from the clusters and adding them to the training dataset.
false_negatives_cluster0=[]
false_negatives_cluster1=[]
for i in range(1,11):
random.seed(a=i)
false_negatives_cluster0.append(random.choice([w for index_w, w in enumerate(false_negative_indexes) if cluster_labels[index_w] == 0]))
for i in range(1,11):
random.seed(a=i)
false_negatives_cluster1.append(random.choice([w for index_w, w in enumerate(false_negative_indexes) if cluster_labels[index_w] == 1]))
#adding 1st cluster's samples
Y2=np.reshape(np.dstack(Y2), (len(Y2),1))
temp_arr=np.array([Y[false_negatives_cluster0]])
temp_arr=np.reshape(np.dstack(temp_arr), (10,1))
X2_new = np.vstack((X2, X[false_negatives_cluster0]))
Y2_new=np.vstack((Y2, temp_arr))
# Second
temp_arr2=np.array([Y[false_negatives_cluster1]])
temp_arr2=np.reshape(np.dstack(temp_arr2), (10,1))
X2_new = np.vstack((X2_new, X[false_negatives_cluster1]))
Y2_new=np.vstack((Y2_new, temp_arr2))
Explanation: This means that we have two big clusters of false positives (green and red). We have chosen to pick
10 samples of each cluster at random.
End of explanation
Y2_new=np.reshape(np.dstack(Y2_new), (len(Y2_new),))
clf3.fit(X2_new, Y2_new)
prediction_rf_new=clf3.predict(X)
confusion=confusion_matrix(Y, prediction_rf_new, labels=None, sample_weight=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
Explanation: Now we do the prediction again
random forest
End of explanation
clf2.fit(X2_new, Y2_new)
prediction_lr_new=clf2.predict(X)
confusion=confusion_matrix(Y, prediction_lr_new, labels=None, sample_weight=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
Explanation: logistic regression
End of explanation
from sklearn.naive_bayes import BernoulliNB
weights=np.array([0.7,1-0.7])
clf = BernoulliNB(alpha=22, class_prior=weights)
clf.fit(X2_new, Y2_new)
prediction_nb_new=clf.predict(X)
confusion=confusion_matrix(Y, prediction_nb_new, labels=None, sample_weight=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
Explanation: Naive Bayes
End of explanation
prediction= voting(prediction_lr_new, prediction_nb_new, prediction_rf_new)
confusion=confusion_matrix(Y, prediction, labels=None, sample_weight=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
Explanation: Voting
End of explanation
def voting_customized2(pred1, pred2, pred3):
final_prediction=np.zeros_like(pred1)
for i in range(len(pred1)):
if pred1[i]==0:
final_prediction[i]=0
else:
final_prediction[i]=pred2[i]
return final_prediction
prediction= voting_customized2(prediction_lr_new, prediction_nb_new, prediction_rf_new)
confusion=confusion_matrix(Y, prediction, labels=None, sample_weight=None)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
#Load train dataset
df = pd.read_csv("./enwiki.draft_quality.50k_stratified.feature_labels.tsv", sep="\t")
#shuffle the dataframe so that samples will be randomly distributed in Cross Validation Folds
df = shuffle(df)
#Replace strings with integers : 1 for OK and 0 for Not OK
df["draft_quality"] = df["draft_quality"].replace({"OK" : 1, "vandalism" : 0, "spam" : 0, "attack" : 0})
#df["Markup_chars_thresholded"]=(df['feature.(wikitext.revision.markup_chars / max(wikitext.revision.chars, 1))']>0.35)*1
#Put features and labels on different dataframes
X=df.drop(["draft_quality"], 1)
Y=df["draft_quality"]
from sklearn.ensemble import AdaBoostClassifier
clf = AdaBoostClassifier(n_estimators=300)
clf.fit(X, Y)
prediction=clf.predict(X)
confusion=confusion_matrix(Y, prediction)
print(confusion)
recall=confusion[0,0]/(confusion[0,0]+confusion[0,1])
precision=confusion[0,0]/(confusion[0,0]+confusion[1,0])
print("Over all the not-OK articles included in the dataset, we detect:")
print(recall)
print("Over all the articles predicted as being not-OK, only this proportion is really not-OK:")
print(precision)
cpt = 0
for idx, item in enumerate (Y.values):
if item != prediction[idx] and item == 0: # false positive, i.e. a spam article that was not caught
cpt += 1
#print (X.values[idx])
print (cpt)
Explanation: Customizing
End of explanation |
11,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Missionaries and Infidels
We illustrate the notion of a search problem with the following example, which is also known as the
<a href="https://en.wikipedia.org/wiki/Missionaries_and_cannibals_problem">missionaries and cannibals problem</a>.
Step1: $\texttt{no_problem}(m, i)$ is true if there is no problem on either side.
$m$ and $i$ are the number of missionaries and infidels on the left shore.
Hence there are $3-m$ missionaries and $3-i$ infidels on the right shore.
Step2: A state is represented as a triple. The triple $(m, i, b)$ specifies that there are
- $m$ missionaries,
- $i$ infidels, and
- $b$ boats
on the western shore of the river. This implies that there are
$3 - m$ missionaries, $3 - i$ infidels, and $1 - b$ boats on the eastern shore.
The function next_states takes a given state and computes the set of states that can be reached from state by one crossing of the river.
Step3: Initially, all missionaries, all infidels and the boat are on the left shore.
The goal is to have everybody on the right shore, hence the numbers on the left shore
should all be $0$.
Step4: In order to compute a solution of this search problem, we have to %run the notebook Breadth-First-Search.ipynb.
Printing the Solution
To begin with, we display the transition relation that is generated by the function next_states. To this end, we need the module graphviz.
Step6: The function dot_graph(R) turns a given binary relation R into a graph.
Step7: The function call createRelation(start) computes the transition relation. It assumes that all states are reachable from start.
Step8: The function call printPath(Path) prints the solution of the search problem. | Python Code:
problem = lambda m, i: 0 < m < i
Explanation: Missionaries and Infidels
We illustrate the notion of a search problem with the following example, which is also known as the
<a href="https://en.wikipedia.org/wiki/Missionaries_and_cannibals_problem">missionaries and cannibals problem</a>:
Three missionaries and three infidels have to cross a river that runs from the north to the south.
Initially, both the missionaries and the infidels are on the western shore. There is just one small boat and
that boat can carry at most two passengers. Both the missionaries and the infidels can steer the boat.
However, if at any time the missionaries are confronted with a majority of infidels on either shore of the
river, then the missionaries have a problem. Below is an artist's rendition of the problem.
$\texttt{problem}(m, i)$ is True if there is a problem on a shore that has $m$ missionaries and $i$ infidels.
For a problem to arise, the number $m$ of missionaries needs to be greater than $0$ but less than the number $i$ of
infidels.
End of explanation
no_problem = lambda m, i: not problem(m, i) and not problem(3 - m, 3 - i)
Explanation: $\texttt{no_problem}(m, i)$ is true if there is no problem on either side.
$m$ and $i$ are the number of missionaries and infidels on the left shore.
Hence there are $3-m$ missionaries and $3-i$ infidels on the right shore.
End of explanation
def next_states(state):
m, i, b = state
if b == 1:
return { (m-mb, i-ib, 0) for mb in range(m+1)
for ib in range(i+1)
if 1 <= mb + ib <= 2 and no_problem(m-mb, i-ib)
}
else:
return { (m+mb, i+ib, 1) for mb in range(3-m+1)
for ib in range(3-i+1)
if 1 <= mb + ib <= 2 and no_problem(m+mb, i+ib)
}
Explanation: A state is represented as a triple. The triple $(m, i, b)$ specifies that there are
- $m$ missionaries,
- $i$ infidels, and
- $b$ boats
on the western shore of the river. This implies that there are
$3 - m$ missionaries, $3 - i$ infidels, and $1 - b$ boats on the eastern shore.
The function next_states takes a given state and computes the set of states that can be reached from state by one crossing of the river.
End of explanation
start = (3, 3, 1)
goal = (0, 0, 0)
Explanation: Initially, all missionaries, all infidels and the boat are on the left shore.
The goal is to have everybody on the right shore, hence the numbers on the left shore
should all be $0$.
End of explanation
import graphviz as gv
def tripleToStr(t):
return '(' + str(t[0]) + ',' + str(t[1]) + ',' + str(t[2]) + ')'
Explanation: In order to compute a solution of this search problem, we have to %run the notebook Breadth-First-Search.ipynb.
Printing the Solution
To begin with, we display the transition relation that is generated by the function next_states. To this end, we need the module graphviz.
End of explanation
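The Breadth-First-Search notebook itself is not included here. A minimal breadth-first search over next_states that would produce the Path later consumed by printPath might look like the following sketch (assuming the usual path-based interface of the lecture notebooks):
def search(start, goal, next_states):
    # Frontier holds paths still to be extended; Visited avoids revisiting states.
    Frontier = [[start]]
    Visited = {start}
    while Frontier:
        Path = Frontier.pop(0)
        state = Path[-1]
        if state == goal:
            return Path
        for ns in next_states(state):
            if ns not in Visited:
                Visited.add(ns)
                Frontier.append(Path + [ns])
    return None
Path = search(start, goal, next_states)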
def dot_graph(R):
    """This function takes a binary relation R as input and shows this relation as
    a graph using the module graphviz.
    """
dot = gv.Digraph()
dot.attr(rankdir='LR')
Nodes = { tripleToStr(a) for (a,b) in R } | { tripleToStr(b) for (a,b) in R }
for n in Nodes:
dot.node(n)
for (x, y) in R:
dot.edge(tripleToStr(x), tripleToStr(y))
return dot
Explanation: The function dot_graph(R) turns a given binary relation R into a graph.
End of explanation
def createRelation(start):
oldM = set()
M = { start }
R = set()
while True:
oldM = M.copy()
M |= { y for x in M
for y in next_states(x)
}
if M == oldM:
break
return { (x, y) for x in M
for y in next_states(x)
}
Explanation: The function call createRelation(start) computes the transition relation. It assumes that all states are reachable from start.
End of explanation
def printPath(Path):
print("Solution:\n")
for i in range(len(Path) - 1):
m1, k1, b1 = Path[i]
m2, k2, b2 = Path[i+1]
printState(m1, k1, b1)
printBoat(m1, k1, b1, m2, k2, b2)
m, k, b = Path[-1]
printState(m, k, b)
def printState(m, k, b):
print( fillCharsRight(m * "M", 6) +
fillCharsRight(k * "K", 6) +
fillCharsRight(b * "B", 3) + " |~~~~~| " +
fillCharsLeft((3 - m) * "M", 6) +
fillCharsLeft((3 - k) * "K", 6) +
fillCharsLeft((1 - b) * "B", 3)
)
def printBoat(m1, k1, b1, m2, k2, b2):
if b1 == 1:
if m1 < m2:
print("Error in printBoat: negative number of missionaries in the boat!")
return
if k1 < k2:
print("Error in printBoat: negative number of infidels in the boat!")
return
print(19*" " + "> " + fillCharsBoth((m1-m2)*"M" + " " + (k1-k2)*"K", 3) + " >")
else:
if m1 > m2:
print("Error in printBoat: negative number of missionaries in the boat!")
return
if k1 > k2:
print("Error in printBoat: negative number of infidels in the boat!")
return
print(19*" " + "< " + fillCharsBoth((m2-m1)*"M" + " " + (k2-k1)*"K", 3) + " <")
def fillCharsLeft(x, n):
s = str(x)
m = n - len(s)
return m * " " + s
def fillCharsRight(x, n):
s = str(x)
m = n - len(s)
return s + m * " "
def fillCharsBoth(x, n):
s = str(x)
ml = (n - len(s)) // 2
mr = (n + 1 - len(s)) // 2
return ml * " " + s + mr * " "
Explanation: The function call printPath(Path) prints the solution of the search problem.
End of explanation |
11,439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Optimization
In this example, we'll be performing a simple optimization of single-objective functions using the global-best optimizer in pyswarms.single.GBestPSO and the local-best optimizer in pyswarms.single.LBestPSO. This aims to demonstrate the basic capabilities of the library when applied to benchmark problems.
Step1: Optimizing a function
First, let's start by optimizing the sphere function. Recall that the minimum of this function is located at f(0,0,..,0) with a value of 0. In case you don't remember the characteristics of a given function, simply call help(<function>).
For now let's just set some arbitrary parameters in our optimizers. There are, at minimum, three steps to perform optimization
Step2: We can see that the optimizer was able to find a good minimum as shown above. You can control the verbosity of the output using the verbose argument, and the number of steps to be printed out using the print_step argument.
Now, let's try this one using local-best PSO
Step3: Optimizing a function with bounds
Another thing that we can do is to set some bounds into our solution, so as to contain our candidate solutions within a specific range. We can do this simply by passing a bounds parameter, of type tuple, when creating an instance of our swarm. Let's try this using the global-best PSO with the Rastrigin function (rastrigin in pyswarms.utils.functions.single_obj).
Recall that the Rastrigin function is bounded within [-5.12, 5.12]. If we pass an unbounded swarm into this function, then a ValueError might be raised. So what we'll do is to create a bound within the specified range. There are some things to remember when specifying a bound
Step4: Basic Optimization with Arguments
Here, we will run a basic optimization using an objective function that needs parameterization. We will use the single.GBestPSO and a version of the rosenbrock function to demonstrate
Step5: Using Arguments
Arguments can either be passed in using a tuple or a dictionary, using the kwargs={} paradigm. First let's optimize the Rosenbrock function using keyword arguments. Note in the definition of the Rosenbrock function above, there were two arguments that need to be passed other than the design variables, and one optional keyword argument, a, b, and c, respectively
Step6: It is also possible to pass a dictionary of keyword arguments by unpacking it with the ** operator when passing the dict
Step7: Any key word arguments in the objective function can be left out as they will be passed the default as defined in the prototype. Note here, c is not passed into the function. | Python Code:
# Import modules
import numpy as np
# Import PySwarms
import pyswarms as ps
from pyswarms.utils.functions import single_obj as fx
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Basic Optimization
In this example, we'll be performing a simple optimization of single-objective functions using the global-best optimizer in pyswarms.single.GBestPSO and the local-best optimizer in pyswarms.single.LBestPSO. This aims to demonstrate the basic capabilities of the library when applied to benchmark problems.
End of explanation
%%time
# Set-up hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options)
# Perform optimization
cost, pos = optimizer.optimize(fx.sphere, iters=1000)
Explanation: Optimizing a function
First, let's start by optimizing the sphere function. Recall that the minimum of this function is located at f(0,0,..,0) with a value of 0. In case you don't remember the characteristics of a given function, simply call help(<function>).
For now let's just set some arbitrary parameters in our optimizers. There are, at minimum, three steps to perform optimization:
Set the hyperparameters to configure the swarm as a dict.
Create an instance of the optimizer by passing the dictionary along with the necessary arguments.
Call the optimize() method and have it store the optimal cost and position in a variable.
The optimize() method returns a tuple of values, one of which includes the optimal cost and position after optimization. You can store it in a single variable and just index the values, or unpack it using several variables at once.
End of explanation
%%time
# Set-up hyperparameters
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9, 'k': 2, 'p': 2}
# Call instance of PSO
optimizer = ps.single.LocalBestPSO(n_particles=10, dimensions=2, options=options)
# Perform optimization
cost, pos = optimizer.optimize(fx.sphere, iters=1000)
Explanation: We can see that the optimizer was able to find a good minimum as shown above. You can control the verbosity of the output using the verbose argument, and the number of steps to be printed out using the print_step argument.
Now, let's try this one using local-best PSO:
End of explanation
# Create bounds
max_bound = 5.12 * np.ones(2)
min_bound = - max_bound
bounds = (min_bound, max_bound)
%%time
# Initialize swarm
options = {'c1': 0.5, 'c2': 0.3, 'w':0.9}
# Call instance of PSO with bounds argument
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2, options=options, bounds=bounds)
# Perform optimization
cost, pos = optimizer.optimize(fx.rastrigin, iters=1000)
Explanation: Optimizing a function with bounds
Another thing that we can do is to set some bounds into our solution, so as to contain our candidate solutions within a specific range. We can do this simply by passing a bounds parameter, of type tuple, when creating an instance of our swarm. Let's try this using the global-best PSO with the Rastrigin function (rastrigin in pyswarms.utils.functions.single_obj).
Recall that the Rastrigin function is bounded within [-5.12, 5.12]. If we pass an unbounded swarm into this function, then a ValueError might be raised. So what we'll do is to create a bound within the specified range. There are some things to remember when specifying a bound:
A bound should be of type tuple with length 2.
It should contain two numpy.ndarrays so that we have a (min_bound, max_bound)
Obviously, all values in the max_bound should always be greater than the min_bound. Their shapes should match the dimensions of the swarm.
What we'll do now is to create a 10-particle, 2-dimensional swarm. This means that we have to set our maximum and minimum boundaries with the shape of 2. In case we want to initialize an n-dimensional swarm, we then have to set our bounds with the same shape n. A fast workaround for this would be to use the numpy.ones function multiplied by a constant.
End of explanation
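The optimizer also records the cost per iteration in optimizer.cost_history. A short sketch of how it could be visualized with the plotting helpers shipped with pyswarms (assuming pyswarms.utils.plotters is available in the installed version):
import matplotlib.pyplot as plt
from pyswarms.utils.plotters import plot_cost_history
plot_cost_history(cost_history=optimizer.cost_history)
plt.show()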
# import modules
import numpy as np
# create a parameterized version of the classic Rosenbrock unconstrained optimization function
def rosenbrock_with_args(x, a, b, c=0):
f = (a - x[:, 0]) ** 2 + b * (x[:, 1] - x[:, 0] ** 2) ** 2 + c
return f
Explanation: Basic Optimization with Arguments
Here, we will run a basic optimization using an objective function that needs parameterization. We will use the single.GBestPSO and a version of the rosenbrock function to demonstrate
End of explanation
from pyswarms.single.global_best import GlobalBestPSO
# instantiate the optimizer
x_max = 10 * np.ones(2)
x_min = -1 * x_max
bounds = (x_min, x_max)
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
optimizer = GlobalBestPSO(n_particles=10, dimensions=2, options=options, bounds=bounds)
# now run the optimization, pass a=1 and b=100 as a tuple assigned to args
cost, pos = optimizer.optimize(rosenbrock_with_args, 1000, a=1, b=100, c=0)
Explanation: Using Arguments
Arguments can either be passed in using a tuple or a dictionary, using the kwargs={} paradigm. First let's optimize the Rosenbrock function using keyword arguments. Note in the definition of the Rosenbrock function above, there were two arguments that need to be passed other than the design variables, and one optional keyword argument, a, b, and c, respectively
End of explanation
kwargs={"a": 1.0, "b": 100.0, 'c':0}
cost, pos = optimizer.optimize(rosenbrock_with_args, 1000, **kwargs)
Explanation: It is also possible to pass a dictionary of keyword arguments by unpacking it with the ** operator when passing the dict
End of explanation
cost, pos = optimizer.optimize(rosenbrock_with_args, 1000, a=1, b=100)
Explanation: Any key word arguments in the objective function can be left out as they will be passed the default as defined in the prototype. Note here, c is not passed into the function.
End of explanation |
11,440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The aim of this script is to graphically illustrate the evolution of the implicit TICPE tax rate since 1993. We study this rate for diesel and for unleaded fuels.
Import of general-purpose modules
Step1: Import of functions and modules specific to Openfisca
Step2: Retrieving the legislation parameters and the prices
Step3: Building a dataframe that holds these parameters
Step4: Computing the implicit tax rates from these parameters
Step5: Producing the graphs | Python Code:
from pandas import concat
%matplotlib inline
Explanation: The aim of this script is to graphically illustrate the evolution of the implicit TICPE tax rate since 1993. We study this rate for diesel and for unleaded fuels.
Import of general-purpose modules
End of explanation
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_bar_list
from openfisca_france_indirect_taxation.examples.dataframes_from_legislation.get_accises import get_accises_carburants
from openfisca_france_indirect_taxation.examples.dataframes_from_legislation.get_tva import get_tva_taux_plein
from openfisca_france_indirect_taxation.examples.dataframes_from_legislation.get_prix_carburants import \
get_prix_carburants
Explanation: Import of functions and modules specific to Openfisca
End of explanation
ticpe = ['ticpe_gazole', 'ticpe_super9598']
accise_diesel = get_accises_carburants(ticpe)
prix_ttc = ['diesel_ttc', 'super_95_ttc']
prix_carburants = get_prix_carburants(prix_ttc)
tva_taux_plein = get_tva_taux_plein()
Explanation: Retrieving the legislation parameters and the prices
End of explanation
df_taux_implicite = concat([accise_diesel, prix_carburants, tva_taux_plein], axis = 1)
df_taux_implicite.rename(columns = {'value': 'taux plein tva'}, inplace = True)
Explanation: Building a dataframe that holds these parameters
End of explanation
df_taux_implicite['taux_implicite_diesel'] = (
df_taux_implicite['accise ticpe gazole'] * (1 + df_taux_implicite['taux plein tva']) /
(df_taux_implicite['prix diesel ttc'] -
(df_taux_implicite['accise ticpe gazole'] * (1 + df_taux_implicite['taux plein tva'])))
)
df_taux_implicite['taux_implicite_sp95'] = (
df_taux_implicite['accise ticpe super9598'] * (1 + df_taux_implicite['taux plein tva']) /
(df_taux_implicite['prix super 95 ttc'] -
(df_taux_implicite['accise ticpe super9598'] * (1 + df_taux_implicite['taux plein tva'])))
)
df_taux_implicite = df_taux_implicite.dropna()
Explanation: Computing the implicit tax rates from these parameters
End of explanation
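As a side note (not in the original script), the implicit rate computed above boils down to a single formula, the excise duty including VAT divided by the pre-tax share of the price; it could be factored into a small helper, sketched here with the column names used above:
def taux_implicite(accise, prix_ttc, taux_tva):
    # implicit rate = tax paid / price net of that tax
    taxe = accise * (1 + taux_tva)
    return taxe / (prix_ttc - taxe)

df_taux_implicite['taux_implicite_diesel_bis'] = taux_implicite(
    df_taux_implicite['accise ticpe gazole'],
    df_taux_implicite['prix diesel ttc'],
    df_taux_implicite['taux plein tva'],
    )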
graph_builder_bar_list(df_taux_implicite['taux_implicite_diesel'], 1, 1)
graph_builder_bar_list(df_taux_implicite['taux_implicite_sp95'], 1, 1)
Explanation: Producing the graphs
End of explanation |
11,441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recommender Systems
In this project, we build a movie recommender system. We read a dataset of movie ratings by users, then we select other movies that a specific user would be interested in based on their previous choices.
Step1: Read the data
Step2: Get movie titles
Step3: Merged dataframes
Step4: Exploratory Data Analysis
Step5: Create a ratings dataframe with average rating and number of ratings
Step6: Number of ratings column
Step7: Data Visualization
Step8: Recommending Similar Movies
Step9: Most rated movies
Step10: We choose two movies
Step11: Now let's grab the user ratings for those two movies
Step12: Using corrwith() method to get correlations between two pandas series
Step13: Clear data by removing NaN values and using a DataFrame instead of a series
Step14: Filtering out movies that have less than 100 reviews (this value was chosen based off the histogram). This is needed to get more accurate results
Step15: Now sort the values
Step16: The same for the comedy Liar Liar | Python Code:
import numpy as np
import pandas as pd
Explanation: Recommender Systems
In this project, we build a movie recommender system. We read a dataset of movie ratings by users, then we select other movies that a specific user would be interested in based on their previous choices.
End of explanation
column_names = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv('u.data', sep='\t', names=column_names)
df.head()
Explanation: Read the data
End of explanation
movie_titles = pd.read_csv("Movie_Id_Titles")
movie_titles.head()
Explanation: Get movie titles
End of explanation
df = pd.merge(df,movie_titles,on='item_id')
df.head()
Explanation: Merged dataframes
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
%matplotlib inline
Explanation: Exploratory Data Analysis
End of explanation
df.groupby('title')['rating'].mean().sort_values(ascending=False).head()
df.groupby('title')['rating'].count().sort_values(ascending=False).head()
ratings = pd.DataFrame(df.groupby('title')['rating'].mean())
ratings.head()
Explanation: Create a ratings dataframe with average rating and number of ratings
End of explanation
ratings['num of ratings'] = pd.DataFrame(df.groupby('title')['rating'].count())
ratings.head()
Explanation: Number of ratings column
End of explanation
plt.figure(figsize=(10,4))
ratings['num of ratings'].hist(bins=70)
plt.figure(figsize=(10,4))
ratings['rating'].hist(bins=70)
sns.jointplot(x='rating',y='num of ratings',data=ratings,alpha=0.5)
Explanation: Data Visualization: Histogram
End of explanation
moviemat = df.pivot_table(index='user_id',columns='title',values='rating')
moviemat.head()
Explanation: Recommending Similar Movies
End of explanation
ratings.sort_values('num of ratings',ascending=False).head(10)
Explanation: Most rated movies
End of explanation
ratings.head()
Explanation: We choose two movies: starwars, a sci-fi movie. And Liar Liar, a comedy.
End of explanation
starwars_user_ratings = moviemat['Star Wars (1977)']
liarliar_user_ratings = moviemat['Liar Liar (1997)']
starwars_user_ratings.head()
Explanation: Now let's grab the user ratings for those two movies:
End of explanation
similar_to_starwars = moviemat.corrwith(starwars_user_ratings)
similar_to_liarliar = moviemat.corrwith(liarliar_user_ratings)
Explanation: Using corrwith() method to get correlations between two pandas series:
End of explanation
corr_starwars = pd.DataFrame(similar_to_starwars,columns=['Correlation'])
corr_starwars.dropna(inplace=True)
corr_starwars.head()
corr_starwars.sort_values('Correlation',ascending=False).head(10)
Explanation: Clear data by removing NaN values and using a DataFrame instead of a series
End of explanation
corr_starwars = corr_starwars.join(ratings['num of ratings'])
corr_starwars.head()
Explanation: Filtering out movies that have less than 100 reviews (this value was chosen based off the histogram). This is needed to get more accurate results
End of explanation
corr_starwars[corr_starwars['num of ratings']>100].sort_values('Correlation',ascending=False).head()
Explanation: Now sort the values
End of explanation
corr_liarliar = pd.DataFrame(similar_to_liarliar,columns=['Correlation'])
corr_liarliar.dropna(inplace=True)
corr_liarliar = corr_liarliar.join(ratings['num of ratings'])
corr_liarliar[corr_liarliar['num of ratings']>100].sort_values('Correlation',ascending=False).head()
Explanation: The same for the comedy Liar Liar:
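The two blocks above repeat the same recipe for each film; a small helper (a sketch, not in the original notebook) makes the pattern reusable for any title present in moviemat:
def similar_movies(title, min_ratings=100):
    # correlate every movie column with the chosen title, then drop thinly rated titles
    corr = pd.DataFrame(moviemat.corrwith(moviemat[title]), columns=['Correlation'])
    corr.dropna(inplace=True)
    corr = corr.join(ratings['num of ratings'])
    return corr[corr['num of ratings'] > min_ratings].sort_values('Correlation', ascending=False)

similar_movies('Star Wars (1977)').head()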
End of explanation |
11,442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustering the subsampled 1.3 M cells
The data consists of 20K neurons, downsampled from 1.3 Million Brain Cells from E18 Mice, and is freely available from 10x Genomics (here).
Step1: Run standard preprocessing steps, see here.
Step2: Now compare this with the reference clustering of PAGA preprint, Suppl. Fig. 12, available from here. | Python Code:
import numpy as np
import pandas as pd
import scanpy.api as sc
sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
sc.settings.set_figure_params(dpi=70) # dots (pixels) per inch determine size of inline figures
sc.logging.print_versions()
adata = sc.read_10x_h5('./data/1M_neurons_neuron20k.h5')
adata.var_names_make_unique()
adata
Explanation: Clustering the subsampled 1.3 M cells
The data consists of 20K neurons, downsampled from 1.3 Million Brain Cells from E18 Mice, and is freely available from 10x Genomics (here).
End of explanation
sc.pp.recipe_zheng17(adata)
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.louvain(adata)
sc.tl.paga(adata)
sc.pl.paga_compare(adata, edges=True, threshold=0.05)
Explanation: Run standard preprocessing steps, see here.
End of explanation
anno = pd.read_csv('/Users/alexwolf/Dropbox/1M/louvain.csv.gz', compression='gzip', header=None, index_col=0)
anno.columns = ['louvain_ref']
adata.obs['louvain_ref'] = anno.loc[adata.obs.index]['louvain_ref'].astype(str)
sc.pl.umap(adata, color=['louvain_ref'], legend_loc='on data')
adata.write('./write/subsampled.h5ad')
Explanation: Now compare this with the reference clustering of PAGA preprint, Suppl. Fig. 12, available from here.
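One quantitative way to compare the new Louvain clustering with the reference annotation (a sketch, not part of the original notebook) is the adjusted Rand index from scikit-learn:
from sklearn.metrics import adjusted_rand_score
# both label columns live in adata.obs after the steps above
print(adjusted_rand_score(adata.obs['louvain'].astype(str), adata.obs['louvain_ref']))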
End of explanation |
11,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Least Squares (Mínimos Cuadrados)
By
Step1: 2- Apply the least-squares method step by step so that it yields the best-fitting straight line for the data above, and determine the uncertainties associated with the parameters. Determine the correlation coefficients $ \chi ^{2} $ and $ R ^{2} $. Report the parameters correctly with their uncertainties and draw a conclusion about the suitability of the linear regression from the correlations obtained.
a) $$a_{0}= \frac{(\sum x_{i}^{2})\sum y_{i} - (\sum x_{i})(\sum x_{i}y_{i})}{n\sum x_{i}^{2} - (\sum x_{i})^{2}}$$
Step2: b) $$a_{1}= \frac{n\sum x_{i}y_{i} - (\sum x_{i})(\sum y_{i})}{n\sum x_{i}^{2} - (\sum x_{i})^{2}}$$
Step3: c) $$y= a_{0}+a_{1}x$$
Step4: $$S_{y} = \sqrt{\frac{1}{n-2}\sum_{i=1}^{n}(y_{i}-a_{0}-a_{1}x_{i})^{2}}$$
$$S_{my} = \frac{S_{y}}{n^{1/2}}$$
$$S_{ma0}^{2}= \frac{S_{my}^{2}\sum x_{i}^{2}}{n\sum x_{i}^{2} - (\sum x_{i})^{2}}$$
$$ S_{ma1}^{2}= \frac{n S_{my}^{2}}{n\sum x_{i}^{2} - (\sum x_{i})^{2}}$$
Step5: $$a_{0}\pm S_{ma0} $$ $$ a_{1}\pm S_{ma1} $$
Step6: 3- Plot all the possible regression lines taking into account the error determined for the parameters. Draw a conclusion from the result.
Step7: $ \chi ^{2} $ | Python Code:
########################################################
## Libraries used for this work
########################################################
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
data1= np.loadtxt('datos.csv',delimiter=',') #data for the linear regression
X1=data1[:,0]
Y1=data1[:,1]
print
print 'preliminary plot of the data points: '
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(X1,Y1,'o')
ax.set_xlim(xmin=0.0, xmax=8)
ax.set_ylim(ymin=0.0, ymax=8)
plt.show()
Explanation: Least Squares (Mínimos Cuadrados)
By: Alejandro Mesa Gómez, C.C.: 1017228006
A - Least squares
1- Plot the data (x, y) from the table.
|X | Y |
|------|------|
|0.0 | 1.95 |
|0.5 | 2.21 |
|1.0 | 3.07 |
|1.5 | 3.90 |
|2.0 | 4.43 |
|2.5 | 5.20 |
|3.0 | 4.02 |
|3.5 | 5.38 |
|4.0 | 6.59 |
|4.5 | 5.86 |
|5.0 | 6.57|
|5.5 | 6.36 |
|6.0 | 6.67 |
End of explanation
#n:
n=len(X1)
#sum of xi squared:
suma_xi2=0
for i in xrange(0,n):
suma_xi2+=(X1[i]*X1[i])
#sum of yi squared:
suma_yi2=0
for i in xrange(0,n):
suma_yi2+=(Y1[i]*Y1[i])
#sum of xi:
suma_xi=0
for i in xrange(0,n):
suma_xi+=(X1[i])
#sum of yi:
suma_yi=0
for i in xrange(0,n):
suma_yi+=(Y1[i])
#sum of xi*yi:
suma_xiyi=0
for i in xrange(0,n):
suma_xiyi+=(X1[i]*Y1[i])
a0=((suma_xi2*suma_yi)-(suma_xi*suma_xiyi))/(n*suma_xi2-(suma_xi*suma_xi))
print 'a0 = %.1f'%a0
Explanation: 2- Apply the least-squares method step by step so that it yields the best-fitting straight line for the data above, and determine the uncertainties associated with the parameters. Determine the correlation coefficients $ \chi ^{2} $ and $ R ^{2} $. Report the parameters correctly with their uncertainties and draw a conclusion about the suitability of the linear regression from the correlations obtained.
a) $$a_{0}= \frac{(\sum x_{i}^{2})\sum y_{i} - (\sum x_{i})(\sum x_{i}y_{i})}{n\sum x_{i}^{2} - (\sum x_{i})^{2}}$$
End of explanation
a1=((n*suma_xiyi)-(suma_xi*suma_yi))/(n*suma_xi2-(suma_xi*suma_xi))
print 'a1 = %.1f'%a1
Explanation: b) $$a_{1}= \frac{n\sum x_{i}y_{i} - (\sum x_{i})(\sum y_{i})}{n\sum x_{i}^{2} - (\sum x_{i})^{2}}$$
End of explanation
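As a quick cross-check (not part of the original assignment), NumPy can fit the same straight line directly; the coefficients should agree with the a0 and a1 computed above:
# np.polyfit returns the coefficients from the highest degree down: [a1, a0]
a1_np, a0_np = np.polyfit(X1, Y1, 1)
print 'numpy check: a0 = %.3f, a1 = %.3f' % (a0_np, a1_np)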
x=np.linspace(X1[0],X1[-1],n)
y=(a0 +a1*x)
print
print 'plot of the data points with the fitted line: '
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(X1,Y1,'ro')
ax.plot(x,y,'b-')
ax.set_xlim(xmin=0.0, xmax=8)
ax.set_ylim(ymin=0.0, ymax=8)
plt.show()
Explanation: c) $$y= a_{0}+a_{1}x$$
End of explanation
#standard deviation of the residuals
Sy=0
for i in xrange(0,n):
Sy+= (Y1[i]-a0-a1*X1[i])**2  # residuals of the data points, as in the formula for S_y
#print Sy
Sy*=(1/(n-2.))
#print Sy
Sy=Sy**0.5
print 'Sy %.1f'%Sy
#uncertainty in y
raiz= np.sqrt(n)
Smy=Sy/(raiz)
print 'Smy %.1f'%Smy
#uncertainty in a0
S2_ma0=(Smy*Smy*suma_xi2)/(n*suma_xi2-(suma_xi*suma_xi))
print 'S2_ma0 %f'%S2_ma0
#uncertainty in a1
S2_ma1=(Smy*Smy*n)/(n*suma_xi2-(suma_xi*suma_xi))
print 'S2_ma1 %f'%S2_ma1
Explanation: $$S_{y} = \sqrt{\frac{1}{n-2}\sum_{i=1}^{n}(y_{i}-a_{0}-a_{1}x_{i})^{2}}$$
$$S_{my} = \frac{S_{y}}{n^{1/2}}$$
$$S_{ma0}^{2}= \frac{S_{my}^{2}\sum x_{i}^{2}}{n\sum x_{i}^{2} - (\sum x_{i})^{2}}$$
$$ S_{ma1}^{2}= \frac{n S_{my}^{2}}{n\sum x_{i}^{2} - (\sum x_{i})^{2}}$$
End of explanation
print 'a0 ± sma0: %f ± %f'%(a0,np.sqrt(S2_ma0))
print 'a1 ± sma1: %f ± %f'%(a1,np.sqrt(S2_ma1))
Explanation: $$a_{0}\pm S_{ma0} $$ $$ a_{1}\pm S_{ma1} $$
End of explanation
err_a0= np.sqrt(S2_ma0)
err_a1= np.sqrt(S2_ma1)
y=(a0 +a1*x)
y1=((a0+err_a0) +(a1+err_a1)*x)
y2=((a0-err_a0) +(a1-err_a1)*x)
y3=((a0+err_a0) +(a1-err_a1)*x)
y4=((a0-err_a0) +(a1+err_a1)*x)
print
print 'plot of the data points with the fitted lines and their error bands: '
print 'it is easy to see that all the lines obtained by varying the parameters within their errors lie very close to the best-fit line, which implies that the fit is quite good and the data carry very little error'
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(X1,Y1,'ro')
ax.plot(x,y,'b-')
ax.plot(x,y1,'-*')
ax.plot(x,y2,'-*')
ax.plot(x,y3,'--')
ax.plot(x,y4,'--')
ax.set_xlim(xmin=0.0, xmax=8)
ax.set_ylim(ymin=0.0, ymax=8)
plt.show()
Explanation: 3- Plot all the possible regression lines taking into account the error determined for the parameters. Draw a conclusion from the result.
End of explanation
#chi2
chi2=0
for i in xrange(0,n):
chi2+=((Y1[i]-y[i])**2)/y[i]  # accumulate (observed - theoretical)^2 / theoretical
print 'chi^2 = ',chi2
# r2
b=a1
bprima=((n*suma_xiyi)-(suma_xi*suma_yi))/(n*suma_yi2-(suma_yi*suma_yi))
r2=b*bprima
print 'r^2 = ',r2
Explanation: $ \chi ^{2} $: $$ \chi ^{2} = \sum_{i}^{n} \frac{(Y_{\mathrm{observed}}-Y_{\mathrm{theoretical}})^{2}}{Y_{\mathrm{theoretical}}}$$ and $ R ^{2} $
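As a final cross-check (a sketch, not in the original notebook), scipy.stats.linregress returns the slope, intercept and correlation coefficient in one call, so r**2 can be compared with the value computed above:
from scipy import stats
slope, intercept, r_value, p_value, std_err = stats.linregress(X1, Y1)
print 'linregress check: a0 = %.3f, a1 = %.3f, r^2 = %.3f' % (intercept, slope, r_value**2)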
End of explanation |
11,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinal Regression
Step1: Loading a Stata data file from the UCLA website. This notebook is inspired by https://stats.idre.ucla.edu/r/dae/ordinal-logistic-regression/, which is an R notebook from UCLA.
Step2: This dataset is about the probability for undergraduate students to apply to graduate school given three exogenous variables
Step3: In our model, we have 3 exogenous variables(the $\beta$s if we keep the documentation's notations) so we have 3 coefficients that need to be estimated.
Those 3 estimations and their standard errors can be retrieved in the summary table.
Since there are 3 categories in the target variable(unlikely, somewhat likely, very likely), we have two thresholds to estimate.
As explained in the doc of the method OrderedModel.transform_threshold_params, the first estimated threshold is the actual value and all the other thresholds are in terms of cumulative exponentiated increments. Actual thresholds values can be computed as follows
Step4: Logit ordinal regression
Step5: Ordinal regression with a custom cumulative cLogLog distribution
Step6: Using formulas - treatment of endog
Pandas' ordered categorical and numeric values are supported as dependent variable in formulas. Other types will raise a ValueError.
Step7: Using numerical codes for the dependent variable is supported but loses the names of the category levels. The levels and names correspond to the unique values of the dependent variable sorted in alphanumeric order as in the case without using formulas.
Step8: Using string values directly as dependent variable raises a ValueError.
Step9: Using formulas - no constant in model
The parameterization of OrderedModel requires that there is no constant in the model, neither explicit nor implicit. The constant is equivalent to shifting all thresholds and is therefore not separately identified.
Patsy's formula specification does not allow a design matrix without explicit or implicit constant if there are categorical variables (or maybe splines) among explanatory variables. As workaround, statsmodels removes an explicit intercept.
Consequently, there are two valid cases to get a design matrix without intercept.
specify a model without explicit and implicit intercept which is possible if there are only numerical variables in the model.
specify a model with an explicit intercept which statsmodels will remove.
Models with an implicit intercept will be overparameterized, the parameter estimates will not be fully identified, cov_params will not be invertible and standard errors might contain nans.
In the following we look at an example with an additional categorical variable.
Step10: explicit intercept, that will be removed
Step11: implicit intercept creates overparameterized model
Specifying "0 +" in the formula drops the explicit intercept. However, the categorical encoding is now changed to include an implicit intercept. In this example, the created dummy variables C(dummy)[0.0] and C(dummy)[1.0] sum to one.
OrderedModel.from_formula("apply ~ 0 + pared + public + gpa + C(dummy)", data_student, distr='logit')
To see what would happen in the overparameterized case, we can avoid the constant check in the model by explicitly specifying whether a constant is present or not. We use hasconst=False, even though the model has an implicit constant.
The parameters of the two dummy variable columns and the first threshold are not separately identified. Estimates for those parameters and availability of standard errors are arbitrary and depend on numerical details that differ across environments.
Some summary measures like log-likelihood value are not affected by this, within convergence tolerance and numerical precision. Prediction should also be possible. However, inference is not available, or is not valid.
Step12: Binary Model compared to Logit
If there are only two levels of the dependent ordered categorical variable, then the model can also be estimated by a Logit model.
The models are (theoretically) identical in this case except for the parameterization of the constant. Logit, like most other models, requires an intercept in general. This corresponds to the threshold parameter in the OrderedModel, but with the opposite sign.
The implementation differs and not all of the same results statistic and post-estimation features are available. Estimated parameters and other results statistic differ mainly based on convergence tolerance of the optimization.
Step13: We drop the middle category from the data and keep the two extreme categories.
Step14: The Logit model does not have a constant by default, we have to add it to our explanatory variables.
The results are essentially identical between Logit and ordered model up to numerical precision mainly resulting from convergence tolerance in the estimation.
The only difference is in the sign of the constant: Logit and OrderedModel have opposite signs for the constant. This is a consequence of the parameterization in terms of cut points in OrderedModel instead of including a constant column in the design matrix.
Step15: Robust standard errors are also available in OrderedModel in the same way as in discrete.Logit.
As example we specify HAC covariance type even though we have cross-sectional data and autocorrelation is not appropriate. | Python Code:
import numpy as np
import pandas as pd
import scipy.stats as stats
from statsmodels.miscmodels.ordinal_model import OrderedModel
Explanation: Ordinal Regression
End of explanation
url = "https://stats.idre.ucla.edu/stat/data/ologit.dta"
data_student = pd.read_stata(url)
data_student.head(5)
data_student.dtypes
data_student['apply'].dtype
Explanation: Loading a stata data file from the UCLA website. This notebook is inspired by https://stats.idre.ucla.edu/r/dae/ordinal-logistic-regression/ which is an R notebook from UCLA.
End of explanation
mod_prob = OrderedModel(data_student['apply'],
data_student[['pared', 'public', 'gpa']],
distr='probit')
res_prob = mod_prob.fit(method='bfgs')
res_prob.summary()
Explanation: This dataset is about the probability for undergraduate students to apply to graduate school given three exogenous variables:
- their grade point average(gpa), a float between 0 and 4.
- pared, a binary that indicates if at least one parent went to graduate school.
- and public, a binary that indicates if the current undergraduate institution of the student is public or private.
apply, the target variable is categorical with ordered categories: unlikely < somewhat likely < very likely. It is a pd.Series of categorical type; this is preferred over NumPy arrays.
The model is based on a numerical latent variable $y_{latent}$ that we cannot observe but that we can compute thanks to exogenous variables.
Moreover we can use this $y_{latent}$ to define $y$ that we can observe.
For more details see the Documentation of OrderedModel, the UCLA webpage or this book.
Probit ordinal regression:
End of explanation
num_of_thresholds = 2
mod_prob.transform_threshold_params(res_prob.params[-num_of_thresholds:])
Explanation: In our model, we have 3 exogenous variables (the $\beta$s if we keep the documentation's notations) so we have 3 coefficients that need to be estimated.
Those 3 estimations and their standard errors can be retrieved in the summary table.
Since there are 3 categories in the target variable (unlikely, somewhat likely, very likely), we have two thresholds to estimate.
As explained in the doc of the method OrderedModel.transform_threshold_params, the first estimated threshold is the actual value and all the other thresholds are in terms of cumulative exponentiated increments. Actual threshold values can be computed as follows:
End of explanation
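For comparison, the same arithmetic can be done by hand; a small sketch assuming th holds the raw threshold parameters extracted above (the first value is kept as-is and exponentiated increments are accumulated for the rest; the library method's handling of boundary values may differ):
import numpy as np
th = np.asarray(res_prob.params[-num_of_thresholds:])
# first threshold as-is, then cumulative exponentiated increments
manual = np.concatenate(([th[0]], th[0] + np.cumsum(np.exp(th[1:]))))
print(manual)  # compare with the transform_threshold_params output above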
mod_log = OrderedModel(data_student['apply'],
data_student[['pared', 'public', 'gpa']],
distr='logit')
res_log = mod_log.fit(method='bfgs', disp=False)
res_log.summary()
predicted = res_log.model.predict(res_log.params, exog=data_student[['pared', 'public', 'gpa']])
predicted
pred_choice = predicted.argmax(1)
print('Fraction of correct choice predictions')
print((np.asarray(data_student['apply'].values.codes) == pred_choice).mean())
Explanation: Logit ordinal regression:
End of explanation
# using a SciPy distribution
res_exp = OrderedModel(data_student['apply'],
data_student[['pared', 'public', 'gpa']],
distr=stats.expon).fit(method='bfgs', disp=False)
res_exp.summary()
# minimal definition of a custom scipy distribution.
class CLogLog(stats.rv_continuous):
def _ppf(self, q):
return np.log(-np.log(1 - q))
def _cdf(self, x):
return 1 - np.exp(-np.exp(x))
cloglog = CLogLog()
# definition of the model and fitting
res_cloglog = OrderedModel(data_student['apply'],
data_student[['pared', 'public', 'gpa']],
distr=cloglog).fit(method='bfgs', disp=False)
res_cloglog.summary()
Explanation: Ordinal regression with a custom cumulative cLogLog distribution:
In addition to logit and probit regression, any continuous distribution from the SciPy.stats package can be used for the distr argument. Alternatively, one can define one's own distribution simply by creating a subclass of rv_continuous and implementing a few methods.
End of explanation
modf_logit = OrderedModel.from_formula("apply ~ 0 + pared + public + gpa", data_student,
distr='logit')
resf_logit = modf_logit.fit(method='bfgs')
resf_logit.summary()
Explanation: Using formulas - treatment of endog
Pandas' ordered categorical and numeric values are supported as dependent variable in formulas. Other types will raise a ValueError.
End of explanation
data_student["apply_codes"] = data_student['apply'].cat.codes * 2 + 5
data_student["apply_codes"].head()
OrderedModel.from_formula("apply_codes ~ 0 + pared + public + gpa", data_student,
distr='logit').fit().summary()
resf_logit.predict(data_student.iloc[:5])
Explanation: Using numerical codes for the dependent variable is supported but loses the names of the category levels. The levels and names correspond to the unique values of the dependent variable sorted in alphanumeric order as in the case without using formulas.
End of explanation
data_student["apply_str"] = np.asarray(data_student["apply"])
data_student["apply_str"].head()
data_student.apply_str = pd.Categorical(data_student.apply_str, ordered=True)
data_student.public = data_student.public.astype(float)
data_student.pared = data_student.pared.astype(float)
OrderedModel.from_formula("apply_str ~ 0 + pared + public + gpa", data_student,
distr='logit')
Explanation: Using string values directly as dependent variable raises a ValueError.
End of explanation
nobs = len(data_student)
data_student["dummy"] = (np.arange(nobs) < (nobs / 2)).astype(float)
Explanation: Using formulas - no constant in model
The parameterization of OrderedModel requires that there is no constant in the model, neither explicit nor implicit. The constant is equivalent to shifting all thresholds and is therefore not separately identified.
Patsy's formula specification does not allow a design matrix without explicit or implicit constant if there are categorical variables (or maybe splines) among explanatory variables. As workaround, statsmodels removes an explicit intercept.
Consequently, there are two valid cases to get a design matrix without intercept.
specify a model without explicit and implicit intercept which is possible if there are only numerical variables in the model.
specify a model with an explicit intercept which statsmodels will remove.
Models with an implicit intercept will be overparameterized, the parameter estimates will not be fully identified, cov_params will not be invertible and standard errors might contain nans.
In the following we look at an example with an additional categorical variable.
End of explanation
modfd_logit = OrderedModel.from_formula("apply ~ 1 + pared + public + gpa + C(dummy)", data_student,
distr='logit')
resfd_logit = modfd_logit.fit(method='bfgs')
print(resfd_logit.summary())
modfd_logit.k_vars
modfd_logit.k_constant
Explanation: explicit intercept, that will be removed:
Note "1 +" is here redundant because it is patsy's default.
End of explanation
modfd2_logit = OrderedModel.from_formula("apply ~ 0 + pared + public + gpa + C(dummy)", data_student,
distr='logit', hasconst=False)
resfd2_logit = modfd2_logit.fit(method='bfgs')
print(resfd2_logit.summary())
resfd2_logit.predict(data_student.iloc[:5])
resf_logit.predict()
Explanation: implicit intercept creates overparameterized model
Specifying "0 +" in the formula drops the explicit intercept. However, the categorical encoding is now changed to include an implicit intercept. In this example, the created dummy variables C(dummy)[0.0] and C(dummy)[1.0] sum to one.
python
OrderedModel.from_formula("apply ~ 0 + pared + public + gpa + C(dummy)", data_student, distr='logit')
To see what would happen in the overparameterized case, we can avoid the constant check in the model by explicitly specifying whether a constant is present or not. We use hasconst=False, even though the model has an implicit constant.
The parameters of the two dummy variable columns and the first threshold are not separately identified. Estimates for those parameters and availability of standard errors are arbitrary and depend on numerical details that differ across environments.
Some summary measures like log-likelihood value are not affected by this, within convergence tolerance and numerical precision. Prediction should also be possible. However, inference is not available, or is not valid.
End of explanation
from statsmodels.discrete.discrete_model import Logit
from statsmodels.tools.tools import add_constant
Explanation: Binary Model compared to Logit
If there are only two levels of the dependent ordered categorical variable, then the model can also be estimated by a Logit model.
The models are (theoretically) identical in this case except for the parameterization of the constant. Logit, like most other models, requires an intercept in general. This corresponds to the threshold parameter in the OrderedModel, but with the opposite sign.
The implementation differs and not all of the same results statistic and post-estimation features are available. Estimated parameters and other results statistic differ mainly based on convergence tolerance of the optimization.
End of explanation
mask_drop = data_student['apply'] == "somewhat likely"
data2 = data_student.loc[~mask_drop, :]
# we need to remove the category also from the Categorical Index
data2['apply'].cat.remove_categories("somewhat likely", inplace=True)
data2.head()
mod_log = OrderedModel(data2['apply'],
data2[['pared', 'public', 'gpa']],
distr='logit')
res_log = mod_log.fit(method='bfgs', disp=False)
res_log.summary()
Explanation: We drop the middle category from the data and keep the two extreme categories.
End of explanation
ex = add_constant(data2[['pared', 'public', 'gpa']], prepend=False)
mod_logit = Logit(data2['apply'].cat.codes, ex)
res_logit = mod_logit.fit(method='bfgs', disp=False)
res_logit.summary()
Explanation: The Logit model does not have a constant by default, we have to add it to our explanatory variables.
The results are essentially identical between Logit and ordered model up to numerical precision mainly resulting from convergence tolerance in the estimation.
The only difference is in the sign of the constant: Logit and OrderedModel have opposite signs for the constant. This is a consequence of the parameterization in terms of cut points in OrderedModel instead of including a constant column in the design matrix.
End of explanation
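A quick numerical check of that sign relationship is possible with the two fitted results; a small sketch (assuming both fits above converged; the two printed values should agree up to convergence noise):
print(res_logit.params["const"])   # Logit intercept (add_constant names the column 'const')
print(-res_log.params[-1])         # negated OrderedModel threshold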
res_logit_hac = mod_logit.fit(method='bfgs', disp=False, cov_type="hac", cov_kwds={"maxlags": 2})
res_log_hac = mod_log.fit(method='bfgs', disp=False, cov_type="hac", cov_kwds={"maxlags": 2})
res_logit_hac.bse.values - res_log_hac.bse
Explanation: Robust standard errors are also available in OrderedModel in the same way as in discrete.Logit.
As an example we specify the HAC covariance type even though we have cross-sectional data and autocorrelation is not appropriate.
End of explanation |
11,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License
Step1: The dinner party
Suppose you are having a dinner party with 10 guests and 4 of them are allergic to cats. Because you have cats, you expect 50% of the allergic guests to sneeze during dinner. At the same time, you expect 10% of the non-allergic guests to sneeze. What is the distribution of the total number of guests who sneeze?
Step2: The Gluten Problem
This study from 2015 showed that many subjects diagnosed with non-celiac gluten sensitivity (NCGS) were not able to distinguish gluten flour from non-gluten flour in a blind challenge.
Here is a description of the study | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Hist, Pmf, Suite, Beta
import thinkplot
import numpy as np
Explanation: Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
# Solution goes here
# Solution goes here
Explanation: The dinner party
Suppose you are having a dinner party with 10 guests and 4 of them are allergic to cats. Because you have cats, you expect 50% of the allergic guests to sneeze during dinner. At the same time, you expect 10% of the non-allergic guests to sneeze. What is the distribution of the total number of guests who sneeze?
End of explanation
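One way to sanity-check a solution: the total number of sneezers is the sum of two independent binomials, so its PMF is the convolution of their PMFs. A plain numpy/scipy sketch (not the thinkbayes2-based solution the exercise expects):
import numpy as np
from scipy.stats import binom
pmf_allergic = binom.pmf(np.arange(5), 4, 0.5)    # 4 allergic guests, p = 0.5
pmf_other = binom.pmf(np.arange(7), 6, 0.1)       # 6 non-allergic guests, p = 0.1
pmf_total = np.convolve(pmf_allergic, pmf_other)  # PMF of the total number of sneezers
for k, p in enumerate(pmf_total):
    print(k, round(p, 4))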
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: The Gluten Problem
This study from 2015 showed that many subjects diagnosed with non-celiac gluten sensitivity (NCGS) were not able to distinguish gluten flour from non-gluten flour in a blind challenge.
Here is a description of the study:
"We studied 35 non-CD subjects (31 females) that were on a gluten-free diet (GFD), in a double-blind challenge study. Participants were randomised to receive either gluten-containing flour or gluten-free flour for 10 days, followed by a 2-week washout period and were then crossed over. The main outcome measure was their ability to identify which flour contained gluten.
"The gluten-containing flour was correctly identified by 12 participants (34%)..."
Since 12 out of 35 participants were able to identify the gluten flour, the authors conclude "Double-blind gluten challenge induces symptom recurrence in just one-third of patients fulfilling the clinical diagnostic criteria for non-coeliac gluten sensitivity."
This conclusion seems odd to me, because if none of the patients were sensitive to gluten, we would expect some of them to identify the gluten flour by chance. So the results are consistent with the hypothesis that none of the subjects are actually gluten sensitive.
We can use a Bayesian approach to interpret the results more precisely. But first we have to make some modeling decisions.
Of the 35 subjects, 12 identified the gluten flour based on resumption of symptoms while they were eating it. Another 17 subjects wrongly identified the gluten-free flour based on their symptoms, and 6 subjects were unable to distinguish. So each subject gave one of three responses. To keep things simple I follow the authors of the study and lump together the second two groups; that is, I consider two groups: those who identified the gluten flour and those who did not.
I assume (1) people who are actually gluten sensitive have a 95% chance of correctly identifying gluten flour under the challenge conditions, and (2) subjects who are not gluten sensitive have only a 40% chance of identifying the gluten flour by chance (and a 60% chance of either choosing the other flour or failing to distinguish).
Using this model, estimate the number of study participants who are sensitive to gluten. What is the most likely number? What is the 95% credible interval?
End of explanation |
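A rough sketch of the update described above using plain numpy/scipy rather than a thinkbayes2 Suite: for each hypothetical number of sensitive subjects, the likelihood of exactly 12 correct identifications is a convolution of two binomials (p = 0.95 for the sensitive group, p = 0.4 for the rest), starting from a uniform prior.
import numpy as np
from scipy.stats import binom
n, k = 35, 12                      # subjects, correct identifications
hypos = np.arange(n + 1)           # hypothesis: number of truly sensitive subjects
prior = np.full(len(hypos), 1.0 / len(hypos))
def likelihood(ns):
    # total correct = correct among ns sensitive + correct among (n - ns) others
    pmf_s = binom.pmf(np.arange(ns + 1), ns, 0.95)
    pmf_o = binom.pmf(np.arange(n - ns + 1), n - ns, 0.4)
    return np.convolve(pmf_s, pmf_o)[k]
posterior = prior * np.array([likelihood(h) for h in hypos])
posterior /= posterior.sum()
print("most likely number of sensitive subjects:", hypos[posterior.argmax()])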
11,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 1
Step1: This dataset is a debug dump from a Lustre filesystem. Typically these events occur due to code bugs (LBUG), heavy load, hardware problems, or misbehaving user application IO.
Let's analyze some of the log structure to determine what may have caused this debug dump.
Step2: Let's take a look at the first five lines of the debug log. This log is colon-delimited, and roughly corresponds to the following information
Step3: Now let's split each line of the RDD into lowercase "words".
Lambda functions are ubiquitous in Spark, I presume due to the functional programming underpinnings of Scala. They act on each partition in parallel, and operate on each line.
Step4: Notice that this map returns immediately; no actions have been taken- the DAG has been updated to prepare for transformations. I like to think of this as analogous to a page fault, but applying to a Directed Acyclic Graph.
Step5: Part 2
Step6: Did it work?
Step7: Now we issue an action to the RDD
Step8: And as a percent of the overall file?
Step9: Now let's determine the effect of the flatMap
Step10: Now filter out "words" longer than 2 characters.
Step11: To sort words by number of occurences we map each word of each line to a tuple
Step12: We utilize reduceByKey
Step13: We swap the order of the tuple's contents to sort by the number rather than words. The argument "False" passed to sortByKey instructs it to sort descending. | Python Code:
from pyspark import SparkConf, SparkContext
import re
Explanation: Example 1: Parallel Log Parsing with Map and Filter
Step 1: Data ingest and parsing
End of explanation
sc
partitions = 18
parlog = sc.textFile("/lustre/janus_scratch/dami9546/lustre_debug.out", partitions)
Explanation: This dataset is a debug dump from a Lustre filesystem. Typically these events occur due to code bugs (LBUG), heavy load, hardware problems, or misbehaving user application IO.
Let's analyze some of the log structure to determine what may have caused this debug dump.
End of explanation
parlog.take(5)
Explanation: Let's take a look at the first five lines of the debug log. This log is colon-delimited, and roughly corresponds to the following information:
0-1 describe subsystem ID
2
3 timestamp
4-6 PIDs
7 relevant code module
8 code line
9 function and message
End of explanation
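A quick way to see those colon-delimited fields is to split a single line; a small sketch (the field positions listed above are approximate, so treat the index as illustrative):
first = parlog.first()     # grab one log line from the RDD
fields = first.split(":")
print(len(fields))
print(fields[3])           # roughly the timestamp field per the layout above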
words = parlog.map(lambda line: re.split('\W+', line.lower().strip()))
Explanation: Now let's split each line of the RDD into lowercase "words".
Lambda functions are ubiquitous in Spark, I presume due to the functional programming underpinnings of Scala. They act on each partition in parallel, and operate on each line.
End of explanation
words.take(2)
Explanation: Notice that this map returns immediately; no actions have been taken: the DAG has been updated to prepare for transformations. I like to think of this as analogous to a page fault, but applied to a Directed Acyclic Graph.
End of explanation
mfds = words.filter(lambda x: 'mfd' in x and 'change' in x)
Explanation: Part 2: Counting Occurrences
My experience with Lustre affords me (some) insight into this: I know the system has been susceptible to MDS overloading due to applications creating tons of small files, or issuing lots of MDS RPCs. I want to look for all lines that contain mfd changes.
Let's apply a filter to this RDD. Let's create a new RDD that only contains lines with mfd changes.
End of explanation
mfds.take(2)
Explanation: Did it work?
End of explanation
mfds.count()
Explanation: Now we issue an action to the RDD: the DAG performs the lazily executed functions. In this case we count the number of lines in the mfds RDD.
End of explanation
'{0:0.2f}%'.format((mfds.count()/float(parlog.count()))*100)
Explanation: And as a percent of the overall file?
End of explanation
flatwords = parlog.flatMap(lambda line: re.split('\W+', line.lower().strip()))
Explanation: Now let's determine the effect of the flatMap: this behaves like map, but does not return a list for each line. Rather, it aggregates (flattens) the output into a single list.
End of explanation
longwords = flatwords.filter(lambda x: len(x) > 2 )
longwords.take(10)
Explanation: Now filter the "words", keeping only those longer than 2 characters.
End of explanation
longwords = longwords.map(lambda word: (word, 1))
Explanation: To sort words by number of occurrences we map each word of each line to a tuple: itself and 1. We will perform a reduction on these tuples to get counts.
End of explanation
longcount = longwords.reduceByKey(lambda a, b: a + b)
longcount.take(10)
Explanation: We utilize reduceByKey: this operation performs a function on identical keys. By default this will be the first element of the tuple. Since this will be the word, the behavior is desired.
Note that reduce operations are accumulators and must be associative.
End of explanation
longwords = longcount.map(lambda x: (x[1], x[0])).sortByKey(False)
longwords.take(20)
Explanation: We swap the order of the tuple's contents to sort by the number rather than words. The argument "False" passed to sortByKey instructs it to sort descending.
End of explanation |
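An equivalent, slightly more direct way to get the same top-20 list without building a swapped RDD is takeOrdered with a key function; a small sketch:
# order the (word, count) pairs by descending count directly
top20 = longcount.takeOrdered(20, key=lambda kv: -kv[1])
print(top20)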
11,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q1. Let's practice the seq2seq framework with a simple example. In this example, we will take the last state of the encoder as the initial state of the decoder. Complete the code.
Step1: Q2. At this time, we will use the Bahdanau attention mechanism. Complete the code. | Python Code:
# Inputs and outputs: ten digits
x = tf.placeholder(tf.int32, shape=(32, 10))
y = tf.placeholder(tf.int32, shape=(32, 10))
# One-hot encoding
enc_inputs = tf.one_hot(x, 10)
dec_inputs = tf.concat((tf.zeros_like(y[:, :1]), y[:, :-1]), -1)
dec_inputs = tf.one_hot(dec_inputs, 10)
# encoder
encoder_cell = tf.contrib.rnn.GRUCell(128)
memory, last_state = tf.nn.dynamic_rnn(encoder_cell, enc_inputs, dtype=tf.float32, scope="encoder")
# decoder
decoder_cell = tf.contrib.rnn.GRUCell(128)
outputs, _ = tf.nn.dynamic_rnn(decoder_cell, dec_inputs, initial_state=last_state, scope="decoder")
# Readout
logits = tf.layers.dense(outputs, 10)
preds = tf.argmax(logits, -1, output_type=tf.int32)
# Evaluation
hits = tf.reduce_sum(tf.to_float(tf.equal(preds, y)))
acc = hits / tf.to_float(tf.size(x))
# Loss and train
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)
mean_loss = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001)
train_op = opt.minimize(mean_loss)
# Session
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
losses, accs = [], []
for step in range(2000):
# Data design
# We feed sequences of random digits in the `x`,
# and take its reverse as the target.
_x = np.random.randint(0, 10, size=(32, 10), dtype=np.int32)
_y = _x[:, ::-1] # Reverse
_, _loss, _acc = sess.run([train_op, mean_loss, acc], {x:_x, y:_y})
losses.append(_loss)
accs.append(_acc)
# Plot
plt.plot(losses, label="loss")
plt.plot(accs, label="accuracy")
plt.legend()
plt.grid()
plt.show()
Explanation: Q1. Let's practice the seq2seq framework with a simple example. In this example, we will take the last state of the encoder as the initial state of the decoder. Complete the code.
End of explanation
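The decoder-input line above ("shift the targets right and prepend a zero start token") is easier to see on a concrete row outside the TensorFlow graph; a tiny illustrative sketch:
import numpy as np
row = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])  # one target sequence
dec_in = np.concatenate(([0], row[:-1]))        # what the decoder is fed at training time
print(row)
print(dec_in)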
tf.reset_default_graph()
# Inputs and outputs: ten digits
x = tf.placeholder(tf.int32, shape=(32, 10))
y = tf.placeholder(tf.int32, shape=(32, 10))
# One-hot encoding
enc_inputs = tf.one_hot(x, 10)
dec_inputs = tf.concat((tf.zeros_like(y[:, :1]), y[:, :-1]), -1)
dec_inputs = tf.one_hot(dec_inputs, 10)
# encoder
encoder_cell = tf.contrib.rnn.GRUCell(128)
memory, last_state = tf.nn.dynamic_rnn(encoder_cell, enc_inputs, dtype=tf.float32, scope="encoder")
# decoder
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(128, memory)
decoder_cell = tf.contrib.rnn.GRUCell(128)
cell_with_attention = tf.contrib.seq2seq.AttentionWrapper(decoder_cell,
attention_mechanism,
attention_layer_size=256,
alignment_history=True,
output_attention=False)
outputs, state = tf.nn.dynamic_rnn(cell_with_attention, dec_inputs, dtype=tf.float32)
alignments = tf.transpose(state.alignment_history.stack(),[1,2,0])
# Readout
logits = tf.layers.dense(outputs, 10)
preds = tf.argmax(logits, -1, output_type=tf.int32)
# Evaluation
hits = tf.reduce_sum(tf.to_float(tf.equal(preds, y)))
acc = hits / tf.to_float(tf.size(x))
# Loss and train
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=y)
mean_loss = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001)
train_op = opt.minimize(mean_loss)
# Session
def plot_alignment(alignment):
fig, ax = plt.subplots()
im=ax.imshow(alignment, cmap='Greys', interpolation='none')
fig.colorbar(im, ax=ax)
plt.xlabel('Decoder timestep')
plt.ylabel('Encoder timestep')
plt.show()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
losses, accs = [], []
for step in range(2000):
# Data design
# We feed sequences of random digits in the `x`,
# and take its reverse as the target.
_x = np.random.randint(0, 10, size=(32, 10), dtype=np.int32)
_y = _x[:, ::-1] # Reverse
_, _loss, _acc = sess.run([train_op, mean_loss, acc], {x:_x, y:_y})
losses.append(_loss)
accs.append(_acc)
if step % 100 == 0:
print("step=", step)
_alignments = sess.run(alignments, {x: _x, y: _y})
plot_alignment(_alignments[0])
# Plot
plt.plot(losses, label="loss")
plt.plot(accs, label="accuracy")
plt.legend()
plt.grid()
plt.show()
Explanation: Q2. At this time, we will use the Bahdanau attention mechanism. Complete the code.
End of explanation |
11,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step2: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook
Step4: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
Step5: Use interactive to build a user interface for exploing the draw_circle function
Step6: Use the display function to show the widgets created by interactive | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import SVG
Explanation: Interact Exercise 5
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
s =
<svg width="100" height="100">
<circle cx="50" cy="50" r="20" fill="aquamarine" />
</svg>
SVG(s)
Explanation: Interact with SVG display
SVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:
End of explanation
def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
Draw an SVG circle.
Parameters
----------
width : int
The width of the svg drawing area in px.
height : int
The height of the svg drawing area in px.
cx : int
The x position of the center of the circle in px.
cy : int
The y position of the center of the circle in px.
r : int
The radius of the circle in px.
fill : str
The fill color of the circle.
# YOUR CODE HERE
s = "<svg width = '%s' height= '%s'> <circle cx='%s' cy='%s' r='%s' fill='%s' /> </svg>" %(width,height,cx,cy,r,fill)
display(SVG(s))
# return SVG(s)
draw_circle(cx=10, cy=10, r=10, fill='blue')
assert True # leave this to grade the draw_circle function
Explanation: Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.
End of explanation
# YOUR CODE HERE
w=interactive(draw_circle, width=fixed(300),height=fixed(300), cx=(0,300,1),cy=(0,300,1),r=(0,50,1),fill='red')
w
c = w.children
assert c[0].min==0 and c[0].max==300
assert c[1].min==0 and c[1].max==300
assert c[2].min==0 and c[2].max==50
assert c[3].value=='red'
Explanation: Use interactive to build a user interface for exploring the draw_circle function:
width: a fixed value of 300px
height: a fixed value of 300px
cx/cy: a slider in the range [0,300]
r: a slider in the range [0,50]
fill: a text area in which you can type a color's name
Save the return value of interactive to a variable named w.
End of explanation
# YOUR CODE HERE
from IPython.display import display
display(w)
assert True # leave this to grade the display of the widget
Explanation: Use the display function to show the widgets created by interactive:
End of explanation |
11,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Eaton & Ree (2013) single-end RAD data set
Here we demonstrate a denovo assembly for an empirical RAD data set using the ipyrad Python API. This example was run on a workstation with 20 cores and takes about 10 minutes to assemble, but you should be able to run it on a 4-core laptop in ~30-60 minutes.
For our example data we will use the 13 taxa Pedicularis data set from Eaton and Ree (2013) (Open Access). This data set is composed of single-end 75bp reads from a RAD-seq library prepared with the PstI enzyme. The data set also serves as an example for several of our analysis cookbooks that demonstrate methods for analyzing RAD-seq data files. At the end of this notebook there are also several examples of how to use the ipyrad analysis tools to run downstream analyses in parallel.
The figure below shows the ingroup taxa from this study and their sampling locations. The study includes all species within a small monophyletic clade of Pedicularis, including multiple individuals from 5 species and several subspecies, as well as an outgroup species. The sampling essentially spans from population-level variation where species boundaries are unclear, to higher-level divergence where species boundaries are quite distinct. This is a common scale at which RAD-seq data are often very useful.
<img src="https
Step1: In contrast to the ipyrad CLI, the ipyrad API gives users much more fine-scale control over the parallelization of their analysis, but this also requires learning a little bit about the library that we use to do this, called ipyparallel. This library is designed for use with jupyter-notebooks to allow massive-scale multi-processing while working interactively.
Understanding the nuts and bolts of it might take a little while, but it is fairly easy to get started using it, especially in the way it is integrated with ipyrad. To start a parallel client to you must run the command-line program 'ipcluster'. This will essentially start a number of independent Python processes (kernels) which we can then send bits of work to do. The cluster can be stopped and restarted independently of this notebook, which is convenient for working on a cluster where connecting to many cores is not always immediately available.
Open a terminal and type the following command to start an ipcluster instance with N engines.
Step2: Download the data set (Pedicularis)
These data are archived on the NCBI sequence read archive (SRA) under accession id SRP021469. As part of the ipyrad analysis tools we have a wrapper around the SRAtools software that can be used to query NCBI and download sequence data based on accession IDs. Run the code below to download the fastq data files associated with this study. The data will be saved the specified directory which will be created if it does not already exist. The compressed file size of the data is a little over 1GB. If you pass your ipyclient to the .run() command below then the download will be parallelized.
Step3: Create an Assembly object
This object stores the parameters of the assembly and the organization of data files.
Step4: Set parameters for the Assembly. This will raise an error if any of the parameters are not allowed because they are the wrong type, or out of the allowed range.
Step5: Assemble the data set
Step6: Branch to create several final data sets with different parameter settings
Step7: View final stats
The .stats attribute shows a stats summary for each sample, and a number of stats dataframes can be accessed for each step from the .stats_dfs attribute of the Assembly.
Step8: Analysis tools
We have a lot more information about analysis tools in the ipyrad documentation. But here I'll show just a quick example of how you can easily access the data files for these assemblies and use them in downstream analysis software. The ipyrad analysis tools include convenient wrappers to make it easier to parallelize analyses of RAD-seq data. Please see the full documentation for the ipyrad.analysis tools in the ipyrad documentation for more details.
Step9: RAxML -- ML concatenation tree inference
Step10: tetrad -- quartet tree inference
Step11: STRUCTURE -- population cluster inference
Step12: TREEMIX -- ML tree & admixture co-inference
Step13: ABBA-BABA admixture inference
Step14: BPP -- species tree inference/delim
Step15: run BPP
You can either call 'write_bpp_files()' to write input files for this data set to be run in BPP, and then call BPP on those files, or you can use the '.run()' command to run the data files directly, and in parallel on the cluster. If you specify multiple reps then a different random sample of loci will be selected, and different random seeds applied to each replicate. | Python Code:
## conda install ipyrad -c ipyrad
## conda install toytree -c eaton-lab
## conda install sra-tools -c bioconda
## conda install entrez-direct -c bioconda
## imports
import ipyrad as ip
import ipyrad.analysis as ipa
import ipyparallel as ipp
Explanation: Eaton & Ree (2013) single-end RAD data set
Here we demonstrate a denovo assembly for an empirical RAD data set using the ipyrad Python API. This example was run on a workstation with 20 cores and takes about 10 minutes to assemble, but you should be able to run it on a 4-core laptop in ~30-60 minutes.
For our example data we will use the 13 taxa Pedicularis data set from Eaton and Ree (2013) (Open Access). This data set is composed of single-end 75bp reads from a RAD-seq library prepared with the PstI enzyme. The data set also serves as an example for several of our analysis cookbooks that demonstrate methods for analyzing RAD-seq data files. At the end of this notebook there are also several examples of how to use the ipyrad analysis tools to run downstream analyses in parallel.
The figure below shows the ingroup taxa from this study and their sampling locations. The study includes all species within a small monophyletic clade of Pedicularis, including multiple individuals from 5 species and several subspecies, as well as an outgroup species. The sampling essentially spans from population-level variation where species boundaries are unclear, to higher-level divergence where species boundaries are quite distinct. This is a common scale at which RAD-seq data are often very useful.
<img src="https://raw.githubusercontent.com/eaton-lab/eaton-lab.github.io/master/slides/slide-images/Eaton-Ree-2012-Ped-Fig1.png">
Setup (software and data files)
If you haven't done so yet, start by installing ipyrad using conda (see ipyrad installation instructions) as well as the packages in the cell below. This is easiest to do in a terminal. Then open a jupyter-notebook, like this one, and follow along with the tutorial by copying and executing the code in the cells, and adding your own documentation between them using markdown. Feel free to modify parameters to see their effects on the downstream results.
End of explanation
## ipcluster start --n=20
## connect to cluster
ipyclient = ipp.Client()
ipyclient.ids
Explanation: In contrast to the ipyrad CLI, the ipyrad API gives users much more fine-scale control over the parallelization of their analysis, but this also requires learning a little bit about the library that we use to do this, called ipyparallel. This library is designed for use with jupyter-notebooks to allow massive-scale multi-processing while working interactively.
Understanding the nuts and bolts of it might take a little while, but it is fairly easy to get started using it, especially in the way it is integrated with ipyrad. To start a parallel client, you must run the command-line program 'ipcluster'. This will essentially start a number of independent Python processes (kernels) which we can then send bits of work to do. The cluster can be stopped and restarted independently of this notebook, which is convenient for working on a cluster where connecting to many cores is not always immediately available.
Open a terminal and type the following command to start an ipcluster instance with N engines.
End of explanation
## download the Pedicularis data set from NCBI
sra = ipa.sratools(accession="SRP021469", workdir="fastqs-Ped")
sra.run(force=True, ipyclient=ipyclient)
Explanation: Download the data set (Pedicularis)
These data are archived on the NCBI sequence read archive (SRA) under accession id SRP021469. As part of the ipyrad analysis tools we have a wrapper around the SRAtools software that can be used to query NCBI and download sequence data based on accession IDs. Run the code below to download the fastq data files associated with this study. The data will be saved in the specified directory which will be created if it does not already exist. The compressed file size of the data is a little over 1GB. If you pass your ipyclient to the .run() command below then the download will be parallelized.
End of explanation
## you must provide a name for the Assembly
data = ip.Assembly("pedicularis")
Explanation: Create an Assembly object
This object stores the parameters of the assembly and the organization of data files.
End of explanation
## set parameters
data.set_params("project_dir", "analysis-ipyrad")
data.set_params("sorted_fastq_path", "fastqs-Ped/*.fastq.gz")
data.set_params("clust_threshold", "0.90")
data.set_params("filter_adapters", "2")
data.set_params("max_Hs_consens", (5, 5))
data.set_params("trim_loci", (0, 5, 0, 0))
data.set_params("output_formats", "psvnkua")
## see/print all parameters
data.get_params()
Explanation: Set parameters for the Assembly. This will raise an error if any of the parameters are not allowed because they are the wrong type, or out of the allowed range.
End of explanation
## run steps 1 & 2 of the assembly
data.run("12")
## access the stats of the assembly (so far) from the .stats attribute
data.stats
## run steps 3-6 of the assembly
data.run("3456")
Explanation: Assemble the data set
End of explanation
## create a branch for outputs with min_samples = 4 (lots of missing data)
min4 = data.branch("min4")
min4.set_params("min_samples_locus", 4)
min4.run("7")
## create a branch for outputs with min_samples = 13 (no missing data)
min13 = data.branch("min13")
min13.set_params("min_samples_locus", 13)
min13.run("7")
## create a branch with no missing data for ingroups, but allow
## missing data in the outgroups by setting population assignments.
## The population min-sample values overrule the min-samples-locus param
pops = data.branch("min11-pops")
pops.populations = {
"ingroup": (11, [i for i in pops.samples if "prz" not in i]),
"outgroup" : (0, [i for i in pops.samples if "prz" in i]),
}
pops.run("7")
## create a branch with no missing data and with outgroups removed
nouts = data.branch("nouts_min11", subsamples=[i for i in pops.samples if "prz" not in i])
nouts.set_params("min_samples_locus", 11)
nouts.run("7")
Explanation: Branch to create several final data sets with different parameter settings
End of explanation
## we can access the stats summary as a pandas dataframes.
min4.stats
## or print the full stats file
cat $min4.stats_files.s7
## and we can access parts of the full stats outputs as dataframes
min4.stats_dfs.s7_samples
## compare this to the one above, coverage is more equal
min13.stats_dfs.s7_samples
## similarly, coverage is equal here among ingroups, but allows missing in outgroups
pops.stats_dfs.s7_samples
Explanation: View final stats
The .stats attribute shows a stats summary for each sample, and a number of stats dataframes can be accessed for each step from the .stats_dfs attribute of the Assembly.
End of explanation
import ipyrad as ip
import ipyrad.analysis as ipa
## you can re-load assemblies at a later time from their JSON file
min4 = ip.load_json("analysis-ipyrad/min4.json")
min13 = ip.load_json("analysis-ipyrad/min13.json")
nouts = ip.load_json("analysis-ipyrad/nouts_min11.json")
Explanation: Analysis tools
We have a lot more information about analysis tools in the ipyrad documentation. But here I'll show just a quick example of how you can easily access the data files for these assemblies and use them in downstream analysis software. The ipyrad analysis tools include convenient wrappers to make it easier to parallelize analyses of RAD-seq data. Please see the full documentation for the ipyrad.analysis tools in the ipyrad documentation for more details.
End of explanation
## conda install raxml -c bioconda
## conda install toytree -c eaton-lab
## create a raxml analysis object for the min13 data sets
rax = ipa.raxml(
name=min13.name,
data=min13.outfiles.phy,
workdir="analysis-raxml",
T=20,
N=100,
o=[i for i in min13.samples if "prz" in i],
)
## print the raxml command and call it
print rax.command
rax.run(force=True)
## access the resulting tree files
rax.trees
## plot a tree in the notebook with toytree
import toytree
tre = toytree.tree(rax.trees.bipartitions)
tre.draw(
width=350,
height=400,
node_labels=tre.get_node_values("support"),
);
Explanation: RAxML -- ML concatenation tree inference
End of explanation
## create a tetrad analysis object
tet = ipa.tetrad(
name=min4.name,
seqfile=min4.outfiles.snpsphy,
mapfile=min4.outfiles.snpsmap,
nboots=100,
)
## run tree inference
tet.run(ipyclient)
## access tree files
tet.trees
## plot results (just like above, but unrooted by default)
## the consensus tree here differs from the ML tree above.
import toytree
qtre = toytree.tree(tet.trees.nhx)
qtre.root(wildcard="prz")
qtre.draw(
width=350,
height=400,
node_labels=qtre.get_node_values("support"),
);
Explanation: tetrad -- quartet tree inference
End of explanation
## conda install structure clumpp -c ipyrad
## create a structure analysis object for the no-outgroup data set
struct = ipa.structure(
name=nouts.name,
data=nouts.outfiles.str,
mapfile=nouts.outfiles.snpsmap,
)
## set params for analysis (should be longer in real analyses)
struct.mainparams.burnin=1000
struct.mainparams.numreps=8000
## run structure across 10 random replicates of sampled unlinked SNPs
for kpop in [2, 3, 4, 5, 6]:
struct.run(kpop=kpop, nreps=10, ipyclient=ipyclient)
## wait for all of these jobs to finish
ipyclient.wait()
## collect results
tables = {}
for kpop in [2, 3, 4, 5, 6]:
tables[kpop] = struct.get_clumpp_table(kpop)
## custom sorting order
myorder = [
"41478_cyathophylloides",
"41954_cyathophylloides",
"29154_superba",
"30686_cyathophylla",
"33413_thamno",
"30556_thamno",
"35236_rex",
"40578_rex",
"35855_rex",
"39618_rex",
"38362_rex",
]
## import toyplot (packaged with toytree)
import toyplot
## plot bars for each K-value (mean of 10 reps)
for kpop in [2, 3, 4, 5, 6]:
table = tables[kpop]
table = table.ix[myorder]
## plot barplot w/ hover
canvas, axes, mark = toyplot.bars(
table,
title=[[i] for i in table.index.tolist()],
width=400,
height=200,
yshow=False,
style={"stroke": toyplot.color.near_black},
)
Explanation: STRUCTURE -- population cluster inference
End of explanation
## conda install treemix -c ipyrad
## group taxa into 'populations'
imap = {
"prz": ["32082_przewalskii", "33588_przewalskii"],
"cys": ["41478_cyathophylloides", "41954_cyathophylloides"],
"cya": ["30686_cyathophylla"],
"sup": ["29154_superba"],
"cup": ["33413_thamno"],
"tha": ["30556_thamno"],
"rck": ["35236_rex"],
"rex": ["35855_rex", "40578_rex"],
"lip": ["39618_rex", "38362_rex"],
}
## optional: loci will be filtered if they do not have data for at
## least N samples in each species. Minimums cannot be <1.
minmap = {
"prz": 2,
"cys": 2,
"cya": 1,
"sup": 1,
"cup": 1,
"tha": 1,
"rck": 1,
"rex": 2,
"lip": 2,
}
## sets a random number seed
import numpy
numpy.random.seed(12349876)
## create a treemix analysis object
tmix = ipa.treemix(
name=min13.name,
data=min13.outfiles.snpsphy,
mapfile=min13.outfiles.snpsmap,
imap=imap,
minmap=minmap,
)
## you can set additional parameter args here
tmix.params.root = "prz"
tmix.params.global_ = 1
## print the full params
tmix.params
## a dictionary for storing treemix objects
tdict = {}
## iterate over values of m
for rep in xrange(4):
for mig in xrange(4):
## create new treemix object copy
name = "mig-{}-rep-{}".format(mig, rep)
tmp = tmix.copy(name)
## set params on new object
tmp.params.m = mig
## run treemix analysis
tmp.run()
## store the treemix object
tdict[name] = tmp
import toyplot
## select a single result
tmp = tdict["mig-1-rep-1"]
## draw the tree similar to the Treemix plotting R code
## this code is rather new and will be expanded in the future.
canvas = toyplot.Canvas(width=350, height=350)
axes = canvas.cartesian(padding=25, margin=75)
axes = tmp.draw(axes)
import toyplot
import numpy as np
## plot many results
canvas = toyplot.Canvas(width=800, height=1200)
idx = 0
for mig in range(4):
for rep in range(4):
tmp = tdict["mig-{}-rep-{}".format(mig, rep)]
ax = canvas.cartesian(grid=(4, 4, idx), padding=25, margin=(25, 50, 100, 25))
ax = tmp.draw(ax)
idx += 1
Explanation: TREEMIX -- ML tree & admixture co-inference
End of explanation
bb = ipa.baba(
data=min4.outfiles.loci,
newick="analysis-raxml/RAxML_bestTree.min13",
)
## check params
bb.params
## generate all tests from the tree where 32082 is p4
bb.generate_tests_from_tree(
constraint_dict={
"p4": ["32082_przewalskii"],
"p3": ["30556_thamno"],
}
)
## run the tests in parallel
bb.run(ipyclient=ipyclient)
bb.results_table.sort_values(by="Z", ascending=False).head()
## most significant result (more ABBA than BABA)
bb.tests[12]
## the next most signif (more BABA than ABBA)
bb.tests[27]
Explanation: ABBA-BABA admixture inference
End of explanation
## a dictionary mapping sample names to 'species' names
imap = {
"prz": ["32082_przewalskii", "33588_przewalskii"],
"cys": ["41478_cyathophylloides", "41954_cyathophylloides"],
"cya": ["30686_cyathophylla"],
"sup": ["29154_superba"],
"cup": ["33413_thamno"],
"tha": ["30556_thamno"],
"rck": ["35236_rex"],
"rex": ["35855_rex", "40578_rex"],
"lip": ["39618_rex", "38362_rex"],
}
## optional: loci will be filtered if they do not have data for at
## least N samples/individuals in each species.
minmap = {
"prz": 2,
"cys": 2,
"cya": 1,
"sup": 1,
"cup": 1,
"tha": 1,
"rck": 1,
"rex": 2,
"lip": 2,
}
## a tree hypothesis (guidetree) (here based on tetrad results)
## for the 'species' we've collapsed samples into.
newick = "((((((rex, lip), rck), tha), cup), (cys, (cya, sup))), prz);"
## initiata a bpp object
b = ipa.bpp(
name=min4.name,
locifile=min4.outfiles.alleles,
imap=imap,
minmap=minmap,
guidetree=newick,
)
## set some optional params, leaving others at their defaults
## you should definitely run these longer for real analyses
b.params.burnin = 1000
b.params.nsample = 2000
b.params.sampfreq = 20
## print params
b.params
## set some optional filters leaving others at their defaults
b.filters.maxloci=100
b.filters.minsnps=4
## print filters
b.filters
Explanation: BPP -- species tree inference/delim
End of explanation
b.write_bpp_files()
b.run()
## wait for all ipyclient jobs to finish
ipyclient.wait()
## check results
## parse the mcmc table with pandas library
import pandas as pd
btable = pd.read_csv(b.files.mcmcfiles[0], sep="\t", index_col=0)
btable.describe().T
Explanation: run BPP
You can either call 'write_bpp_files()' to write input files for this data set to be run in BPP, and then call BPP on those files, or you can use the '.run()' command to run the data files directly, and in parallel on the cluster. If you specify multiple reps then a different random sample of loci will be selected, and different random seeds applied to each replicate.
End of explanation |
11,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assigning particles unique IDs and removing particles from the simulation
For some applications, it is useful to keep track of which particle is which, and this can get jumbled up when particles are added or removed from the simulation. It can therefore be useful for particles to have unique IDs associated with them.
Let's set up a simple simulation with 10 bodies, and give them IDs in the order we add the particles
Step1: Now let's do a simple example where we do a short initial integration to isolate the particles that interest us for a longer simulation
Step2: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x > 0$ at the end of the preliminary integration. Let's first print out the particle ID and x position.
Step3: Next, let's use the remove() function to filter out particles. As an argument, we pass the corresponding index in the particles array.
Step4: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved (e.g., to help with output).
By running through the planets in reverse order above, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it).
If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0
Step5: We see that the particles array is no longer sorted by ID. Note that the default keepSorted=1 only keeps things sorted (i.e., if they were sorted by ID to start with). If you custom-assign IDs out of order as you add particles, the default will simply preserve the original order.
You might also have been surprised that the above sim.remove(2, keepSorted=0) succeeded, since there was no id=2 left in the simulation. That's because remove() takes the index in the particles array, so we removed the 3rd particle (with id=4). If you'd like to remove a particle by id, use the id keyword, e.g. | Python Code:
import rebound
import numpy as np
def setupSimulation(Nplanets):
sim = rebound.Simulation()
sim.integrator = "ias15" # IAS15 is the default integrator, so we don't need this line
sim.add(m=1.,id=0)
for i in range(1,Nplanets):  # use the function argument rather than the global Nbodies
sim.add(m=1e-5,x=i,vy=i**(-0.5),id=i)
sim.move_to_com()
return sim
Nbodies=10
sim = setupSimulation(Nbodies)
print([sim.particles[i].id for i in range(sim.N)])
Explanation: Assigning particles unique IDs and removing particles from the simulation
For some applications, it is useful to keep track of which particle is which, and this can get jumbled up when particles are added or removed from the simulation. It can therefore be useful for particles to have unique IDs associated with them.
Let's set up a simple simulation with 10 bodies, and give them IDs in the order we add the particles:
End of explanation
Noutputs = 1000
xs = np.zeros((Nbodies, Noutputs))
ys = np.zeros((Nbodies, Noutputs))
times = np.linspace(0.,50*2.*np.pi, Noutputs, endpoint=False)
for i, time in enumerate(times):
sim.integrate(time)
xs[:,i] = [sim.particles[j].x for j in range(Nbodies)]
ys[:,i] = [sim.particles[j].y for j in range(Nbodies)]
%matplotlib inline
import matplotlib.pyplot as plt
fig,ax = plt.subplots(figsize=(15,5))
for i in range(Nbodies):
plt.plot(xs[i,:], ys[i,:])
ax.set_aspect('equal')
Explanation: Now let's do a simple example where we do a short initial integration to isolate the particles that interest us for a longer simulation:
End of explanation
print("ID\tx")
for i in range(Nbodies):
print("{0}\t{1}".format(i, xs[i,-1]))
Explanation: At this stage, we might be interested in particles that remained within some semimajor axis range, particles that were in resonance with a particular planet, etc. Let's imagine a simple (albeit arbitrary) case where we only want to keep particles that had $x > 0$ at the end of the preliminary integration. Let's first print out the particle ID and x position.
End of explanation
for i in reversed(range(1,Nbodies)):
if xs[i,-1] < 0:
sim.remove(i)
print("Number of particles after cut = {0}".format(sim.N))
print("IDs of remaining particles = {0}".format([p.id for p in sim.particles]))
Explanation: Next, let's use the remove() function to filter out particles. As an argument, we pass the corresponding index in the particles array.
End of explanation
sim.remove(2, keepSorted=0)
print("Number of particles after cut = {0}".format(sim.N))
print("IDs of remaining particles = {0}".format([p.id for p in sim.particles]))
Explanation: By default, the remove() function removes the i-th particle from the particles array, and shifts all particles with higher indices down by 1. This ensures that the original order in the particles array is preserved (e.g., to help with output).
By running through the planets in reverse order above, we are guaranteed that when a particle with index i gets removed, the particle replacing it doesn't need to also be removed (we already checked it).
If you have many particles and many removals (or you don't care about the ordering), you can save the reshuffling of all particles with higher indices with the flag keepSorted=0:
End of explanation
sim.remove(id=9)
print("Number of particles after cut = {0}".format(sim.N))
print("IDs of remaining particles = {0}".format([p.id for p in sim.particles]))
Explanation: We see that the particles array is no longer sorted by ID. Note that the default keepSorted=1 only keeps things sorted (i.e., if they were sorted by ID to start with). If you custom-assign IDs out of order as you add particles, the default will simply preserve the original order.
You might also have been surprised that the above sim.remove(2, keepSorted=0) succeeded, since there was no id=2 left in the simulation. That's because remove() takes the index in the particles array, so we removed the 3rd particle (with id=4). If you'd like to remove a particle by id, use the id keyword, e.g.
End of explanation |
11,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Function ptrans
Synopse
Perform periodic translation in 1-D, 2-D or 3-D space.
g = ptrans(f, t)
OUTPUT
g
Step1: Examples
Step2: Example 1
Numeric examples in 2D and 3D.
Step3: Example 2
Image examples in 2D
Step4: Equation
For the 2D case we have
$$ \begin{matrix}
t &=& (t_r, t_c),\\
g = f_t &=& f_{t_r,t_c},\\
g(rr,cc) &=& f((rr-t_r)\ mod\ H, (cc-t_c)\ mod\ W), \ 0 \leq rr < H, \ 0 \leq cc < W,\\
\mbox{where} & & a\ mod\ N = (a + k N)\ mod\ N, \ k \in Z.
\end{matrix} $$
The equation above can be extended to n-dimensional space. | Python Code:
def ptrans(f,t):
import numpy as np
g = np.empty_like(f)
if f.ndim == 1:
W = f.shape[0]
col = np.arange(W)
g = f[(col-t)%W]
elif f.ndim == 2:
H,W = f.shape
rr,cc = t
row,col = np.indices(f.shape)
g = f[(row-rr)%H, (col-cc)%W]
elif f.ndim == 3:
Z,H,W = f.shape
zz,rr,cc = t
z,row,col = np.indices(f.shape)
g = f[(z-zz)%Z, (row-rr)%H, (col-cc)%W]
return g
# implementation using periodic convolution
def ptrans2(f, t):
f, t = np.asarray(f), np.asarray(t).astype('int32')
h = np.zeros(2*np.abs(t) + 1)
t = t + np.abs(t)
h[tuple(t)] = 1
g = ia.pconv(f, h)
return g
def ptrans2d(f,t):
rr,cc = t
H,W = f.shape
r = rr%H
c = cc%W
g = np.empty_like(f)
g[:r,:c] = f[H-r:H,W-c:W]
g[:r,c:] = f[H-r:H,0:W-c]
g[r:,:c] = f[0:H-r,W-c:W]
g[r:,c:] = f[0:H-r,0:W-c]
return g
Explanation: Function ptrans
Synopse
Perform periodic translation in 1-D, 2-D or 3-D space.
g = ptrans(f, t)
OUTPUT
g: Image. Periodically translated image.
INPUT
f: Image ndarray. Image to be translated.
t: Tuple. (tz,tr,tc)
Description
Translate a 1-D, 2-D or 3-dimensional image periodically. This translation can be seen as a window view
displacement on an infinite tile wall where each tile is a copy of the original image. The
periodic translation is related to the periodic convolution and the discrete Fourier transform.
Be careful when implementing this function using the mod operation: some mod implementations in C do not
follow the correct definition when the number is negative.
End of explanation
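# A small demonstration of the mod caveat above (assumes a standard Python interpreter;
# the C behaviour is only described in the comment).
print((-1) % 5)  # 4 in Python, which matches the periodic definition ptrans relies on
# In C, (-1) % 5 typically evaluates to -1, so a C implementation would need an explicit
# correction such as ((a % N) + N) % N.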
testing = (__name__ == '__main__')
if testing:
! jupyter nbconvert --to python ptrans.ipynb
import numpy as np
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
Explanation: Examples
End of explanation
if testing:
# 2D example
f = np.arange(15).reshape(3,5)
print("Original 2D image:\n",f,"\n\n")
print("Image translated by (0,0):\n",ia.ptrans(f, (0,0)).astype(int),"\n\n")
print("Image translated by (0,1):\n",ia.ptrans(f, (0,1)).astype(int),"\n\n")
print("Image translated by (-1,2):\n",ia.ptrans(f, (-1,2)).astype(int),"\n\n")
if testing:
# 3D example
f1 = np.arange(60).reshape(3,4,5)
print("Original 3D image:\n",f1,"\n\n")
print("Image translated by (0,0,0):\n",ia.ptrans(f1, (0,0,0)).astype(int),"\n\n")
print("Image translated by (0,1,0):\n",ia.ptrans(f1, (0,1,0)).astype(int),"\n\n")
print("Image translated by (-1,3,2):\n",ia.ptrans(f1, (-1,3,2)).astype(int),"\n\n")
Explanation: Example 1
Numeric examples in 2D and 3D.
End of explanation
if testing:
# 2D example
f = mpimg.imread('../data/cameraman.tif')
plt.imshow(f,cmap='gray'), plt.title('Original 2D image - Cameraman')
plt.imshow(ia.ptrans(f, np.array(f.shape)//3),cmap='gray'), plt.title('Cameraman periodically translated')
Explanation: Example 2
Image examples in 2D
End of explanation
if testing:
print('testing ptrans')
f = np.array([[1,2,3,4,5],[6,7,8,9,10],[11,12,13,14,15]],'uint8')
print(repr(ia.ptrans(f, [-1,2]).astype(np.uint8)) == repr(np.array(
[[ 9, 10, 6, 7, 8],
[14, 15, 11, 12, 13],
[ 4, 5, 1, 2, 3]],'uint8')))
Explanation: Equation
For 2D case we have
$$ \begin{matrix}
t &=& (t_r, t_c),\\
g = f_t &=& f_{t_r,t_c},\\
g(rr,cc) &=& f((rr-t_r)\ \mathrm{mod}\ H, (cc-t_c)\ \mathrm{mod}\ W), \quad 0 \leq rr < H,\ 0 \leq cc < W,\\
\mbox{where} & & a\ \mathrm{mod}\ N = (a + k N)\ \mathrm{mod}\ N,\ k \in Z.
\end{matrix} $$
The equation above can be extended to n-dimensional space.
End of explanation |
11,452 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Morphological Transformations: Morphological transformations are simple operations based on the image shape. They are normally performed on binary images. A kernel tells you how to change the value of any given pixel by combining it with different amounts of the neighbouring pixels.
| Python Code::
import cv2
%matplotlib notebook
%matplotlib inline
from matplotlib import pyplot as plt
img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE)
_,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV)
titles = ['images',"mask"]
images = [img,mask]
for i in range(2):
plt.subplot(1,2,i+1)
plt.imshow(images[i],"gray")
plt.title(titles[i])
plt.show()
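# The code above only builds the binary mask; a minimal sketch of the morphological
# operations themselves (erosion and dilation with a 5x5 kernel) could look like the
# following. The kernel size is an arbitrary choice for illustration.
import numpy as np
kernel = np.ones((5,5), np.uint8)
erosion = cv2.erode(mask, kernel, iterations=1)   # shrinks the white regions
dilation = cv2.dilate(mask, kernel, iterations=1) # grows the white regions
plt.subplot(1,3,1); plt.imshow(mask,"gray"); plt.title("mask")
plt.subplot(1,3,2); plt.imshow(erosion,"gray"); plt.title("erosion")
plt.subplot(1,3,3); plt.imshow(dilation,"gray"); plt.title("dilation")
plt.show()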
|
11,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ported to Python (by Ilan Fridman Rojas) from original R implementation by Rasmus Bååth
Step2: The standard bootstrap method
Step5: The Bayesian bootstrap (with a Dirichlet prior)
(See
Step6: Test both the weighted statistic method and the weighted sampling methods
Step11: Define a function to compute confidence intervals and use it
Step12: Below we apply the -Bayesian, but could just as well be classical- bootstrap method to a linear regression by bootstrapping the data.
This is not the only way to apply the bootstrap. One could fix the regressors/covariates $x_i$, as well as the regression coefficients, $\mathbf{\beta}$, of the linear fit to the original dataset, and then bootstrap the residuals $y_i - \hat{y}_i(\mathbf{\beta})$. The former approach is expected to be more conservative (give larger confidence intervals) than the latter, which is more model dependent in that it fixes the coefficients and thereby implicitly assumes the linear model is essentially correct and only the random variation/noise of the data needs to be bootstrapped.
For more on this see the corresponding section in the original work
Step13: From this plot and the confidence interval on the slope we can confidently say that there is no evidence for a correlation between the two variables. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
data = pd.read_csv('american_presidents.csv', header=0, index_col=None)
data
data.describe()
data.plot(x='order',y='height_cm', color='blue')
data.plot('order', kind='hist', color='blue')
import random
import numpy.random as npr
Explanation: Ported to Python (by Ilan Fridman Rojas) from original R implementation by Rasmus Bååth:
http://www.sumsar.net/blog/2015/07/easy-bayesian-bootstrap-in-r/
End of explanation
def bootstrap(data, num_samples, statistic, alpha):
"""Returns the results from num_samples bootstrap samples for an input test statistic and its 100*(1-alpha)% confidence level interval."""
# Generate the indices for the required number of permutations/(resamplings with replacement) required
idx = npr.randint(0, len(data), (num_samples, len(data)))
# Generate the multiple resampled data set from the original one
samples = data[idx]
# Apply the 'statistic' function given to each of the data sets produced by the resampling and order the resulting statistics by increasing size.
stats = np.sort(statistic(samples, 1))
stat = stats.mean()
# Return the value of the computed statistic at the upper and lower percentiles specified by the alpha parameter given. These are, by definition, the boundaries of the Confidence Interval for that value of alpha. E.g. alpha=0.05 ==> CI 95%
low_ci = stats[int((alpha / 2.0) * num_samples)]
high_ci = stats[int((1 - alpha / 2.0) * num_samples)]
#sd = np.std(stats)
# To include Bessel's correction for unbiased standard deviation:
sd = np.std(stats, ddof=1)
# or manually:
# sd = np.sqrt(len(data) / (len(data) - 1)) * np.std(stats)
return stat, sd, low_ci, high_ci
Explanation: The standard bootstrap method
End of explanation
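# A quick usage sketch of the classical bootstrap defined above (this call is an
# illustration added here, not part of the original notebook).
heights = data['height_cm'].values
boot_mean, boot_sd, boot_low, boot_high = bootstrap(heights, 10000, np.mean, 0.05)
print("Bootstrap mean of the mean: {0:.4g}, 95% CI: [{1:.4g}, {2:.4g}]".format(boot_mean, boot_low, boot_high))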
def bayes_bstrp(data, statistic, nbstrp, samplesize):
"""Implements the Bayesian bootstrap method."""
def Dirichlet_sample(m,n):
"""Returns a matrix of values drawn from a Dirichlet distribution with parameters = 1.
'm' rows of values, with 'n' Dirichlet draws in each one."""
# Draw from Gamma distribution
Dirichlet_params = np.ones(m*n) # Set Dirichlet distribution parameters
# https://en.wikipedia.org/wiki/Dirichlet_distribution#Gamma_distribution
Dirichlet_weights = np.asarray([random.gammavariate(a,1) for a in Dirichlet_params])
Dirichlet_weights = Dirichlet_weights.reshape(m,n) # Fold them (row by row) into a matrix
row_sums = Dirichlet_weights.sum(axis=1)
Dirichlet_weights = Dirichlet_weights / row_sums[:, np.newaxis] # Reweight each row to be normalised to 1
return Dirichlet_weights
Dirich_wgts_matrix = Dirichlet_sample(nbstrp, data.shape[0]) #Generate sample of Dirichlet weights
# If statistic can be directly computed using the weights (such as the mean), do this since it will be faster.
if statistic==np.mean or statistic==np.average:
results = np.asarray([np.average(data, weights=Dirich_wgts_matrix[i]) for i in xrange(nbstrp)])
return results
# Otherwise resort to sampling according to the Dirichlet weights and computing the statistic
else:
results = np.zeros(nbstrp)
for i in xrange(nbstrp): #Sample from data according to Dirichlet weights
weighted_sample = np.random.choice(data, samplesize, replace=True, p = Dirich_wgts_matrix[i])
results[i] = statistic(weighted_sample) #Compute the statistic for each sample
return results
Explanation: The Bayesian bootstrap (with a Dirichlet prior)
(See:
http://sumsar.net/blog/2015/04/the-non-parametric-bootstrap-as-a-bayesian-model/
and
http://projecteuclid.org/euclid.aos/1176345338
)
End of explanation
height_data = data['height_cm'].values
posterior_mean = bayes_bstrp(height_data, np.mean, nbstrp=10000, samplesize=1000)
print posterior_mean
posterior_median = bayes_bstrp(height_data, np.median, nbstrp=10000, samplesize=1000)
print posterior_median
Explanation: Test both the weighted statistic method and the weighted sampling methods
End of explanation
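# Illustrative aside (added): for the mean, weighting the data directly and resampling
# according to the same weights should agree closely. The single Dirichlet draw below is
# generated with numpy for brevity.
w = np.random.dirichlet(np.ones(len(height_data)))
print("Weighted statistic: {0:.4g}".format(np.average(height_data, weights=w)))
print("Weighted sampling: {0:.4g}".format(np.random.choice(height_data, 100000, replace=True, p=w).mean()))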
def CI(sample, alpha=0.05):
"""Returns the 100*(1-alpha)% confidence level interval for a test statistic computed on a bootstrap sample."""
sample.sort()
num_samples = sample.shape[0]
low_ci = sample[int((alpha / 2.0) * num_samples)]
high_ci = sample[int((1 - alpha / 2.0) * num_samples)]
return [low_ci, high_ci]
meanCI = CI(posterior_mean, alpha=0.05)
print "The mean of the posterior is:\t{0:.4g}".format(posterior_mean.mean())
print "With confidence interval:\t[{0:.4g}, {1:.4g}]".format(meanCI[0],meanCI[1])
#print posterior_median.mean(), CI(posterior_median)
fig,ax =plt.subplots(2,1, sharex=True)
ax[0].hist(height_data, color='blue')
ax[0].set_xlabel('Heights of American Presidents (in cm)')
ax[0].set_ylabel('Frequency')
ax[1].hist(posterior_mean, color='blue')
ax[1].set_xlabel('Bayesian Bootstrap posterior of the mean (95% CI in red)')
ax[1].set_ylabel('Frequency')
ax[1].plot([meanCI[0], meanCI[1]], [0, 0], 'r', linewidth=8)
plt.show()
from scipy import stats
x = data['order'].values
y = data['height_cm'].values
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
print slope
print intercept
def bayes_bstrp1(data, statistic, nbstrp, samplesize):
"""Implements the Bayesian bootstrap method.
Input can be a 1D Numpy array, or for test statistics of two variables: a Pandas DataFrame with two columns: x,y"""
def Dirichlet_sample(m,n):
"""Returns a matrix of values drawn from a Dirichlet distribution with parameters = 1.
'm' rows of values, with 'n' Dirichlet draws in each one."""
Dirichlet_params = np.ones(m*n) # Set Dirichlet distribution parameters
# https://en.wikipedia.org/wiki/Dirichlet_distribution#Gamma_distribution
Dirichlet_weights = np.asarray([random.gammavariate(a,1) for a in Dirichlet_params]) # Draw from Gamma distrib
Dirichlet_weights = Dirichlet_weights.reshape(m,n) # Fold them (row by row) into a matrix
row_sums = Dirichlet_weights.sum(axis=1)
Dirichlet_weights = Dirichlet_weights / row_sums[:, np.newaxis] # Reweight each row to be normalised to 1
return Dirichlet_weights
Dirich_wgts_matrix = Dirichlet_sample(nbstrp, data.shape[0]) #Generate sample of Dirichlet weights
if data.ndim==1:
# If statistic can be directly computed using the weights (such as the mean), do this since it will be faster.
if statistic==np.mean or statistic==np.average:
results = np.asarray([np.average(data, weights=Dirich_wgts_matrix[i]) for i in xrange(nbstrp)])
return results
# Otherwise resort to sampling according to the Dirichlet weights and computing the statistic
else:
results = np.zeros(nbstrp)
for i in xrange(nbstrp): #Sample from data according to Dirichlet weights
weighted_sample = np.random.choice(data, samplesize, replace=True, p = Dirich_wgts_matrix[i])
results[i] = statistic(weighted_sample) #Compute the statistic for each sample
return results
elif data.ndim>=2:
# If statistic can be directly computed using the weights (such as the mean), do this since it will be faster.
if statistic==np.mean or statistic==np.average:
results = np.asarray([np.average(data[data.columns[1]].values, weights=Dirich_wgts_matrix[i])
for i in xrange(nbstrp)])
return results
# Otherwise resort to sampling according to the Dirichlet weights and computing the statistic
else:
index_sample=np.zeros((nbstrp,samplesize))
results = []
for i in xrange(nbstrp): #Sample from data according to Dirichlet weights
# Now instead of sampling data points directly, we sample over their index (i.e. by row number)
# which is exactly equivalent, but which preserves the x,y pairings during the sampling
index_sample[i,:] = np.random.choice(np.arange(data.shape[0]), samplesize, replace=True,
p = Dirich_wgts_matrix[i])
# We index from the DataFrame this way because Pandas does not support slicing like this
# http://stackoverflow.com/questions/23686561/slice-a-pandas-dataframe-by-an-array-of-indices-and-column-names
results.append(statistic(data.values[index_sample[i].astype(int),0],
data.values[index_sample[i].astype(int),1]))
return np.array(results)
posterior_mean1 = bayes_bstrp1(height_data, np.mean, nbstrp=10000, samplesize=1000)
print posterior_mean1
posterior_median1 = bayes_bstrp(height_data, np.median, nbstrp=10000, samplesize=1000)
print posterior_median1
# Copy the columns containing x and y (in that order) into a new Pandas DataFrame, to be used for Bayesian bootstrap
test_df = data[['order','height_cm']]
linregres_posterior = bayes_bstrp1(test_df, stats.linregress, nbstrp=100, samplesize=60)
print linregres_posterior
# These 5 values are: slope, intercept, R, p_value, std_err
Explanation: Define a function to compute confidence intervals and use it
End of explanation
slopes = linregres_posterior[:,0]
slopemean = slopes.mean()
slopeCI = CI(slopes)
print "The mean slope and its 95% CI are:\t{0:.4g}\t\t[{1:.4g}, {2:.4g}]".format(slopemean,slopeCI[0],slopeCI[1])
intercepts = linregres_posterior[:,1]
interceptmean = intercepts.mean()
interceptCI = CI(intercepts)
print "The mean intercept and its 95% CI are:\t{0:.4g}\t\t[{1:.4g}, {2:.4g}]".format(interceptmean,interceptCI[0],
interceptCI[1])
# Plot the data points
plt.scatter(data['order'].values, data['height_cm'].values)
# The linear function we will use to plot fit coefficients
def linfit(x,slope,intercept):
return slope*x + intercept
x = data['order'].values
y = data['height_cm'].values
# Choose linear regressions for 10 of the bootstrap samples at random and plot them
ids = npr.randint(0, linregres_posterior.shape[0], 10)
otherfits = [linfit(x, linregres_posterior[i,0], linregres_posterior[i,1]) for i in ids]
for i in otherfits:
plt.plot(x, i, color='#BBBBBB')
# The fit to the original data
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
plt.plot(x, linfit(x, slope, intercept), color='black', linewidth=2)
plt.xlim(0,x.max()+1)
plt.show()
Explanation: Below we apply the -Bayesian, but could just as well be classical- bootstrap method to a linear regression by bootstrapping the data.
This is not the only way to apply the bootstrap. One could fix the regressors/covariates $x_i$, as well as the regression coefficients, $\mathbf{\beta}$, of the linear fit to the original dataset, and then bootstrap the residuals $y_i - \hat{y}_i(\mathbf{\beta})$. The former approach is expected to be more conservative (give larger confidence intervals) than the latter, which is more model dependent in that it fixes the coefficients and thereby implicitly assumes the linear model is essentially correct and only the random variation/noise of the data needs to be bootstrapped.
For more on this see the corresponding section in the original work:
https://books.google.co.uk/books?id=gLlpIUxRntoC&lpg=PA113&ots=A8xyY7Lcz3&dq=regression%20bootstrap%20data%20or%20residuals&pg=PA113#v=onepage&q=regression%20bootstrap%20data%20or%20residuals&f=false
For a short, concise and clear explanation of the pros and cons of each way of bootstrapping regression models see:
http://www.stat.cmu.edu/~cshalizi/uADA/13/lectures/which-bootstrap-when.pdf
End of explanation
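# A minimal sketch of the alternative mentioned above, bootstrapping residuals rather than
# (x,y) pairs, added for illustration only. It keeps x and the fitted coefficients fixed and
# resamples the residuals with replacement (a classical, not Bayesian, resampling).
residuals = y - linfit(x, slope, intercept)
resampled_slopes = np.array([stats.linregress(x, linfit(x, slope, intercept) + np.random.choice(residuals, len(residuals), replace=True))[0] for _ in xrange(1000)])
print("Residual-bootstrap slope 95% CI: [{0:.4g}, {1:.4g}]".format(*CI(resampled_slopes)))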
from statsmodels.nonparametric.smoothers_lowess import lowess
# for some odd reason this loess function takes the y values as the first argument and x as second
test_df = data[['height_cm', 'order']]
posterior_loess = bayes_bstrp1(test_df, lowess, nbstrp=100, samplesize=60)
print posterior_loess
x = data['order'].values
y = data['height_cm'].values
# To see all the loess curves found:
#for i in posterior_loess:
# plt.plot(i[:,0], i[:,1], color='#BBBBBB')
ids = npr.randint(0, posterior_loess.shape[0], 20)
for i in ids:
plt.plot(posterior_loess[i,:,0], posterior_loess[i,:,1], color='#BBBBBB')
plt.scatter(x, y)
original_loess = lowess(y, x)
plt.plot(original_loess[:,0], original_loess[:,1], color='black', linewidth=2)
plt.xlim(0,x.max()+1)
plt.show()
Explanation: From this plot and the confidence interval on the slope we can confidently say that there is no evidence for a correlation between the two variables.
End of explanation |
11,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MRI
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:19
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
11,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of how to find peaks in a synthetic image
Create a set of 2D Gaussians
Find the center of the Gaussian to integer accuracy
Optimize the position using Gaussian fitting for each peak
Step1: Create a sample 2D Image
Gaussians are placed on a grid with some random small offsets
the variable coords are the known positions
these will not be known in a real experiment
Step2: Find the center pixel of each peak
uses ncempy.algo.peakFind.peakFind2D()
These will be integer values of the max peak positions.
Gaussian fitting will be used to find the small random offsets
See end of notebook for an explanation as to how this works.
Step3: Use Gaussian fitting for sub-pixel fitting
Each peak is fit to a 2D Gaussian function
The average of the sigma values is printed
Step4: Plot to compare the known and fitted coordinates
coords are the expected positions we used to generate the image
coords_found are the peaks found with full pixel precision
optPeaks are the optimized peak positions using Gaussian fitting
Zoom in to peaks to see how well the fit worked
Step5: Find the error in the fitting
Gaussian fitting can be heavily influenced by the tails
Some error is expected.
Step6: How does peakFind2D work with the Roll?
A very confusing point is the indexing used in meshgrid
If you use indexing='ij' then the peak position needs to be plotted in matplotlib backwards (row,col)
If you change the meshgrid indexing='xy' then this issue is less confusing BUT....
Default indexing used to be 'ij' when I wrote this (and lots of other) code. So, now I stick with that convention.
Step7: Create a single 2D Gaussian peak
Step8: Roll the array 1 pixel in each direction
Compare the original and the rolled version
The peak will be moved by 1 pixel in each direction in each case
Here I ignore the next nearest neighbors (-1,-1) for simplicity. (peakFind.doubleRoll2D does not ignore these).
The peak pixel will always be larger than its rolled neighbor in each element-by-element comparison
Step9: Compare each rolled image
use logical and to find the pixel which was highest in every comparison
The local peak will be the only one left
Step10: Find the peak using where
We have a bool array above.
np.where will return the elements of the True values which correspond to the peak position(s) | Python Code:
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
# Import these from ncempy.algo
from ncempy.algo import gaussND
from ncempy.algo import peakFind
Explanation: Example of how to find peaks in a synthetic image
Create a set of 2D Gaussians
Find the center of the Gaussian to integer accuracy
Optimize the position using Gaussian fitting for each peak
End of explanation
# Create coordinates with a random offset
coords = peakFind.lattice2D_2((1, 0), (0, 1), 2, 2, (0, 0), (5, 5))
coords += np.random.rand(coords.shape[0], coords.shape[1]) / 2.5
coords = np.array(coords)*30 + (100, 100)
print('Coords shape = {}'.format(coords.shape))
# Create an image with the coordinates as gaussians
kernel_shape = (11, 11)
simIm = peakFind.peaksToImage(coords, (512, 512), (1.75, 2.75), kernel_shape)
fg, ax = plt.subplots(1, 2, sharex=True,sharey=True)
ax[0].imshow(simIm)
ax[1].imshow(simIm)
ax[1].scatter(coords[:,1], coords[:,0],c='r',marker='.')
fg.tight_layout()
Explanation: Create a sample 2D Image
Gaussians are placed on a grid with some random small offsets
the variable coords are the known positions
these will not be known in a real experiment
End of explanation
coords_found = peakFind.peakFind2D(simIm, 0.5)
fg, ax = plt.subplots(1,1)
ax.imshow(simIm)
_ = ax.scatter(coords_found[:,1],coords_found[:,0],c='r',marker='x')
Explanation: Find the center pixel of each peak
uses ncempy.algo.peakFind.peakFind2D()
These will be integer values of the max peak positions.
Gaussian fitting will be used to find the small random offsets
See end of notebook for an explanation as to how this works.
End of explanation
optPeaks, optI, fittingValues = peakFind.fit_peaks_gauss2D(simIm, coords_found, 5,
(1.5, 2.5), ((-1.5, -1.5,0,0),(1.5,1.5,3,3)))
# Plot the gaussian widths
f2, ax2 = plt.subplots(1, 2)
ax2[0].plot(optPeaks[:, 2],'go')
ax2[0].plot(optPeaks[:, 3],'ro')
ax2[0].set(title='Gaussian fit sigmas',xlabel='index sorted by peak intensity')
ax2[0].legend(labels=['width 0', 'width 1'])
stdMeans = np.mean(optPeaks[:, 2:4], axis=0)
# Print out the average of the fitted sigmas
print('Sigma means [s_0, s_1]: {}'.format(stdMeans))
# Plot the fitted center (relative from the intensity peak)
ax2[1].plot(fittingValues[:, 0], 'o')
ax2[1].plot(fittingValues[:, 1], 'o')
ax2[1].set(title="Gaussian fit relative centers", xlabel='index sorted by peak intensity')
_ = ax2[1].legend(labels=['center 0', 'center 1'])
ax2[1].set(ylim=(-0.5, 0.5))
ax2[1].set(yticks=(-0.5, -0.25, 0, 0.25, 0.5))
fg.tight_layout()
Explanation: Use Gaussian fitting for sub-pixel fitting
Each peak is fit to a 2D Gaussian function
The average of the sigma values is printed
End of explanation
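For orientation, here is a minimal sketch of the elliptical 2D Gaussian model form that such a fit assumes; this is an illustration only, and the exact parameterization (argument order, normalization) of gaussND.gauss2D may differ.
def gauss2D_sketch(x, y, x0, y0, sigma_x, sigma_y, amplitude=1.0):
    # Illustrative elliptical 2D Gaussian centred at (x0, y0); not the library implementation.
    return amplitude * np.exp(-((x - x0)**2 / (2.0 * sigma_x**2)
                                + (y - y0)**2 / (2.0 * sigma_y**2)))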
fg, ax = plt.subplots(1,1)
ax.imshow(simIm)
ax.scatter(coords_found[:,1], coords_found[:,0],c='b',marker='o')
ax.scatter(optPeaks[:,1], optPeaks[:,0],c='r',marker='x')
ax.scatter(coords[:,1], coords[:,0],c='k',marker='+')
_ = ax.legend(['integer', 'optimized', 'expected'])
Explanation: Plot to compare the known and fitted coordinates
coords are the expected positions we used to generate the image
coords_found are the peaks found with full pixel precision
optPeaks are the optimized peak positions using Gaussian fitting
Zoom in to peaks to see how well the fit worked
End of explanation
# Plot the RMS error for each fitted peak
# First sort each set of coordinates to match them
err = []
for a, b in zip(coords[np.argsort(coords[:,0]),:], optPeaks[np.argsort(optPeaks[:,0]),0:2]):
    err.append(np.sqrt(np.sum((a - b)**2)))  # Euclidean distance between matched coordinates
fg, ax = plt.subplots(1, 1)
ax.plot(err)
_ = ax.set(xlabel='coordinate', ylabel='RMS error')
Explanation: Find the error in the fitting
Gaussian fitting can be heavily influenced by the tails
Some error is expected.
End of explanation
# Copy doubleRoll from ncempy.algo.peakFind
# to look at the algorithm
def doubleRoll(image,vec):
return np.roll(np.roll(image, vec[0], axis=0), vec[1], axis=1)
Explanation: How does peakFind2D work with the Roll?
A very confusing point is the indexing used in meshgrid
If you use indexing='ij' then the peak position needs to be plotted in matplotlib backwards (row,col)
If you change the meshgrid indexing='xy' then this issue is less confusing BUT....
Default indexing used to be 'ij' when I wrote this (and lots of other) code. So, now I stick with that convention.
End of explanation
known_peak = [6, 5]
YY, XX = np.meshgrid(range(0,12),range(0,12),indexing='ij')
gg = gaussND.gauss2D(XX,YY,known_peak[1], known_peak[0],1,1)
gg = np.round(gg,decimals=3)
plt.figure()
plt.imshow(gg)
Explanation: Create a single 2D Gaussian peak
End of explanation
# Compare only nearest neighbors
roll01 = gg > doubleRoll(gg, [0, 1])
roll10 = gg > doubleRoll(gg, [1, 0])
roll11 = gg > doubleRoll(gg, [1, 1])
roll_1_1 = gg > doubleRoll(gg, [-1, -1])
fg,ax = plt.subplots(2,2)
ax[0,0].imshow(roll01)
ax[0,1].imshow(roll10)
ax[1,0].imshow(roll11)
ax[1,1].imshow(roll_1_1)
for aa in ax.ravel():
aa.scatter(known_peak[1], known_peak[0])
ax[0,0].legend(['known peak position'])
Explanation: Roll the array 1 pixel in each direction
Compare the original and the rolled version
The peak will be moved by 1 pixel in each direction in each case
Here I ignore the next nearest neighbors (-1,-1) for simplicity. (peakFind.doubleRoll2D does not ignore these).
The peak will always be larger than the element-by-element comparison in each roll
End of explanation
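As a quick standalone illustration of what np.roll does (independent of the peak-finding code above), rolling shifts elements and wraps them around the array edges:
a = np.array([1, 2, 3, 4])
print(np.roll(a, 1))   # [4 1 2 3] -- shift by one, wrapping around
print(np.roll(a, -1))  # [2 3 4 1] -- shift the other way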
final = roll01 & roll10 & roll11 & roll_1_1
fg,ax = plt.subplots(1,1)
ax.imshow(final)
ax.scatter(known_peak[1],known_peak[0])
Explanation: Compare each rolled image
use logical and to find the pixel which was highest in every comparison
The local peak will be the only one left
End of explanation
peak_position = np.array(np.where(final))
print(peak_position)
Explanation: Find the peak using where
We have a bool array above.
np.where will return the indices of the True values, which correspond to the peak position(s)
End of explanation |
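A tiny standalone example of np.where on a boolean array, for illustration:
mask = np.array([[False, False, False],
                 [False, True, False]])
rows, cols = np.where(mask)
print(rows, cols)  # [1] [1] -- row and column indices of the single True entry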
11,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'snu', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: SNU
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
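As a purely illustrative (hypothetical) example, a single choice from the list above would be recorded with the call shown in the template, e.g.:
# DOC.set_value("whole atmosphere")  # hypothetical example value -- replace with the choice(s) that apply to this model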
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
11,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Steady-state simulation of organic light emitting cell
This is an example of a steady-state simulation of a light-emitting electrochemical cell.
It attempts to reproduce the reference. Exact agreement is not achieved, probably because of missing details regarding the electrode model. As in the reference, a temperature of 2500 K is assumed.
Step1: Model and parameters
Step2: Results
Distribution of cations and anions
Step3: Distribution of the electric field
Step4: Distribution of electrons and holes
Step5: Distribution of holes near to contact
Step6: Comparison between drift and diffusion parts of the current | Python Code:
from oedes.fvm import mesh1d
from oedes import progressbar, testing, init_notebook, models, context
init_notebook()
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
Explanation: Steady-state simulation of organic light emitting cell
This is an example of a steady-state simulation of a light-emitting electrochemical cell.
It attempts to reproduce the reference. Exact agreement is not achieved, probably because of missing details regarding the electrode model. As in the reference, a temperature of 2500 K is assumed.
End of explanation
params = {'T': 2500.,
'electron.mu': 1e-6,
'electron.energy': 0.,
'electron.N0': 5e26,
'hole.mu': 1e-6,
'hole.energy': -5.,
'hole.N0': 5e26,
'electrode0.workfunction': 2.5,
'electrode0.voltage': 2.,
'electrode1.workfunction': 2.5,
'electrode1.voltage': 0.,
'cation.mu': 1e-6,
'anion.mu': 1e-6,
'npi': 2e43,
'epsilon_r': 3.
}
L = 350e-9
mesh = mesh1d(L=L, epsilon_r=3.4)
cinit = 1.25e25
model = models.BaseModel()
models.std.electronic_device(model, mesh, 'pn')
cation, anion, ic = models.std.add_ions(model, mesh, zc=1, za=-1)
model.setUp()
xinit=ic(cinit=1e24)
c=context(model,x=xinit)
c.transient(params,1,1e-10, reltol=1, abstol=1e15, relfail=20.)
o = c.output()
m = mesh
Explanation: Model and parameters
End of explanation
plt.plot(m.cells['center'] / L - 0.5, o['cation.c'], '.-', label='cations')
plt.plot(m.cells['center'] / L - 0.5, o['anion.c'], '.-', label='anions')
testing.store(o['cation.c'], rtol=1e-6)
testing.store(o['anion.c'], rtol=1e-6)
plt.yscale('log')
plt.legend(loc=0, frameon=False)
plt.ylabel('carrier density [$m^{-3}$]')
plt.xlabel('distance [reduced units]')
plt.xlim([-0.5, 0.5])
plt.ylim([1e23, 1e27])
Explanation: Results
Distribution of cations and anions
End of explanation
testing.store(o['E'], rtol=1e-6)
plt.plot(m.faces['center'] / L - 0.5, o['E'], '.-')
plt.yscale('log')
plt.ylim([1e4, 1e10])
plt.xlim([-0.5, 0.5])
plt.xlabel('distance [reduced units]')
plt.ylabel('electric field [$Vm^{-1}$]')
Explanation: Distribution of the electric field
End of explanation
plt.plot(m.cells['center'] / L - 0.5, o['hole.c'], '.-', label='holes')
plt.plot(m.cells['center'] / L - 0.5, o['electron.c'], '.-', label='electrons')
plt.plot(m.cells['center'] / L - 0.5, o['R'] * 0.5e-7, '.-',
         label='recombination zone')
testing.store(o['hole.c'], rtol=1e-6)
testing.store(o['electron.c'], rtol=1e-6)
testing.store(o['R'], rtol=1e-6)
plt.xlabel('distance [reduced units]')
plt.ylabel('carrier density [$m^{-3}$]')
plt.xlim([-0.5, 0.5])
plt.legend(loc=0, frameon=False)
Explanation: Distribution of electrons and holes
End of explanation
plt.plot(m.cells['center'] / L - 0.5, o['hole.c'], '.-', label='holes')
plt.xlabel('distance [reduced units]')
plt.ylabel('carrier density [$m^{-3}$]')
plt.xlim([-0.505, -0.4])
plt.legend(loc=0, frameon=False)
Explanation: Distribution of holes near to contact
End of explanation
testing.store(o['hole.jdrift'], rtol=1e-6)
testing.store(o['hole.jdiff'], rtol=1e-6)
testing.store(o['electron.jdrift'], rtol=1e-6)
testing.store(o['electron.jdiff'], rtol=1e-6)
plt.plot(m.faces['center'] / L - 0.5,
         o['hole.jdrift'] / np.amax(o['hole.jdrift']),
         '.-', label='$j^p_{drift}$')
plt.plot(m.faces['center'] / L - 0.5,
         o['hole.jdiff'] / np.amax(o['hole.jdrift']),
         '.-', label='$j^p_{diff}$')
plt.xlim([-0.6, 0.1])
plt.legend(loc=0, frameon=False)
plt.xlabel('distance [reduced units]')
plt.ylabel('normalized current density')
Explanation: Comparison between drift and diffusion parts of the current
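As a reminder of the standard drift-diffusion split (generic textbook form; the exact discretization used by oedes may differ), the hole current density is
$$ j_p = \underbrace{q\,\mu_p\,p\,E}_{j^p_{drift}} \;\underbrace{-\,q\,D_p\,\frac{\partial p}{\partial x}}_{j^p_{diff}}, $$
with $D_p = \mu_p k_B T / q$ if the Einstein relation is assumed.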
End of explanation |
11,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 3
Imports
Step2: Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.
Step3: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
Step4: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
Step5: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
Step6: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation
Step7: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
Explanation: Numpy Exercise 3
Imports
End of explanation
def brownian(maxt, n):
    """Return one realization of a Brownian (Wiener) process with n steps and a max time of t."""
t = np.linspace(0.0,maxt,n)
h = t[1]-t[0]
Z = np.random.normal(0.0,1.0,n-1)
dW = np.sqrt(h)*Z
W = np.zeros(n)
W[1:] = dW.cumsum()
return t, W
Explanation: Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.
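In particular, a Wiener process has independent Gaussian increments, $W(t+h) - W(t) \sim \mathcal{N}(0, h)$, which is why the implementation draws $dW = \sqrt{h}\,Z$ with $Z \sim \mathcal{N}(0,1)$ and accumulates the increments with a cumulative sum.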
End of explanation
#t, w = numpy.empty(1000)
t, W = brownian(1.0, 1000)
assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype==np.dtype(float)
assert W.dtype==np.dtype(float)
assert len(t)==len(W)==1000
Explanation: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
End of explanation
plt.plot(t, W, "bo")
plt.ylabel('Position')
plt.xlabel('Time')
assert True # this is for grading
Explanation: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
End of explanation
dW = np.diff(W)
mean = np.mean(dW)
stdev = np.std(dW)
print mean
print stdev
assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float)
Explanation: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
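Since each increment is $dW = \sqrt{h}\,Z$ with $Z \sim \mathcal{N}(0,1)$ and $h = 1/999$ here, the sample mean should be close to $0$ and the sample standard deviation close to $\sqrt{h} \approx 0.032$.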
End of explanation
def geo_brownian(t, W, X0, mu, sigma):
return X0*np.exp((mu-(sigma**2)/2)*t+sigma*W)
print geo_brownian(t,W,2,mean,stdev)
assert True # leave this for grading
Explanation: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:
$$
X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))}
$$
Use Numpy ufuncs and no loops in your function.
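The $-\sigma^2/2$ term is the Itô correction; with it, $\mathbb{E}[X(t)] = X_0 e^{\mu t}$.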
End of explanation
X = geo_brownian(t,W,1.0,0.5,0.3)
plt.plot(t,X, 'ro')
plt.ylabel('X(t)')
plt.xlabel('time')
assert True # leave this for grading
Explanation: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.
End of explanation |
11,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note
Step1: Suppose we want to get from A to B. Where can we go from the start state, A?
Step2: We see that from A we can get to any of the three cities ['Z', 'T', 'S']. Which should we choose? We don't know. That's the whole point of search
Step3: A couple of things to note
Step4: Now let's try a different kind of problem that can be solved with the same search function.
Word Ladders Problem
A word ladder problem is this
Step5: We can assign WORDS to be the set of all the words in this file
Step6: And define neighboring_words to return the set of all words that are a one-letter change away from a given word
Step7: For example
Step8: Now we can create word_neighbors as a dict of {word
Step9: Now the breadth_first function can be used to solve a word ladder problem
Step10: More General Search Algorithms
Now we'll embellish the breadth_first algorithm to make a family of search algorithms with more capabilities
Step11: Next is uniform_cost_search, in which each step can have a different cost, and we still consider first one os the states with minimum cost so far.
Step12: Finally, astar_search in which the cost includes an estimate of the distance to the goal as well as the distance travelled so far.
Step14: Search Tree Nodes
The solution to a search problem is now a linked list of Nodes, where each Node
includes a state and the path_cost of getting to the state. In addition, for every Node except for the first (root) Node, there is a previous Node (indicating the state that led to this Node) and an action (indicating the action taken to get here).
Step16: Frontiers
A frontier is a collection of Nodes that acts like both a Queue and a Set. A frontier, f, supports these operations
Step19: Search Problems
Problem is the abstract class for all search problems. You can define your own class of problems as a subclass of Problem. You will need to override the actions and result method to describe how your problem works. You will also have to either override is_goal or pass a collection of goal states to the initialization method. If actions have different costs, you should override the step_cost method.
Step21: Two Location Vacuum World
Step26: Water Pouring Problem
Here is another problem domain, to show you how to define one. The idea is that we have a number of water jugs and a water tap and the goal is to measure out a specific amount of water (in, say, ounces or liters). You can completely fill or empty a jug, but because the jugs don't have markings on them, you can't partially fill them with a specific amount. You can, however, pour one jug into another, stopping when the second is full or the first is empty.
Step27: Visualization Output
Step29: Random Grid
An environment where you can move in any of 4 directions, unless there is an obstacle there.
Step33: Finding a hard PourProblem
What solvable two-jug PourProblem requires the most steps? We can define the hardness as the number of steps, and then iterate over all PourProblems with capacities up to size M, keeping the hardest one. | Python Code:
romania = {
'A': ['Z', 'T', 'S'],
'B': ['F', 'P', 'G', 'U'],
'C': ['D', 'R', 'P'],
'D': ['M', 'C'],
'E': ['H'],
'F': ['S', 'B'],
'G': ['B'],
'H': ['U', 'E'],
'I': ['N', 'V'],
'L': ['T', 'M'],
'M': ['L', 'D'],
'N': ['I'],
'O': ['Z', 'S'],
'P': ['R', 'C', 'B'],
'R': ['S', 'C', 'P'],
'S': ['A', 'O', 'F', 'R'],
'T': ['A', 'L'],
'U': ['B', 'V', 'H'],
'V': ['U', 'I'],
'Z': ['O', 'A']}
Explanation: Note: This is not yet ready, but shows the direction I'm leaning in for Fourth Edition Search.
State-Space Search
This notebook describes several state-space search algorithms, and how they can be used to solve a variety of problems. We start with a simple algorithm and a simple domain: finding a route from city to city. Later we will explore other algorithms and domains.
The Route-Finding Domain
Like all state-space search problems, in a route-finding problem you will be given:
- A start state (for example, 'A' for the city Arad).
- A goal state (for example, 'B' for the city Bucharest).
- Actions that can change state (for example, driving from 'A' to 'S').
You will be asked to find:
- A path from the start state, through intermediate states, to the goal state.
We'll use this map:
<img src="http://robotics.cs.tamu.edu/dshell/cs625/images/map.jpg" height="366" width="603">
A state-space search problem can be represented by a graph, where the vertexes of the graph are the states of the problem (in this case, cities) and the edges of the graph are the actions (in this case, driving along a road).
We'll represent a city by its single initial letter.
We'll represent the graph of connections as a dict that maps each city to a list of the neighboring cities (connected by a road). For now we don't explicitly represent the actions, nor the distances
between cities.
End of explanation
romania['A']
Explanation: Suppose we want to get from A to B. Where can we go from the start state, A?
End of explanation
from collections import deque # Doubly-ended queue: pop from left, append to right.
def breadth_first(start, goal, neighbors):
"Find a shortest sequence of states from start to the goal."
frontier = deque([start]) # A queue of states
previous = {start: None} # start has no previous state; other states will
while frontier:
s = frontier.popleft()
if s == goal:
return path(previous, s)
for s2 in neighbors[s]:
if s2 not in previous:
frontier.append(s2)
previous[s2] = s
def path(previous, s):
"Return a list of states that lead to state s, according to the previous dict."
return [] if (s is None) else path(previous, previous[s]) + [s]
Explanation: We see that from A we can get to any of the three cities ['Z', 'T', 'S']. Which should we choose? We don't know. That's the whole point of search: we don't know which immediate action is best, so we'll have to explore, until we find a path that leads to the goal.
How do we explore? We'll start with a simple algorithm that will get us from A to B. We'll keep a frontier—a collection of not-yet-explored states—and expand the frontier outward until it reaches the goal. To be more precise:
Initially, the only state in the frontier is the start state, 'A'.
Until we reach the goal, or run out of states in the frontier to explore, do the following:
Remove the first state from the frontier. Call it s.
If s is the goal, we're done. Return the path to s.
Otherwise, consider all the neighboring states of s. For each one:
If we have not previously explored the state, add it to the end of the frontier.
Also keep track of the previous state that led to this new neighboring state; we'll need this to reconstruct the path to the goal, and to keep us from re-visiting previously explored states.
A Simple Search Algorithm: breadth_first
The function breadth_first implements this strategy:
End of explanation
breadth_first('A', 'B', romania)
breadth_first('L', 'N', romania)
breadth_first('N', 'L', romania)
breadth_first('E', 'E', romania)
Explanation: A couple of things to note:
We always add new states to the end of the frontier queue. That means that all the states that are adjacent to the start state will come first in the queue, then all the states that are two steps away, then three steps, etc.
That's what we mean by breadth-first search.
We recover the path to an end state by following the trail of previous[end] pointers, all the way back to start.
The dict previous is a map of {state: previous_state}.
When we finally get an s that is the goal state, we know we have found a shortest path, because any other state in the queue must correspond to a path that is as long or longer.
Note that previous contains all the states that are currently in frontier as well as all the states that were in frontier in the past.
If no path to the goal is found, then breadth_first returns None. If a path is found, it returns the sequence of states on the path.
Some examples:
End of explanation
from search import *
sgb_words = open_data("EN-text/sgb-words.txt")
Explanation: Now let's try a different kind of problem that can be solved with the same search function.
Word Ladders Problem
A word ladder problem is this: given a start word and a goal word, find the shortest way to transform the start word into the goal word by changing one letter at a time, such that each change results in a word. For example starting with green we can reach grass in 7 steps:
green → greed → treed → trees → tress → cress → crass → grass
We will need a dictionary of words. We'll use 5-letter words from the Stanford GraphBase project for this purpose. Let's get that file from aimadata.
End of explanation
WORDS = set(sgb_words.read().split())
len(WORDS)
Explanation: We can assign WORDS to be the set of all the words in this file:
End of explanation
def neighboring_words(word):
"All words that are one letter away from this word."
neighbors = {word[:i] + c + word[i+1:]
for i in range(len(word))
for c in 'abcdefghijklmnopqrstuvwxyz'
if c != word[i]}
return neighbors & WORDS
Explanation: And define neighboring_words to return the set of all words that are a one-letter change away from a given word:
End of explanation
neighboring_words('hello')
neighboring_words('world')
Explanation: For example:
End of explanation
word_neighbors = {word: neighboring_words(word)
for word in WORDS}
Explanation: Now we can create word_neighbors as a dict of {word: {neighboring_word, ...}}:
End of explanation
breadth_first('green', 'grass', word_neighbors)
breadth_first('smart', 'brain', word_neighbors)
breadth_first('frown', 'smile', word_neighbors)
Explanation: Now the breadth_first function can be used to solve a word ladder problem:
End of explanation
def breadth_first_search(problem):
"Search for goal; paths with least number of steps first."
if problem.is_goal(problem.initial):
return Node(problem.initial)
frontier = FrontierQ(Node(problem.initial), LIFO=False)
explored = set()
while frontier:
node = frontier.pop()
explored.add(node.state)
for action in problem.actions(node.state):
child = node.child(problem, action)
if child.state not in explored and child.state not in frontier:
if problem.is_goal(child.state):
return child
frontier.add(child)
Explanation: More General Search Algorithms
Now we'll embellish the breadth_first algorithm to make a family of search algorithms with more capabilities:
We distinguish between an action and the result of an action.
We allow different measures of the cost of a solution (not just the number of steps in the sequence).
We search through the state space in an order that is more likely to lead to an optimal solution quickly.
Here's how we do these things:
Instead of having a graph of neighboring states, we instead have an object of type Problem. A Problem
has one method, Problem.actions(state) to return a collection of the actions that are allowed in a state,
and another method, Problem.result(state, action) that says what happens when you take an action.
We keep a set, explored of states that have already been explored. We also have a class, Frontier, that makes it efficient to ask if a state is on the frontier.
Each action has a cost associated with it (in fact, the cost can vary with both the state and the action).
The Frontier class acts as a priority queue, allowing the "best" state to be explored next.
We represent a sequence of actions and resulting states as a linked list of Node objects.
The algorithm breadth_first_search is basically the same as breadth_first, but using our new conventions:
End of explanation
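To make these conventions concrete, here is a minimal hypothetical problem (counting from 0 up to a goal number by steps of +1 or +2), written against the Problem, Node and Frontier classes defined below; it is only a sketch for illustration:
class CountProblem(Problem):
    "Reach a goal number from 0 using the actions +1 and +2."
    def actions(self, state): return [1, 2]
    def result(self, state, action): return state + action
# breadth_first_search(CountProblem(initial=0, goals={7})) returns a Node whose
# action_sequence is a shortest mix of +1/+2 steps (four actions for goal 7).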
def uniform_cost_search(problem, costfn=lambda node: node.path_cost):
frontier = FrontierPQ(Node(problem.initial), costfn)
explored = set()
while frontier:
node = frontier.pop()
if problem.is_goal(node.state):
return node
explored.add(node.state)
for action in problem.actions(node.state):
child = node.child(problem, action)
if child.state not in explored and child not in frontier:
frontier.add(child)
            elif child in frontier and child.path_cost < frontier.states[child.state].path_cost:
                # A cheaper path to a state already on the frontier: replace the old node.
                frontier.replace(child)
Explanation: Next is uniform_cost_search, in which each step can have a different cost, and we always expand first a state with minimum path cost so far.
End of explanation
def astar_search(problem, heuristic):
costfn = lambda node: node.path_cost + heuristic(node.state)
return uniform_cost_search(problem, costfn)
Explanation: Finally, astar_search in which the cost includes an estimate of the distance to the goal as well as the distance travelled so far.
End of explanation
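A small usage note (illustrative, not from the original notebook): with the zero heuristic, astar_search behaves exactly like uniform_cost_search, and any problem-specific estimate plugs in as a plain function of the state:
def h_zero(state):
    "Trivial admissible heuristic: estimate 0 for every state."
    return 0
# astar_search(problem, h_zero) expands the same nodes as uniform_cost_search(problem);
# a sharper (but still admissible) heuristic expands fewer.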
class Node(object):
    """A node in a search tree. A search tree is a spanning tree over states.
    A Node contains a state, the previous node in the tree, the action that
    takes us from the previous state to this state, and the path cost to get to
    this state. If a state is arrived at by two paths, then there are two nodes
    with the same state."""
def __init__(self, state, previous=None, action=None, step_cost=1):
"Create a search tree Node, derived from a previous Node by an action."
self.state = state
self.previous = previous
self.action = action
self.path_cost = 0 if previous is None else (previous.path_cost + step_cost)
def __repr__(self): return "<Node {}: {}>".format(self.state, self.path_cost)
def __lt__(self, other): return self.path_cost < other.path_cost
def child(self, problem, action):
"The Node you get by taking an action from this Node."
result = problem.result(self.state, action)
return Node(result, self, action,
problem.step_cost(self.state, action, result))
Explanation: Search Tree Nodes
The solution to a search problem is now a linked list of Nodes, where each Node
includes a state and the path_cost of getting to the state. In addition, for every Node except for the first (root) Node, there is a previous Node (indicating the state that lead to this Node) and an action (indicating the action taken to get here).
End of explanation
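For example (a tiny sketch using the class above), a one-step path is just two linked Nodes:
a = Node('A')
b = Node('B', previous=a, action='A->B', step_cost=5)
# b.path_cost == 5 and b.previous is a; a search returns the last Node of such a chain.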
from collections import OrderedDict
import heapq
class FrontierQ(OrderedDict):
"A Frontier that supports FIFO or LIFO Queue ordering."
def __init__(self, initial, LIFO=False):
        """Initialize Frontier with an initial Node.
        If LIFO is True, pop from the end first; otherwise from front first."""
self.LIFO = LIFO
self.add(initial)
def add(self, node):
"Add a node to the frontier."
self[node.state] = node
def pop(self):
"Remove and return the next Node in the frontier."
(state, node) = self.popitem(self.LIFO)
return node
def replace(self, node):
"Make this node replace the nold node with the same state."
del self[node.state]
self.add(node)
class FrontierPQ:
"A Frontier ordered by a cost function; a Priority Queue."
def __init__(self, initial, costfn=lambda node: node.path_cost):
"Initialize Frontier with an initial Node, and specify a cost function."
self.heap = []
self.states = {}
self.costfn = costfn
self.add(initial)
def add(self, node):
"Add node to the frontier."
cost = self.costfn(node)
heapq.heappush(self.heap, (cost, node))
self.states[node.state] = node
def pop(self):
"Remove and return the Node with minimum cost."
(cost, node) = heapq.heappop(self.heap)
self.states.pop(node.state, None) # remove state
return node
def replace(self, node):
"Make this node replace a previous node with the same state."
if node.state not in self:
raise ValueError('{} not there to replace'.format(node.state))
for (i, (cost, old_node)) in enumerate(self.heap):
if old_node.state == node.state:
self.heap[i] = (self.costfn(node), node)
heapq._siftdown(self.heap, 0, i)
return
def __contains__(self, state): return state in self.states
def __len__(self): return len(self.heap)
Explanation: Frontiers
A frontier is a collection of Nodes that acts like both a Queue and a Set. A frontier, f, supports these operations:
f.add(node): Add a node to the Frontier.
f.pop(): Remove and return the "best" node from the frontier.
f.replace(node): add this node and remove a previous node with the same state.
state in f: Test if some node in the frontier has arrived at state.
f[state]: returns the node corresponding to this state in frontier.
len(f): The number of Nodes in the frontier. When the frontier is empty, f is false.
We provide two kinds of frontiers: One for "regular" queues, either first-in-first-out (for breadth-first search) or last-in-first-out (for depth-first search), and one for priority queues, where you can specify what cost function on nodes you are trying to minimize.
End of explanation
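A short usage sketch of the priority-queue frontier defined above (hypothetical states and costs):
f = FrontierPQ(Node('S'))
f.add(Node('A', Node('S'), 'S->A', 3))
f.add(Node('B', Node('S'), 'S->B', 1))
# f.pop() yields the 'S' node (cost 0), then 'B' (cost 1), then 'A' (cost 3).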
class Problem(object):
    """The abstract class for a search problem."""
def __init__(self, initial=None, goals=(), **additional_keywords):
        """Provide an initial state and optional goal states.
        A subclass can have additional keyword arguments."""
self.initial = initial # The initial state of the problem.
        self.goals = goals      # A collection of possible goal states.
self.__dict__.update(**additional_keywords)
def actions(self, state):
"Return a list of actions executable in this state."
raise NotImplementedError # Override this!
def result(self, state, action):
"The state that results from executing this action in this state."
raise NotImplementedError # Override this!
def is_goal(self, state):
"True if the state is a goal."
return state in self.goals # Optionally override this!
def step_cost(self, state, action, result=None):
"The cost of taking this action from this state."
return 1 # Override this if actions have different costs
def action_sequence(node):
"The sequence of actions to get to this node."
actions = []
while node.previous:
actions.append(node.action)
node = node.previous
return actions[::-1]
def state_sequence(node):
"The sequence of states to get to this node."
states = [node.state]
while node.previous:
node = node.previous
states.append(node.state)
return states[::-1]
Explanation: Search Problems
Problem is the abstract class for all search problems. You can define your own class of problems as a subclass of Problem. You will need to override the actions and result method to describe how your problem works. You will also have to either override is_goal or pass a collection of goal states to the initialization method. If actions have different costs, you should override the step_cost method.
End of explanation
dirt = '*'
clean = ' '
class TwoLocationVacuumProblem(Problem):
    """A Vacuum in a world with two locations, and dirt.
    Each state is a tuple of (location, dirt_in_W, dirt_in_E)."""
def actions(self, state): return ('W', 'E', 'Suck')
def is_goal(self, state): return dirt not in state
def result(self, state, action):
"The state that results from executing this action in this state."
(loc, dirtW, dirtE) = state
if action == 'W': return ('W', dirtW, dirtE)
elif action == 'E': return ('E', dirtW, dirtE)
elif action == 'Suck' and loc == 'W': return (loc, clean, dirtE)
elif action == 'Suck' and loc == 'E': return (loc, dirtW, clean)
else: raise ValueError('unknown action: ' + action)
problem = TwoLocationVacuumProblem(initial=('W', dirt, dirt))
result = uniform_cost_search(problem)
result
action_sequence(result)
state_sequence(result)
problem = TwoLocationVacuumProblem(initial=('E', clean, dirt))
result = uniform_cost_search(problem)
action_sequence(result)
Explanation: Two Location Vacuum World
End of explanation
class PourProblem(Problem):
    """Problem about pouring water between jugs to achieve some water level.
    Each state is a tuple of levels. In the initialization, provide a tuple of
    capacities, e.g. PourProblem(capacities=(8, 16, 32), initial=(2, 4, 3), goals={7}),
    which means three jugs of capacity 8, 16, 32, currently filled with 2, 4, 3 units of
    water, respectively, and the goal is to get a level of 7 in any one of the jugs."""
def actions(self, state):
        "The actions executable in this state."
jugs = range(len(state))
return ([('Fill', i) for i in jugs if state[i] != self.capacities[i]] +
[('Dump', i) for i in jugs if state[i] != 0] +
[('Pour', i, j) for i in jugs for j in jugs if i != j])
def result(self, state, action):
        "The state that results from executing this action in this state."
result = list(state)
act, i, j = action[0], action[1], action[-1]
if act == 'Fill': # Fill i to capacity
result[i] = self.capacities[i]
elif act == 'Dump': # Empty i
result[i] = 0
elif act == 'Pour':
a, b = state[i], state[j]
result[i], result[j] = ((0, a + b)
if (a + b <= self.capacities[j]) else
(a + b - self.capacities[j], self.capacities[j]))
else:
raise ValueError('unknown action', action)
return tuple(result)
def is_goal(self, state):
        "True if any of the jugs has a level equal to one of the goal levels."
return any(level in self.goals for level in state)
p7 = PourProblem(initial=(2, 0), capacities=(5, 13), goals={7})
p7.result((2, 0), ('Fill', 1))
result = uniform_cost_search(p7)
action_sequence(result)
Explanation: Water Pouring Problem
Here is another problem domain, to show you how to define one. The idea is that we have a number of water jugs and a water tap and the goal is to measure out a specific amount of water (in, say, ounces or liters). You can completely fill or empty a jug, but because the jugs don't have markings on them, you can't partially fill them with a specific amount. You can, however, pour one jug into another, stopping when the second is full or the first is empty.
End of explanation
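For instance (a small worked example of the pouring rule defined above), with capacities (5, 13) and levels (5, 10), pouring jug 0 into jug 1 stops when jug 1 is full:
p_demo = PourProblem(initial=(5, 10), capacities=(5, 13), goals={7})
p_demo.result((5, 10), ('Pour', 0, 1))   # -> (2, 13): three units fit before jug 1 is full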
def showpath(searcher, problem):
"Show what happens when searcvher solves problem."
problem = Instrumented(problem)
print('\n{}:'.format(searcher.__name__))
result = searcher(problem)
if result:
actions = action_sequence(result)
state = problem.initial
path_cost = 0
for steps, action in enumerate(actions, 1):
path_cost += problem.step_cost(state, action, 0)
result = problem.result(state, action)
            print(' {} =={}==> {}; cost {} after {} steps{}'
.format(state, action, result, path_cost, steps,
'; GOAL!' if problem.is_goal(result) else ''))
state = result
msg = 'GOAL FOUND' if result else 'no solution'
print('{} after {} results and {} goal checks'
.format(msg, problem._counter['result'], problem._counter['is_goal']))
from collections import Counter
class Instrumented:
"Instrument an object to count all the attribute accesses in _counter."
def __init__(self, obj):
self._object = obj
self._counter = Counter()
def __getattr__(self, attr):
self._counter[attr] += 1
return getattr(self._object, attr)
showpath(uniform_cost_search, p7)
p = PourProblem(initial=(0, 0), capacities=(7, 13), goals={2})
showpath(uniform_cost_search, p)
class GreenPourProblem(PourProblem):
def step_cost(self, state, action, result=None):
"The cost is the amount of water used in a fill."
if action[0] == 'Fill':
i = action[1]
return self.capacities[i] - state[i]
return 0
p = GreenPourProblem(initial=(0, 0), capacities=(7, 13), goals={2})
showpath(uniform_cost_search, p)
def compare_searchers(problem, searchers=None):
"Apply each of the search algorithms to the problem, and show results"
if searchers is None:
searchers = (breadth_first_search, uniform_cost_search)
for searcher in searchers:
showpath(searcher, problem)
compare_searchers(p)
Explanation: Visualization Output
End of explanation
import random
N, S, E, W = DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
def Grid(width, height, obstacles=0.1):
    """A 2-D grid, width x height, with obstacles that are either a collection of points,
    or a fraction between 0 and 1 indicating the density of obstacles, chosen at random."""
grid = {(x, y) for x in range(width) for y in range(height)}
if isinstance(obstacles, (float, int)):
        obstacles = random.sample(sorted(grid), int(width * height * obstacles))  # sample from a sequence (newer Python rejects sets)
def neighbors(x, y):
for (dx, dy) in DIRECTIONS:
(nx, ny) = (x + dx, y + dy)
if (nx, ny) not in obstacles and 0 <= nx < width and 0 <= ny < height:
yield (nx, ny)
return {(x, y): list(neighbors(x, y))
for x in range(width) for y in range(height)}
Grid(5, 5)
class GridProblem(Problem):
"Create with a call like GridProblem(grid=Grid(10, 10), initial=(0, 0), goal=(9, 9))"
def actions(self, state): return DIRECTIONS
def result(self, state, action):
#print('ask for result of', state, action)
(x, y) = state
(dx, dy) = action
r = (x + dx, y + dy)
return r if r in self.grid[state] else state
gp = GridProblem(grid=Grid(5, 5, 0.3), initial=(0, 0), goals={(4, 4)})
showpath(uniform_cost_search, gp)
Explanation: Random Grid
An environment where you can move in any of 4 directions, unless there is an obstacle there.
End of explanation
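Since grid states are (x, y) cells, the astar_search defined earlier can be tried on the same problem with a Manhattan-distance heuristic (an illustrative sketch, not in the original notebook):
def manhattan_heuristic(goal):
    "Return h(state) = Manhattan distance from state to the given goal cell."
    (gx, gy) = goal
    return lambda state: abs(state[0] - gx) + abs(state[1] - gy)
# astar_search(gp, manhattan_heuristic((4, 4))) returns a Node whose state_sequence
# is a shortest obstacle-avoiding path to (4, 4), when such a path exists.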
def hardness(problem):
L = breadth_first_search(problem)
#print('hardness', problem.initial, problem.capacities, problem.goals, L)
return len(action_sequence(L)) if (L is not None) else 0
hardness(p7)
action_sequence(breadth_first_search(p7))
C = 9 # Maximum capacity to consider
phard = max((PourProblem(initial=(a, b), capacities=(A, B), goals={goal})
for A in range(C+1) for B in range(C+1)
for a in range(A) for b in range(B)
for goal in range(max(A, B))),
key=hardness)
phard.initial, phard.capacities, phard.goals
showpath(breadth_first_search, PourProblem(initial=(0, 0), capacities=(7, 9), goals={8}))
showpath(uniform_cost_search, phard)
class GridProblem(Problem):
    """A Grid."""
def actions(self, state): return ['N', 'S', 'E', 'W']
def result(self, state, action):
        "The state that results from executing this action in this state."
(W, H) = self.size
        if action == 'N' and state - W >= 0:    return state - W
        if action == 'S' and state + W < W * H: return state + W
if action == 'E' and (state + 1) % W !=0: return state + 1
if action == 'W' and state % W != 0: return state - 1
return state
compare_searchers(GridProblem(initial=0, goals={44}, size=(10, 10)))
def test_frontier():
#### Breadth-first search with FIFO Q
f = FrontierQ(Node(1), LIFO=False)
assert 1 in f and len(f) == 1
f.add(Node(2))
f.add(Node(3))
assert 1 in f and 2 in f and 3 in f and len(f) == 3
assert f.pop().state == 1
assert 1 not in f and 2 in f and 3 in f and len(f) == 2
assert f
assert f.pop().state == 2
assert f.pop().state == 3
assert not f
#### Depth-first search with LIFO Q
f = FrontierQ(Node('a'), LIFO=True)
for s in 'bcdef': f.add(Node(s))
assert len(f) == 6 and 'a' in f and 'c' in f and 'f' in f
for s in 'fedcba': assert f.pop().state == s
assert not f
#### Best-first search with Priority Q
f = FrontierPQ(Node(''), lambda node: len(node.state))
assert '' in f and len(f) == 1 and f
for s in ['book', 'boo', 'bookie', 'bookies', 'cook', 'look', 'b']:
assert s not in f
f.add(Node(s))
assert s in f
assert f.pop().state == ''
assert f.pop().state == 'b'
assert f.pop().state == 'boo'
assert {f.pop().state for _ in '123'} == {'book', 'cook', 'look'}
assert f.pop().state == 'bookie'
#### Romania: Two paths to Bucharest; cheapest one found first
S = Node('S')
SF = Node('F', S, 'S->F', 99)
SFB = Node('B', SF, 'F->B', 211)
SR = Node('R', S, 'S->R', 80)
SRP = Node('P', SR, 'R->P', 97)
SRPB = Node('B', SRP, 'P->B', 101)
f = FrontierPQ(S)
f.add(SF); f.add(SR), f.add(SRP), f.add(SRPB); f.add(SFB)
def cs(n): return (n.path_cost, n.state) # cs: cost and state
assert cs(f.pop()) == (0, 'S')
assert cs(f.pop()) == (80, 'R')
assert cs(f.pop()) == (99, 'F')
assert cs(f.pop()) == (177, 'P')
assert cs(f.pop()) == (278, 'B')
return 'test_frontier ok'
test_frontier()
%matplotlib inline
import matplotlib.pyplot as plt
p = plt.plot([i**2 for i in range(10)])
plt.savefig('destination_path.eps', format='eps', dpi=1200)
import itertools
import random
# http://stackoverflow.com/questions/10194482/custom-matplotlib-plot-chess-board-like-table-with-colored-cells
from matplotlib.table import Table
def main():
grid_table(8, 8)
plt.axis('scaled')
plt.show()
def grid_table(nrows, ncols):
fig, ax = plt.subplots()
ax.set_axis_off()
colors = ['white', 'lightgrey', 'dimgrey']
tb = Table(ax, bbox=[0,0,2,2])
for i,j in itertools.product(range(ncols), range(nrows)):
tb.add_cell(i, j, 2./ncols, 2./nrows, text='{:0.2f}'.format(0.1234),
loc='center', facecolor=random.choice(colors), edgecolor='grey') # facecolors=
ax.add_table(tb)
#ax.plot([0, .3], [.2, .2])
#ax.add_line(plt.Line2D([0.3, 0.5], [0.7, 0.7], linewidth=2, color='blue'))
return fig
main()
import collections
class defaultkeydict(collections.defaultdict):
    """Like defaultdict, but the default_factory is a function of the key.
    >>> d = defaultkeydict(abs); d[-42]
    42
    """
def __missing__(self, key):
self[key] = self.default_factory(key)
return self[key]
Explanation: Finding a hard PourProblem
What solvable two-jug PourProblem requires the most steps? We can define the hardness as the number of steps, and then iterate over all PourProblems with capacities up to size C (9 in the code above), keeping the hardest one.
End of explanation |
11,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<a href="http
Step1: 2.2 Données "Titanic"
Les données sur le naufrage du Titanic sont décrites dans le calepin consacré à la librairie pandas. Reconstruire la table des données en lisant le fichier .csv.
Step2: Il est nécessaire de transformer les données car scikit-learn ne reconnaît pas la classe DataFrame de pandas, ce qui est bien dommage. Les variables qualitatives sont comme précédemment remplacées par les indicatrices de leurs modalités et les variables quantitatives conservées. Cela introduit une évidente redondance dans les données mais les procédures de sélection de modèle feront le tri.
Step3: Extraction des échantillons d'apprentissage et test.
Step4: Attention
Step5: Optimisation du paramètre de complexité du modèle par validation croisée en cherchant l'erreur minimale sur une grille de valeurs du paramètre avec cv=5-fold cross validation et n_jobs=-1 pour une exécution en parallèle utilisant tous les processeurs sauf 1. Attention, comme la validation croisée est aléatoire, deux exécutions successives ne donnent pas le même résultat.
Step6: Le modèle digit_knnOpt est déjà estimé avec la valeur "optimale" du paramètre.
Step7: 3.3 Régression logistique
La prévision de la survie, variable binaire des données "Titanic", se prêtent à une régression logistique. Les versions pénalisées (ridge, lasso, elastic net, lars) du modèle linéaire général sont les algorithmes les plus développés dans Scikit-learn au détriment de ceux plus classiques (forward, backward, step-wise) de sélection de variables en optimisant un critère de type AIC. Une version lasso de la régression logistique est testée afin d'introduire la sélection automatique des variables.
Estimation et erreur de prévision du modèle complet sur l'échantillon test.
Step8: Comme pour le modèle linéaire, il faudrait construire les commandes d'aide à l'interprétation des résultats.
Pénalisation et optimisation du paramètre par validation croisée. Il existe une fonction spécifique mais son mode d'emploi est peu documenté; GridSearchCV lui est préférée.
Step9: Estimation de l'erreur de prévision par le modèle "optimal".
Step10: Petit souci supplémentaire, l'objet produit par GridSearchCV ne connaît pas l'attribut .coef_. Il faut donc ré-estimer le modèle pour connaître les coefficients.
Step11: Commenter
Step12: Optimisation du paramètre de complexité du modèle par validation croisée en cherchant l'erreur minimale sur une grille de valeurs du paramètre avec cv=5-fold cross validation et n_jobs=-1 pour une exécution en parallèle utilisant tous les processeurs sauf 1. Attention, comme la validation croisée est aléatoire et un arbre un modèle instable, deux exécutions successives ne donnent pas nécessairement le même résultat.
Step13: La valeur "optimale" du paramètre reste trop importante pour la lisibilité de l'arbre. Une valeur plus faible est utilisée.
Step14: Noter l'amélioration de l'erreur.
Step15: Tracer l'arbre avec le logiciel Graphviz.
Step16: L'arbre est généré dans un fichier image à visualiser pour se rende compte qu'il est plutôt mal élagué et pas directement interprétable sans les noms en clair des variables et modalités.
Step17: 4.3 Données "Caractères"
La même démarche est utilisée pour ces données.
Step18: Comme pour les autres méthodes, l'objet GridSearchCV ne contient pas tous les attibuts, dont celui tree, et ne permet pas de construire l'arbre. Il faudrait le ré-estimer mais comme il est bien trop complexe, ce résultat n'est pas produit.
5 Forêts aléatoires
L'algorithme d'agrégation de modèles le plus utilisé est celui des forêts aléatoires (random forest) de Breiman (2001) ce qui ne signifie pas qu'il conduit toujours à la meilleure prévision. Voir la documentation pour la signification de tous les paramètres.
Plus que le nombre d'arbres n_estimators, le paramètre à optimiser est le nombre de variables tirées aléatoirement pour la recherche de la division optimale d'un noeud
Step19: L'optimisation du paramètre max_features peut être réalisée en minimisant l'erreur de prévision out-of-bag. Ce n'est pas prévu, il est aussi possible comme précédemment de minimiser l'erreur par validation croisée.
Step20: Comme pour les autres méthodes, l'objet GridSearchCV ne propose pas tous les attributs et donc pas d'erreur out-of-bag ou d'importance des variables. Voir le tutoriel sur la prévision du pic d'ozone pour plus de détails.
Step21: 5.2 Données "Titanic"
Même démarche.
Step22: Modifier la valeur du paramètre pour constater sa faible influence sur la qualité plutôt médiocre du résultat.
Attention, comme déjà signalé, l'échantillon test est de relativement faible taille (autour de 180), il serait opportun d'itérer l'extraction aléatoire d'échantillons tests (validation croisée Monte Carlo) pour tenter de réduire la variance de cette estimation et avoir une idée de sa distribution.
C'est fait dans d'autres calepins du dépôt d'apprentissage.
6 Fonction pipeline
Pour enchaîner et brancher (plugin) plusieurs traitements, généralement des transformations suivies d'une modélisation. Utiliser les fonctionnalités de cette section sans modération afin d'optimiser la structure et l'efficacité (parallélisation) de codes complexes.
6.1 Familles de transformations (transformers)
Classification ou régression sont souvent la dernière étape d'un procédé long et complexe. Dans la "vraie vie", les données ont besoin d'être extraites, sélectionnées, nettoyées, standardisées, complétées... (data munging) avant d'alimenter un algorithme d'apprentissage. Pour structurer le code, Sciki-learn propose d'utiliser le principe d'une API (application programming interface) nommée transformer.
Ces fonctionnalités sont illustrées sur les mêmes données de reconnaissance de caractères.
Step23: Normalisations, réductions
Step24: Sélection de variables par élimination pas à pas
La proicédure RFE (récursive feature selection) supprime une à une les variables les moins significatives ou moins importantes au sens du critère du modèle utilisé; dans cet exemple, il s'agit des forêts aléatoires.
Step25: Décomposition, factorisation, réduction de dimension
Possibilité, par exemple, de récupérer les q premières composantes principales de l'ACP comme résultat d'une transformation.
Step26: Fonction de transformation définie par l'utilisateur
Une fonction de transformation ou transformer est définie et s'applique à un jeu de données avec la syntaxe ci-dessous.
Step27: 6.4 Pipelines
Des transformations sont chaînées en une séquence constituant un pipeline.
Step28: Une chaîne de transformations suivi d'un classifieur construisent un nouveau classifieur
Step29: L'optimisation des paramètres par validation croisée est obtenue avec la même fonction mais peut prendre du temps si plusieurs paramètres sont cocernés! Le pipeline construit à titre illustratif n'est certainement pas optimal.
Step30: 6.5 Union de caractéristiques
Des transformations sont appliquées en parallèle pour réunir en un seul ensemble des transformations des données.
Step31: 6.6 Compositions emboîtées
Comme des pipelines and des unions sont eux-mêmes des estimateurs, ils peuvent être composés dans une structure emboîtée pour construire des combinaisons complexes de modèles comme ceux remportant les concours de type *kaggle.
Les données initiales sont unies aux composantes de l'ACP, puis les variables les plus importantes au sens des forêts aléatoires sont sélectionnées avant de servir à l'apprentissage d'un réseau de neurones. Ce n'est sûrement pas une stratégie optimale !
Step32: Effectivement la combinaison n'est pas optimale | Python Code:
# Importations
import matplotlib.pyplot as plt
from sklearn import datasets
%matplotlib inline
# les données
digits = datasets.load_digits()
# Contenu et mode d'obtention
print(digits)
images_and_labels = list(zip(digits.images,
digits.target))
for index, (image, label) in enumerate(images_and_labels[:8]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Training: %i' % label)
# variables prédictives et cible
X=digits.data
y=digits.target
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.25,random_state=11)
Explanation: <center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
<a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" style="float:right; max-width: 250px; display: inline" alt="Wikistat"/></a>
</center>
<a href="https://www.python.org/"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/390px-Python_logo_and_wordmark.svg.png" style="max-width: 200px; display: inline" alt="Python"/></a> pour Statistique et Science des Données
Apprentissage Statistique / Machine avec <a href="https://www.python.org/"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/390px-Python_logo_and_wordmark.svg.png" style="max-width: 150px; display: inline" alt="Python"/></a> & <a href="http://scikit-learn.org/stable/#"><img src="http://scikit-learn.org/stable/_static/scikit-learn-logo-small.png" style="max-width: 180px; display: inline" alt="Scikit-Learn"/></a>
Résumé: Ce calepin introduit l'utilisation de la librairie scikit-learn pour la modélisation et l'apprentissage. Pourquoi utiliser scikit-learn ? Ou non ? Liste des fonctionnalités, quelques exemples de mise en oeuvre de modélisation (régression logistique, $k$-plus proches voisins, arbres de décision, forêts aléatoires. Optimisation des paramètres (complexité) des modèles par validation croisée. Fontions de chaînage (pipeline) de transformations et estimations. D'autres fonctionalités de Scikit-learn sont abordées dans les calepins du dépot sur l'apprentissage statistique.
1 Introduction
1.1 Scikit-learn vs. R
L'objectif de ce tutoriel est d'introduire l'utilisation de la librairie scikit-learn de Python. Seule l'utilisation directe des fonctions de modélisation sont abordées d'une manière analogue à la mise en oeuvre de R dont les librairies offrent l'accès à bien plus de méthodes. La comparaison avec R repose sur les remarques suivantes.
Cette librairie manipule des objets de classe array de numpy chargés en mémoire et donc de taille limitée par la RAM de l'ordinateur; de façon analogue R charge en RAM des objets de type data.frame.
Scikit-learn (0.18) ne reconnaît pas (ou pas encore ?) la classe DataFrame de pandas; scikit-learn utilise la classe array de numpy. C'est un problème pour la gestion de variables qualitatives complexes. Une variable binaire est simplement remplacée par un codage (0,1) mais, en présence de plusieurs modalités, traiter celles-ci comme des entiers n'a pas de sens statistique et remplacer une variable qualitative par l'ensemble des indicatrices (dummy variables (0,1)) de ses modalités complique les stratégies de sélection de modèle tout en rendant inexploitable l'interprétation statistique.
Les implémentations en Python de certains algorithmes dans scikit-learn sont souvent plus efficaces et utilisent implicitement les capacités de parallélisation.
R offre beaucoup plus de possibilités pour la comparaison de modèles statistiques et leur interprétation.
En conséquences:
- Préférer R et ses librairies si la présentation des résultats et surtout leur interprétation (modèles) est prioritaire, si l'utilisation et / ou la comparaison de beaucoup de méthodes est recherchée.
- Préférer Python et scikit-learn pour mettre au point une chaîne de traitements (pipe line) opérationnelle de l'extraction à une analyse privilégiant la prévision brute à l'interprétation et pour des données quantitatives ou rendues quantitatives ("vectorisation" de corpus de textes).
En revanche, si les données sont trop volumineuses pour la taille du disque et distribuées sur les n\oe uds d'un cluster avec Hadoop, consulter les calepins sur l'utilisation de Spark.
1.2 Fonctions d'apprentissage de Scikit-learn
La communauté qui développe cette librairie est très active et la fait évoluer rapidement. Ne pas hésiter à consulter la documentation pour des compléments. Voici une sélection de ses principales fonctionnalités en lien avec la modélisation.
Transformations (standardisation, discrétisation binaire, regroupement de modalités, imputations rudimentaires de données manquantes) , "vectorisation" de corpus de textes (encodage, catalogue, Tf-idf), images;
Modéle linéaire général avec pénalisation (ridge, lasso, elastic net...), analyse discriminante linéaire et quadratique, $k$ plus proches voisins, processus gaussiens, classifieur bayésien naïf, arbres de régression et classification (CART), agrégation de modèles (bagging, random forest, adaboost, gradient tree boosting), perceptron multicouche (réseau de neurones), SVM (classification, régression, détection d'atypiques...);
Algorithmes de validation croisée (loo, k-fold, VC stratifiée...) et sélection de modèles, optimisation sur une grille de paramètres, séparation aléatoire apprentissage et test, courbe ROC;
Enchaînement (pipeline) de traitements.
En résumé, cette librairie est focalisée sur les aspects "machine" de l'apprentissage de données quantitatives (séries, signaux, images) volumineuses tandis que R intègre l'analyse de variables qualitatives complexes et l'interprétation statistique fine des résultats au détriment parfois de l'efficacité des calculs.
1.3 Objectif
L'objectif est d'illustrer la mise en oeuvre de quelques fonctionnalités. Consulter la documentation et ses nombreux exemples pour plus de détails sur les possibilités d'utilisation de scikit-learn.
Deux jeux de données élémentaires sont utilisés. Celui déjà étudié avec pandas et concernant le naufrage du Titanic. Il mélange des variables explicatives qualitatives et quantitatives dans un objet de la classe DataFrame. Pour être utilisé dans scikit-learn les données doivent être transformées en un objet de classe Array de numpy par le remplacement des variables qualitatives par les indicatrices de leurs modalités. L'autre ensemble de données est entièrement quantitatif. C'est un problème classique et simplifié de reconnaissance de caractères qui est inclus dans la librairie scikit-learn.
Après la phase d'exploration (calepin précédent), ce sont les fonctions de modélisation et apprentissage qui sont abordées: régression logistique (titanic), $k$- plus proches voisins (caractères), arbres de discrimination, et forêts aléatoires. Les paramètres de complexité des modèles sont optimisés par minimisation de l'erreur de prévision estimée par validation croisée *V-fold$.
D'autres fonctionnalités sont rapidement illustrées : enchaînement (pipeline) de méthodes et automatisation, détection d'observations atypiques. Leur maîtrise est néanmoins importante pour la mise en exploitation de codes complexes efficaces.
2 Extraction des échantillons
Le travail préliminaire consiste à séparer les échantillons en une partie apprentissage et une autre de test pour estimer sans biais l'erreur de prévision. L'optimisation (biais-variance) de la complexité des modèles est réalisée en minimisant l'erreur estimée par validation croisée $V-fold$.
2.1 Données "Caractères"
Elles sont disponibles dans la librairie Scikit-learn.
End of explanation
# Lire les données d'apprentissage
import pandas as pd
path='' # si les données sont déjà dans le répertoire courant
# path='http://www.math.univ-toulouse.fr/~besse/Wikistat/data/'
df=pd.read_csv(path+'titanic-train.csv',skiprows=1,header=None,usecols=[1,2,4,5,9,11],
names=["Surv","Classe","Genre","Age","Prix","Port"],dtype={"Surv":object,"Classe":object,"Genre":object,"Port":object})
df.head()
df.shape # dimensions
# Redéfinir les types
df["Surv"]=pd.Categorical(df["Surv"],ordered=False)
df["Classe"]=pd.Categorical(df["Classe"],ordered=False)
df["Genre"]=pd.Categorical(df["Genre"],ordered=False)
df["Port"]=pd.Categorical(df["Port"],ordered=False)
df.dtypes
df.count()
# imputation des valeurs manquantes
df["Age"]=df["Age"].fillna(df["Age"].median())
df.Port=df["Port"].fillna("S")
# Discrétiser les variables quantitatives
df["AgeQ"]=pd.qcut(df.Age,3,labels=["Ag1","Ag2","Ag3"])
df["PrixQ"]=pd.qcut(df.Prix,3,labels=["Pr1","Pr2","Pr3"])
# redéfinir les noms des modalités
df["Surv"]=df["Surv"].cat.rename_categories(["Vnon","Voui"])
df["Classe"]=df["Classe"].cat.rename_categories(["Cl1","Cl2","Cl3"])
df["Genre"]=df["Genre"].cat.rename_categories(["Gfem","Gmas"])
df["Port"]=df["Port"].cat.rename_categories(["Pc","Pq","Ps"])
df.head()
Explanation: 2.2 Données "Titanic"
Les données sur le naufrage du Titanic sont décrites dans le calepin consacré à la librairie pandas. Reconstruire la table des données en lisant le fichier .csv.
End of explanation
# Table de départ
df.head()
# Construction des indicatrices
df_q=df.drop(["Age","Prix"],axis=1)
df_q.head()
# Indicatrices
dc=pd.DataFrame(pd.get_dummies(df_q[["Surv","Classe","Genre","Port","AgeQ","PrixQ"]]))
dc.head()
# Table des indicatrices
df1=pd.get_dummies(df_q[["Surv","Classe","Genre","Port","AgeQ","PrixQ"]])
# Une seule indicatrice par variable binaire
df1=df1.drop(["Surv_Vnon","Genre_Gmas"],axis=1)
# Variables quantitatives
df2=df[["Age","Prix"]]
# Concaténation
df_c=pd.concat([df1,df2],axis=1)
# Vérification
df_c.columns
Explanation: Il est nécessaire de transformer les données car scikit-learn ne reconnaît pas la classe DataFrame de pandas, ce qui est bien dommage. Les variables qualitatives sont comme précédemment remplacées par les indicatrices de leurs modalités et les variables quantitatives conservées. Cela introduit une évidente redondance dans les données mais les procédures de sélection de modèle feront le tri.
End of explanation
# variables explicatives
T=df_c.drop(["Surv_Voui"],axis=1)
# Variable à modéliser
z=df_c["Surv_Voui"]
# Extractions
from sklearn.model_selection import train_test_split
T_train,T_test,z_train,z_test=train_test_split(T,z,test_size=0.2,random_state=11)
Explanation: Extraction des échantillons d'apprentissage et test.
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=10)
digit_knn=knn.fit(X_train, y_train)
# Estimation de l'erreur de prévision
# sur l'échantillon test
1-digit_knn.score(X_test,y_test)
Explanation: Attention: l'échantillon test des données "Titanic" est relativement petit, l'estimation de l'erreur de prévision est donc sujette à caution car probablement de grande variance. Il suffit de changer l'initialisation (paramètre random_state) et ré-exécuter les scripts pour s'en assurer.
3 K plus proches voisins
Les images des caractères sont codées par des variables quantitatives. Le problème de reconnaissance de forme ou de discrimination est adapté à l'algorithme des $k$-plus proches voisins. Le paramètre à optimiser pour contrôler la complexité du modèle est le nombre de voisin n_neighbors. Les autres options sont décrites dans la documentation.
End of explanation
from sklearn.model_selection import GridSearchCV
# grille de valeurs
param=[{"n_neighbors":list(range(1,15))}]
knn= GridSearchCV(KNeighborsClassifier(),param,cv=5,n_jobs=-1)
digit_knnOpt=knn.fit(X_train, y_train)
# paramètre optimal
digit_knnOpt.best_params_["n_neighbors"]
Explanation: Optimisation du paramètre de complexité du modèle par validation croisée en cherchant l'erreur minimale sur une grille de valeurs du paramètre avec cv=5-fold cross validation et n_jobs=-1 pour une exécution en parallèle utilisant tous les processeurs sauf 1. Attention, comme la validation croisée est aléatoire, deux exécutions successives ne donnent pas le même résultat.
End of explanation
# Estimation de l'erreur de prévision sur l'échantillon test
1-digit_knnOpt.score(X_test,y_test)
# Prévision
y_chap = digit_knnOpt.predict(X_test)
# matrice de confusion
table=pd.crosstab(y_test,y_chap)
print(table)
plt.matshow(table)
plt.title("Matrice de Confusion")
plt.colorbar()
plt.show()
Explanation: Le modèle digit_knnOpt est déjà estimé avec la valeur "optimale" du paramètre.
End of explanation
from sklearn.linear_model import LogisticRegression
logit = LogisticRegression()
titan_logit=logit.fit(T_train, z_train)
# Erreur sur l'écahntillon test
1-titan_logit.score(T_test, z_test)
# Coefficients
titan_logit.coef_
Explanation: 3.3 Régression logistique
La prévision de la survie, variable binaire des données "Titanic", se prête à une régression logistique. Les versions pénalisées (ridge, lasso, elastic net, lars) du modèle linéaire général sont les algorithmes les plus développés dans Scikit-learn au détriment de ceux plus classiques (forward, backward, step-wise) de sélection de variables en optimisant un critère de type AIC. Une version lasso de la régression logistique est testée afin d'introduire la sélection automatique des variables.
Estimation et erreur de prévision du modèle complet sur l'échantillon test.
End of explanation
# grille de valeurs
param=[{"C":[0.01,0.096,0.098,0.1,0.12,1,10]}]
logit = GridSearchCV(LogisticRegression(penalty="l1"),
param,cv=5,n_jobs=-1)
titan_logitOpt=logit.fit(T_train, z_train)
# paramètre optimal
titan_logitOpt.best_params_["C"]
Explanation: Comme pour le modèle linéaire, il faudrait construire les commandes d'aide à l'interprétation des résultats.
Pénalisation et optimisation du paramètre par validation croisée. Il existe une fonction spécifique mais son mode d'emploi est peu documenté; GridSearchCV lui est préférée.
End of explanation
# Erreur sur l'échantillon test
1-titan_logitOpt.score(T_test, z_test)
Explanation: Estimation de l'erreur de prévision par le modèle "optimal".
End of explanation
# Estimation avec le paramètre optimal et coefficients
LogisticRegression(penalty="l1",C=titan_logitOpt.best_params_['C']).fit(T_train, z_train).coef_
Explanation: Petit souci supplémentaire, l'objet produit par GridSearchCV ne connaît pas l'attribut .coef_. Il faut donc ré-estimer le modèle pour connaître les coefficients.
End of explanation
from sklearn.tree import DecisionTreeClassifier
tree=DecisionTreeClassifier()
digit_tree=tree.fit(T_train, z_train)
# Estimation de l'erreur de prévision
1-digit_tree.score(T_test,z_test)
Explanation: Commenter : parcimonie du modèle vs. erreur de prévision.
4 Arbre de décision
4.1 Implémentation
Les arbres binaires de décision (CART: classification and regression trees) s'appliquent à tous types de variables. Les options de l'algorithme sont décrites dans la documentation. La complexité du modèle est gérée par deux paramètres : max_depth, qui détermine le nombre max de feuilles dans l'arbre, et le nombre minimales min_samples_split d'observations requises pour rechercher une dichotomie.
Attention: Même s'il s'agit d'une implémentation proche de celle originale proposée par Breiman et al. (1984) il n'existe pas (encore?) comme dans R (package rpart) un paramètre de pénalisation de la déviance du modèle par sa complexité (nombre de feuilles) afin de construire une séquence d'arbres emboîtés dans la perspective d'un élagage (pruning) optimal par validation croisée. La fonction générique de $k$-fold cross validation GridSearchCV est utilisée pour optimiser le paramètre de profondeur mais sans beaucoup de précision dans l'élagage car ce dernier élimine tout un niveau et pas les seules feuilles inutiles à la qualité de la prévision.
En revanche, l'implémentation anticipe sur celles des méthodes d'agrégation de modèles en intégrant les paramètres (nombre de variables tirées, importance...) qui leurs sont spécifiques. D'autre part, la représentation graphique d'un arbre n'est pas incluse et nécessite l'implémentation d'un autre logiciel libre: Graphviz.
Tout ceci souligne encore les objectifs de développement de cette librairie: temps de calcul et prévision brute au détriment d'une recherche d'interprétation. Dans certains exemples éventuellement pas trop compliqués, un arbre élagué de façon optimal peut en effet prévoir à peine moins bien (différence non significative) qu'une agrégation de modèles (forêt aléatoire ou boosting) et apporter un éclairage nettement plus pertinent qu'un algorithme de type "boîte noire".
4.2 Données "Titanic"
Estimation de l'arbre complet.
End of explanation
param=[{"max_depth":list(range(2,10))}]
titan_tree= GridSearchCV(DecisionTreeClassifier(),param,cv=5,n_jobs=-1)
titan_opt=titan_tree.fit(T_train, z_train)
# paramètre optimal
titan_opt.best_params_
Explanation: Optimisation du paramètre de complexité du modèle par validation croisée en cherchant l'erreur minimale sur une grille de valeurs du paramètre avec cv=5-fold cross validation et n_jobs=-1 pour une exécution en parallèle utilisant tous les processeurs sauf 1. Attention, comme la validation croisée est aléatoire et un arbre un modèle instable, deux exécutions successives ne donnent pas nécessairement le même résultat.
End of explanation
tree=DecisionTreeClassifier(max_depth=3)
titan_tree=tree.fit(T_train, z_train)
# Estimation de l'erreur de prévision
# sur l'échantillon test
1-titan_tree.score(T_test,z_test)
Explanation: La valeur "optimale" du paramètre reste trop importante pour la lisibilité de l'arbre. Une valeur plus faible est utilisée.
End of explanation
# prévision de l'échantillon test
z_chap = titan_tree.predict(T_test)
# matrice de confusion
table=pd.crosstab(z_test,z_chap)
print(table)
Explanation: Noter l'amélioration de l'erreur.
End of explanation
from sklearn.tree import export_graphviz
from sklearn.externals.six import StringIO
import pydotplus
dot_data = StringIO()
export_graphviz(titan_tree, out_file=dot_data)
graph=pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png("titan_tree.png")
Explanation: Tracer l'arbre avec le logiciel Graphviz.
End of explanation
from IPython.display import Image
Image(filename='titan_tree.png')
Explanation: L'arbre est généré dans un fichier image à visualiser pour se rendre compte qu'il est plutôt mal élagué et pas directement interprétable sans les noms en clair des variables et modalités.
End of explanation
# Arbre complet
tree=DecisionTreeClassifier()
digit_tree=tree.fit(X_train, y_train)
# Estimation de l'erreur de prévision
1-digit_tree.score(X_test,y_test)
# Optimisation par validation croisée
param=[{"max_depth":list(range(5,15))}]
digit_tree= GridSearchCV(DecisionTreeClassifier(),param,cv=5,n_jobs=-1)
digit_treeOpt=digit_tree.fit(X_train, y_train)
digit_treeOpt.best_params_
# Estimation de l'erreur de prévision
1-digit_treeOpt.score(X_test,y_test)
# Echantillon test
y_chap = digit_treeOpt.predict(X_test)
# matrice de confusion
table=pd.crosstab(y_test,y_chap)
print(table)
plt.matshow(table)
plt.title("Matrice de Confusion")
plt.colorbar()
plt.show()
Explanation: 4.3 Données "Caractères"
La même démarche est utilisée pour ces données.
End of explanation
from sklearn.ensemble import RandomForestClassifier
# définition des paramètres
forest = RandomForestClassifier(n_estimators=500,
criterion='gini', max_depth=None,
min_samples_split=2, min_samples_leaf=1,
max_features='auto', max_leaf_nodes=None,
bootstrap=True, oob_score=True)
# apprentissage et erreur out-of-bag
forest = forest.fit(X_train,y_train)
print(1-forest.oob_score_)
# erreur de prévision sur le test
1-forest.score(X_test,y_test)
Explanation: Comme pour les autres méthodes, l'objet GridSearchCV ne contient pas tous les attibuts, dont celui tree, et ne permet pas de construire l'arbre. Il faudrait le ré-estimer mais comme il est bien trop complexe, ce résultat n'est pas produit.
5 Forêts aléatoires
L'algorithme d'agrégation de modèles le plus utilisé est celui des forêts aléatoires (random forest) de Breiman (2001) ce qui ne signifie pas qu'il conduit toujours à la meilleure prévision. Voir la documentation pour la signification de tous les paramètres.
Plus que le nombre d'arbres n_estimators, le paramètre à optimiser est le nombre de variables tirées aléatoirement pour la recherche de la division optimale d'un noeud: max_features. Par défaut, il prend la valeur $\frac{p}{3}$ en régression et $\sqrt{p}$ en discrimination.
5.1 Données "Caractères"
End of explanation
param=[{"max_features":list(range(4,64,4))}]
digit_rf= GridSearchCV(RandomForestClassifier(n_estimators=100),param,cv=5,n_jobs=-1)
digit_rfOpt=digit_rf.fit(X_train, y_train)
# paramètre optimal
digit_rfOpt.best_params_
Explanation: L'optimisation du paramètre max_features peut être réalisée en minimisant l'erreur de prévision out-of-bag. Ce n'est pas prévu, il est aussi possible comme précédemment de minimiser l'erreur par validation croisée.
End of explanation
# erreur de prévision sur le test
1-digit_rfOpt.score(X_test,y_test)
# prévision
y_chap = digit_rfOpt.predict(X_test)
# matrice de confusion
table=pd.crosstab(y_test,y_chap)
print(table)
plt.matshow(table)
plt.title("Matrice de Confusion")
plt.colorbar()
plt.show()
Explanation: Comme pour les autres méthodes, l'objet GridSearchCV ne propose pas tous les attributs et donc pas d'erreur out-of-bag ou d'importance des variables. Voir le tutoriel sur la prévision du pic d'ozone pour plus de détails.
End of explanation
# définition des paramètres
forest = RandomForestClassifier(n_estimators=500, criterion='gini', max_depth=None,
min_samples_split=2, min_samples_leaf=1, max_features='auto', max_leaf_nodes=None,bootstrap=True, oob_score=True)
# apprentissage
forest = forest.fit(T_train,z_train)
print(1-forest.oob_score_)
# erreur de prévision sur le test
1-forest.score(T_test,z_test)
# optimisation de max_features
param=[{"max_features":list(range(2,15))}]
titan_rf= GridSearchCV(RandomForestClassifier(n_estimators=100),param,cv=5,n_jobs=-1)
titan_rfOpt=titan_rf.fit(T_train, z_train)
# paramètre optimal
titan_rfOpt.best_params_
# erreur de prévision sur le test
1-titan_rfOpt.score(T_test,z_test)
# prévision
z_chap = titan_rfOpt.predict(T_test)
# matrice de confusion
table=pd.crosstab(z_test,z_chap)
print(table)
Explanation: 5.2 Données "Titanic"
Même démarche.
End of explanation
# Rechargement des données
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Plot
sample_id = 42
plt.imshow(X[sample_id].reshape((8, 8)), interpolation="nearest", cmap=plt.cm.Blues)
plt.title("y = %d" % y[sample_id])
plt.show()
Explanation: Modifier la valeur du paramètre pour constater sa faible influence sur la qualité plutôt médiocre du résultat.
Attention, comme déjà signalé, l'échantillon test est de relativement faible taille (autour de 180), il serait opportun d'itérer l'extraction aléatoire d'échantillons tests (validation croisée Monte Carlo) pour tenter de réduire la variance de cette estimation et avoir une idée de sa distribution.
C'est fait dans d'autres calepins du dépôt d'apprentissage.
6 Fonction pipeline
Pour enchaîner et brancher (plugin) plusieurs traitements, généralement des transformations suivies d'une modélisation. Utiliser les fonctionnalités de cette section sans modération afin d'optimiser la structure et l'efficacité (parallélisation) de codes complexes.
6.1 Familles de transformations (transformers)
Classification ou régression sont souvent la dernière étape d'un procédé long et complexe. Dans la "vraie vie", les données ont besoin d'être extraites, sélectionnées, nettoyées, standardisées, complétées... (data munging) avant d'alimenter un algorithme d'apprentissage. Pour structurer le code, Scikit-learn propose d'utiliser le principe d'une API (application programming interface) nommée transformer.
Ces fonctionnalités sont illustrées sur les mêmes données de reconnaissance de caractères.
End of explanation
import numpy as np
from sklearn.preprocessing import StandardScaler
tf = StandardScaler()
tf.fit(X_train, y_train)
Xt_train = tf.transform(X)
print("Moyenne avant centrage et réduction =", np.mean(X_train))
print("Moyenne après centrage et réduction =", np.mean(Xt_train))
# See also Binarizer, MinMaxScaler, Normalizer, ...
# Raccourci: Xt = tf.fit_transform(X)
tf.fit_transform(X)
# NB. La standardisation préalable est indispensable pour certains algorithmes
# notamment les SVM
from sklearn.svm import SVC
clf = SVC()
# Calcul des scores (bien classés)
print("Sans standardisation =", clf.fit(X_train, y_train).score(X_test, y_test))
print("Avec standardisation =", clf.fit(tf.transform(X_train), y_train).score(tf.transform(X_test), y_test))
Explanation: Normalisations, réductions
End of explanation
# Sélection de variables par élimination pas à pas
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestClassifier
tf = RFE(RandomForestClassifier(), n_features_to_select=10, verbose=1)
Xt = tf.fit_transform(X_train, y_train)
print("Shape =", Xt.shape)
# Variables (pixels) sélectionnées
plt.imshow(tf.get_support().reshape((8, 8)), interpolation="nearest", cmap=plt.cm.Blues)
plt.show()
Explanation: Sélection de variables par élimination pas à pas
La procédure RFE (recursive feature elimination) supprime une à une les variables les moins significatives ou moins importantes au sens du critère du modèle utilisé; dans cet exemple, il s'agit des forêts aléatoires.
End of explanation
# par ACP ou SVD
from sklearn.decomposition import PCA
tf = PCA(n_components=2)
Xt_train = tf.fit_transform(X_train)
Explanation: Décomposition, factorisation, réduction de dimension
Possibilité, par exemple, de récupérer les q premières composantes principales de l'ACP comme résultat d'une transformation.
End of explanation
from sklearn.preprocessing import FunctionTransformer
def increment(X):
return X + 1
tf = FunctionTransformer(func=increment)
Xt = tf.fit_transform(X)
print(X[0])
print(Xt[0])
Explanation: Fonction de transformation définie par l'utilisateur
Une fonction de transformation ou transformer est définie et s'applique à un jeu de données avec la syntaxe ci-dessous.
End of explanation
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import RFE
#tf = RFE(RandomForestClassifier(), n_features_to_select=10)
# La succession de deux transformeurs constituent un transformeur
tf = make_pipeline(StandardScaler(), RFE(RandomForestClassifier(),n_features_to_select=10))
tf.fit(X_train, y_train)
Xt_train = tf.transform(X_train)
print("Mean =", np.mean(Xt_train))
print("Shape =", Xt_train.shape)
Explanation: 6.4 Pipelines
Des transformations sont chaînées en une séquence constituant un pipeline.
End of explanation
clf = make_pipeline(StandardScaler(),
RFE(RandomForestClassifier(), n_features_to_select=10),
RandomForestClassifier())
clf.fit(X_train, y_train)
print(clf.predict_proba(X_test)[:5])
# L'hyperparamètre est accessible
print("n_features =", clf.get_params()["rfe__estimator__n_estimators"])
Explanation: Une chaîne de transformations suivi d'un classifieur construisent un nouveau classifieur
End of explanation
grid = GridSearchCV(clf, param_grid={"rfe__estimator__n_estimators": [5, 10],
"randomforestclassifier__max_features": [0.1, 0.25, 0.5]})
grid.fit(X_train, y_train)
print("Valeurs optimales =", grid.best_params_)
Explanation: L'optimisation des paramètres par validation croisée est obtenue avec la même fonction mais peut prendre du temps si plusieurs paramètres sont concernés! Le pipeline construit à titre illustratif n'est certainement pas optimal.
End of explanation
from sklearn.pipeline import make_union
from sklearn.decomposition import PCA, FastICA
tf = make_union(PCA(n_components=10), FastICA(n_components=10))
Xt_train = tf.fit_transform(X_train)
print("Shape =", Xt_train.shape)
Explanation: 6.5 Union de caractéristiques
Des transformations sont appliquées en parallèle pour réunir en un seul ensemble des transformations des données.
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
clf = make_pipeline(
# Build features
make_union(
FunctionTransformer(func=lambda X: X), PCA(),),
# Select the best features
RFE(RandomForestClassifier(), n_features_to_select=10),
# Train
MLPClassifier(max_iter=500)
)
clf.fit(X_train, y_train)
Explanation: 6.6 Compositions emboîtées
Comme les pipelines et les unions sont eux-mêmes des estimateurs, ils peuvent être composés dans une structure emboîtée pour construire des combinaisons complexes de modèles comme ceux remportant les concours de type Kaggle.
Les données initiales sont unies aux composantes de l'ACP, puis les variables les plus importantes au sens des forêts aléatoires sont sélectionnées avant de servir à l'apprentissage d'un réseau de neurones. Ce n'est sûrement pas une stratégie optimale !
End of explanation
# erreur de test
1-clf.score(X_test,y_test)
Explanation: Effectivement la combinaison n'est pas optimale:
End of explanation |
11,461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: 5T_데이터 분석을 위한 SQL 실습 (4) - SQL Advanced
특정 카테고리에 포함된 영화들의 렌탈 횟수
rental, inventory, film, film_category, category
"Comedy", "Sports", "Family" 카테고리에 포함되는 영화들의 렌탈 횟수
Step3: Store 1의 등급별 매출 중 "R", "PG-13"의 매출
film, payment, inventory, rental
Step6: 배우별 매출
영화별 매출을 구하고 이 데이터를 바탕으로 배우별 매출을 구하세요 | Python Code:
import pymysql
db = pymysql.connect(
"db.fastcamp.us",
"root",
"dkstncks",
"sakila",
charset='utf8',
)
rental_df = pd.read_sql("SELECT * FROM rental;", db)
inventory_df = pd.read_sql("SELECT * FROM inventory;", db)
film_df = pd.read_sql("SELECT * FROM film;", db)
film_category_df = pd.read_sql("SELECT * FROM film_category;", db)
category_df = pd.read_sql("SELECT * FROM category;", db)
rental_df.head(1)
inventory_df.head(1)
film_df.head(1)
film_category_df.head(1)
category_df.head(1)
SQL_QUERY =
SELECT
c.category_id category_id,
c.name category_name,
COUNT(*) rentals_per_category
FROM
rental r
JOIN inventory i ON r.inventory_id = i.inventory_id
JOIN film f ON f.film_id = i.film_id
JOIN film_category fc ON fc.film_id = f.film_id
JOIN category c ON fc.category_id = c.category_id
WHERE
c.name IN ("Family", "Sports", "Comedy")
GROUP BY
category_id
ORDER BY rentals_per_category DESC
;
pd.read_sql(SQL_QUERY, db)
Explanation: 5T_SQL Practice for Data Analysis (4) - SQL Advanced
Rental counts for films in specific categories
rental, inventory, film, film_category, category
Rental counts for films in the "Comedy", "Sports", and "Family" categories
End of explanation
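As a quick cross-check (an illustrative sketch, not part of the original notebook), the same per-category rental counts can be reproduced in pandas from the DataFrames loaded above:
pandas_counts = (rental_df.merge(inventory_df, on="inventory_id")
                          .merge(film_category_df, on="film_id")
                          .merge(category_df, on="category_id"))
pandas_counts[pandas_counts["name"].isin(["Family", "Sports", "Comedy"])].groupby("name").size()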
payment_df = pd.read_sql("SELECT * FROM payment;", db)
film_df.head(1)
payment_df.head(1)
inventory_df.head(1)
rental_df.head(1)
SQL_QUERY =
SELECT
i.store_id store_id,
f.rating rating,
SUM(p.amount) total_revenue
FROM
payment p
JOIN rental r ON p.rental_id = r.rental_id
JOIN inventory i ON r.inventory_id = i.inventory_id
JOIN film f ON i.film_id = f.film_id
WHERE
i.store_id = 1
AND f.rating IN ("PG-13", "R")
GROUP BY
store_id,
rating
;
pd.read_sql(SQL_QUERY, db)
Explanation: Revenue for the "R" and "PG-13" ratings within Store 1's revenue by rating
film, payment, inventory, rental
End of explanation
# 1. Revenue per film - rental, film, inventory
REVENUE_PER_FILM_SQL_QUERY = """
SELECT
f.film_id film_id,
COUNT(*) * f.rental_rate revenue
FROM
rental r
JOIN inventory i ON r.inventory_id = i.inventory_id
JOIN film f ON i.film_id = f.film_id
GROUP BY
film_id
;
"""
pd.read_sql(REVENUE_PER_FILM_SQL_QUERY, db)
# 2. Revenue per actor - actor, film_actor
SQL_QUERY = """
SELECT
a.actor_id,
a.last_name last_name,
a.first_name first_name,
SUM(rpf.revenue) revenue_per_actor
FROM ({REVENUE_PER_FILM_SQL_QUERY}) AS rpf
JOIN film_actor fa ON rpf.film_id = fa.film_id
JOIN actor a ON fa.actor_id = a.actor_id
GROUP BY
actor_id
ORDER BY revenue_per_actor DESC
;
""".format(
REVENUE_PER_FILM_SQL_QUERY=REVENUE_PER_FILM_SQL_QUERY.replace(";",""),
)
pd.read_sql(SQL_QUERY, db)
Explanation: Revenue per actor
Compute the revenue per film, then use that data to compute the revenue per actor
End of explanation |
11,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-Class Classifier on Particle Track Data
Step1: Get angle values and cast to boolean
Step2: Create our simple classification target
Step3: Create an image generator from this dataframe
Step4: Create a very simple convolutional model from scratch
Step5: Okay, maybe that was too easy
I mean, if any pixels are lit up on the top half / bottom half, it's a smoking gun.
Let's make it harder with binned measurements and treat it as categorical.
Step6: Similar model, with some tweaks
Step8: Check out predictions on Holdout data | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import os
import sys
import numpy as np
import math
Explanation: Multi-Class Classifier on Particle Track Data
End of explanation
track_params = pd.read_csv('../TRAIN/track_parms.csv')
track_params.tail()
Explanation: Get angle values and cast to boolean
End of explanation
# Bin the phi values to get multi-class labels
track_params['phi_binned'], phi_bins = pd.cut(track_params.phi,
bins=range(-10, 12, 2),
retbins=True)
track_params['phi_binned'] = track_params['phi_binned'].astype(str)
track_params.head()
Explanation: Create our simple classification target
End of explanation
from tensorflow.keras.preprocessing.image import ImageDataGenerator
DATAGEN = ImageDataGenerator(rescale=1./255.,
validation_split=0.25)
height = 100
width = 36
def create_generator(target, subset, class_mode,
idg=DATAGEN, df=track_params, N=1000):
return idg.flow_from_dataframe(
dataframe=track_params.head(N),
directory="../TRAIN",
x_col="filename",
y_col=target,
subset=subset,
target_size=(height, width),
batch_size=32,
seed=314,
shuffle=True,
class_mode=class_mode,
)
Explanation: Create an image generator from this dataframe
End of explanation
from tensorflow.keras import Sequential, Model
from tensorflow.keras.layers import (
Conv2D, Activation, MaxPooling2D,
Flatten, Dense, Dropout, Input
)
Explanation: Create a very simple convolutional model from scratch
End of explanation
mc_train_generator = create_generator(
target="phi_binned",
subset="training",
class_mode="categorical",
N=10000
)
mc_val_generator = create_generator(
target="phi_binned",
subset="validation",
class_mode="categorical",
N=10000
)
Explanation: Okay, maybe that was too easy
I mean, if any pixels are lit up on the top half / bottom half, it's a smoking gun.
Let's make it harder with binned measurements and treat it as categorical.
End of explanation
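To sanity-check the class balance of those bins (a small optional sketch, not part of the original notebook; it only relies on the track_params dataframe already loaded above):
# Quick look at how many tracks fall into each phi bin.
print(track_params['phi_binned'].value_counts().sort_index())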
width = 36
height = 100
channels = 3
def multiclass_classifier():
model = Sequential()
# Convoluional Layer
model.add(Conv2D(32, (3, 3), input_shape=(height, width, channels)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# Dense, Classification Layer
model.add(Flatten())
model.add(Dense(32))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
return model
STEP_SIZE_TRAIN = mc_train_generator.n//mc_train_generator.batch_size
STEP_SIZE_VAL = mc_val_generator.n//mc_val_generator.batch_size
mc_model = multiclass_classifier()
mc_history = mc_model.fit_generator(
generator=mc_train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=mc_val_generator,
validation_steps=STEP_SIZE_VAL,
epochs=10
)
plt.plot(mc_history.history['accuracy'], label="Train Accuracy")
plt.plot(mc_history.history['val_accuracy'], label="Validation Accuracy")
plt.legend()
plt.show()
Explanation: Similar model, with some tweaks
End of explanation
holdout_track_params = pd.read_csv('../VALIDATION/track_parms.csv')
holdout_track_params['phi_binned'] = pd.cut(
holdout_track_params['phi'],
bins=phi_bins
)
holdout_track_params['phi_binned'] = (
holdout_track_params['phi_binned'].astype(str)
)
mc_holdout_generator = DATAGEN.flow_from_dataframe(
dataframe=holdout_track_params,
directory="../VALIDATION",
x_col="filename",
y_col="phi_binned",
subset=None,
target_size=(height, width),
batch_size=32,
seed=314,
shuffle=False,
class_mode="categorical",
)
holdout_track_params['y_pred'] = mc_model.predict_classes(mc_holdout_generator)
holdout_track_params['y_true'] = mc_holdout_generator.classes
import numpy as np
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
given a sklearn confusion matrix (cm), make a nice plot
Arguments
---------
cm: confusion matrix from sklearn.metrics.confusion_matrix
target_names: given classification classes such as [0, 1, 2]
the class names, for example: ['high', 'medium', 'low']
title: the text to display at the top of the matrix
cmap: the gradient of the values displayed from matplotlib.pyplot.cm
see http://matplotlib.org/examples/color/colormaps_reference.html
plt.get_cmap('jet') or plt.cm.Blues
normalize: If False, plot the raw numbers
If True, plot the proportions
Usage
-----
plot_confusion_matrix(cm = cm, # confusion matrix created by
# sklearn.metrics.confusion_matrix
normalize = True, # show proportions
target_names = y_labels_vals, # list of names of the classes
title = best_estimator_name) # title of graph
Citation
---------
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(10, 8))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
y_pred = mc_model.predict_classes(mc_holdout_generator)
y_true = mc_holdout_generator.labels
label_list = ['(-10.0, -8.0]', '(-8.0, -6.0]', '(-6.0, -4.0]', '(-4.0, -2.0]',
'(-2.0, 0.0]', '(0.0, 2.0]', '(2.0, 4.0]', '(4.0, 6.0]', '(6.0, 8.0]',
'(8.0, 10.0]']
plot_confusion_matrix(confusion_matrix(y_true, y_pred),
target_names=label_list,
normalize=False)
Explanation: Check out predictions on Holdout data
End of explanation |
11,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Create an Estimator from a Keras model
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Create a simple Keras model.
In Keras, you assemble layers to build models. A model is (usually) a graph
of layers. The most common type of model is a stack of layers
Step3: Compile the model and get a summary.
Step4: Create an input function
Use the Datasets API to scale to large datasets
or multi-device training.
Estimators need control of when and how their input pipeline is built. To allow this, they require an "Input function" or input_fn. The Estimator will call this function with no arguments. The input_fn must return a tf.data.Dataset.
Step5: Test out your input_fn
Step6: Create an Estimator from the tf.keras model.
A tf.keras.Model can be trained with the tf.estimator API by converting the
model to an tf.estimator.Estimator object with
tf.keras.estimator.model_to_estimator.
Step7: Train and evaluate the estimator. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds
Explanation: Create an Estimator from a Keras model
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/estimator/keras_model_to_estimator"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/estimator/keras_model_to_estimator.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/estimator/keras_model_to_estimator.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/estimator/keras_model_to_estimator.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Warning: Estimators are not recommended for new code. Estimators run v1.Session-style code which is more difficult to write correctly, and can behave unexpectedly, especially when combined with TF 2 code. Estimators do fall under our compatibility guarantees, but will receive no fixes other than security vulnerabilities. See the migration guide for details.
Overview
TensorFlow Estimators are supported in TensorFlow, and can be created from new and existing tf.keras models. This tutorial contains a complete, minimal example of that process.
Note: If you have a Keras model, you can use it directly with tf.distribute strategies without converting it to an estimator. As such, model_to_estimator is no longer recommended.
Setup
End of explanation
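For reference, the recommended path mentioned above, training the Keras model directly under a tf.distribute strategy instead of converting it, looks roughly like this (a sketch, not part of the original tutorial; the data pipeline is assumed to be the same input_fn defined further below):
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    distributed_model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(3)
    ])
    distributed_model.compile(
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer='adam')
# distributed_model.fit(input_fn(), steps_per_epoch=500)  # same dataset as below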
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(3)
])
Explanation: Create a simple Keras model.
In Keras, you assemble layers to build models. A model is (usually) a graph
of layers. The most common type of model is a stack of layers: the
tf.keras.Sequential model.
To build a simple, fully-connected network (i.e. multi-layer perceptron):
End of explanation
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam')
model.summary()
Explanation: Compile the model and get a summary.
End of explanation
def input_fn():
split = tfds.Split.TRAIN
dataset = tfds.load('iris', split=split, as_supervised=True)
dataset = dataset.map(lambda features, labels: ({'dense_input':features}, labels))
dataset = dataset.batch(32).repeat()
return dataset
Explanation: Create an input function
Use the Datasets API to scale to large datasets
or multi-device training.
Estimators need control of when and how their input pipeline is built. To allow this, they require an "Input function" or input_fn. The Estimator will call this function with no arguments. The input_fn must return a tf.data.Dataset.
End of explanation
for features_batch, labels_batch in input_fn().take(1):
print(features_batch)
print(labels_batch)
Explanation: Test out your input_fn
End of explanation
import tempfile
model_dir = tempfile.mkdtemp()
keras_estimator = tf.keras.estimator.model_to_estimator(
keras_model=model, model_dir=model_dir)
Explanation: Create an Estimator from the tf.keras model.
A tf.keras.Model can be trained with the tf.estimator API by converting the
model to an tf.estimator.Estimator object with
tf.keras.estimator.model_to_estimator.
End of explanation
keras_estimator.train(input_fn=input_fn, steps=500)
eval_result = keras_estimator.evaluate(input_fn=input_fn, steps=10)
print('Eval result: {}'.format(eval_result))
Explanation: Train and evaluate the estimator.
End of explanation |
11,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1: Following the complete code for computing $\sum_{i=1}^{m} i + \sum_{i=1}^{n} i + \sum_{i=1}^{k} i$, write a program that computes m! + n! + k!
Step2: Exercise 2: Write a function that returns the sum of the first n terms of 1 - 1/3 + 1/5 - 1/7 .... In the main program, set n = 1000 and 100000 and print 4 times the value returned by the function.
Step3: Exercise 3: Rewrite exercises 1 and 4 from task3 as functions and call them.
Rewritten week-3 exercise 1: Write a program that reads the user's name (e.g. Mr. right) from the keyboard, asks for the birth month and day, and determines the zodiac sign; if the user is a Taurus, it should print "Mr. right, you are a very distinctive Taurus!".
Step4: Rewritten week-3 exercise 4: English singular to plural. Given an English word in singular form, output its plural form or a suggestion on how to pluralise it (hint: some_string.endswith(some_letter) checks how a string ends).
Challenge exercise: Write a program that computes the sum from integer m to integer n with step k; the summation must be implemented as a function, and the main program reads m, n, k from the user and calls the function to verify correctness.
def compute_multi(end):
    i = 0
    multi = 1
    while i < end:
        i = i + 1
        multi = multi * i
    return multi

n = int(input('Please enter the 1st integer, press Enter to finish: '))
m = int(input('Please enter the 2nd integer, press Enter to finish: '))
k = int(input('Please enter the 3rd integer, press Enter to finish: '))
print('The final sum is:', compute_multi(m) + compute_multi(n) + compute_multi(k))
Explanation: Exercise 1: Following the complete code for computing $\sum_{i=1}^{m} i + \sum_{i=1}^{n} i + \sum_{i=1}^{k} i$, write a program that computes m! + n! + k!
End of explanation
def compute_sum(end):
    # Sum of the first `end` terms of the series 1 - 1/3 + 1/5 - 1/7 + ...
    a = 1
    total = 0
    while a <= end:
        if a % 2 == 0:
            x = -1 / (2 * a - 1)
        else:
            x = 1 / (2 * a - 1)
        total = total + x
        a = a + 1
    return total

n = int(input('Enter the number of terms (e.g. 1000 or 100000): '))
print('4 times the sum of 1 - 1/3 + 1/5 - 1/7 ... is:', 4 * compute_sum(n))
Explanation: Exercise 2: Write a function that returns the sum of the first n terms of 1 - 1/3 + 1/5 - 1/7 .... In the main program, set n = 1000 and 100000 respectively and print 4 times the value returned by the function.
End of explanation
def xingzuo(name, month, date):
    # Determine the zodiac sign from the birth month and day, then greet the user.
    if month == 3:
        sign = 'Aries' if date >= 21 else 'Pisces'
    elif month == 4:
        sign = 'Taurus' if date >= 20 else 'Aries'
    elif month == 5:
        sign = 'Gemini' if date >= 21 else 'Taurus'
    elif month == 6:
        sign = 'Cancer' if date >= 22 else 'Gemini'
    elif month == 7:
        sign = 'Leo' if date >= 23 else 'Cancer'
    elif month == 8:
        sign = 'Virgo' if date >= 23 else 'Leo'
    elif month == 9:
        sign = 'Libra' if date >= 23 else 'Virgo'
    elif month == 10:
        sign = 'Scorpio' if date >= 24 else 'Libra'
    elif month == 11:
        sign = 'Sagittarius' if date >= 23 else 'Scorpio'
    elif month == 12:
        sign = 'Capricorn' if date >= 22 else 'Sagittarius'
    elif month == 1:
        sign = 'Aquarius' if date >= 20 else 'Capricorn'
    else:  # month == 2
        sign = 'Pisces' if date >= 19 else 'Aquarius'
    print(name + ', you are a very distinctive ' + sign + '!')

# Main program
name = input('Please enter your name, press Enter to finish: ')
print('Hello, ' + name + '!')
a = int(input('Please enter your birth month, press Enter to finish: '))
b = int(input('Please enter your birth day, press Enter to finish: '))
xingzuo(name, a, b)
Explanation: Exercise 3: Rewrite exercises 1 and 4 from task3 as functions and call them.
Rewritten week-3 exercise 1: Write a program that reads the user's name (e.g. Mr. right) from the keyboard, asks for the birth month and day, and determines the zodiac sign; if the user is a Taurus, it should print "Mr. right, you are a very distinctive Taurus!".
End of explanation
def ciwei(word):
    # Suggest how to form the plural of an English word based on its ending.
    if word.endswith('o'):
        print(word, "- add 'es'")
    elif word.endswith('ch') or word.endswith('sh'):
        print(word, "- add 'es'")
    else:
        print(word, "- add 's'")

# Main program
s = input('Please enter an English word, press Enter to finish: ')
ciwei(s)
Explanation: Rewritten week-3 exercise 4: English singular to plural. Given an English word in singular form, output its plural form or a suggestion on how to pluralise it (hint: some_string.endswith(some_letter) checks how a string ends).
End of explanation
def compute_sum(start, end, step):
    # Sum the integers from start to end (inclusive) with the given step.
    total = 0
    i = start
    while i <= end:
        total += i
        i += step
    return total

# Main program
m = int(input('Please enter the smaller integer: '))
n = int(input('Please enter the larger integer: '))
k = int(input('Please enter the summation step: '))
print(compute_sum(m, n, k))
Explanation: Challenge exercise: Write a program that computes the sum from integer m to integer n with step k. The summation must be implemented as a function; in the main program the user enters m, n and k and the function is called to verify correctness.
End of explanation |
11,465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keypoint Detection with Transfer Learning
Author
Step1: Data collection
The StanfordExtra dataset contains 12,000 images of dogs together with keypoints and
segmentation maps. It is developed from the Stanford dogs dataset.
It can be downloaded with the command below
Step2: Annotations are provided as a single JSON file in the StanfordExtra dataset and one needs
to fill this form to get access to it. The
authors explicitly instruct users not to share the JSON file, and this example respects this wish
Step3: Imports
Step4: Define hyperparameters
Step5: Load data
The authors also provide a metadata file that specifies additional information about the
keypoints, like color information, animal pose name, etc. We will load this file in a pandas
dataframe to extract information for visualization purposes.
Step6: A single entry of json_dict looks like the following
Step7: Visualize data
Now, we write a utility function to visualize the images and their keypoints.
Step8: The plots show that we have images of non-uniform sizes, which is expected in most
real-world scenarios. However, if we resize these images to have a uniform shape (for
instance (224 x 224)) their ground-truth annotations will also be affected. The same
applies if we apply any geometric transformation (horizontal flip, for e.g.) to an image.
Fortunately, imgaug provides utilities that can handle this issue.
In the next section, we will write a data generator inheriting the
keras.utils.Sequence class
that applies data augmentation on batches of data using imgaug.
Prepare data generator
Step9: To know more about how to operate with keypoints in imgaug check out
this document.
Define augmentation transforms
Step10: Create training and validation splits
Step11: Data generator investigation
Step12: Model building
The Stanford dogs dataset (on which
the StanfordExtra dataset is based) was built using the ImageNet-1k dataset.
So, it is likely that the models pretrained on the ImageNet-1k dataset would be useful
for this task. We will use a MobileNetV2 pre-trained on this dataset as a backbone to
extract meaningful features from the images and then pass those to a custom regression
head for predicting coordinates.
Step13: Our custom network is fully-convolutional which makes it more parameter-friendly than the
same version of the network having fully-connected dense layers.
Step14: Notice the output shape of the network
Step15: Make predictions and visualize them | Python Code:
!pip install -q -U imgaug
Explanation: Keypoint Detection with Transfer Learning
Author: Sayak Paul<br>
Date created: 2021/05/02<br>
Last modified: 2021/05/02<br>
Description: Training a keypoint detector with data augmentation and transfer learning.
Keypoint detection consists of locating key object parts. For example, the key parts
of our faces include nose tips, eyebrows, eye corners, and so on. These parts help to
represent the underlying object in a feature-rich manner. Keypoint detection has
applications that include pose estimation, face detection, etc.
In this example, we will build a keypoint detector using the
StanfordExtra dataset,
using transfer learning. This example requires TensorFlow 2.4 or higher,
as well as the imgaug library,
which can be installed using the following command:
End of explanation
!wget -q http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar
Explanation: Data collection
The StanfordExtra dataset contains 12,000 images of dogs together with keypoints and
segmentation maps. It is developed from the Stanford dogs dataset.
It can be downloaded with the command below:
End of explanation
!tar xf images.tar
!unzip -qq ~/stanfordextra_v12.zip
Explanation: Annotations are provided as a single JSON file in the StanfordExtra dataset and one needs
to fill this form to get access to it. The
authors explicitly instruct users not to share the JSON file, and this example respects this wish:
you should obtain the JSON file yourself.
The JSON file is expected to be locally available as stanfordextra_v12.zip.
After the files are downloaded, we can extract the archives.
End of explanation
from tensorflow.keras import layers
from tensorflow import keras
import tensorflow as tf
from imgaug.augmentables.kps import KeypointsOnImage
from imgaug.augmentables.kps import Keypoint
import imgaug.augmenters as iaa
from PIL import Image
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
import json
import os
Explanation: Imports
End of explanation
IMG_SIZE = 224
BATCH_SIZE = 64
EPOCHS = 5
NUM_KEYPOINTS = 24 * 2 # 24 pairs each having x and y coordinates
Explanation: Define hyperparameters
End of explanation
IMG_DIR = "Images"
JSON = "StanfordExtra_V12/StanfordExtra_v12.json"
KEYPOINT_DEF = (
"https://github.com/benjiebob/StanfordExtra/raw/master/keypoint_definitions.csv"
)
# Load the ground-truth annotations.
with open(JSON) as infile:
json_data = json.load(infile)
# Set up a dictionary, mapping all the ground-truth information
# with respect to the path of the image.
json_dict = {i["img_path"]: i for i in json_data}
Explanation: Load data
The authors also provide a metadata file that specifies additional information about the
keypoints, like color information, animal pose name, etc. We will load this file in a pandas
dataframe to extract information for visualization purposes.
End of explanation
# Load the metadata definition file and preview it.
keypoint_def = pd.read_csv(KEYPOINT_DEF)
keypoint_def.head()
# Extract the colours and labels.
colours = keypoint_def["Hex colour"].values.tolist()
colours = ["#" + colour for colour in colours]
labels = keypoint_def["Name"].values.tolist()
# Utility for reading an image and for getting its annotations.
def get_dog(name):
data = json_dict[name]
img_data = plt.imread(os.path.join(IMG_DIR, data["img_path"]))
# If the image is RGBA convert it to RGB.
if img_data.shape[-1] == 4:
img_data = img_data.astype(np.uint8)
img_data = Image.fromarray(img_data)
img_data = np.array(img_data.convert("RGB"))
data["img_data"] = img_data
return data
Explanation: A single entry of json_dict looks like the following:
'n02085782-Japanese_spaniel/n02085782_2886.jpg':
{'img_bbox': [205, 20, 116, 201],
'img_height': 272,
'img_path': 'n02085782-Japanese_spaniel/n02085782_2886.jpg',
'img_width': 350,
'is_multiple_dogs': False,
'joints': [[108.66666666666667, 252.0, 1],
[147.66666666666666, 229.0, 1],
[163.5, 208.5, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[54.0, 244.0, 1],
[77.33333333333333, 225.33333333333334, 1],
[79.0, 196.5, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[150.66666666666666, 86.66666666666667, 1],
[88.66666666666667, 73.0, 1],
[116.0, 106.33333333333333, 1],
[109.0, 123.33333333333333, 1],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0],
[0, 0, 0]],
'seg': ...}
In this example, the keys we are interested in are:
img_path
joints
There are a total of 24 entries present inside joints. Each entry has 3 values:
x-coordinate
y-coordinate
visibility flag of the keypoints (1 indicates visibility and 0 indicates non-visibility)
As we can see joints contain multiple [0, 0, 0] entries which denote that those
keypoints were not labeled. In this example, we will consider both non-visible as well as
unlabeled keypoints in order to allow mini-batch learning.
End of explanation
# Parts of this code come from here:
# https://github.com/benjiebob/StanfordExtra/blob/master/demo.ipynb
def visualize_keypoints(images, keypoints):
fig, axes = plt.subplots(nrows=len(images), ncols=2, figsize=(16, 12))
[ax.axis("off") for ax in np.ravel(axes)]
for (ax_orig, ax_all), image, current_keypoint in zip(axes, images, keypoints):
ax_orig.imshow(image)
ax_all.imshow(image)
# If the keypoints were formed by `imgaug` then the coordinates need
# to be iterated differently.
if isinstance(current_keypoint, KeypointsOnImage):
for idx, kp in enumerate(current_keypoint.keypoints):
ax_all.scatter(
[kp.x], [kp.y], c=colours[idx], marker="x", s=50, linewidths=5
)
else:
current_keypoint = np.array(current_keypoint)
# Since the last entry is the visibility flag, we discard it.
current_keypoint = current_keypoint[:, :2]
for idx, (x, y) in enumerate(current_keypoint):
ax_all.scatter([x], [y], c=colours[idx], marker="x", s=50, linewidths=5)
plt.tight_layout(pad=2.0)
plt.show()
# Select four samples randomly for visualization.
samples = list(json_dict.keys())
num_samples = 4
selected_samples = np.random.choice(samples, num_samples, replace=False)
images, keypoints = [], []
for sample in selected_samples:
data = get_dog(sample)
image = data["img_data"]
keypoint = data["joints"]
images.append(image)
keypoints.append(keypoint)
visualize_keypoints(images, keypoints)
Explanation: Visualize data
Now, we write a utility function to visualize the images and their keypoints.
End of explanation
class KeyPointsDataset(keras.utils.Sequence):
def __init__(self, image_keys, aug, batch_size=BATCH_SIZE, train=True):
self.image_keys = image_keys
self.aug = aug
self.batch_size = batch_size
self.train = train
self.on_epoch_end()
def __len__(self):
return len(self.image_keys) // self.batch_size
def on_epoch_end(self):
self.indexes = np.arange(len(self.image_keys))
if self.train:
np.random.shuffle(self.indexes)
def __getitem__(self, index):
indexes = self.indexes[index * self.batch_size : (index + 1) * self.batch_size]
image_keys_temp = [self.image_keys[k] for k in indexes]
(images, keypoints) = self.__data_generation(image_keys_temp)
return (images, keypoints)
def __data_generation(self, image_keys_temp):
batch_images = np.empty((self.batch_size, IMG_SIZE, IMG_SIZE, 3), dtype="int")
batch_keypoints = np.empty(
(self.batch_size, 1, 1, NUM_KEYPOINTS), dtype="float32"
)
for i, key in enumerate(image_keys_temp):
data = get_dog(key)
current_keypoint = np.array(data["joints"])[:, :2]
kps = []
# To apply our data augmentation pipeline, we first need to
# form Keypoint objects with the original coordinates.
for j in range(0, len(current_keypoint)):
kps.append(Keypoint(x=current_keypoint[j][0], y=current_keypoint[j][1]))
# We then project the original image and its keypoint coordinates.
current_image = data["img_data"]
kps_obj = KeypointsOnImage(kps, shape=current_image.shape)
# Apply the augmentation pipeline.
(new_image, new_kps_obj) = self.aug(image=current_image, keypoints=kps_obj)
batch_images[i,] = new_image
# Parse the coordinates from the new keypoint object.
kp_temp = []
for keypoint in new_kps_obj:
kp_temp.append(np.nan_to_num(keypoint.x))
kp_temp.append(np.nan_to_num(keypoint.y))
# More on why this reshaping later.
batch_keypoints[i,] = np.array(kp_temp).reshape(1, 1, 24 * 2)
# Scale the coordinates to [0, 1] range.
batch_keypoints = batch_keypoints / IMG_SIZE
return (batch_images, batch_keypoints)
Explanation: The plots show that we have images of non-uniform sizes, which is expected in most
real-world scenarios. However, if we resize these images to have a uniform shape (for
instance (224 x 224)) their ground-truth annotations will also be affected. The same
applies if we apply any geometric transformation (horizontal flip, for e.g.) to an image.
Fortunately, imgaug provides utilities that can handle this issue.
In the next section, we will write a data generator inheriting the
keras.utils.Sequence class
that applies data augmentation on batches of data using imgaug.
Prepare data generator
End of explanation
train_aug = iaa.Sequential(
[
iaa.Resize(IMG_SIZE, interpolation="linear"),
iaa.Fliplr(0.3),
# `Sometimes()` applies a function randomly to the inputs with
# a given probability (0.3, in this case).
iaa.Sometimes(0.3, iaa.Affine(rotate=10, scale=(0.5, 0.7))),
]
)
test_aug = iaa.Sequential([iaa.Resize(IMG_SIZE, interpolation="linear")])
Explanation: To know more about how to operate with keypoints in imgaug check out
this document.
Define augmentation transforms
End of explanation
np.random.shuffle(samples)
train_keys, validation_keys = (
samples[int(len(samples) * 0.15) :],
samples[: int(len(samples) * 0.15)],
)
Explanation: Create training and validation splits
End of explanation
train_dataset = KeyPointsDataset(train_keys, train_aug)
validation_dataset = KeyPointsDataset(validation_keys, test_aug, train=False)
print(f"Total batches in training set: {len(train_dataset)}")
print(f"Total batches in validation set: {len(validation_dataset)}")
sample_images, sample_keypoints = next(iter(train_dataset))
assert sample_keypoints.max() == 1.0
assert sample_keypoints.min() == 0.0
sample_keypoints = sample_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE
visualize_keypoints(sample_images[:4], sample_keypoints)
Explanation: Data generator investigation
End of explanation
def get_model():
# Load the pre-trained weights of MobileNetV2 and freeze the weights
backbone = keras.applications.MobileNetV2(
weights="imagenet", include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3)
)
backbone.trainable = False
inputs = layers.Input((IMG_SIZE, IMG_SIZE, 3))
x = keras.applications.mobilenet_v2.preprocess_input(inputs)
x = backbone(x)
x = layers.Dropout(0.3)(x)
x = layers.SeparableConv2D(
NUM_KEYPOINTS, kernel_size=5, strides=1, activation="relu"
)(x)
outputs = layers.SeparableConv2D(
NUM_KEYPOINTS, kernel_size=3, strides=1, activation="sigmoid"
)(x)
return keras.Model(inputs, outputs, name="keypoint_detector")
Explanation: Model building
The Stanford dogs dataset (on which
the StanfordExtra dataset is based) was built using the ImageNet-1k dataset.
So, it is likely that the models pretrained on the ImageNet-1k dataset would be useful
for this task. We will use a MobileNetV2 pre-trained on this dataset as a backbone to
extract meaningful features from the images and then pass those to a custom regression
head for predicting coordinates.
End of explanation
get_model().summary()
Explanation: Our custom network is fully-convolutional which makes it more parameter-friendly than the
same version of the network having fully-connected dense layers.
End of explanation
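One rough way to see the parameter difference claimed above (a sketch only; the dense head sizes below are arbitrary assumptions, so the exact counts depend on them):
# Compare parameter counts of the convolutional head vs. a flattened dense head.
dense_backbone = keras.applications.MobileNetV2(
    weights=None, include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3)
)
inputs = layers.Input((IMG_SIZE, IMG_SIZE, 3))
x = dense_backbone(inputs)
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(NUM_KEYPOINTS, activation="sigmoid")(x)
dense_variant = keras.Model(inputs, outputs)
print("Convolutional head parameters:", get_model().count_params())
print("Dense head parameters:", dense_variant.count_params())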
model = get_model()
model.compile(loss="mse", optimizer=keras.optimizers.Adam(1e-4))
model.fit(train_dataset, validation_data=validation_dataset, epochs=EPOCHS)
Explanation: Notice the output shape of the network: (None, 1, 1, 48). This is why we have reshaped
the coordinates as: batch_keypoints[i, :] = np.array(kp_temp).reshape(1, 1, 24 * 2).
Model compilation and training
For this example, we will train the network only for five epochs.
End of explanation
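A one-line sanity check of that shape claim (optional; model is the network compiled and trained in the previous cell):
# Should print (None, 1, 1, 48): one (1, 1, NUM_KEYPOINTS) vector per image.
print(model.output_shape)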
sample_val_images, sample_val_keypoints = next(iter(validation_dataset))
sample_val_images = sample_val_images[:4]
sample_val_keypoints = sample_val_keypoints[:4].reshape(-1, 24, 2) * IMG_SIZE
predictions = model.predict(sample_val_images).reshape(-1, 24, 2) * IMG_SIZE
# Ground-truth
visualize_keypoints(sample_val_images, sample_val_keypoints)
# Predictions
visualize_keypoints(sample_val_images, predictions)
Explanation: Make predictions and visualize them
End of explanation |
11,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Census Data Correlation
Correlate another table with US Census data. Expands a data set dimensions by finding population segments that correlate with the master table.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter Census Data Correlation Recipe Parameters
Pre-requisite is Census Normalize, run that at least once.
Specify JOIN, PASS, SUM, and CORRELATE columns to build the correlation query.
Define the DATASET and TABLE for the joinable source. Can be a view.
Choose the significance level. More significance usually means more NULL results, balance quantity and quality using this value.
Specify where to write the results.
IMPORTANT
Step3: 4. Execute Census Data Correlation
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Census Data Correlation
Correlate another table with US Census data. Expands a data set dimensions by finding population segments that correlate with the master table.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code was generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth':'service', # Credentials used for writing data.
'join':'', # Name of column to join on, must match Census Geo_Id column.
'pass':[], # Comma seperated list of columns to pass through.
'sum':[], # Comma seperated list of columns to sum, optional.
'correlate':[], # Comma seperated list of percentage columns to correlate.
'from_dataset':'', # Existing BigQuery dataset.
'from_table':'', # Table to use as join data.
'significance':'80', # Select level of significance to test.
'to_dataset':'', # Existing BigQuery dataset.
'type':'table', # Write Census_Percent as table or view.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter Census Data Correlation Recipe Parameters
Pre-requisite is Census Normalize, run that at least once.
Specify JOIN, PASS, SUM, and CORRELATE columns to build the correlation query.
Define the DATASET and TABLE for the joinable source. Can be a view.
Choose the significance level. More significance usually means more NULL results, balance quantity and quality using this value.
Specify where to write the results.
IMPORTANT: If you use VIEWS, you will have to delete them manually if the recipe changes.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'census':{
'auth':{'field':{'name':'auth','kind':'authentication','order':0,'default':'service','description':'Credentials used for writing data.'}},
'correlate':{
'join':{'field':{'name':'join','kind':'string','order':1,'default':'','description':'Name of column to join on, must match Census Geo_Id column.'}},
'pass':{'field':{'name':'pass','kind':'string_list','order':2,'default':[],'description':'Comma seperated list of columns to pass through.'}},
'sum':{'field':{'name':'sum','kind':'string_list','order':3,'default':[],'description':'Comma seperated list of columns to sum, optional.'}},
'correlate':{'field':{'name':'correlate','kind':'string_list','order':4,'default':[],'description':'Comma seperated list of percentage columns to correlate.'}},
'dataset':{'field':{'name':'from_dataset','kind':'string','order':5,'default':'','description':'Existing BigQuery dataset.'}},
'table':{'field':{'name':'from_table','kind':'string','order':6,'default':'','description':'Table to use as join data.'}},
'significance':{'field':{'name':'significance','kind':'choice','order':7,'default':'80','description':'Select level of significance to test.','choices':['80','90','98','99','99.5','99.95']}}
},
'to':{
'dataset':{'field':{'name':'to_dataset','kind':'string','order':9,'default':'','description':'Existing BigQuery dataset.'}},
'type':{'field':{'name':'type','kind':'choice','order':10,'default':'table','description':'Write Census_Percent as table or view.','choices':['table','view']}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute Census Data Correlation
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
11,467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
tsam - 1. Example
Example usage of the time series aggregation module (tsam)
Date
Step1: Input data
Read in time series from testdata.csv with pandas
Step2: Show a slice of the dataset
Step3: Show the shape of the raw input data
Step4: Create a plot function for the temperature for a visual comparison of the time series
Step5: Plot an example series - in this case the temperature
Step6: Simple k-mean aggregation
Initialize an aggregation class object with k-mean as method for eight typical days, without any integration of extreme periods. Alternative clusterMethod's are 'averaging','hierarchical' and 'k_medoids'.
Step7: Create the typical periods
Step8: Show shape of typical periods
Step9: Save typical periods to .csv file
Step10: Repredict the original time series based on the typical periods
Step11: Plot the repredicted data
Step12: As seen, they days with the minimal temperature are excluded. In case that they are required they can be added to the aggregation as follow.
Hierarchical aggregation including extreme periods
Initialize a time series aggregation which integrates the day with the minimal temperature and the day with the maximal load as periods.
Step13: Create the typical periods
Step14: The aggregation can also be evaluated by indicators
Step15: Save typical periods to .csv file
Step16: Repredict the original time series based on the typical periods
Step17: Plot repredicted data
Step18: Now also the days with the minimal temperature are integrated into the typical periods.
Comparison of the aggregations
It was shown for the temperature, but both times all four time series have been aggregated. Therefore, we compare here also the duration curves of the electrical load for the original time series, the aggregation with k-mean, and the hierarchical aggregation including peak periods.
Step19: Or as unsorted time series for an example week | Python Code:
%load_ext autoreload
%autoreload 2
import copy
import os
import pandas as pd
import matplotlib.pyplot as plt
import tsam.timeseriesaggregation as tsam
%matplotlib inline
Explanation: tsam - 1. Example
Example usage of the time series aggregation module (tsam)
Date: 08.05.2017
Author: Leander Kotzur
Import pandas and the relevant time series aggregation class
End of explanation
raw = pd.read_csv('testdata.csv', index_col = 0)
Explanation: Input data
Read in time series from testdata.csv with pandas
End of explanation
raw.head()
Explanation: Show a slice of the dataset
End of explanation
raw.shape
Explanation: Show the shape of the raw input data: 4 types of timeseries (GHI, Temperature, Wind and Load) for every hour in a year
End of explanation
def plotTS(data, periodlength, vmin, vmax):
fig, axes = plt.subplots(figsize = [6, 2], dpi = 100, nrows = 1, ncols = 1)
stacked, timeindex = tsam.unstackToPeriods(copy.deepcopy(data), periodlength)
cax = axes.imshow(stacked.values.T, interpolation = 'nearest', vmin = vmin, vmax = vmax)
axes.set_aspect('auto')
axes.set_ylabel('Hour')
plt.xlabel('Day')
fig.subplots_adjust(right = 1.2)
cbar=plt.colorbar(cax)
cbar.set_label('T [°C]')
Explanation: Create a plot function for the temperature for a visual comparison of the time series
End of explanation
plotTS(raw['T'], 24, vmin = raw['T'].min(), vmax = raw['T'].max())
Explanation: Plot an example series - in this case the temperature
End of explanation
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24,
clusterMethod = 'k_means')
Explanation: Simple k-means aggregation
Initialize an aggregation class object with k-means as the method for eight typical days, without any integration of extreme periods. Alternative clusterMethod values are 'averaging', 'hierarchical' and 'k_medoids'.
End of explanation
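For comparison (an optional sketch with the same call signature as above), any of the alternative cluster methods can be selected the same way, for example:
# Hierarchical clustering with otherwise identical settings (illustration only).
aggregation_hier = tsam.TimeSeriesAggregation(
    raw, noTypicalPeriods=8, hoursPerPeriod=24, clusterMethod='hierarchical')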
typPeriods = aggregation.createTypicalPeriods()
Explanation: Create the typical periods
End of explanation
typPeriods.shape
Explanation: Show shape of typical periods: 4 types of timeseries for 8*24 hours
End of explanation
typPeriods.to_csv(os.path.join('results','testperiods_kmeans.csv'))
Explanation: Save typical periods to .csv file
End of explanation
predictedPeriods = aggregation.predictOriginalData()
Explanation: Repredict the original time series based on the typical periods
End of explanation
plotTS(predictedPeriods['T'], 24, vmin = raw['T'].min(), vmax = raw['T'].max())
Explanation: Plot the repredicted data
End of explanation
aggregation = tsam.TimeSeriesAggregation(raw, noTypicalPeriods = 8, hoursPerPeriod = 24,
clusterMethod = 'hierarchical',
extremePeriodMethod = 'new_cluster_center',
addPeakMin = ['T'], addPeakMax = ['Load'] )
Explanation: As seen, they days with the minimal temperature are excluded. In case that they are required they can be added to the aggregation as follow.
Hierarchical aggregation including extreme periods
Initialize a time series aggregation which integrates the day with the minimal temperature and the day with the maximal load as periods.
End of explanation
typPeriods = aggregation.createTypicalPeriods()
Explanation: Create the typical periods
End of explanation
aggregation.accuracyIndicators()
Explanation: The aggregation can also be evaluated by indicators
End of explanation
typPeriods.to_csv(os.path.join('results','testperiods_hierarchical.csv'))
Explanation: Save typical periods to .csv file
End of explanation
predictedPeriodsWithEx = aggregation.predictOriginalData()
Explanation: Repredict the original time series based on the typical periods
End of explanation
plotTS(predictedPeriodsWithEx['T'], 24, vmin = raw['T'].min(), vmax = raw['T'].max())
Explanation: Plot repredicted data
End of explanation
fig, axes = plt.subplots(figsize = [6, 2], dpi = 100, nrows = 1, ncols = 1)
raw['Load'].sort_values(ascending=False).reset_index(drop=True).plot(label = 'Original')
predictedPeriods['Load'].sort_values(ascending=False).reset_index(drop=True).plot(label = '8 typ days')
predictedPeriodsWithEx['Load'].sort_values(
ascending=False).reset_index(drop=True).plot(label = '8 typ days \n + peak period')
plt.legend()
plt.xlabel('Hours [h]')
plt.ylabel('Duration Load [MW]')
Explanation: Now also the days with the minimal temperature are integrated into the typical periods.
Comparison of the aggregations
It was shown for the temperature, but both times all four time series have been aggregated. Therefore, we compare here also the duration curves of the electrical load for the original time series, the aggregation with k-mean, and the hierarchical aggregation including peak periods.
End of explanation
fig, axes = plt.subplots(figsize = [6, 2], dpi = 100, nrows = 1, ncols = 1)
raw['Load']['20100210':'20100218'].plot(label = 'Original')
predictedPeriods['Load']['20100210':'20100218'].plot(label = '8 typ days')
predictedPeriodsWithEx['Load']['20100210':'20100218'].plot(label = '8 typ days \n + peak period')
plt.legend()
plt.ylabel('Load [MW]')
Explanation: Or as unsorted time series for an example week
End of explanation |
11,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: JFreeChart Versions - Diving into Differences and Similarities
This notebook reports code and results of the analysis conducted on the two versions of the JFreeChart software system included in the dataset (i.e. $0.6.0$ and $0.7.1$).
The goal of the analysis is to provide insights on the differences and/or similarities between the methods included in both systems, along with some considerations on the coherence of methods included only in one of the two.
Utilities
Utility functions used throughout this notebook.
Feel free to skip to the <a href="#analysis">Analysis Section</a> directly.
Preamble
Step6: Text Processing Functions (for code and comments)
Step10: Pipeline Process
Helper class that aids the creation and the application of multiple text processing functions
Step13: Function to Analyse Coherence
Step16: Functions to gather Data from the DB
Step18: Utility function to test processing results and check what's going on
Step19: <a name="analysis"></a>
Analysis
<a name="nav"></a>
<a href="#data">Load Data</a>
<a href="#common_methods">Analysis of Methods in Common</a> (<a href="#comments_stats">Stats</a>)
<a href="#analysis_comment">Analysis of Comments</a>
<a href="#same_comment">Methods with Same Comment</a>
<a href="#tw01">Take Away No. 1</a>
<a href="#diff_comment">Methods with Different Comment</a>
<a href="#tw02">Take Away No. 2</a>
<a href="#analysis_code">Analysis of Implementation</a>
<a href="#scomm_code">Methods with Same Comment</a>
<a href="#tw03">Take Away No. 3</a>
<a href="#dcomm_code">Methods with Different Comment</a>
<a href="#tw04">Take Away No. 4</a>
<a href="#analysis_coherence">Analysis of Coherence</a>
<a href="#scomm_scode_coherence">Methods with Same Comment and Code</a>
<a href="#scomm_dcode_coherence">Methods with Same Comment, Different Code</a>
<a href="#tw05">Take Away No. 5</a>
<a href="#dcomm_scode_coherence">Methods with Different Comment, Same Code</a>
<a href="#tw06">Take Away No. 6</a>
<a href="#tw07">Take Away No. 7</a>
<a href="#dcomm_dcode_coherence">Methods with Different Comment and Code</a>
<a href="#tw08">Take Away No. 8</a>
<a href="#tw09">Take Away No. 9</a>
<a href="#difference_methods">Analysis of Methods NOT in Common</a>
<a href="#diff_jf060">Coherence of Methods in JFreeChart 0.6.0</a>
<a href="#diff_jf071">Coherence of Methods in JFreeChart 0.7.1</a>
<a href="#match">(Likely) Modified Methods from JFreeChart 0.6.0 to 0.7.1</a>
<a href="#summary">Summary</a>
<a name="data"></a>
Load Data
Step20: Total Stats (starting point)
Step21: <a name="common_methods"></a> <a href="#nav">Back to top</a>
Common Methods
Step22: <a name="analysis_comment"></a> <a href="#nav">Back to top</a>
Analysis of the Comments
Step23: <a name="comments_stats"></a> <a href="#nav">Back to top</a>
Stats
Step24: <a name="same_comment"></a> <a href="#nav">Back to top</a>
Methods in Common with the Same Comment
Step25: <a name="tw01"></a> <a href="#nav">Back to top</a>
Take Away No. 1
From the total set of $283$ methods in common between JFreeChart 0.6.0 and 0.7.1,
$257$ ($\approx 91\%$) have no significant differences in comments (differences limited only to text formattings and punctuaction characters)
Hence this changes do not affect the coherence evaluation in any way.
<a name="diff_comment"></a> <a href="#nav">Back to top</a>
Methods in Common with Different Comments
Step26: <a name="tw02"></a> <a href="#nav">Back to top</a>
Take Away No. 2
Step27: <a name="tw03"></a> <a href="#nav">Back to top</a>
Take Away No. 3
Step28: <a name="tw04"></a> <a href="#nav">Back to top</a>
Take Away No. 4
Step29: Result
No differences occur in the evaluation of coherences for common methods sharing the same code and comments (as expected)
<a name="scomm_dcode_coherence"></a> <a href="#nav">Back to top</a>
Coherence of Methods with Same Comments but Different Code
Step30: <a name="tw05"></a> <a href="#nav">Back to top</a>
Take Away No. 5
Step31: <a name="tw06"></a> <a href="#nav">Back to top</a>
Take Away No. 6
Step32: <a name="tw07"></a> <a href="#nav">Back to top</a>
Take Away No. 7
Step33: Pick some examples from those methods who share the same coherence evaluations
Step34: <a name="tw08"></a> <a href="#nav">Back to top</a>
Take Away No. 8
Step35: <a name="tw09"></a> <a href="#nav">Back to top</a>
Take Away No. 9
Step36: Percentage of Coherente / Not Coherent Methods
Analyse how many of the methods in the difference between the two versions are Coherent or Not Coherent
<a name="diff_jf060"></a> <a href="#nav">Back to top</a>
JFreeChart 0.6.0
Step37: <a name="diff_jf071"></a> <a href="#nav">Back to top</a>
JFreeChart 0.7.1
Step39: <a name="match"></a> <a href="#nav">Back to top</a>
(Likely) Modified Methods from JFreeChart 0.6.0 to 0.7.1
In this section, the main purpose of the analysis is to try to check some possible matches between methods not in JFreeChart 0.6.0 but present in JFreeChart 0.7.1.
The main idea is that we would like to find (or guess) all those methods whose signature has been changed/updated thus not appearing in the set of Common Methods. | Python Code:
# %load preamble_directives.py
Some imports and path settings to make notebook code
running smoothly.
# Author: Valerio Maggio <[email protected]>
# Copyright (c) 2015 Valerio Maggio <[email protected]>
# License: BSD 3 clause
import sys, os
# Extending PYTHONPATH to allow relative import!
sys.path.append(os.path.join(os.path.abspath(os.path.curdir), '..'))
# Import Django Settings
from django.conf import settings
# Import Comments_Classification (Django) Project Settings
from comments_classification import settings as comments_classification_settings
try:
settings.configure(**comments_classification_settings.__dict__)
except RuntimeError:
# settings already configured
pass
# ---------------------
# Module Import Section
# ---------------------
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: JFreeChart Versions - Diving into Differences and Similarities
This notebook reports code and results of the analysis conducted on the two versions of the JFreeChart software system included in the dataset (i.e. $0.6.0$ and $0.7.1$).
The goal of the analysis is to provide insights on the differences and/or similarities between the methods included in both systems, along with some considerations on the coherence of methods included only in one of the two.
Utilities
Utility functions used throughout this notebook.
Feel free to skip to the <a href="#analysis">Analysis Section</a> directly.
Preamble
End of explanation
import re
def strip_tags(text):
Strips all HTML tags from text
HTML_TAG_RE = re.compile(r'<[^>]+>')
return HTML_TAG_RE.sub('', text)
from string import punctuation
def strip_punctuations(text, allowed='@'):
Strips all the punctuation character from the input comment.
Parameters
----------
text : str
The input text to process
allowed : str, optional.
The list of punctuation characters to exclude from the processing
(default is '@' as for JavaDoc Comments).
black_list = punctuation
for c in allowed:
black_list = black_list.replace(c, '')
for p in black_list:
text = text.replace(p, ' ')
return '\n'.join(' '.join(w for w in line.split() if len(w)) for line in text.splitlines() if len(line))
def normalise_lines(text):
Removes additional (trailing) spaces from lines of the given text and
returns it normalised (no extra spaces)
return '\n'.join(l.strip() for l in text.splitlines() if len(l.strip()))
def extract_locs(code_fragment):
Normalise and returns the lines of code in the input fragment.
locs = ' '.join(l.strip() for st in code_fragment.split(';') for l in st.splitlines()
if len(l.strip()) and len(strip_punctuations(l.strip())))
return locs
Explanation: Text Processing Functions (for code and comments)
End of explanation
class Pipeline(object):
Implements a simple linear pipeline process
def __init__(self, *callables):
Creates a new Pipeline of processes (i.e. `callables`.)
Each of this callable, must always return a value as it
will represent the new `data` parameter passed through
the pipeline till finally returned.
Parameters
----------
callables : list
A list of callables (e.g. functions) of arity one.
This list constitutes the set of filters of the pipeline.
self._filters = list(callables)
@property
def filters(self):
return self._filters
def __iadd__(self, process):
if not callable(process):
raise ValueError('The value should be a callable')
from inspect import signature
sig_process = signature(process)
if len(sig_process.parameters) != 1:
raise ValueError('The input function must have arity one!')
self._filters.append(process)
return self
def process(self, data):
Execute the pipeline
for callable in self._filters:
data = callable(data)
return data
Explanation: Pipeline Process
Helper class that aids the creation and the application of multiple text processing functions
End of explanation
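A small usage sketch of this helper (illustration only; it simply chains the text-processing functions defined in the previous cells, and the sample comment is made up):
# Build a cleaning pipeline: drop HTML tags, strip punctuation, normalise lines.
cleaner = Pipeline(strip_tags, strip_punctuations, normalise_lines)
sample_comment = "<p>Returns the <b>chart title</b>.</p>\n@return the title."
print(cleaner.process(sample_comment))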
NOT_EVALUATED = -1
DONT_KNOW = 2
FURTHER_EVAL = 5
AGREEMENT = 3
STRONG_AGREEMENT = 4
def is_coherent(method):
Return wheter or not comment is coherent with its method implementation according to judges evaluations.
Parameters
----------
method : `source_code_analysis.models.CodeMethod`
Instance of a `CodeMethod` model holding the reference to its corresponding evaluations.
Returns
-------
bool :
True if the evaluation (the first one retrieved from the db) corresponds to an
AGREEMENT | STRONG_AGREEMENT value.
return (method.agreement_evaluations.last().agreement_vote in (AGREEMENT, STRONG_AGREEMENT))
def has_agreement_evaluations(method):
Check that input methods has agreement evaluations interesting for the current analysis
(i.e. different from DONT_KNOW).
Parameters
----------
method : `source_code_analysis.models.CodeMethod`
Instance of a `CodeMethod` model holding the reference to its corresponding evaluations.
Returns
-------
bool :
True if the evaluation (the first one retrieved from the db) does **not**
correspond to a DONT_KNOW value.
return (method.agreement_evaluations.last().agreement_vote not in (NOT_EVALUATED,
DONT_KNOW, FURTHER_EVAL))
Explanation: Function to Analyse Coherence
End of explanation
def signature(code_fragment):
Returns the signature of a method extracted from the input code fragment.
Parameters
----------
code_fragment : str
The implementation code of a method (i.e. method.code_fragment attribute)
Returns
-------
str :
The signature string of the method
first_line = code_fragment[:code_fragment.find('{')]
return ' '.join([l.strip() for l in first_line.splitlines() if len(l.strip())])
def gather_all_methods(sw_project):
Gather all methods for the input software project
Parameters
----------
sw_project : `source_code_analysis.models.SoftwareProject`
Target Software Project
Returns
-------
dict :
A dictionary mapping all methods with its unique key.
This key is extremely important to correctly identify similarities among
multiple versions of the same software.
In more details, the key for a single *method* is defined by the following triple:
* Name of the Source File
* Name of the Class
* Signature of the method
# gather all methods
methods = filter(has_agreement_evaluations, sw_project.code_methods.all())
# create the map
methods_map = dict()
for method in methods:
key = '{}{}{}'.format(method.code_class.src_filename,
method.code_class.class_name,
signature(method.code_fragment))
if not key in methods_map:
methods_map[key] = method
else:
print('Key already present: ')
print('Current method ID: ', method.id)
print('Already present method: ', methods_map[key].id)
# return map
return methods_map
Explanation: Functions to gather Data from the DB
End of explanation
def randomly_pick_a_method_from(list_of_methods_keys, with_code=False, with_coherence=False):
Randomly pick a method from the input collection of keys and print the
corresponding lead comments. If `with_code` parameter is provided,
the code_fragment is printed as well.
Moreover, if the `with_coherence` parameter is True, the corresponding coherence evaluation is
reported in the output, as well.
from random import choice
random_key = choice(list_of_methods_keys)
method_in_jf060 = jf060_methods[random_key]
method_in_jf071 = jf071_methods[random_key]
print('='*80)
print('Method in JFreeChart 0.6.0', end=' ')
if with_coherence:
print('Is Coherent: ', is_coherent(method_in_jf060))
else:
print('')
print(method_in_jf060.comment)
if with_code:
print('')
print(method_in_jf060.code_fragment)
print('\n\n')
print('Method in JFreeChart 0.7.1', end=' ')
if with_coherence:
print('Is Coherent: ', is_coherent(method_in_jf071))
else:
print('')
print(method_in_jf071.comment)
if with_code:
print('')
print(method_in_jf071.code_fragment)
print('='*80, end='\n\n')
Explanation: Utility function to test processing results and check what's going on
End of explanation
from source_code_analysis.models import SoftwareProject
jfreechart_060 = SoftwareProject.objects.get(name__iexact='JFreeChart', version='0.6.0')
jfreechart_071 = SoftwareProject.objects.get(name__iexact='JFreeChart', version='0.7.1')
jf060_methods = gather_all_methods(jfreechart_060)
jf071_methods = gather_all_methods(jfreechart_071)
Explanation: <a name="analysis"></a>
Analysis
<a name="nav"></a>
<a href="#data">Load Data</a>
<a href="#common_methods">Analysis of Methods in Common</a> (<a href="#comments_stats">Stats</a>)
<a href="#analysis_comment">Analysis of Comments</a>
<a href="#same_comment">Methods with Same Comment</a>
<a href="#tw01">Take Away No. 1</a>
<a href="#diff_comment">Methods with Different Comment</a>
<a href="#tw02">Take Away No. 2</a>
<a href="#analysis_code">Analysis of Implementation</a>
<a href="#scomm_code">Methods with Same Comment</a>
<a href="#tw03">Take Away No. 3</a>
<a href="#dcomm_code">Methods with Different Comment</a>
<a href="#tw04">Take Away No. 4</a>
<a href="#analysis_coherence">Analysis of Coherence</a>
<a href="#scomm_scode_coherence">Methods with Same Comment and Code</a>
<a href="#scomm_dcode_coherence">Methods with Same Comment, Different Code</a>
<a href="#tw05">Take Away No. 5</a>
<a href="#dcomm_scode_coherence">Methods with Different Comment, Same Code</a>
<a href="#tw06">Take Away No. 6</a>
<a href="#tw07">Take Away No. 7</a>
<a href="#dcomm_dcode_coherence">Methods with Different Comment and Code</a>
<a href="#tw08">Take Away No. 8</a>
<a href="#tw09">Take Away No. 9</a>
<a href="#difference_methods">Analysis of Methods NOT in Common</a>
<a href="#diff_jf060">Coherence of Methods in JFreeChart 0.6.0</a>
<a href="#diff_jf071">Coherence of Methods in JFreeChart 0.7.1</a>
<a href="#match">(Likely) Modified Methods from JFreeChart 0.6.0 to 0.7.1</a>
<a href="#summary">Summary</a>
<a name="data"></a>
Load Data
End of explanation
print('Total No. of Methods in JFreeChart 0.6.0: ', len(jf060_methods))
print('Total No. of Methods in JFreeChart 0.7.1: ', len(jf071_methods))
Explanation: Total Stats (starting point)
End of explanation
# Set of all the Keys in common between the two considered versions
methods_in_common = set(jf060_methods.keys()).intersection(set(jf071_methods.keys()))
print('Total Methods in Common: ', len(methods_in_common))
Explanation: <a name="common_methods"></a> <a href="#nav">Back to top</a>
Common Methods
End of explanation
# Set the Pipelines
comment_pipeline = Pipeline(strip_tags, strip_punctuations, normalise_lines)
# List to store the references to methods
# sharing (or not) the same comment between the two versions
same_comment = list()
different_comment = list()
for mkey in methods_in_common:
comment_in_060 = comment_pipeline.process(jf060_methods[mkey].comment)
comment_in_071 = comment_pipeline.process(jf071_methods[mkey].comment)
if comment_in_060 == comment_in_071:
same_comment.append(mkey)
else:
different_comment.append(mkey)
Explanation: <a name="analysis_comment"></a> <a href="#nav">Back to top</a>
Analysis of the Comments
End of explanation
print('Total Number of Methods in Common: ', len(methods_in_common))
print('\t No. of those Sharing the Same Comment: ', len(same_comment))
print('\t No. of those With Differences in Comment: ', len(different_comment))
Explanation: <a name="comments_stats"></a> <a href="#nav">Back to top</a>
Stats
End of explanation
# Test: get a random key and check that they actually share the same comments
# regardless of formatting (e.g. trailing spaces)
randomly_pick_a_method_from(same_comment) # Test 1
randomly_pick_a_method_from(same_comment) # Test 2
randomly_pick_a_method_from(same_comment) # Test 3
randomly_pick_a_method_from(same_comment) # Test 4
Explanation: <a name="same_comment"></a> <a href="#nav">Back to top</a>
Methods in Common with the Same Comment
End of explanation
# Qualitative Analysis (preliminary)
randomly_pick_a_method_from(different_comment) # Test 1
randomly_pick_a_method_from(different_comment) # Test 2
randomly_pick_a_method_from(different_comment) # Test 3
randomly_pick_a_method_from(different_comment) # Test 4
Explanation: <a name="tw01"></a> <a href="#nav">Back to top</a>
Take Away No. 1
From the total set of $283$ methods in common between JFreeChart 0.6.0 and 0.7.1,
$257$ ($\approx 91\%$) have no significant differences in comments (differences limited only to text formatting and punctuation characters).
Hence these changes do not affect the coherence evaluation in any way.
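As a quick check of the quoted share: $257 / 283 \approx 0.908$, i.e. roughly $91\%$.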
<a name="diff_comment"></a> <a href="#nav">Back to top</a>
Methods in Common with Different Comments
End of explanation
methods_with_same_comments = same_comment # code readability purposes
# Set the Pipeline
code_pipeline = Pipeline(extract_locs)
same_comment_and_code = list()
same_comment_different_code = list()
for mkey in methods_with_same_comments:
code_in_060 = code_pipeline.process(jf060_methods[mkey].code_fragment)
code_in_071 = code_pipeline.process(jf071_methods[mkey].code_fragment)
if code_in_060 == code_in_071:
same_comment_and_code.append(mkey)
else:
same_comment_different_code.append(mkey)
print('Total Number of Common Methods with the same comments: ', len(methods_with_same_comments))
print('\t No. of those Sharing the Same Code: ', len(same_comment_and_code))
print('\t No. of those With Differences in Code: ', len(same_comment_different_code))
Explanation: <a name="tw02"></a> <a href="#nav">Back to top</a>
Take Away No. 2:
At first glance, considering the comments of the $26$ methods with different comments, the differences appear quite reasonable (i.e., not limited only to layout and formatting).
<a name="analysis_code"></a> <a href="#nav">Back to top</a>
Analysis of the Implementation
<a name="scomm_code"></a> <a href="#nav">Back to top</a>
Analysis of the Implementation for Methods with the Same Comment
End of explanation
methods_with_different_comments = different_comment # code readability purposes
# Set the Pipeline
code_pipeline = Pipeline(extract_locs)
different_comment_same_code = list()
different_comment_and_code = list()
for mkey in methods_with_different_comments:
code_in_060 = code_pipeline.process(jf060_methods[mkey].code_fragment)
code_in_071 = code_pipeline.process(jf071_methods[mkey].code_fragment)
if code_in_060 == code_in_071:
different_comment_same_code.append(mkey)
else:
different_comment_and_code.append(mkey)
print('Total Number of Common Methods with different comments: ', len(methods_with_different_comments))
print('\t No. of those Sharing the Same Code: ', len(different_comment_same_code))
print('\t No. of those With Differences in Code: ', len(different_comment_and_code))
Explanation: <a name="tw03"></a> <a href="#nav">Back to top</a>
Take Away No. 3:
Among the $257$ methods in common between the two versions that share the same comment:
$225$ methods ($\approx 88\%$) have no difference in the implementation
$32$ methods have changes in code (but not in comments)
<a name="dcomm_code"></a> <a href="#nav">Back to top</a>
Analysis of the Implementation for Methods with Different Comments
End of explanation
same_coherence = list()
coherence_changed = list()
for mkey in same_comment_and_code:
mth_in_060 = jf060_methods[mkey]
mth_in_071 = jf071_methods[mkey]
if is_coherent(mth_in_060) == is_coherent(mth_in_071):
same_coherence.append(mkey)
else:
coherence_changed.append(mkey)
print('Total Number of Methods Sharing the same Lead Comment and Code')
print('\t Same Coherence: ', len(same_coherence))
print('\t Different Coherence', len(coherence_changed))
Explanation: <a name="tw04"></a> <a href="#nav">Back to top</a>
Take Away No. 4:
Among the $26$ methods in common between the two versions that have differences in their lead comments:
$16$ methods ($\approx 62\%$) have no difference in the implementation
$10$ methods have changes in code (as well as in comments)
<a name="analysis_coherence"></a> <a href="#nav">Back to top</a>
Analysis of the Coherence
<a name="scomm_scode_coherence"></a> <a href="#nav">Back to top</a>
Coherence of Methods with Same Comments and Code
End of explanation
same_coherence = list()
coherence_changed = list()
for mkey in same_comment_different_code:
mth_in_060 = jf060_methods[mkey]
mth_in_071 = jf071_methods[mkey]
if is_coherent(mth_in_060) == is_coherent(mth_in_071):
same_coherence.append(mkey)
else:
coherence_changed.append(mkey)
print('Total Number of Methods sharing the same lead comment but have different code', len(same_comment_different_code))
print('\t Same Coherence: ', len(same_coherence))
print('\t Different Coherence', len(coherence_changed))
# Try to spot some insights
randomly_pick_a_method_from(same_comment_different_code, with_code=True, with_coherence=True) # Method 1
randomly_pick_a_method_from(same_comment_different_code, with_code=True, with_coherence=True) # Method 2
randomly_pick_a_method_from(same_comment_different_code, with_code=True, with_coherence=True) # Method 3
randomly_pick_a_method_from(same_comment_different_code, with_code=True, with_coherence=True) # Method 4
Explanation: Result
No differences occur in the coherence evaluation for common methods sharing the same code and comments (as expected).
<a name="scomm_dcode_coherence"></a> <a href="#nav">Back to top</a>
Coherence of Methods with Same Comments but Different Code
End of explanation
same_coherence = list()
coherence_changed = list()
for mkey in different_comment_same_code:
mth_in_060 = jf060_methods[mkey]
mth_in_071 = jf071_methods[mkey]
if is_coherent(mth_in_060) == is_coherent(mth_in_071):
same_coherence.append(mkey)
else:
coherence_changed.append(mkey)
print('Total Number of Methods having different comment but the same code', len(different_comment_same_code))
print('\t Same Coherence: ', len(same_coherence))
print('\t Different Coherence', len(coherence_changed))
# Try to spot some insights
randomly_pick_a_method_from(coherence_changed, with_code=True, with_coherence=True) # Method 1
randomly_pick_a_method_from(coherence_changed, with_code=True, with_coherence=True) # Method 2
Explanation: <a name="tw05"></a> <a href="#nav">Back to top</a>
Take Away No. 5:
Among the methods sharing the same lead comments but having differences in their implementations ($32$ in total),
none of them show differences in the coherence evaluation.
This means that the differences in the implementation were limited to syntactic constructs and variable names.
<a name="dcomm_scode_coherence"></a> <a href="#nav">Back to top</a>
Coherence of Methods with Different Comments but the Same Code
End of explanation
# Try to spot some insights
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 1
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 2
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 3
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 4
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 5
Explanation: <a name="tw06"></a> <a href="#nav">Back to top</a>
Take Away No. 6:
This is interesting.
There are just 2 cases in which differences in comments (and not in implementation) lead to different coherence evaluations.
As a matter of fact, the changes in the lead comments for methods gathered from JFreeChart 0.7.1 are not consistent
with the corresponding code. For instance, in the latter case,
the parameters listed in the Javadoc @param annotations do not all match the corresponding method signature.
This phenomenon is likely due to refactoring changes (i.e. renaming of variables) not being reflected in the comment.
Pick some examples from those methods who share the same coherence evaluations (instead)
End of explanation
same_coherence = list()
coherence_changed = list()
for mkey in different_comment_and_code:
mth_in_060 = jf060_methods[mkey]
mth_in_071 = jf071_methods[mkey]
if is_coherent(mth_in_060) == is_coherent(mth_in_071):
same_coherence.append(mkey)
else:
coherence_changed.append(mkey)
print('Total Number of Methods having different comment and code', len(different_comment_and_code))
print('\t Same Coherence: ', len(same_coherence))
print('\t Different Coherence', len(coherence_changed))
Explanation: <a name="tw07"></a> <a href="#nav">Back to top</a>
Take Away No. 7:
This is also interesting.
In all other cases of common methods having differences in the lead comment but not in the implementation, it seems that the corresponding coherence is not affected.
As a matter of fact, all the changes and differences in the lead comments for methods gathered from JFreeChart 0.7.1
are limited to Javadoc syntax adjustments, typo corrections, and revisions of parameter and method descriptions.
<a name="dcomm_dcode_coherence"></a> <a href="#nav">Back to top</a>
Coherence of Methods with Different Comments and Code
End of explanation
# Try to spot some insights
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 1
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 2
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 3
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 4
randomly_pick_a_method_from(same_coherence, with_code=True, with_coherence=True) # Method 5
Explanation: Pick some examples from those methods who share the same coherence evaluations
End of explanation
# Try to spot some insights
randomly_pick_a_method_from(coherence_changed, with_code=True, with_coherence=True) # Method 1
Explanation: <a name="tw08"></a> <a href="#nav">Back to top</a>
Take Away No. 8:
This is interesting as well!
Among the $10$ methods in common between the two versions that have differences in code and comments, $9$ of them have the same evaluation of coherence.
This phenomenon reflects the fact that in these $9$ cases, code and comments have been updated accordingly!
However, if we look at the totality of common methods, this number is strikingly small!
In more detail, it is $9$ methods out of $58$ ($\approx 16\%$), where $58$ corresponds to the $283$ methods in common minus those with no real difference in code and comments ($225$, the majority of them).
Pick the ONLY example where differences in code and comments have reflected changes in the coherence evaluation
End of explanation
methods_in_060 = set(jf060_methods.keys())
methods_in_071 = set(jf071_methods.keys())
# Set of Methods in 0.6.0 and not in 0.7.1
methods_in_060_not_in_071 = methods_in_060.difference(methods_in_071)
print('Total No. of Methods in JfreeChart 0.6.0 and NOT in 0.7.1: ', len(methods_in_060_not_in_071))
print('-'*80)
# Set of Methods in 0.7.1 and not in 0.6.0
methods_in_071_not_in_060 = methods_in_071.difference(methods_in_060)
print('Total No. of Methods in JfreeChart 0.7.1 and NOT in 0.6.0: ', len(methods_in_071_not_in_060))
Explanation: <a name="tw09"></a> <a href="#nav">Back to top</a>
Take Away No. 9:
This is interesting (and correct)!
In the only case (out of 10) where there is a difference in the coherence evaluation between the two versions of a common method, the corresponding lead comment and implementation have been updated accordingly!
In fact, while there was no coherence for the method extracted from JFreeChart 0.6.0, there is for the one from JFreeChart 0.7.1.
<a name="difference_methods"></a> <a href="#nav">Back to top</a>
Analysis of Methods NOT in Common
End of explanation
coherent = list()
not_coherent = list()
for mkey in methods_in_060_not_in_071:
method = jf060_methods[mkey]
if is_coherent(method):
coherent.append(mkey)
else:
not_coherent.append(mkey)
print('Total Number of Methods in JFreeChart 0.6.0 but NOT in 0.7.1:', len(methods_in_060_not_in_071))
print('\t Coherent: ', len(coherent))
print('\t Not Coherent', len(not_coherent))
Explanation: Percentage of Coherent / Not Coherent Methods
Analyse how many of the methods in the difference between the two versions are Coherent or Not Coherent
<a name="diff_jf060"></a> <a href="#nav">Back to top</a>
JFreeChart 0.6.0
End of explanation
coherent = list()
not_coherent = list()
for mkey in methods_in_071_not_in_060:
method = jf071_methods[mkey]
if is_coherent(method):
coherent.append(mkey)
else:
not_coherent.append(mkey)
print('Total Number of Methods in JFreeChart 0.7.1 but NOT in 0.6.0:', len(methods_in_071_not_in_060))
print('\t Coherent: ', len(coherent))
print('\t Not Coherent', len(not_coherent))
Explanation: <a name="diff_jf071"></a> <a href="#nav">Back to top</a>
JFreeChart 0.7.1
End of explanation
# Analyse the differences: Try to guess if there is some method that has been CHANGED between the two versions
from collections import defaultdict
associations_map = defaultdict(list)
def match(method_key, target_class, target_file, target_signature_stub):
We try to infer a possible matching between two methods if they share:
- the same class name
- the same source file name
- their signature starts with the same _stub_
In particular, the `target_signature_stub` corresponds to the
first part of the signature, up to the first open parenthesis, i.e. "("
mth = jf060_methods[method_key]
return (method_key not in jf071_methods and
mth.code_class.src_filename == target_file
and mth.code_class.class_name == target_class and
signature(mth.code_fragment).startswith(target_signature_stub))
# Iterate over all methods in JFreeChart 0.7.1 and NOT in 0.6.0
# in order to guess some possible signature matchings
for mkey in methods_in_071_not_in_060:
method = jf071_methods[mkey]
signature_071 = signature(method.code_fragment)
# get all the methods whose signature starts similarly to the target method
signature_stub = signature_071[:signature_071.find('(')]
class_name = method.code_class.class_name
src_file = method.code_class.src_filename
associations = list(filter(lambda k: match(k, class_name, src_file, signature_stub),
jf060_methods.keys()))
associations_map[mkey] = associations
# Filter out all that had no matching in the first place
possible_mappings = {k:v for k, v in associations_map.items() if len(v)}
print('We inferred a total of {} matchings for {} methods NOT in common!'.format(len(possible_mappings),
len(methods_in_071_not_in_060)))
# Print Matchings Guessed
for mkey, associations in possible_mappings.items():
print('Target Signature: ', signature(jf071_methods[mkey].code_fragment))
print('Possible Associations in Total: ', len(associations))
for i, assoc in enumerate(associations):
print('\t {}): '.format(i+1), assoc, end="\n\n")
print('-'*80)
Explanation: <a name="match"></a> <a href="#nav">Back to top</a>
(Likely) Modified Methods from JFreeChart 0.6.0 to 0.7.1
In this section, the main purpose of the analysis is to check for possible matches, in JFreeChart 0.6.0, of methods that are present only in JFreeChart 0.7.1.
The main idea is that we would like to find (or guess) all those methods whose signature has been changed/updated, and which therefore do not appear in the set of common methods.
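As a concrete illustration of the matching heuristic (the signature below is made up for the example and is not taken from the JFreeChart sources):
example_signature = 'public void setRange(double lower, double upper)'
signature_stub = example_signature[:example_signature.find('(')]
print(signature_stub)  # -> 'public void setRange'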
End of explanation |
11,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XGBoost-Ray with Dask
This notebook includes an example workflow using
XGBoost-Ray and
Dask for distributed model training,
hyperparameter optimization, and prediction.
Cluster Setup
First, we'll set up our Ray Cluster. The provided dask_xgboost.yaml
cluster config can be used to set up an AWS cluster with 64 CPUs.
The following steps assume you are in a directory with both
dask_xgboost.yaml and this file saved as dask_xgboost.ipynb.
Step 1
Step1: Next, let's parse some arguments. This will be used for executing the .py
file, but not for the .ipynb. If you are using the interactive notebook,
you can directly override the arguments manually.
Step2: Override these arguments as needed
Step3: Connecting to the Ray cluster
Now, let's connect our Python script to this newly deployed Ray cluster!
Step4: Data Preparation
We will use the HIGGS dataset from the UCI Machine Learning dataset
repository <https
Step5: With the connection established, we can now create the Dask dataframe.
We will split the data into a training set and an evaluation set using an 80-20
proportion.
Step6: Distributed Training
The train_xgboost function contains all of the logic necessary for
training using XGBoost-Ray.
Distributed training can not only speed up the process, but also allow you
to use datasets that are too large to fit in the memory of a single node. With
distributed training, the dataset is sharded across different actors
running on separate nodes. Those actors communicate with each other to
create the final model.
First, the dataframes are wrapped in RayDMatrix objects, which handle
data sharding across the cluster. Then, the train function is called.
The evaluation scores will be saved to evals_result dictionary. The
function returns a tuple of the trained model (booster) and the evaluation
scores.
The ray_params variable expects a RayParams object that contains
Ray-specific settings, such as the number of workers.
Step7: We can now pass our Dask dataframes and run the function. We will use
RayParams to specify the number of actors and CPUs to train with.
The dataset has to be downloaded onto the cluster, which may take a few
minutes.
Step8: Hyperparameter optimization
If we are not content with the results obtained with default XGBoost
parameters, we can use Ray Tune for cutting-edge
distributed hyperparameter tuning. XGBoost-Ray automatically integrates
with Ray Tune, meaning we can use the same training function as before.
In this workflow, we will tune three hyperparameters - eta, subsample
and max_depth. We are using Tune's samplers to define the search
space.
The experiment configuration is done through tune.run. We set the amount
of resources each trial (hyperparameter combination) requires by using the
get_tune_resources method of RayParams. The num_samples argument
controls how many trials will be run in total. In the end, the best
combination of hyperparameters evaluated during the experiment will be
returned.
By default, Tune will use simple random search. However, Tune also
provides various search algorithms and
schedulers
to further improve the optimization process.
Step9: Hyperparameter optimization may take some time to complete.
Step10: Prediction
With the model trained, we can now predict on unseen data. For the
purposes of this example, we will use the same dataset for prediction as
for training.
Since prediction is naively parallelizable, distributing it over multiple
actors can measurably reduce the amount of time needed. | Python Code:
import argparse
import time
import dask
import dask.dataframe as dd
from xgboost_ray import RayDMatrix, RayParams, train, predict
import ray
from ray import tune
from ray.util.dask import ray_dask_get
Explanation: XGBoost-Ray with Dask
This notebook includes an example workflow using
XGBoost-Ray and
Dask for distributed model training,
hyperparameter optimization, and prediction.
Cluster Setup
First, we'll set up our Ray Cluster. The provided dask_xgboost.yaml
cluster config can be used to set up an AWS cluster with 64 CPUs.
The following steps assume you are in a directory with both
dask_xgboost.yaml and this file saved as dask_xgboost.ipynb.
Step 1: Bring up the Ray cluster.
bash
pip install ray boto3
ray up dask_xgboost.yaml
Step 2: Move dask_xgboost.ipynb to the cluster and start Jupyter.
bash
ray rsync_up dask_xgboost.yaml "./dask_xgboost.ipynb" \
"~/dask_xgboost.ipynb"
ray exec dask_xgboost.yaml --port-forward=9999 "jupyter notebook \
--port=9999"
You can then access this notebook at the URL that is output:
http://localhost:9999/?token=<token>
Python Setup
First, we'll import all the libraries we'll be using. This step also helps us
verify that the environment is configured correctly. If any of the imports
are missing, an exception will be raised.
End of explanation
parser = argparse.ArgumentParser()
parser.add_argument(
"--address", type=str, default="auto", help="The address to use for Ray."
)
parser.add_argument(
"--smoke-test",
action="store_true",
help="Read a smaller dataset for quick testing purposes.",
)
parser.add_argument(
"--num-actors", type=int, default=4, help="Sets number of actors for training."
)
parser.add_argument(
"--cpus-per-actor",
type=int,
default=6,
help="The number of CPUs per actor for training.",
)
parser.add_argument(
"--num-actors-inference",
type=int,
default=16,
help="Sets number of actors for inference.",
)
parser.add_argument(
"--cpus-per-actor-inference",
type=int,
default=2,
help="The number of CPUs per actor for inference.",
)
# Ignore -f from ipykernel_launcher
args, _ = parser.parse_known_args()
Explanation: Next, let's parse some arguments. This will be used for executing the .py
file, but not for the .ipynb. If you are using the interactive notebook,
you can directly override the arguments manually.
End of explanation
address = args.address
smoke_test = args.smoke_test
num_actors = args.num_actors
cpus_per_actor = args.cpus_per_actor
num_actors_inference = args.num_actors_inference
cpus_per_actor_inference = args.cpus_per_actor_inference
Explanation: Override these arguments as needed:
End of explanation
if not ray.is_initialized():
ray.init(address=address)
Explanation: Connecting to the Ray cluster
Now, let's connect our Python script to this newly deployed Ray cluster!
End of explanation
LABEL_COLUMN = "label"
if smoke_test:
# Test dataset with only 10,000 records.
FILE_URL = "https://ray-ci-higgs.s3.us-west-2.amazonaws.com/simpleHIGGS" ".csv"
else:
# Full dataset. This may take a couple of minutes to load.
FILE_URL = (
"https://archive.ics.uci.edu/ml/machine-learning-databases"
"/00280/HIGGS.csv.gz"
)
colnames = [LABEL_COLUMN] + ["feature-%02d" % i for i in range(1, 29)]
dask.config.set(scheduler=ray_dask_get)
load_data_start_time = time.time()
data = dd.read_csv(FILE_URL, names=colnames)
data = data[sorted(colnames)]
data = data.persist()
load_data_end_time = time.time()
load_data_duration = load_data_end_time - load_data_start_time
print(f"Dataset loaded in {load_data_duration} seconds.")
Explanation: Data Preparation
We will use the HIGGS dataset from the UCI Machine Learning dataset
repository <https://archive.ics.uci.edu/ml/datasets/HIGGS>_. The HIGGS
dataset consists of 11,000,000 samples and 28 attributes, which is large
enough to show the benefits of distributed computation.
We set the Dask scheduler to ray_dask_get to use Dask on Ray
<https://docs.ray.io/en/latest/data/dask-on-ray.html>_ backend.
End of explanation
train_df, eval_df = data.random_split([0.8, 0.2])
train_df, eval_df = train_df.persist(), eval_df.persist()
print(train_df, eval_df)
Explanation: With the connection established, we can now create the Dask dataframe.
We will split the data into a training set and an evaluation set using an 80-20
proportion.
End of explanation
def train_xgboost(config, train_df, test_df, target_column, ray_params):
train_set = RayDMatrix(train_df, target_column)
test_set = RayDMatrix(test_df, target_column)
evals_result = {}
train_start_time = time.time()
# Train the classifier
bst = train(
params=config,
dtrain=train_set,
evals=[(test_set, "eval")],
evals_result=evals_result,
ray_params=ray_params,
)
train_end_time = time.time()
train_duration = train_end_time - train_start_time
print(f"Total time taken: {train_duration} seconds.")
model_path = "model.xgb"
bst.save_model(model_path)
print("Final validation error: {:.4f}".format(evals_result["eval"]["error"][-1]))
return bst, evals_result
Explanation: Distributed Training
The train_xgboost function contains all of the logic necessary for
training using XGBoost-Ray.
Distributed training can not only speed up the process, but also allow you
to use datasets that are too large to fit in the memory of a single node. With
distributed training, the dataset is sharded across different actors
running on separate nodes. Those actors communicate with each other to
create the final model.
First, the dataframes are wrapped in RayDMatrix objects, which handle
data sharding across the cluster. Then, the train function is called.
The evaluation scores will be saved to evals_result dictionary. The
function returns a tuple of the trained model (booster) and the evaluation
scores.
The ray_params variable expects a RayParams object that contains
Ray-specific settings, such as the number of workers.
End of explanation
# standard XGBoost config for classification
config = {
"tree_method": "approx",
"objective": "binary:logistic",
"eval_metric": ["logloss", "error"],
}
bst, evals_result = train_xgboost(
config,
train_df,
eval_df,
LABEL_COLUMN,
RayParams(cpus_per_actor=cpus_per_actor, num_actors=num_actors),
)
print(f"Results: {evals_result}")
Explanation: We can now pass our Dask dataframes and run the function. We will use
RayParams to specify the number of actors and CPUs to train with.
The dataset has to be downloaded onto the cluster, which may take a few
minutes.
End of explanation
def tune_xgboost(train_df, test_df, target_column):
# Set XGBoost config.
config = {
"tree_method": "approx",
"objective": "binary:logistic",
"eval_metric": ["logloss", "error"],
"eta": tune.loguniform(1e-4, 1e-1),
"subsample": tune.uniform(0.5, 1.0),
"max_depth": tune.randint(1, 9),
}
ray_params = RayParams(
max_actor_restarts=1, cpus_per_actor=cpus_per_actor, num_actors=num_actors
)
tune_start_time = time.time()
analysis = tune.run(
tune.with_parameters(
train_xgboost,
train_df=train_df,
test_df=test_df,
target_column=target_column,
ray_params=ray_params,
),
# Use the `get_tune_resources` helper function to set the resources.
resources_per_trial=ray_params.get_tune_resources(),
config=config,
num_samples=10,
metric="eval-error",
mode="min",
)
tune_end_time = time.time()
tune_duration = tune_end_time - tune_start_time
print(f"Total time taken: {tune_duration} seconds.")
accuracy = 1.0 - analysis.best_result["eval-error"]
print(f"Best model parameters: {analysis.best_config}")
print(f"Best model total accuracy: {accuracy:.4f}")
return analysis.best_config
Explanation: Hyperparameter optimization
If we are not content with the results obtained with default XGBoost
parameters, we can use Ray Tune for cutting-edge
distributed hyperparameter tuning. XGBoost-Ray automatically integrates
with Ray Tune, meaning we can use the same training function as before.
In this workflow, we will tune three hyperparameters - eta, subsample
and max_depth. We are using Tune's samplers to define the search
space.
The experiment configuration is done through tune.run. We set the amount
of resources each trial (hyperparameter combination) requires by using the
get_tune_resources method of RayParams. The num_samples argument
controls how many trials will be run in total. In the end, the best
combination of hyperparameters evaluated during the experiment will be
returned.
By default, Tune will use simple random search. However, Tune also
provides various search algorithms and
schedulers
to further improve the optimization process.
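For example, an early-stopping scheduler such as ASHA can be plugged into the same experiment (a sketch; it assumes the installed Ray version provides ray.tune.schedulers.ASHAScheduler, and the parameter values are purely illustrative):
from ray.tune.schedulers import ASHAScheduler

asha_scheduler = ASHAScheduler(max_t=10, grace_period=1, reduction_factor=2)
# Passing `scheduler=asha_scheduler` to the `tune.run(...)` call shown above stops
# unpromising hyperparameter combinations early instead of training them to completion.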
End of explanation
tune_xgboost(train_df, eval_df, LABEL_COLUMN)
Explanation: Hyperparameter optimization may take some time to complete.
End of explanation
inference_df = RayDMatrix(data, ignore=[LABEL_COLUMN, "partition"])
results = predict(
bst,
inference_df,
ray_params=RayParams(
cpus_per_actor=cpus_per_actor_inference, num_actors=num_actors_inference
),
)
print(results)
Explanation: Prediction
With the model trained, we can now predict on unseen data. For the
purposes of this example, we will use the same dataset for prediction as
for training.
Since prediction is naively parallelizable, distributing it over multiple
actors can measurably reduce the amount of time needed.
End of explanation |
11,470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Preprocessing
Joeri R. Hermans
Departement of Data Science & Knowledge Engineering
Maastricht University, The Netherlands
In this notebook we mainly deal with the preprocessing of the physics data that we processed into a more managable format in the Data Extraction notebook. We prepare them in such a way that they are ready for ML problems. Furthermore, in order to have complete information at a later stage, we maintain the "old" information as well. This includes all the extracted parameters of the tracks that have been reconstructed by offline software.
Cluster Configuration
In the following sections, we set up the cluster properties in order to preprocess the tracks dataset.
Step1: Data preperation
Now we'll read the tracks dataset, and count the number of tracks. This action will cause the data to be precached on the different nodes. Thus, making our data mapping functions faster.
Step2: This gives us the total number of tracks within our datafile. However, a collision is defined as run, event, and luminosity. As a result, we have to group all tracks which share these properties to form a collision.
Step4: In order to preserve information, we would like to know which track types are produced within a collision. For this, we will allocate an array of track types for every collision that is produced.
Furthermore, to make our lives easier, we will construct a collision-id for every collision. This will be in the form of "run-event-luminosity".
Step5: As can been seen in the schema shown above, every track now has a collsion_id. Using this collision id, we can group all tracks to produce the collisions.
Step6: Utility Functions
In this Section I will define some utility functions which we will use throughout this notebook.
Step7: Machine Learning Preprocessing
At this point we have a dataset which is ready to be preprocessed for Machine Learning problems. For every collision, we have the reconstructed tracks, the background hits, and the track parameters. Depending on the application, the data needs to be processed in a particular way. This will happen in the following sections.
Autoencoder
In essence, an autoencoder tries the obtain a parametrization in such a way that it is able to produce $f(x) = x$. This has some interesting properties in the case that the dimensionality of the hidden layer is actually lower then the number of inputs. Intuitively, one could say that the autoencoder obtains a compression of the data, thereby reducing the dimensionality of the input problem. An other application of autoencoders is so called de-noising, these neural networks are called de-noising autoencoders. This could be of particular interest to our application, give some collision with background $x'$, find $x$. Mathematically
Step8: Nevertheless, most machine learning problems have a better convergence rate when the data is normalized. This is because they in such cases don't have to deal (and correct) for large values. As a result, we perform normalization of our feature matrices as well. All feature matrices will be normalized within the range [0,1].
Furthermore, the way we apply normalization to these feature matrices is also important. In the case of images, normalization is quite trivial since the pixel-sensor's maximum value is constrainted by 255 (0xff). However, this is not the case in our problem. Multiple particle tracks can pass through a single part of the detector at the same time. An initial approach would be to normalize the data with respect to the maximum value of a matrix. But this would give a "wrong impression" to the classifier / regressor since in low-saturated vs highly-saturated environments this would mean that they have the same importance, while this is truly not the case. An additonal approach would be to obtain the max value over all matrices (with some additonal margin), and normalize the matrices with respect to that value. But then again, then we assume some maximum. Personally, I would go with the latter approach in this particular case.
Step9: From this we can safely assume (with some margin) that we can normalize the feature matrices by 200. So every pixel in the feature matrix will be normalized with respect to 200.
Step10: However, other techniques apply batch normalization. This basically means that you normalize the instance with respect to a batch of figures. Nevertheless, in our case the batch size is 1. This implies that we normalize the instances with respect to their maximum. | Python Code:
%matplotlib inline
import numpy as np
import os
from pyspark import SparkContext
from pyspark import SparkConf
from pyspark.storagelevel import StorageLevel
from pyspark.sql import Row
from pyspark.sql.types import *
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
# Use the DataBricks AVRO reader.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-avro_2.11:3.2.0 pyspark-shell'
# Modify these variables according to your needs.
application_name = "CMS Track Preprocessing"
using_spark_2 = False
local = False
path_data = "data/tracks.avro"
if local:
# Tell master to use local resources.
master = "local[*]"
num_processes = 3
num_executors = 1
else:
# Tell master to use YARN.
master = "yarn-client"
num_executors = 20
num_processes = 4
# This variable is derived from the number of cores and executors,
# and will be used to assign the number of model trainers.
num_workers = num_executors * num_processes
print("Number of desired executors: " + `num_executors`)
print("Number of desired processes / executor: " + `num_processes`)
print("Total number of workers: " + `num_workers`)
# Do not change anything here.
conf = SparkConf()
conf.set("spark.app.name", application_name)
conf.set("spark.master", master)
conf.set("spark.executor.cores", `num_processes`)
conf.set("spark.executor.instances", `num_executors`)
conf.set("spark.executor.memory", "5g")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
conf.set("spark.kryoserializer.buffer.max", "2000")
conf.set("spark.executor.heartbeatInterval", "6000s")
conf.set("spark.network.timeout", "1000000s")
conf.set("spark.shuffle.spill", "true")
conf.set("spark.driver.memory", "5g")
# Check if the user is running Spark 2.0 +
if using_spark_2:
sc = SparkSession.builder.config(conf=conf) \
.appName(application_name) \
.getOrCreate()
else:
# Create the Spark context.
sc = SparkContext(conf=conf)
# Add the missing imports
from pyspark import SQLContext
sqlContext = SQLContext(sc)
# Check if we are using Spark 2.0
if using_spark_2:
reader = sc
s = sc
else:
reader = sqlContext
s = sqlContext
Explanation: Data Preprocessing
Joeri R. Hermans
Department of Data Science & Knowledge Engineering
Maastricht University, The Netherlands
In this notebook we mainly deal with the preprocessing of the physics data that we processed into a more manageable format in the Data Extraction notebook. We prepare it in such a way that it is ready for ML problems. Furthermore, in order to have complete information at a later stage, we maintain the "old" information as well. This includes all the extracted parameters of the tracks that have been reconstructed by offline software.
Cluster Configuration
In the following sections, we set up the cluster properties in order to preprocess the tracks dataset.
End of explanation
# Read the dataset.
dataset = reader.read.format("com.databricks.spark.avro").load("data/tracks.avro")
print("Number of tracks: " + str(dataset.count()))
Explanation: Data preparation
Now we'll read the tracks dataset and count the number of tracks. This action will cause the data to be precached on the different nodes, thus making our data mapping functions faster.
End of explanation
dataset.printSchema()
Explanation: This gives us the total number of tracks within our datafile. However, a collision is defined as run, event, and luminosity. As a result, we have to group all tracks which share these properties to form a collision.
End of explanation
def new_dataframe_row(old_row, column_name, column_value):
Constructs a new Spark Row based on the old row, and a new column name and value.
row = Row(*(old_row.__fields__ + [column_name]))(*(old_row + (column_value, )))
return row
def construct_keys(row):
run = row['run']
event = row['event']
luminosity = row['luminosity']
id = str(run) + "-" + str(event) + "-" + str(luminosity)
return new_dataframe_row(row, "collision_id", id)
schema = dataset.schema
tracks_rdd = dataset.map(construct_keys)
schema.add(StructField("collision_id", StringType(), False))
tracks = s.createDataFrame(tracks_rdd, schema)
tracks.persist(StorageLevel.MEMORY_AND_DISK)
tracks.printSchema()
Explanation: In order to preserve information, we would like to know which track types are produced within a collision. For this, we will allocate an array of track types for every collision that is produced.
Furthermore, to make our lives easier, we will construct a collision-id for every collision. This will be in the form of "run-event-luminosity".
End of explanation
def prepare_reduce(row):
collision_id = row['collision_id']
return Row(**{'id': collision_id, 'tracks': [row]})
collisions = tracks.map(prepare_reduce)
collisions.persist(StorageLevel.MEMORY_AND_DISK)
collisions = collisions.reduceByKey(lambda a, b: a + b)
collisions.persist(StorageLevel.MEMORY_AND_DISK)
print("Number of collisions: " + str(collisions.count()))
import copy
# Before storing the collisions dataset, we first need to specify the schema.
track_schema = copy.deepcopy(tracks.schema)
# The collisions dataset is structured as follows: id(int), tracks(array[track_schema]).
collisions_schema = StructType([StructField("id", StringType(), False),
StructField("tracks", ArrayType(track_schema), False)])
# Construct the collisions dataframe from the specified schema.
collisions_df = s.createDataFrame(collisions, collisions_schema)
# Save the collisions dataset for future use.
collisions_df.write.format("com.databricks.spark.avro").save("data/collisions.avro")
# Cleanup the old dataframes.
dataset.unpersist()
tracks.unpersist()
collisions_df.unpersist()
# Read the collisions from disk as a starting point for future actions.
collisions = reader.read.format("com.databricks.spark.avro").load("data/collisions.avro")
collisions.persist(StorageLevel.MEMORY_AND_DISK)
Explanation: As can be seen in the schema shown above, every track now has a collision_id. Using this collision id, we can group all tracks to produce the collisions.
End of explanation
def construct_feature_matrix_front(tracks, background=False):
# Define the front matrix with the specified granularity.
granularity = 1.0
unit = 1.0 / granularity
size_x = int((320.0 / granularity) + 1)
size_y = int((240.0 / granularity) + 1)
middle_x = size_x / 2 - 1
middle_y = size_y / 2 - 1
m = np.zeros((size_y, size_x))
hits = []
# Obtain the hits.
for track in tracks:
# Add the track hits.
hits.extend([x for x in track['track_hits']])
# Check if the background hits need to be added.
if background:
# Add the background hits.
hits.extend([x for x in track['background_hits']])
# Add the tracks to the matrix.
for hit in hits:
x = int((hit['x'] * unit) + middle_x)
y = -int((hit['y'] * unit) + middle_y)
m[y][x] += 1.0
return m
def construct_feature_matrix_side(tracks, background=False):
# Define the side matrix with the specified granularity
granularity = 1.0
unit = 1.0 / granularity
size_x = int((600.0 / granularity) + 1)
size_y = int((300.0 / granularity) + 1)
middle_x = size_x / 2 - 1
middle_y = size_y / 2 - 1
m = np.zeros((size_y, size_x))
hits = []
# Obtain the hits.
for track in tracks:
# Add the track hits.
hits.extend([x for x in track['track_hits']])
# Check if the background hits need to be added.
if background:
# Add the background hits.
hits.extend([x for x in track['background_hits']])
# Add the tracks to the matrix.
for hit in hits:
z = int((hit['z'] * unit) + middle_x)
y = -int((hit['y'] * unit) + middle_y)
m[y][z] += 1.0
return m
def plot_matrix(m):
plt.imshow(m, cmap='plasma', interpolation='nearest')
plt.show()
Explanation: Utility Functions
In this Section I will define some utility functions which we will use throughout this notebook.
End of explanation
c = collisions.take(1)[0]
# Obtain the front feature matrix, exclude background hits.
m_f = construct_feature_matrix_front(c['tracks'], background=False)
m_s = construct_feature_matrix_side(c['tracks'], background=False)
# Plot the matrices.
plot_matrix(m_f)
plot_matrix(m_s)
def construct_feature_matrices(collision):
# Obtain the tracks from the collision.
tracks = collision['tracks']
# Obtain the collision id.
collision_id = collision['id']
# Construct the front and side feature matrices.
m_f = construct_feature_matrix_front(tracks, background=False)
m_s = construct_feature_matrix_side(tracks, background=False)
return Row(**{'collision_id': collision_id, 'front': m_f.tolist(), 'side': m_s.tolist()})
feature_matrices = collisions.map(construct_feature_matrices).toDF()
feature_matrices.printSchema()
feature_matrices.write.format("com.databricks.spark.avro").save("data/collisions_feature_matrices.avro")
Explanation: Machine Learning Preprocessing
At this point we have a dataset which is ready to be preprocessed for Machine Learning problems. For every collision, we have the reconstructed tracks, the background hits, and the track parameters. Depending on the application, the data needs to be processed in a particular way. This will happen in the following sections.
Autoencoder
In essence, an autoencoder tries to obtain a parametrization in such a way that it is able to produce $f(x) = x$. This has some interesting properties in the case that the dimensionality of the hidden layer is actually lower than the number of inputs. Intuitively, one could say that the autoencoder obtains a compression of the data, thereby reducing the dimensionality of the input problem. Another application of autoencoders is so-called de-noising; these neural networks are called de-noising autoencoders. This could be of particular interest to our application: given some collision with background $x'$, find $x$. Mathematically: $h(x') = x$.
In a first stage, we would like to obtain this exact identity function of the CMS detector. For this, we need to convert the tracks to a format which neural networks might be able to process (e.g., matrices). This is done by generating a feature matrix of the front and the side. Note that at a later stage we would like to try a full 3-dimensional feature matrix. However, in order to reduce the dimensionality of the problem, let's try a 2-dimensional approach first.
For a particular collision, the feature matrices will look like this:
Note: A higher intensity means that this part of the detector was activated by more particle tracks.
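To get a feel for the input dimensionality a flattened 2D approach implies (a quick back-of-the-envelope check based on the granularity of 1.0 used in the helper functions above):
front_inputs = (240 + 1) * (320 + 1)  # 241 x 321 = 77,361 pixels
side_inputs = (300 + 1) * (600 + 1)   # 301 x 601 = 180,901 pixels
print(front_inputs, side_inputs)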
End of explanation
def feature_matrix_max(row):
# Obtain the feature matrices.
m_f = np.asarray(row['front'])
m_s = np.asarray(row['side'])
# Obtain the max value of all feature matrices.
max_f = float(m_f.max())
max_s = float(m_s.max())
return Row(**{"max_front": max_f, "max_side": max_s})
features_max = feature_matrices.map(feature_matrix_max).toDF()
max = features_max.agg({"max_front": "max", "max_side": "max"}).collect()[0]
max_front = max['max(max_front)']
max_side = max['max(max_side)']
print("Maximum of the front feature matrix: " + str(max_front))
print("Maximum of the side feature matrix: " + str(max_side))
Explanation: Nevertheless, most machine learning problems have a better convergence rate when the data is normalized. This is because they then don't have to deal with (and correct for) large values. As a result, we perform normalization of our feature matrices as well. All feature matrices will be normalized within the range [0,1].
Furthermore, the way we apply normalization to these feature matrices is also important. In the case of images, normalization is quite trivial since the pixel-sensor's maximum value is constrained by 255 (0xff). However, this is not the case in our problem. Multiple particle tracks can pass through a single part of the detector at the same time. An initial approach would be to normalize the data with respect to the maximum value of a matrix. But this would give a "wrong impression" to the classifier / regressor since in low-saturated vs highly-saturated environments this would mean that they have the same importance, while this is truly not the case. An alternative approach would be to obtain the max value over all matrices (with some additional margin), and normalize the matrices with respect to that value. But then again, then we assume some maximum. Personally, I would go with the latter approach in this particular case.
End of explanation
def normalize(row):
# Obtain the collision id.
collision_id = row['collision_id']
# Set the normalizer.
normalizer = 200.0
# Obtain the feature matrices.
m_f = np.asarray(row['front'])
m_s = np.asarray(row['side'])
# Normalize the feature matrices.
m_f = np.divide(m_f, normalizer)
m_s = np.divide(m_s, normalizer)
return Row(**{'collision_id': collision_id, 'front': m_f.tolist(), 'side': m_s.tolist()})
feature_matrices_normalized = feature_matrices.map(normalize).toDF()
feature_matrices_normalized.write.format("com.databricks.spark.avro") \
.save("data/collisions_feature_matrices_normalized.avro")
Explanation: From this we can safely assume (with some margin) that we can normalize the feature matrices by 200. So every pixel in the feature matrix will be normalized with respect to 200.
End of explanation
def normalize(row):
# Obtain the collision id.
collision_id = row['collision_id']
# Obtain the feature matrices.
m_f = np.asarray(row['front'])
m_s = np.asarray(row['side'])
# Obtain the max of both feature matrices.
max_f = m_f.max()
max_s = m_s.max()
# Normalize the feature matrices.
m_f = np.divide(m_f, max_f)
m_s = np.divide(m_s, max_s)
return Row(**{'collision_id': collision_id, 'front': m_f.tolist(), 'side': m_s.tolist()})
feature_matrices_batch_normalized = feature_matrices.map(normalize).toDF()
feature_matrices_batch_normalized.write.format("com.databricks.spark.avro") \
.save("data/collisions_feature_matrices_batch_normalized.avro")
Explanation: However, other techniques apply batch normalization. This basically means that you normalize the instance with respect to a batch of figures. Nevertheless, in our case the batch size is 1. This implies that we normalize the instances with respect to their maximum.
End of explanation |
11,471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises Electric Machinery Fundamentals
Chapter 9
Problem 9-1
Step1: Description
A 120-V 1/4-hp 60-Hz four-pole split-phase induction motor has the following impedances
Step2: If the slip is 0.05, find the following quantities for this motor
Step3: $$Z_B = \frac{(R_2/(2-s) + jX_2)(jX_M)}{R_2/(2-s) + jX_2 + jX_M}$$
Step4: (a)
The input current is
Step5: (b)
The air-gap power is
Step6: (c)
The power converted from electrical to mechanical form is
Step7: (d)
The output power is
Step8: (e)
The induced torque is
$$\tau_\text{ind} = \frac{P_\text{AG}}{\omega_\text{sync}}$$
Step9: (f)
The load torque is
Step10: (g)
The overall efficiency is
Step11: (h)
The stator power factor is | Python Code:
%pylab notebook
%precision %.4g
Explanation: Exercises Electric Machinery Fundamentals
Chapter 9
Problem 9-1
End of explanation
V = 120 # [V]
p = 4
R1 = 2.0 # [Ohm]
R2 = 2.8 # [Ohm]
X1 = 2.56 # [Ohm]
X2 = 2.56 # [Ohm]
Xm = 60.5 # [Ohm]
s = 0.05
Prot = 51 # [W]
Explanation: Description
A 120-V 1/4-hp 60-Hz four-pole split-phase induction motor has the following impedances:
$$R_1 = 2.00\,\Omega \qquad X_1 = 2.56\,\Omega \qquad X_M = 60.5\,\Omega$$
$$R_2 = 2.80\,\Omega \qquad X_2 = 2.56\,\Omega$$
At a slip of 0.05, the motor's rotational losses are 51 W. The rotational losses may be assumed constant over the normal operating range of the motor.
End of explanation
Zf = ((R2/s + X2*1j)*(Xm*1j)) / (R2/s + X2*1j + Xm*1j)
Zf
Explanation: If the slip is 0.05, find the following quantities for this motor:
(a)
Input power
(b)
Air-gap power
(c)
$P_\text{inv}$
(d)
$P_\text{out}$
(e)
$\tau_\text{ind}$
(f)
$\tau_\text{load}$
(g)
Overall motor efficiency
(h)
Stator power factor
SOLUTION
The equivalent circuit of the motor is shown below:
<img src="figs/FigC_9-28.jpg" width="70%">
The impedances $Z_F$ and $Z_B$ are:
$$Z_F = \frac{(R_2/s + jX_2)(jX_M)}{R_2/s + jX_2 + jX_M}$$
End of explanation
Zb = ((R2/(2-s) + X2*1j)*(Xm*1j)) / (R2/(2-s) + X2*1j + Xm*1j)
Zb
Explanation: $$Z_B = \frac{(R_2/(2-s) + jX_2)(jX_M)}{R_2/(2-s) + jX_2 + jX_M}$$
End of explanation
I1 = V / (R1 +X1*1j + 0.5*Zf + 0.5*Zb)
I1_angle = arctan(I1.imag/I1.real)
print('I1 = {:.3f} A ∠{:.1f}°'.format(abs(I1), I1_angle/pi*180))
Pin = V*abs(I1)*cos(I1_angle)
print('''
Pin = {:.1f} W
============='''.format(Pin))
Explanation: (a)
The input current is:
$$\vec{I}_1 = \frac{\vec{V}}{R_1 + jX_1 + 0.5Z_F + 0.5Z_B}$$
End of explanation
Pag_f = abs(I1)**2*0.5*Zf.real
Pag_f
Pag_b = abs(I1)**2*0.5*Zb.real
Pag_b
Pag = Pag_f - Pag_b
print('''
Pag = {:.0f} W
==========='''.format(Pag))
Explanation: (b)
The air-gap power is:
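$$P_\text{AG,F} = I_1^2(0.5R_F) \qquad P_\text{AG,B} = I_1^2(0.5R_B) \qquad P_\text{AG} = P_\text{AG,F} - P_\text{AG,B}$$
where $R_F$ and $R_B$ are the real parts of $Z_F$ and $Z_B$.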
End of explanation
Pconv_f = (1-s)*Pag_f
Pconv_f
Pconv_b = (1-s)*Pag_b
Pconv_b
Pconv = Pconv_f - Pconv_b
print('''
Pconv = {:.0f} W
============='''.format(Pconv))
Explanation: (c)
The power converted from electrical to mechanical form is:
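$$P_\text{conv,F} = (1-s)P_\text{AG,F} \qquad P_\text{conv,B} = (1-s)P_\text{AG,B} \qquad P_\text{conv} = P_\text{conv,F} - P_\text{conv,B}$$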
End of explanation
Pout = Pconv - Prot
print('''
Pout = {:.0f} W
============'''.format(Pout))
Explanation: (d)
The output power is:
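$$P_\text{out} = P_\text{conv} - P_\text{rot}$$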
End of explanation
n_sync = 1800.0 # [r/min]
w_sync = n_sync * (2.0*pi/1.0) * (1.0/60.0)
tau_ind = Pag / w_sync
print('''
τ_ind = {:.2f} Nm
==============='''.format(tau_ind))
Explanation: (e)
The induced torque is
$$\tau_\text{ind} = \frac{P_\text{AG}}{\omega_\text{sync}}$$
End of explanation
w_m = (1-s)*w_sync
tau_load = Pout / w_m
print('''
τ_load = {:.2f} Nm
================'''.format(tau_load))
Explanation: (f)
The load torque is:
$$\tau_\text{load} = \frac{P_\text{out}}{\omega_m}$$
End of explanation
eta = Pout/Pin
print('''
η = {:.1f} %
=========='''.format(eta*100))
Explanation: (g)
The overall efficiency is:
$$\eta = \frac{P_\text{out}}{P_\text{in}} \cdot 100\%$$
End of explanation
PF = cos(I1_angle)
print('''
PF = {:.3f} lagging
=================='''.format(PF))
Explanation: (h)
The stator power factor is:
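$$PF = \cos\theta$$
where $\theta$ is the angle of $\vec{I}_1$ with respect to $\vec{V}$.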
End of explanation |
11,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example Usage for Drop-in List Replacements
Step1: BList
The underlying data structure can be any drop-in replacement for list, in this example blist is used.
Step2: All the standard functionality works exactly the same
Step3: Works for Series as well | Python Code:
# remove comment to use latest development version
import sys; sys.path.insert(0, '../')
# import libraries
import raccoon as rc
Explanation: Example Usage for Drop-in List Replacements
End of explanation
from blist import blist
# Construct with blist
df_blist = rc.DataFrame({'a': [1, 2, 3]}, index=[5, 6, 7], dropin=blist)
# see that the data structures are all blists
df_blist.data
df_blist.index
df_blist.columns
# the dropin class
df_blist.dropin
Explanation: BList
The underlying data structure can be any drop-in replacement for list, in this example blist is used.
End of explanation
df_blist[6, 'a']
df_blist[8, 'b'] = 44
print(df_blist)
Explanation: All the standard functionality works exactly the same
End of explanation
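For comparison, a small sketch of the same construction without a dropin, where raccoon should fall back to plain Python lists (the type shown in the comment is an expectation, not output copied from the library):
df_plain = rc.DataFrame({'a': [1, 2, 3]}, index=[5, 6, 7])
print(type(df_plain.index))   # expected: <class 'list'>
print(df_plain[6, 'a'])       # same accessor syntax as the blist-backed frame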
# Construct a Series with dropin=blist
srs_blist = rc.Series([1, 2, 3], index=[5, 6, 7], dropin=blist)
# see that the data structures are all blists
srs_blist.data
srs_blist.index
Explanation: Works for Series as well
End of explanation |
11,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step5: tsfresh returns a great number of features. Depending on the dynamics of the inspected time series, some of them may be highly correlated.
A common technique to deal with such highly correlated features is a transformation such as a principal component analysis (PCA). This notebook shows you how to perform a PCA on the extracted features.
Step6: Load robot failure example
Splits the data set into a train set (1 <= id <= 87) and a test set (87 <= id <= 88). It is assumed that the selection process is done in the past (train) and features for future (test) data sets should be determined. The id 87 is overlapping so that the correctness of the procedure can be easily shown.
Step7: Train
Extract train features
Step8: Select train features
Step9: Principal Component Analysis on train features
Step10: Test
Extract test features
Only the selected features from the train data are extracted.
Step11: Principal Component Analysis on test features
The PCA components of the id 87 are the same as in the previous train PCA. | Python Code:
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import pandas as pd
class PCAForPandas(PCA):
    """This class is just a small wrapper around the PCA estimator of sklearn including normalization to make it
    compatible with pandas DataFrames.
    """

    def __init__(self, **kwargs):
        self._z_scaler = StandardScaler()
        super(self.__class__, self).__init__(**kwargs)
        self._X_columns = None

    def fit(self, X, y=None):
        """Normalize X and call the fit method of the base class with numpy arrays instead of pandas data frames."""
        X = self._prepare(X)
        self._z_scaler.fit(X.values, y)
        z_data = self._z_scaler.transform(X.values, y)
        return super(self.__class__, self).fit(z_data, y)

    def fit_transform(self, X, y=None):
        """Call the fit and the transform method of this class."""
        X = self._prepare(X)
        self.fit(X, y)
        return self.transform(X, y)

    def transform(self, X, y=None):
        """Normalize X and call the transform method of the base class with numpy arrays instead of pandas data frames."""
        X = self._prepare(X)
        z_data = self._z_scaler.transform(X.values, y)
        transformed_ndarray = super(self.__class__, self).transform(z_data)
        pandas_df = pd.DataFrame(transformed_ndarray)
        pandas_df.columns = ["pca_{}".format(i) for i in range(len(pandas_df.columns))]
        return pandas_df

    def _prepare(self, X):
        """Check if the data is a pandas DataFrame and sorts the column names.

        :raise AttributeError: if pandas is not a DataFrame or the columns of the new X is not compatible with the
            columns from the previous X data
        """
        if not isinstance(X, pd.DataFrame):
            raise AttributeError("X is not a pandas DataFrame")
        X.sort_index(axis=1, inplace=True)
        if self._X_columns is not None:
            if self._X_columns != list(X.columns):
                raise AttributeError("The columns of the new X is not compatible with the columns from the previous X data")
        else:
            self._X_columns = list(X.columns)
        return X
Explanation: tsfresh returns a great number of features. Depending on the dynamics of the inspected time series, some of them may be highly correlated.
A common technique to deal with such highly correlated features is a transformation such as a principal component analysis (PCA). This notebook shows you how to perform a PCA on the extracted features.
End of explanation
from tsfresh.examples.robot_execution_failures import download_robot_execution_failures, load_robot_execution_failures
from tsfresh.feature_extraction import extract_features
from tsfresh.feature_selection import select_features
from tsfresh.utilities.dataframe_functions import impute
from tsfresh.feature_extraction import ComprehensiveFCParameters, MinimalFCParameters, settings
download_robot_execution_failures()
df, y = load_robot_execution_failures()
df_train = df.iloc[(df.id <= 87).values]
y_train = y[0:-1]
df_test = df.iloc[(df.id >= 87).values]
y_test = y[-2:]
df.head()
Explanation: Load robot failure example
Splits the data set into a train set (1 <= id <= 87) and a test set (87 <= id <= 88). It is assumed that the selection process is done in the past (train) and features for future (test) data sets should be determined. The id 87 is overlapping so that the correctness of the procedure can be easily shown.
End of explanation
X_train = extract_features(df_train, column_id='id', column_sort='time', default_fc_parameters=MinimalFCParameters(),
impute_function=impute)
X_train.head()
Explanation: Train
Extract train features
End of explanation
X_train_filtered = select_features(X_train, y_train)
X_train_filtered.tail()
Explanation: Select train features
End of explanation
pca_train = PCAForPandas(n_components=4)
X_train_pca = pca_train.fit_transform(X_train_filtered)
# add index plus 1 to keep original index from robot example
X_train_pca.index += 1
X_train_pca.tail()
Explanation: Principal Component Analysis on train features
End of explanation
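Because PCAForPandas inherits from sklearn's PCA, the usual fitted attributes are available; a quick, optional check of how much variance the four components keep (assuming pca_train was fitted as above):
print(pca_train.explained_variance_ratio_)
print("total variance kept: {:.1%}".format(pca_train.explained_variance_ratio_.sum()))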
X_test_filtered = extract_features(df_test, column_id='id', column_sort='time',
kind_to_fc_parameters=settings.from_columns(X_train_filtered.columns),
impute_function=impute)
X_test_filtered
Explanation: Test
Extract test features
Only the selected features from the train data are extracted.
End of explanation
X_test_pca = pca_train.transform(X_test_filtered)
# reset index to keep original index from robot example
X_test_pca.index = [87, 88]
X_test_pca
Explanation: Principal Component Analysis on test features
The PCA components of the id 87 are the same as in the previous train PCA.
End of explanation |
11,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Posterior Predictive Checks in PyMC3
PPCs are a great way to validate a model. The idea is to generate data sets from the model using parameter settings from draws from the posterior.
PyMC3 has random number support thanks to Mark Wibrow as implemented in PR784.
Here we will implement a general routine to draw samples from the observed nodes of a model.
Step1: Lets generate a very simple model
Step2: This function will randomly draw 500 samples of parameters from the trace. Then, for each sample, it will draw 100 random numbers from a normal distribution specified by the values of mu and std in that sample.
Step3: Now, ppc contains 500 generated data sets (containing 100 samples each), each using a different parameter setting from the posterior
Step4: One common way to visualize is to look if the model can reproduce the patterns observed in the real data. For example, how close are the inferred means to the actual sample mean
Step5: Prediction
The same pattern can be used for prediction. Here we're building a logistic regression model. Note that since we're dealing with the full posterior, we're also getting uncertainty in our predictions for free.
Step6: Mean predicted values plus error bars to give sense of uncertainty in prediction | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import pymc3 as pm
import seaborn as sns
import matplotlib.pyplot as plt
from collections import defaultdict
Explanation: Posterior Predictive Checks in PyMC3
PPCs are a great way to validate a model. The idea is to generate data sets from the model using parameter settings from draws from the posterior.
PyMC3 has random number support thanks to Mark Wibrow as implemented in PR784.
Here we will implement a general routine to draw samples from the observed nodes of a model.
End of explanation
data = np.random.randn(100)
with pm.Model() as model:
    mu = pm.Normal('mu', mu=0, sd=1, testval=0)
    sd = pm.HalfNormal('sd', sd=1)
    n = pm.Normal('n', mu=mu, sd=sd, observed=data)
    step = pm.NUTS()
    trace = pm.sample(5000, step)
pm.traceplot(trace);
Explanation: Lets generate a very simple model:
End of explanation
ppc = pm.sample_ppc(trace, samples=500, model=model, size=100)
Explanation: This function will randomly draw 500 samples of parameters from the trace. Then, for each sample, it will draw 100 random numbers from a normal distribution specified by the values of mu and std in that sample.
End of explanation
np.asarray(ppc['n']).shape
Explanation: Now, ppc contains 500 generated data sets (containing 100 samples each), each using a different parameter setting from the posterior:
End of explanation
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['n']], kde=False, ax=ax)
ax.axvline(data.mean())
ax.set(title='Posterior predictive of the mean', xlabel='mean(x)', ylabel='Frequency');
Explanation: One common way to visualize is to look if the model can reproduce the patterns observed in the real data. For example, how close are the inferred means to the actual sample mean:
End of explanation
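A related numeric check, sketched here assuming ppc['n'] and data are still in scope, is a Bayesian p-value for the mean: the fraction of simulated data sets whose mean exceeds the observed mean (values very close to 0 or 1 would indicate misfit):
sim_means = np.asarray([n.mean() for n in ppc['n']])
bayes_p = (sim_means >= data.mean()).mean()
print('Bayesian p-value for the mean: {:.2f}'.format(bayes_p))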
# Use a theano shared variable to be able to exchange the data the model runs on
from theano import shared
def invlogit(x):
    return np.exp(x) / (1 + np.exp(x))
n = 4000
n_oos = 50
coeff = 1.
predictors = np.random.normal(size=n)
# Turn predictor into a shared var so that we can change it later
predictors_shared = shared(predictors)
outcomes = np.random.binomial(1, invlogit(coeff * predictors))
outcomes
predictors_oos = np.random.normal(size=50)
outcomes_oos = np.random.binomial(1, invlogit(coeff * predictors_oos))
def tinvlogit(x):
    import theano.tensor as t
    return t.exp(x) / (1 + t.exp(x))

with pm.Model() as model:
    coeff = pm.Normal('coeff', mu=0, sd=1)
    p = tinvlogit(coeff * predictors_shared)
    o = pm.Bernoulli('o', p, observed=outcomes)
    start = pm.find_MAP()
    step = pm.NUTS(scaling=start)
    trace = pm.sample(500, step)
# Changing values here will also change values in the model
predictors_shared.set_value(predictors_oos)
# Simply running PPC will use the updated values and do prediction
ppc = pm.sample_ppc(trace, model=model, samples=500)
Explanation: Prediction
The same pattern can be used for prediction. Here we're building a logistic regression model. Note that since we're dealing with the full posterior, we're also getting uncertainty in our predictions for free.
End of explanation
plt.errorbar(x=predictors_oos, y=np.asarray(ppc['o']).mean(axis=0), yerr=np.asarray(ppc['o']).std(axis=0), linestyle='', marker='o')
plt.plot(predictors_oos, outcomes_oos, 'o')
plt.ylim(-.05, 1.05)
plt.xlabel('predictor')
plt.ylabel('outcome')
Explanation: Mean predicted values plus error bars to give sense of uncertainty in prediction
End of explanation |
11,475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scenario
Step1: Group by milliseconds and average
Step2: The consumption spikes we care about end at roughly the 10,000th millisecond.
Step3: Synchronize on the 3rd spike; it looks like the sharpest one
Step4: Load the events from the log (adb logcat -d |egrep "DownloadTracking|onTorchStatusChanged")
Step5: And plot them on our graph
Step6: Seems like ok. We are interested in the idle period around 100000 - 160000, where consumption spikes are visible, and in the download period -- the last two marks.
Step7: Compare the averages | Python Code:
df = pd.DataFrame(np.fromfile(
"./browser_download_lte_wf.bin",
dtype=np.uint16).astype(np.float32) * (3300 / 2**12))
Explanation: Scenario:
- slide out the shade with the flashlight in advance and launch the browser
- start monitoring
- blink the flashlight five times
- close the shade and wait slightly more than a minute
- press the file download button and wait for it to finish
- wait a few more seconds, then stop monitoring
Read the data from the USB port into a file:
cat /dev/cu.usbmodem1421 > browser_download.bin
The data will be in binary format; read it into a DataFrame and convert it to milliamps:
End of explanation
df_r1000 = df.groupby(df.index//1000).mean()
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000.plot(ax=ax)
Explanation: Group by milliseconds and average:
End of explanation
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000[:10000].plot(ax=ax)
Explanation: The consumption spikes we care about end at roughly the 10,000th millisecond.
End of explanation
sync = 3040
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000[2900:3200].plot(ax=ax)
sns.plt.axvline(sync)
Explanation: Synchronize on the 3rd spike; it looks like the sharpest one:
End of explanation
from datetime import datetime
with open("browser_download_lte_wf_events.log") as eventlog:
    events = [
        datetime.strptime(
            l.split()[1], "%H:%M:%S.%f")
        for l in eventlog.readlines()]
offsets = [(ev - events[0]).total_seconds() for ev in events]
Explanation: Load the events from the log (adb logcat -d |egrep "DownloadTracking|onTorchStatusChanged"):
End of explanation
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000.plot(ax=ax)
for o in offsets:
sns.plt.axvline(sync + (o - offsets[3]) * 1000)
Explanation: And plot them on our graph:
End of explanation
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000[100000:160000].plot(ax=ax)
for o in offsets:
sns.plt.axvline(sync + (o - offsets[7]) * 1000)
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df_r1000[
int(sync + (offsets[-2] - offsets[5]) * 1000):int(
sync + (offsets[-1] - offsets[5]) * 1000)].plot(ax=ax)
for o in offsets:
sns.plt.axvline(sync + (o - offsets[5]) * 1000)
Explanation: Seems like ok. We are interested in the idle period around 100000 - 160000, where consumption spikes are visible, and in the download period -- the last two marks.
End of explanation
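Before averaging it can help to know how long the download window actually is; a small sketch using the offsets list from above (the last two log marks bracket the download):
download_s = offsets[-1] - offsets[-2]
print("download window: %.1f s" % download_s)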
curr_mean_idle = df_r1000[100000:160000].mean()
curr_mean_download = df_r1000[
int(sync + (offsets[-2] - offsets[5]) * 1000):int(
sync + (offsets[-1] - offsets[5]) * 1000)].mean()
print("Среднее значение тока в покое, мА: %.2f" % curr_mean_idle)
print("Среднее значение тока во время загрузки, мА: %.2f" % curr_mean_download)
print("Разница, мА: %.2f" % (curr_mean_download - curr_mean_idle))
Explanation: Compare the averages:
End of explanation |
11,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Exercise
Step1: Load 120 seconds of an audio file
Step2: Plot the time-domain waveform of the audio signal
Step3: Play the audio file
Step4: Step 2
Step5: We transpose the result to accommodate scikit-learn which assumes that each row is one observation, and each column is one feature dimension
Step6: Scale the features to have zero mean and unit variance
Step7: Verify that the scaling worked
Step8: Step 2b
Step9: Load 120 seconds of an audio file
Step10: Listen to the second audio file.
Step11: Plot the time-domain waveform and spectrogram of the second audio file. In what ways does the time-domain waveform look different than the first audio file? What differences in musical attributes might this reflect? What additional insights are gained from plotting the spectrogram? Explain.
Step12: Extract MFCCs from the second audio file. Be sure to transpose the resulting matrix such that each row is one observation, i.e. one set of MFCCs. Also be sure that the shape and size of the resulting MFCC matrix is equivalent to that for the first audio file.
Step13: Scale the resulting MFCC features to have approximately zero mean and unit variance. Re-use the scaler from above.
Step14: Verify that the mean of the MFCCs for the second audio file is approximately equal to zero and the variance is approximately equal to one.
Step15: Step 3
Step16: Construct a vector of ground-truth labels, where 0 refers to the first audio file, and 1 refers to the second audio file.
Step17: Create a classifier model object
Step18: Train the classifier
Step19: Step 4
Step20: Listen to both of the test audio excerpts
Step21: Compute MFCCs from both of the test audio excerpts
Step22: Scale the MFCCs using the previous scaler
Step23: Concatenate all test features together
Step24: Concatenate all test labels together
Step25: Compute the predicted labels
Step26: Finally, compute the accuracy score of the classifier on the test data
Step27: Currently, the classifier returns one prediction for every MFCC vector in the test audio signal. Can you modify the procedure above such that the classifier returns a single prediction for a 10-second excerpt?
Step28: Step 5
Step29: Compute the pairwise correlation of every pair of 12 MFCCs against one another for both test audio excerpts. For each audio excerpt, which pair of MFCCs are the most correlated? least correlated?
Step30: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
Step31: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
Step32: Plot a histogram of all values across a single MFCC, i.e. MFCC coefficient number. Repeat for a few different MFCC numbers | Python Code:
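The cells below assume the notebook's usual setup cell has already been run; a minimal sketch of the imports they appear to rely on (inferred from the calls that follow, so treat it as an assumption rather than the original setup):
%matplotlib inline
import os
import urllib                      # Python 2 style; on Python 3 the call lives in urllib.request
import numpy
import pandas
import sklearn.preprocessing
import sklearn.svm
import librosa
import librosa.display
import IPython.display
import matplotlib.pyplot as plt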
filename_brahms = 'brahms_hungarian_dance_5.mp3'
url = "http://audio.musicinformationretrieval.com/" + filename_brahms
if not os.path.exists(filename_brahms):
    urllib.urlretrieve(url, filename=filename_brahms)
Explanation: ← Back to Index
Exercise: Genre Recognition
Goals
Extract features from an audio signal.
Train a genre classifier.
Use the classifier to classify the genre in a song.
Step 1: Retrieve Audio
Download an audio file onto your local machine.
End of explanation
librosa.load?
x_brahms, fs_brahms = librosa.load(filename_brahms, duration=120)
Explanation: Load 120 seconds of an audio file:
End of explanation
librosa.display.waveplot?
# Your code here:
Explanation: Plot the time-domain waveform of the audio signal:
End of explanation
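One possible solution sketch for this step, using the librosa.display.waveplot call hinted at above together with x_brahms and fs_brahms from the earlier cell (plt is assumed from the setup):
plt.figure(figsize=(14, 4))
librosa.display.waveplot(x_brahms, sr=fs_brahms)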
IPython.display.Audio?
# Your code here:
Explanation: Play the audio file:
End of explanation
librosa.feature.mfcc?
n_mfcc = 12
mfcc_brahms = librosa.feature.mfcc(x_brahms, sr=fs_brahms, n_mfcc=n_mfcc).T
Explanation: Step 2: Extract Features
For each segment, compute the MFCCs. Experiment with n_mfcc to select a different number of coefficients, e.g. 12.
End of explanation
mfcc_brahms.shape
Explanation: We transpose the result to accommodate scikit-learn which assumes that each row is one observation, and each column is one feature dimension:
End of explanation
scaler = sklearn.preprocessing.StandardScaler()
mfcc_brahms_scaled = scaler.fit_transform(mfcc_brahms)
Explanation: Scale the features to have zero mean and unit variance:
End of explanation
mfcc_brahms_scaled.mean(axis=0)
mfcc_brahms_scaled.std(axis=0)
Explanation: Verify that the scaling worked:
End of explanation
filename_busta = 'busta_rhymes_hits_for_days.mp3'
url = "http://audio.musicinformationretrieval.com/" + filename_busta
urllib.urlretrieve?
# Your code here. Download the second audio file in the same manner as the first audio file above.
Explanation: Step 2b: Repeat steps 1 and 2 for another audio file.
End of explanation
librosa.load?
# Your code here. Load the second audio file in the same manner as the first audio file.
# x_busta, fs_busta =
Explanation: Load 120 seconds of an audio file:
End of explanation
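A possible way to fill in the placeholder, mirroring how the first file was loaded (this assumes filename_busta has already been downloaded):
x_busta, fs_busta = librosa.load(filename_busta, duration=120)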
IPython.display.Audio?
Explanation: Listen to the second audio file.
End of explanation
plt.plot?
# See http://musicinformationretrieval.com/stft.html for more details on displaying spectrograms.
librosa.feature.melspectrogram?
librosa.amplitude_to_db?
librosa.display.specshow?
Explanation: Plot the time-domain waveform and spectrogram of the second audio file. In what ways does the time-domain waveform look different than the first audio file? What differences in musical attributes might this reflect? What additional insights are gained from plotting the spectrogram? Explain.
End of explanation
librosa.feature.mfcc?
# Your code here:
# mfcc_busta =
mfcc_busta.shape
Explanation: Extract MFCCs from the second audio file. Be sure to transpose the resulting matrix such that each row is one observation, i.e. one set of MFCCs. Also be sure that the shape and size of the resulting MFCC matrix is equivalent to that for the first audio file.
End of explanation
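One possible solution sketch, matching the shape convention used for the first file (assumes x_busta, fs_busta and n_mfcc from the cells above):
mfcc_busta = librosa.feature.mfcc(x_busta, sr=fs_busta, n_mfcc=n_mfcc).T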
scaler.transform?
# Your code here:
# mfcc_busta_scaled =
Explanation: Scale the resulting MFCC features to have approximately zero mean and unit variance. Re-use the scaler from above.
End of explanation
mfcc_busta_scaled.mean?
mfcc_busta_scaled.std?
Explanation: Verify that the mean of the MFCCs for the second audio file is approximately equal to zero and the variance is approximately equal to one.
End of explanation
features = numpy.vstack((mfcc_brahms_scaled, mfcc_busta_scaled))
features.shape
Explanation: Step 3: Train a Classifier
Concatenate all of the scaled feature vectors into one feature table.
End of explanation
labels = numpy.concatenate((numpy.zeros(len(mfcc_brahms_scaled)), numpy.ones(len(mfcc_busta_scaled))))
Explanation: Construct a vector of ground-truth labels, where 0 refers to the first audio file, and 1 refers to the second audio file.
End of explanation
# Support Vector Machine
model = sklearn.svm.SVC()
Explanation: Create a classifier model object:
End of explanation
model.fit?
# Your code here
Explanation: Train the classifier:
End of explanation
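A possible solution sketch for the training step, assuming features and labels as constructed above:
model.fit(features, labels)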
x_brahms_test, fs_brahms = librosa.load(filename_brahms, duration=10, offset=120)
x_busta_test, fs_busta = librosa.load(filename_busta, duration=10, offset=120)
Explanation: Step 4: Run the Classifier
To test the classifier, we will extract an unused 10-second segment from the earlier audio files as test excerpts:
End of explanation
IPython.display.Audio?
IPython.display.Audio?
Explanation: Listen to both of the test audio excerpts:
End of explanation
librosa.feature.mfcc?
librosa.feature.mfcc?
Explanation: Compute MFCCs from both of the test audio excerpts:
End of explanation
scaler.transform?
scaler.transform?
Explanation: Scale the MFCCs using the previous scaler:
End of explanation
numpy.vstack?
Explanation: Concatenate all test features together:
End of explanation
numpy.concatenate?
Explanation: Concatenate all test labels together:
End of explanation
model.predict?
Explanation: Compute the predicted labels:
End of explanation
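One way to fill in the preceding placeholder cells, sketched under the assumption that scaler, n_mfcc, model and the two test signals are defined as above; it also creates the test_features, test_labels and scaled test MFCC variables that the next cells expect:
mfcc_brahms_test = librosa.feature.mfcc(x_brahms_test, sr=fs_brahms, n_mfcc=n_mfcc).T
mfcc_busta_test = librosa.feature.mfcc(x_busta_test, sr=fs_busta, n_mfcc=n_mfcc).T
mfcc_brahms_test_scaled = scaler.transform(mfcc_brahms_test)
mfcc_busta_test_scaled = scaler.transform(mfcc_busta_test)
test_features = numpy.vstack((mfcc_brahms_test_scaled, mfcc_busta_test_scaled))
test_labels = numpy.concatenate((numpy.zeros(len(mfcc_brahms_test_scaled)), numpy.ones(len(mfcc_busta_test_scaled))))
predicted_labels = model.predict(test_features)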
score = model.score(test_features, test_labels)
score
Explanation: Finally, compute the accuracy score of the classifier on the test data:
End of explanation
# Your code here.
Explanation: Currently, the classifier returns one prediction for every MFCC vector in the test audio signal. Can you modify the procedure above such that the classifier returns a single prediction for a 10-second excerpt?
End of explanation
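One possible approach, sketched with the scaled test MFCCs from the previous sketch: predict a label for every frame and take a majority vote so the whole 10-second excerpt gets a single label:
votes = model.predict(mfcc_busta_test_scaled)
excerpt_prediction = int(round(votes.mean()))   # majority vote: 0 = first file, 1 = second file
print(excerpt_prediction)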
df_brahms = pandas.DataFrame(mfcc_brahms_test_scaled)
df_brahms.shape
df_brahms.head()
df_busta = pandas.DataFrame(mfcc_busta_test_scaled)
Explanation: Step 5: Analysis in Pandas
Read the MFCC features from the first test audio excerpt into a data frame:
End of explanation
df_brahms.corr()
df_busta.corr()
Explanation: Compute the pairwise correlation of every pair of 12 MFCCs against one another for both test audio excerpts. For each audio excerpt, which pair of MFCCs are the most correlated? least correlated?
End of explanation
df_brahms.plot.scatter?
Explanation: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
End of explanation
df_busta.plot.scatter?
Explanation: Display a scatter plot of any two of the MFCC dimensions (i.e. columns of the data frame) against one another. Try for multiple pairs of MFCC dimensions.
End of explanation
df_brahms[0].plot.hist()
df_busta[11].plot.hist()
Explanation: Plot a histogram of all values across a single MFCC, i.e. MFCC coefficient number. Repeat for a few different MFCC numbers:
End of explanation |
11,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Biology with Python
By Fatih Enes Kemal Ergin
In this small tutorial I will talk about biological concepts with theory and implementation in Python
Before we go into the implementation, we should cover the theoratical side of the biology. This beginning part will be for the programmers who has a little background on biology and to remind them, if you don't know anything about biology you should go to this post. If you have good amount of information about the theoratical part of the Biology, you may skip this reminder part and go directly to the implementations...
Short Biology
Chemicals and molecules are the main constituent of the life, in here we will start examining biology with 3 main molecule structures
Step3: 2. Estimating the Molecular Mass
Step5: 3. Finding a Sequence Motif
The next example script is designed to find a particular smaller sub-sequence within a larger sequence. This kind of operation is useful because specific small sequences, called motifs, often have important biological roles.
Here is simple example of how to find a fixed sub-sequence within a larger sequence
Step6: 4. GC Content
The next example investigates a DNA sequence by measuring its GC content
Step8: 5. Protein hydrophobicity plot
Now we will move on to another example which produces data which we can display as a graph, but this time it will be for a protein sequence. The task here is to generate a plot of how water-hating, or to use the proper term hydrophobic, a given stretch of residues is.
The next example function aims to predict whether a protein possesses a sufficiently hydrophobic segment of residues (which will fold into a helix) that will allow it to be inserted into a cell’s system of membranes.
Initially we define a hydrophobicity scale, then we define the function that will perform the search so that it accepts a protein sequence and hydrophobicity scale dictionary as mandatory inputs, and an optional input to specify a search window size.
Step11: 6. Measuring Repetitiveness
We will refer to the formulation we use for this comparative measure of repetitiveness as the relative entropy, also known as the Kullback-Leibler divergence.
The actual example code will be broken up into two separate functions; one will calculate the relative entropy and the other will scan through a sequence compiling the results.
Step13: 7. Protein Isoelectric Point
This is an example that involves an optimisation.
It is commonplace to come across problems where the values we are interested in are not directly accessible.
The topic of this example is the estimation of the isoelectric point of a protein, which we will call the pI. This is a measurable property of a protein
Step15: The estimateIsoelectric function uses the estimateCharge function defined above to estimate the pH at which a protein sequence will be neutrally charged. To the input sequence of letters we add the + and - symbols to represent the charge groups at the N and C termini (strictly speaking these don’t have to be at the ends because order is unimportant).
Step16: Obtaining Sequences with BioPython
You will naturally want to get your sequences from a database or file where they are stored, rather than having to type sequence letters into a Python file, if you want to use some algorithms from above...
There are a lot of tools on bioinformatics but we mostly use and rely on BioPython, since it has a lot of methods to use it.
Reading and Writing FASTA files
To read a FASTA-format file using BioPython we use the SeqIO module, which in this case takes an open file object and extracts each sequence of the file, in turn creating a special object for each record.
Step17: Writing a FASTA file using BioPython is slightly trickier because we have to first create the right type of BioPython objects (SeqRecord), which we then pass into a function for writing.
We make several more imports from the BioPython library. The SeqRecord is the final object we wish to make, and which will be written out. The Seq object is needed internally to make a SeqRecord and IUPAC is needed to check the sequence letters according to some (the IUPAC) standard.
Step18: Accessing Public Databases
Sometimes, we wish to get data directly from a database then there are a few helper functions in BioPython that allow easy access to some large sequence databases via Internet-based services, rather than having to talk to the database directly.
Here is the example to extract FASTA files from NCBI's GenBank database;
import the Entrez module
set the email address attribute (to identify ourselves, as encouraged by the database
call a function to fetch a given entry based on a given database type “protein”, return format type “fasta” and sequence identifier number | Python Code:
# Here is the genetic code of the amino acids defined as dictionaries
STANDARD_GENETIC_CODE = {'UUU':'Phe', 'UUC':'Phe', 'UCU':'Ser', 'UCC':'Ser',
'UAU':'Tyr', 'UAC':'Tyr', 'UGU':'Cys', 'UGC':'Cys',
'UUA':'Leu', 'UCA':'Ser', 'UAA':None, 'UGA':None,
'UUG':'Leu', 'UCG':'Ser', 'UAG':None, 'UGG':'Trp',
'CUU':'Leu', 'CUC':'Leu', 'CCU':'Pro', 'CCC':'Pro',
'CAU':'His', 'CAC':'His', 'CGU':'Arg', 'CGC':'Arg',
'CUA':'Leu', 'CUG':'Leu', 'CCA':'Pro', 'CCG':'Pro',
'CAA':'Gln', 'CAG':'Gln', 'CGA':'Arg', 'CGG':'Arg',
'AUU':'Ile', 'AUC':'Ile', 'ACU':'Thr', 'ACC':'Thr',
'AAU':'Asn', 'AAC':'Asn', 'AGU':'Ser', 'AGC':'Ser',
'AUA':'Ile', 'ACA':'Thr', 'AAA':'Lys', 'AGA':'Arg',
'AUG':'Met', 'ACG':'Thr', 'AAG':'Lys', 'AGG':'Arg',
'GUU':'Val', 'GUC':'Val', 'GCU':'Ala', 'GCC':'Ala',
'GAU':'Asp', 'GAC':'Asp', 'GGU':'Gly', 'GGC':'Gly',
'GUA':'Val', 'GUG':'Val', 'GCA':'Ala', 'GCG':'Ala',
'GAA':'Glu', 'GAG':'Glu', 'GGA':'Gly', 'GGG':'Gly'
}
# Pre-defined DNA sequence, We will use this along the way.
dnaSeq = 'ATGGTGCATCTGACTCCTGAGGAGAAGTCTGCCGTTACTGCCCTGTGGGGCAAGGTG'
def proteinTranslation(seq, geneticCode):
    """This function translates a nucleic acid sequence into a
    protein sequence, until the end or until it comes across
    a stop codon
    """
    # Changes all the T into U, DNA to RNA
    seq = seq.replace('T','U') # Make sure we have RNA sequence
    proteinSeq = [] # Initializing the proteinSeq list to store the output
    i = 0
    while i+2 < len(seq):
        # Get codons of three letters
        codon = seq[i:i+3]
        # Get the matching amino acid
        aminoAcid = geneticCode[codon]
        # If a stop codon is found, stop looping
        if aminoAcid is None: # Found stop codon
            break
        # Otherwise add that amino acid to the proteinSeq list
        proteinSeq.append(aminoAcid)
        i += 3
    return proteinSeq
print proteinTranslation(dnaSeq, STANDARD_GENETIC_CODE)
print ('-'*30)
# You can also directly change DNA to RNA and save it as RNAseq
rnaSeq = dnaSeq.replace('T','U')
print rnaSeq
Explanation: Biology with Python
By Fatih Enes Kemal Ergin
In this small tutorial I will talk about biological concepts with theory and implementation in Python
Before we go into the implementation, we should cover the theoretical side of the biology. This beginning part is for programmers who have a little background in biology, as a reminder; if you don't know anything about biology you should go to this post. If you have a good amount of information about the theoretical part of biology, you may skip this reminder and go directly to the implementations...
Short Biology
Chemicals and molecules are the main constituent of the life, in here we will start examining biology with 3 main molecule structures: Proteins, RNA, and DNA.
Proteins:
Proteins are the most commonly used molecules in life forms; we may say that without proteins we would not be different from each other.
They are sequences made up with 20 different amino acids by polypeptides.
The structure of a protein directly affects the chemical activity of the body/life.
Most proteins are in 3D structure which gives them an ability to vary their functionality in the life forms.
Protein structure is exceedingly complex and not easy to predict, which makes it a very good subject to focus research on.
In computer science we will represent them as letters (20 different) put together as a sequence, where each letter represents a different amino acid.
A sequence can also be represented by the three-letter amino acid code form.
The amino acids that are linked into a protein chain are often referred to as residues.
You will see the term residue used when one wants to refer to a particular amino acid in a particular position of a protein chain.
The process of making a protein by cellular ribosomes, and the information of which amino-acid will go where is coming with another molecule called RNA.
RNA:
RNA is a molecule which has different roles in the cellular program, such as messenger, transfer, ribosomal and much more...
They are made up of small entities called nucleotides.
Information within RNA comes from DNA
Not all RNAs are using for protein synthesis, they have also other duties in cell.
Components of RNA are mostly short-lived copy of DNA
DNA:
DNA is the information store of everything happening inside a life form. It stores every feature's information and passes it on to its offspring.
The regions of DNA which makes the RNA is called genes.
They are also made up of small entities called nucleotides: Adenine (A), Guanine (G), Thymine (T), Cytosine (C).
A DNA chain is commonly represented as a one-letter sequence
DNA sequence is really a representation of a double-stranded molecule which a relation of A-T and G-C
one long pair of DNA strands inside a cell is called a chromosome.
Complementary strand is the strand with relational strand from double-stranded DNA
ATGACGT is one strand, other is TACTGCA
Now let's talk about some essential concepts that all bioinformatics scientists must know to start their career.
Transcription
The process of reading DNA and creating RNA from it is called transcription. In computation side of it we will only use the same DNA sequence and will replace DNA nucleotide 'T' into 'U'. dna.replace('T','U') does the job in Python
The DNA strand which has the same sequence as the RNA is called the coding strand.
Translate
Most RNA molecules go on to specify protein amino acid sequences in a process called translation; these are called messenger RNAs (mRNA).
Each subsequent group of three bases is called a codon.
The regions of an RNA chain that are removed are called introns, and those remaining are called exons.
Introns' presence makes it significantly more difficult to detect which bits of a gene are actually used to make protein sequences.
DNA Sequencing
Nowadays, sequencing in DNA, RNA, or proteins are coming from DNA, mostly. Since we know how to convert DNA to RNA to Proteins, we don't need every informations, if you know the gene-coding regions. DNA is sequenced with a special kind of chemical reaction, which these days is often performed by a computerised machine.
Now we have some background in biology let's dive into the algorithms and their explanations...
Using biological sequences in computing
Here are some important tips:
Fit the sequences into some kind of data structures and make sure it will be reusable.
The Commonest and the simplest is storing your sequence as text
DNA and RNA will be represented as strings combined with 4 different characters (A, C, G, T/U)
Proteins have the same style with 20 different characters
Representing unusual or artificial amino-acids, prone: 'PRO', hydroxyproline: 'HYP'
1. Translating a DNA sequence into protein:
In the first script, we will translate the given DNA sequence into the protein according to pre-defined structures for aa-protein representation. Right now, we won't think of starting codon, stop codon, or special codons, but we will only implement the main concept of the translating...
I will explain the code with comments...
End of explanation
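The notes above mention complementary strands, but none of the scripts below actually build one; a minimal sketch of a reverse-complement helper (plain Python, names chosen here for illustration) could look like this:
def reverseComplement(seq):
    # Pair each base with its complement and read the strand backwards (5'->3')
    complement = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
    return ''.join(complement[base] for base in reversed(seq))

print(reverseComplement('ATGACGT'))  # expected output: ACGTCAT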
# Define function with seq and molType Protein
def estimateMolMass(seq, molType='protein'):
    """Calculate the molecular weight of a biological sequence assuming
    normal isotopic ratios and protonation/modification states
    """
    # Dictionary of residue masses for each molecule type
    residueMasses = {
        "DNA": {"G":329.21, "C":289.18, "A":323.21, "T":304.19},
        "RNA": {"G":345.21, "C":305.18, "A":329.21, "U":302.16},
        "protein": {"A": 71.07, "R":156.18, "N":114.08, "D":115.08,
                    "C":103.10, "Q":128.13, "E":129.11, "G": 57.05,
                    "H":137.14, "I":113.15, "L":113.15, "K":128.17,
                    "M":131.19, "F":147.17, "P": 97.11, "S": 87.07,
                    "T":101.10, "W":186.20, "Y":163.17, "V": 99.13}}
    # Get the masses for the requested molType from residueMasses
    massDict = residueMasses[molType]
    # Begin with mass of extra end atoms H + OH
    molMass = 18.02
    # Loop through each letter in sequence
    for letter in seq:
        # Add the residue mass for each matching letter and sum them up
        molMass += massDict.get(letter, 0.0)
    # Return the total molecular mass
    return molMass
# Test Case 1
proteinSeq = 'IRTNGTHMQPLLKLMKFQKFLLELFTLQKRKPEKGYNLPIISLNQ' # Protein Sequence is defined
print estimateMolMass(proteinSeq) # function called with proteinSeq variable and default value protein
# Test Case 2
print estimateMolMass(dnaSeq, molType='DNA') # function called with dnaSeq variable and DNA molType
Explanation: 2. Estimating the Molecular Mass:
This next script estimates the mass of a DNA, RNA or protein molecule (in units of daltons). This is only an estimate because various residues reversibly bind hydrogen ions under different conditions (i.e. pH affects whether H+ ions are joined to the acidic and basic sites) and we are assuming standard proportions of the various isotopes.
Steps:
Define a function with 2 arguments, sequence, and MoleculeType
Define a dictionary inside the function that stores the average molecular weights of the different kinds of residue
Define a variable to hold the total for the molecular mass
End of explanation
profile = {
'A':[ 61, 16,352, 3,354,268,360,222,155, 56, 83, 82, 82, 68, 77],
'C':[145, 46, 0, 10, 0, 0, 3, 2, 44,135,147,127,118,107,101],
'G':[152, 18, 2, 2, 5, 0, 10, 44,157,150,128,128,128,139,140],
'T':[ 31,309, 35,374, 30,121, 6,121, 33, 48, 31, 52, 61, 75, 71]}
def matchDnaProfile(seq, profile):
    """Find the best-matching position and score when comparing a DNA
    sequence with a DNA sequence profile
    """
    bestScore = 0 # Just to start with
    bestPosition = None # Just to start with
    width = len(profile['A'])
    for i in range(len(seq)-width):
        score = 0
        for j in range(width):
            letter = seq[i+j]
            score += profile[letter][j]
        if score > bestScore:
            bestScore = score
            bestPosition = i
    return bestScore, bestPosition
# Test Case 1
score, position = matchDnaProfile(dnaSeq, profile)
print(score, position, dnaSeq[position:position+15])
Explanation: 3. Finding a Sequence Motif
The next example script is designed to find a particular smaller sub-sequence within a larger sequence. This kind of operation is useful because specific small sequences, called motifs, often have important biological roles.
Here is simple example of how to find a fixed sub-sequence within a larger sequence:
Python
seq = 'AGCTCGCTCGCTGCGTATAAAATCGCATCGCGCGCAGC'
position1 = seq.find('TATAAA')
position2 = seq.find('GAGGAG')
In many cases, however, it is not just one single well-defined sub-sequence that corresponds to a motif with a biological function.
This particular example attempts to find the region of a DNA sequence called the ‘TATA box’. The biological role of this sequence is to help define where the start of a gene is. Note that only some genes use the TATA box system.
End of explanation
%matplotlib inline
# Define a function with sequence and window Size of 10
def calcGcContent(seq, winSize=10):
    gcValues = [] # Initializing the gcValues
    for i in range(len(seq)-winSize):
        subSeq = seq[i:i+winSize] # Slice subsequence from i to i+winSize
        numGc = subSeq.count('G') + subSeq.count('C') # Count the G and C inside the subSeq
        value = numGc/float(winSize) # Get the GC rate by dividing by winSize
        gcValues.append(value) # Add the value found to gcValues
    return gcValues # Return the gcValues list
# Test Case 1
from matplotlib import pyplot # Call the plotting library
gcResults = calcGcContent(dnaSeq) # get the result from function above and save result to gcResults
pyplot.plot(gcResults) # plot it
pyplot.show() # Show it
Explanation: 4. GC Content
The next example investigates a DNA sequence by measuring its GC content: i.e. the percentage of the total base pairs that are G:C (rather than A:T). All we need to do for this is to take the sequence of one strand of DNA and simply count how many of the nucleotides are G or C.
End of explanation
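As a quick cross-check (a small sketch reusing the dnaSeq defined earlier), the overall GC fraction of the whole sequence can be computed without any windowing:
gc_total = (dnaSeq.count('G') + dnaSeq.count('C')) / float(len(dnaSeq))
print('Overall GC content: %.2f' % gc_total)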
%matplotlib inline
# Defining a scale of hydrophobicity
GES_SCALE = {'F':-3.7,'M':-3.4,'I':-3.1,'L':-2.8,'V':-2.6,
'C':-2.0,'W':-1.9,'A':-1.6,'T':-1.2,'G':-1.0,
'S':-0.6,'P': 0.2,'Y': 0.7,'H': 3.0,'Q': 4.1,
'N': 4.8,'E': 8.2,'K': 8.8,'D': 9.2,'R':12.3}
# Define a function to scan a protein sequence
def hydrophobicitySearch(seq, scale, winSize=15):
    """Scan a protein sequence for hydrophobic regions using the GES
    hydrophobicity scale.
    """
    # Initialize score to None
    score = None
    # Initialize an empty scoreList
    scoreList = []
    # Loop through sequence
    for i in range(len(seq)- winSize):
        # j marks the end of the current window
        j = i + winSize
        if score is None:
            score = 0
            for k in range(i,j):
                score += scale[seq[k]]
        else:
            score += scale[seq[j-1]]
            score -= scale[seq[i-1]]
        scoreList.append(score)
    return scoreList
# Test Case 1
from matplotlib import pyplot
scores = hydrophobicitySearch(proteinSeq, GES_SCALE)
pyplot.plot(scores)
pyplot.show()
Explanation: 5. Protein hydrophobicity plot
Now we will move on to another example which produces data which we can display as a graph, but this time it will be for a protein sequence. The task here is to generate a plot of how water-hating, or to use the proper term hydrophobic, a given stretch of residues is.
The next example function aims to predict whether a protein possesses a sufficiently hydrophobic segment of residues (which will fold into a helix) that will allow it to be inserted into a cell’s system of membranes.
Initially we define a hydrophobicity scale, then we define the function that will perform the search so that it accepts a protein sequence and hydrophobicity scale dictionary as mandatory inputs, and an optional input to specify a search window size.
End of explanation
%matplotlib inline
def calcRelativeEntropy(seq, resCodes):
    """Calculate a relative entropy value for the residues in a
    sequence compared to a uniform null hypothesis.
    """
    from math import log
    N = float(len(seq))
    base = 1.0/len(resCodes)
    prop = {}
    for r in resCodes:
        prop[r] = 0
    for r in seq:
        prop[r] += 1
    for r in resCodes:
        prop[r] /= N
    H = 0
    for r in resCodes:
        if prop[r] != 0.0:
            h = prop[r]* log(prop[r]/base, 2.0)
            H += h
    H /= log(base, 2.0)
    return H

def relativeEntropySearch(seq, winSize, isProtein=False):
    """Scan a sequence for repetitiveness by calculating relative
    information entropy.
    """
    lenSeq = len(seq)
    scores = [0.0] * lenSeq
    extraSeq = seq[:winSize]
    seq += extraSeq
    if isProtein:
        resCodes = 'ACDEFGHIKLMNPQRSTVWY'
    else:
        resCodes = 'GCAT'
    for i in range(lenSeq):
        subSeq = seq[i:i+winSize]
        scores[i] = calcRelativeEntropy(subSeq, resCodes)
    return scores
# Test Case 1
from matplotlib import pyplot
dnaScores = relativeEntropySearch(dnaSeq, 6)
proteinScores = relativeEntropySearch(proteinSeq, 10, isProtein=True)
pyplot.plot(dnaScores)
pyplot.plot(proteinScores)
pyplot.show()
Explanation: 6. Measuring Repetitiveness
We will refer to the formulation we use for this comparative measure of repetitiveness as the relative entropy, also known as the Kullback-Leibler divergence.
The actual example code will be broken up into two separate functions; one will calculate the relative entropy and the other will scan through a sequence compiling the results.
End of explanation
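For reference, the quantity computed by calcRelativeEntropy below is the Kullback-Leibler divergence of the observed residue proportions $p_r$ from the uniform null $q = 1/|A|$ over the alphabet $A$, which the code then rescales by dividing by $\log_2 q$:
$$H = \frac{1}{\log_2 q} \sum_{r \in A,\ p_r > 0} p_r \log_2 \frac{p_r}{q}$$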
def estimateCharge(sequence, pH):
    """Using pKa values estimate the charge of a sequence of
    amino acids at a given pH
    """
    pKaDict = {'+': 8.0,'-': 3.1,'K':10.0,'R':12.0,
               'H': 6.5,'E': 4.4,'D': 4.4,'Y':10.0,'C': 8.5}
    isAcid = {'+':False,'-':True,'K':False,'R':False,
              'H':False,'E':True,'D':True,'Y':True,'C':True}
    total = 0.0
    for aminoAcid in sequence:
        pKa = pKaDict.get(aminoAcid)
        if pKa is not None:
            r = 10.0 ** (pH-pKa)
            dissociated = r/(r+1.0)
            if isAcid[aminoAcid]:
                charge = -1.0 * dissociated
            else:
                charge = 1.0 - dissociated
            total += charge
    return total
Explanation: 7. Protein Isoelectric Point
This is an example that involves an optimisation.
It is commonplace to come across problems where the values we are interested in are not directly accessible.
The topic of this example is the estimation of the isoelectric point of a protein, which we will call the pI. This is a measurable property of a protein: it is the pH at which the protein carries no overall electric charge.
To calculate the pI we must find the pH where we think the positive and negative charges in the protein balance, hence must first have a method for estimating the charge of a protein chain at a given pH.
The optimisation algorithm we will use employs a divide-and-conquer strategy. We test various pH values by stepping between test points and for a given pH value and whether the resulting charge is above or below zero (positive or negative) tells us in which direction we must search next for a better answer. Also, if we come across a better guess for the pI (i.e. a pH that predicts a charge closer to zero) then we reduce the step size (how far to go for the next guess) by half so that we get increasingly close to the optimum value and don’t overshoot far.
The function estimateCharge is designed to estimate the charge of a given sequence at a given pH. The function takes two input variables which are helpfully named and returns a single charge value.
End of explanation
def estimateIsoelectric(sequence):
    """Estimate the charge neutral pH of a protein sequence.
    This is just a guess as pKa values will vary according to
    protein sequence, conformation and conditions.
    """
    sequence = '+' + sequence + '-' # assumes seq is a string
    bestValue = 0.0
    minCharge = estimateCharge(sequence, bestValue)
    increment = 7.0
    while abs(minCharge) > 0.001:
        pHtest = bestValue + increment
        charge = estimateCharge(sequence, pHtest)
        if abs(charge) < abs(minCharge):
            minCharge = charge
            bestValue = pHtest
        else:
            increment = abs(increment)/2.0
            if minCharge < 0.0:
                increment *= -1
    return bestValue
# Test Case1
pI = estimateIsoelectric(proteinSeq)
pI
Explanation: The estimateIsoelectric function uses the estimateCharge function defined above to estimate the pH at which a protein sequence will be neutrally charged. To the input sequence of letters we add the + and - symbols to represent the charge groups at the N and C termini (strictly speaking these don’t have to be at the ends because order is unimportant).
End of explanation
# Calling the BioPython library and its sub-library SeqIO
from Bio import SeqIO
# Opening and reading the fasta file, store it in fileObj.
fileObj = open("sequence.fasta", "rU")
# Loop through fileObj and parse one record per iteration
for protein in SeqIO.parse(fileObj, 'fasta'):
    # Print the id of the sequence
    print(protein.id)
    # Print the sequence itself
    print(protein.seq)
    # Print the isoelectric point of the sequence
    print(estimateIsoelectric(protein.seq))
# We should close the file object after using it!
fileObj.close()
Explanation: Obtaining Sequences with BioPython
You will naturally want to get your sequences from a database or file where they are stored, rather than having to type sequence letters into a Python file, if you want to use some algorithms from above...
There are a lot of tools on bioinformatics but we mostly use and rely on BioPython, since it has a lot of methods to use it.
Reading and Writing FASTA files
To read a FASTA-format file using BioPython we use the SeqIO module, which in this case takes an open file object and extracts each sequence of the file, in turn creating a special object for each record.
End of explanation
from Bio.SeqRecord import SeqRecord # SeqRecord to make right type of object
from Bio.Seq import Seq # To use seq methods
from Bio.Alphabet import IUPAC #
# Will create an output file and write it as output.fasta
fileObj = open("output.fasta", "w")
# Make a Seq with accepting only 1 protein character according to IUPAC
seqObj = Seq(proteinSeq, IUPAC.protein)
# Add id to the sequence "Test"
proteinObj = SeqRecord(seqObj, id="TEST")
# Writing the sequences into defined format
SeqIO.write([proteinObj,], fileObj, 'fasta')
# Close the fileObj
fileObj.close()
Explanation: Writing a FASTA file using BioPython is slightly trickier because we have to first create the right type of BioPython objects (SeqRecord), which we then pass into a function for writing.
We make several more imports from the BioPython library. The SeqRecord is the final object we wish to make, and which will be written out. The Seq object is needed internally to make a SeqRecord and IUPAC is needed to check the sequence letters according to some (the IUPAC) standard.
End of explanation
from Bio import Entrez # Importing the library to use Entrez
Entrez.email = '[email protected]' # Defining email for NCBI
# fetch the data from database:protein, typeofdata:fasta, and id
socketObj = Entrez.efetch(db="protein", rettype="fasta", id="71066805") # fetch the data from database: protein type
# Read it with SeqIO and save the fasta into dnaObj
dnaObj = SeqIO.read(socketObj, "fasta")
# close the socketObj
socketObj.close()
# show the fetched sequence description
print(dnaObj.description)
# show the fetched sequence itself
print(dnaObj.seq)
# In a similar way we can read from SWISSPROT record using the ExPASy
# another library to fetch sequence
from Bio import ExPASy
# Open the connection and get the sequence which is HBB_HUMAN
socketObj = ExPASy.get_sprot_raw('HBB_HUMAN')
# read the swiss file fetched and save it to proteinObj
proteinObj = SeqIO.read(socketObj, "swiss")
# Close the connection
socketObj.close()
# show the description of sequence fetched
print(proteinObj.description)
# Show the sequence itself
print(proteinObj.seq)
Explanation: Accessing Public Databases
Sometimes we wish to get data directly from a database; for this there are a few helper functions in BioPython that allow easy access to some large sequence databases via Internet-based services, rather than having to talk to the database directly.
Here is the example to extract FASTA files from NCBI's GenBank database;
import the Entrez module
set the email address attribute (to identify ourselves, as encouraged by the database)
call a function to fetch a given entry based on a given database type “protein”, return format type “fasta” and sequence identifier number
End of explanation |
11,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
new_data['residuals1'] = results.resid
Step1: Subsetting the data
Three different methods for subsetting the data.
1. Using a systematic selection by index modulus
2. Using a random uniform selection by indices.
3. A geographic subselection (Clip)
Systematic selection
Step2: Random (Uniform) selection
Step4: Geographic subselection
Step5: Model Analysis with the empirical variogram
Step6: Analysis and Results for the systematic sample
Step7: Test for analysis | Python Code:
#new_data = prepareDataFrame("/RawDataCSV/idiv_share/plotsClimateData_11092017.csv")
## En Hec
#new_data = prepareDataFrame("/home/hpc/28/escamill/csv_data/idiv/plotsClimateData_11092017.csv")
## New "official" dataset
new_data = prepareDataFrame("/RawDataCSV/idiv_share/FIA_Plots_Biomass_11092017.csv")
#IN HEC
#new_data = prepareDataFrame("/home/hpc/28/escamill/csv_data/idiv/FIA_Plots_Biomass_11092017.csv")
Explanation: new_data['residuals1'] = results.resid
End of explanation
def systSelection(dataframe,k):
    n = len(dataframe)
    idxs = range(0,n,k)
    systematic_sample = dataframe.iloc[idxs]
    return systematic_sample
##################
k = 10 # The k-th element to take as a sample
systematic_sample = systSelection(new_data,k)
ax= systematic_sample.plot(column='logBiomass',figsize=(16,10),cmap=plt.cm.Blues,edgecolors='')
Explanation: Subsetting the data
Three different methods for subsetting the data.
1. Using a systematic selection by index modulus
2. Using a random uniform selection by indices.
3. A geographic subselection (Clip)
Systematic selection
End of explanation
def randomSelection(dataframe,p):
    n = len(dataframe)
    idxs = np.random.choice(n,p,replace=False)
    random_sample = dataframe.iloc[idxs]
    return random_sample
#################
n = len(new_data)
p = 3000 # The amount of samples taken (let's do it without replacement)
Explanation: Random (Uniform) selection
End of explanation
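A quick visual check, sketched under the assumption that new_data, p and plt are available as above, draws one random subsample and plots it the same way as the systematic sample:
random_sample = randomSelection(new_data, p)
ax = random_sample.plot(column='logBiomass', figsize=(16, 10), cmap=plt.cm.Blues, edgecolors='')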
def subselectDataFrameByCoordinates(dataframe,namecolumnx,namecolumny,minx,maxx,miny,maxy):
    """Returns a subselection by coordinates using the dataframe."""
    minx = float(minx)
    maxx = float(maxx)
    miny = float(miny)
    maxy = float(maxy)
    section = dataframe[lambda x: (x[namecolumnx] > minx) & (x[namecolumnx] < maxx) & (x[namecolumny] > miny) & (x[namecolumny] < maxy) ]
    return section
# Consider the following subregion
minx = -100
maxx = -85
miny = 30
maxy = 35
section = subselectDataFrameByCoordinates(new_data,'LON','LAT',minx,maxx,miny,maxy)
#section = new_data[lambda x: (x.LON > minx) & (x.LON < maxx) & (x.LAT > miny) & (x.LAT < maxy) ]
section.plot(column='logBiomass')
Explanation: Geographic subselection
End of explanation
# old variogram (using all data sets)
#gvg,tt = createVariogram("/apps/external_plugins/spystats/HEC_runs/results/logbiomas_logsppn_res.csv",new_data)
## New variogram for new data
gvg,tt = createVariogram("/apps/external_plugins/spystats/HEC_runs/results/variogram/data_envelope.csv",new_data)
#For HEC
#gvg,tt = createVariogram("/home/hpc/28/escamill/spystats/HEC_runs/results/variogram/data_envelope.csv",new_data)
import numpy as np
xx = np.linspace(0,1000000,1000)
gvg.plot(refresh=False)
plt.plot(xx,gvg.model.f(xx),lw=2.0,c='k')
plt.title("Empirical Variogram with fitted Whittle Model")
gvg.model
samples = map(lambda i : systSelection(new_data,i), range(20,2,-1))
samples = map(lambda i : randomSelection(new_data,3000),range(100))
s = samples[0]
vs = tools.Variogram(s,'logBiomass',model=gvg.model)
%timeit vs.distance_coordinates
%time vs.model.f(vs.distance_coordinates.flatten())
## let's try to use a better model
vs.model.f(vs.distance_coordinates.flatten())
%time vs.model.corr_f(vs.distance_coordinates.flatten()).reshape(vs.distance_coordinates.shape)
matern_model = tools.MaternVariogram(sill=0.340125401705,range_a=5577.83789733, nugget=0.33, kappa=4)
whittle_model = tools.WhittleVariogram(sill=0.340288288241, range_a=40963.3203528, nugget=0.329830410223, alpha=1.12279978135)
exp_model = tools.ExponentialVariogram(sill=0.340294258738, range_a=38507.8253768, nugget=0.329629457808)
gaussian_model = tools.GaussianVariogram(sill=0.340237044718, range_a=44828.0323827, nugget=0.330734960804)
spherical_model = tools.SphericalVariogram(sill=266491706445.0, range_a=3.85462485193e+19, nugget=0.3378323178453)
%time matern_model.f(vs.distance_coordinates.flatten())
%time whittle_model.f(vs.distance_coordinates.flatten())
%time exp_model.f(vs.distance_coordinates.flatten())
%time gaussian_model.f(vs.distance_coordinates.flatten())
%time spherical_model.f(vs.distance_coordinates.flatten())
%time mcf = matern_model.corr_f(vs.distance_coordinates.flatten())
%time wcf = whittle_model.corr_f(vs.distance_coordinates.flatten())
%time ecf = exp_model.corr_f(vs.distance_coordinates.flatten())
%time gcf = gaussian_model.corr_f(vs.distance_coordinates.flatten())
%time scf = spherical_model.corr_f(vs.distance_coordinates.flatten())
%time mcf0 = matern_model.corr_f_old(vs.distance_coordinates.flatten())
%time wcf0 = whittle_model.corr_f_old(vs.distance_coordinates.flatten())
%time ecf0 = exp_model.corr_f_old(vs.distance_coordinates.flatten())
%time gcf0 = gaussian_model.corr_f_old(vs.distance_coordinates.flatten())
%time scf0 = spherical_model.corr_f_old(vs.distance_coordinates.flatten())
w = matern_model.corr_f(vs.distance_coordinates.flatten())
w2 = matern_model.corr_f_old(vs.distance_coordinates.flatten())
print(np.array_equal(mcf,mcf0))
print(np.array_equal(wcf,wcf0))
print(np.array_equal(ecf,ecf0))
print(np.array_equal(gcf,gcf0))
#np.array_equal(scf,scf0)
np.array_equal()
%time vs.calculateCovarianceMatrix()
Explanation: Model Analysis with the empirical variogram
End of explanation
### read csv files
conf_ints = pd.read_csv("/outputs/gls_confidence_int.csv")
params = pd.read_csv("/outputs/params_gls.csv")
params2 = pd.read_csv("/outputs/params2_gls.csv")
pvals = pd.read_csv("/outputs/pvalues_gls.csv")
pnobs = pd.read_csv("/outputs/n_obs.csv")
prsqs = pd.read_csv("/outputs/rsqs.csv")
params
conf_ints
pvals
plt.plot(pnobs.n_obs,prsqs.rsq)
plt.title("$R^2$ statistic for GLS on logBiomass ~ logSppn using Sp.autocor")
plt.xlabel("Number of observations")
tt = params.transpose()
tt.columns = tt.iloc[0]
tt = tt.drop(tt.index[0])
plt.plot(pnobs.n_obs,tt.Intercept)
plt.title("Intercept parameter")
plt.plot(pnobs.n_obs,tt.logSppN)
plt.title("logSppn parameter")
Explanation: Analysis and Results for the systematic sample
End of explanation
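For reference, a minimal sketch of the generalised-least-squares estimator that the bundleToGLS / buildSpatialStructure / calculateGLS calls below are assumed to compute for logBiomass ~ logSppN; X, y and Sigma are illustrative names, not the actual implementation:
import numpy as np

def gls_fit(X, y, Sigma):
    # beta_hat = (X' Sigma^-1 X)^-1 X' Sigma^-1 y, with Sigma the spatial covariance matrix
    Si = np.linalg.pinv(Sigma)
    XtSi = X.T.dot(Si)
    beta_cov = np.linalg.pinv(XtSi.dot(X))   # covariance of the estimated coefficients
    beta_hat = beta_cov.dot(XtSi.dot(y))
    return beta_hat, beta_cov

# e.g. (assuming these column names): X = np.column_stack([np.ones(len(s)), s.logSppN]); y = s.logBiomass.values
# beta, beta_cov = gls_fit(X, y, covMat)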
ccs = [bundleToGLS(s, gvg.model) for s in samples]  # GLS fit for every subsample
#bundleToGLS(samples[22],gvg.model)
covMat = buildSpatialStructure(samples[8],gvg.model)
#np.linalg.pinv(covMat)
calculateGLS(samples[8],covMat)
#tt = covMat.flatten()
secvg = tools.Variogram(samples[8],'logBiomass',model=gvg.model)
DM = secvg.distance_coordinates
dm = DM.flatten()
dm.sort()
pdm = pd.DataFrame(dm)
xxx = pdm.loc[pdm[0] > 0].sort_values(by=0)  # keep only the strictly positive pairwise distances
xxx.shape
8996780 + 3000 - (3000 * 3000)  # negative of the count of off-diagonal zero distances, i.e. duplicated coordinates (hence the drop_duplicates below)
pdm.shape
dd = samples[22].drop_duplicates(subset=['newLon','newLat'])
secvg2 = tools.Variogram(dd,'logBiomass',model=gvg.model)
covMat = buildSpatialStructure(dd,gvg.model)
calculateGLS(dd,covMat)
samples[22].shape
gvg.model.corr_f(xxx[0].values)  # correlation function evaluated at the positive distances
gvg.model.corr_f([100])
gvg.model.corr_f([10])
Explanation: Test for analysis
End of explanation |
11,479 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding a discharge point source to a LEM
(Greg Tucker, CSDMS / CU Boulder, fall 2020)
This notebook shows how to add one or more discharge point sources to a Landlab-built landscape evolution model (LEM), using the flow routing components. The basic idea is to modify the water__unit_flux_in field to include a large flux (which could be represented as either drainage area or discharge) at one or more locations along the edge of a grid.
Step1: Docstring example from FlowAccumulator
The following is a tiny example from the FlowAccumulator documentation
Step2: We can extend this tiny example to show that you can subsequently modify the rnff array and it will take effect when you re-run the FlowAccumulator
Step3: Larger example
In this example, we create a slightly larger grid, with a surface that slopes down toward the south / bottom boundary. We will introduce a runoff point source at a node in the middle of the top-most non-boundary row.
Start by defining some parameters
Step4: Create grid and topography, and set boundaries
Step5: The FlowAccumulator component takes care of identifying drainage directions (here using the D8 method) and calculating the cumulative drainage area and surface water discharge.
Note that in this case we are assuming a default runoff value of unity, meaning that the calculated surface_water__discharge is actually just drainage area. To introduce the drainage area of a river entering at the top, we will use a large value for runoff. Because we are considering drainage area as the primary variable, with unit "runoff", our input runoff is a dimensionless variable
Step6: Changing the amount and/or location of input
We can change the input drainage area / discharge amount or location simply by modifying the water__unit_flux_in field. Here we will shift it to the left and double its magnitude.
Step7: Note that the drainage_area field does not recognize any runoff input. It continues to track only the local drainage area
Step8: This means that you should use the surface_water__discharge field rather than the drainage_area field, regardless of whether the former is meant to represent discharge (volume per time) or effective drainage area (area).
Combining with a Landscape Evolution Model
Here we'll set up a simple LEM that uses the river input. | Python Code:
from landlab import RasterModelGrid, imshow_grid
from landlab.components import FlowAccumulator
import numpy as np
Explanation: Adding a discharge point source to a LEM
(Greg Tucker, CSDMS / CU Boulder, fall 2020)
This notebook shows how to add one or more discharge point sources to a Landlab-built landscape evolution model (LEM), using the flow routing components. The basic idea is to modify the water__unit_flux_in field to include a large flux (which could be represented as either drainage area or discharge) at one or more locations along the edge of a grid.
End of explanation
mg = RasterModelGrid((5, 4), xy_spacing=(10., 10))
topographic__elevation = np.array([0., 0., 0., 0.,
0., 21., 10., 0.,
0., 31., 20., 0.,
0., 32., 30., 0.,
0., 0., 0., 0.])
_ = mg.add_field("topographic__elevation", topographic__elevation, at="node")
mg.set_closed_boundaries_at_grid_edges(True, True, True, False)
fa = FlowAccumulator(
mg,
'topographic__elevation',
flow_director='FlowDirectorSteepest'
)
runoff_rate = np.arange(mg.number_of_nodes, dtype=float)
rnff = mg.add_field("water__unit_flux_in", runoff_rate, at="node", clobber=True)
fa.run_one_step()
print(mg.at_node['surface_water__discharge'].reshape(5, 4))
# array([ 0., 500., 5200., 0.,
# 0., 500., 5200., 0.,
# 0., 900., 4600., 0.,
# 0., 1300., 2700., 0.,
# 0., 0., 0., 0.])
Explanation: Docstring example from FlowAccumulator
The following is a tiny example from the FlowAccumulator documentation:
End of explanation
rnff[:] = 1.0
fa.run_one_step()
print(mg.at_node['surface_water__discharge'].reshape(5, 4))
Explanation: We can extend this tiny example to show that you can subsequently modify the rnff array and it will take effect when you re-run the FlowAccumulator:
End of explanation
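The same mechanism already gives a point source on this tiny grid; a quick illustration (not part of the original docstring example):
rnff[:] = 1.0
rnff[13] = 1000.0   # node 13 is in the top-most interior row of the 5x4 grid
fa.run_one_step()
print(mg.at_node['surface_water__discharge'].reshape(5, 4))   # the injected flux follows the flow path to the open boundary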
# Parameters
nrows = 41
ncols = 41
dx = 100.0 # grid spacing in m
slope_gradient = 0.01 # gradient of topographic surface
noise_amplitude = 0.2 # amplitude of random noise
input_runoff = 10000.0 # equivalent to a drainage area of 10,000 dx^2 or 10^8 m2
Explanation: Larger example
In this example, we create a slightly larger grid, with a surface that slopes down toward the south / bottom boundary. We will introduce a runoff point source at a node in the middle of the top-most non-boundary row.
Start by defining some parameters:
End of explanation
# Create a grid, and a field for water input
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
# Have just one edge (south / bottom) be open
grid.set_closed_boundaries_at_grid_edges(True, True, True, False)
# Create an elevation field as a ramp with random noise
topo = grid.add_zeros('topographic__elevation', at='node')
topo[:] = slope_gradient * grid.y_of_node
np.random.seed(0)
topo[grid.core_nodes] += noise_amplitude * np.random.randn(grid.number_of_core_nodes)
Explanation: Create grid and topography, and set boundaries:
End of explanation
# Create a FlowAccumulator component
fa = FlowAccumulator(grid, flow_director='FlowDirectorD8')
# Create a runoff input field, and set one of its nodes to have a large input
runoff = grid.add_ones('water__unit_flux_in', at='node', clobber=True)
top_middle_node = grid.number_of_nodes - int(1.5 * ncols)
runoff[top_middle_node] = input_runoff
fa.run_one_step()
imshow_grid(grid, 'surface_water__discharge')
Explanation: The FlowAccumulator component takes care of identifying drainage directions (here using the D8 method) and calculating the cumulative drainage area and surface water discharge.
Note that in this case we are assuming a default runoff value of unity, meaning that the calculated surface_water__discharge is actually just drainage area. To introduce the drainage area of a river entering at the top, we will use a large value for runoff. Because we are considering drainage area as the primary variable, with unit "runoff", our input runoff is a dimensionless variable: the number of contributing grid cell equivalents. We will set this to unity at all the nodes in the model except the point-source location.
End of explanation
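A small sanity check one could add here (not in the original notebook): expressed in "cell equivalents", the largest accumulated discharge should be at least the injected input_runoff.
q_cells = grid.at_node['surface_water__discharge'] / (dx * dx)   # convert back to contributing-cell equivalents
print(q_cells.max(), '>= input_runoff =', input_runoff)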
runoff[top_middle_node] = 1.0 # go back to being a "regular" node
runoff[top_middle_node - 15] = 2 * input_runoff # shift 15 cells left and double amount
fa.run_one_step()
imshow_grid(grid, 'surface_water__discharge')
Explanation: Changing the amount and/or location of input
We can change the input drainage area / discharge amount or location simply by modifying the water__unit_flux_in field. Here we will shift it to the left and double its magnitude.
End of explanation
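Multiple point sources work the same way; a hypothetical variant (the node offsets are chosen only for illustration):
runoff[:] = 1.0                                     # reset to the uniform background value
runoff[top_middle_node] = input_runoff              # main river entering at the top
runoff[top_middle_node - 10] = 0.5 * input_runoff   # a second, smaller source entering nearby
fa.run_one_step()
imshow_grid(grid, 'surface_water__discharge')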
imshow_grid(grid, 'drainage_area')
Explanation: Note that the drainage_area field does not recognize any runoff input. It continues to track only the local drainage area:
End of explanation
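The contrast is also easy to see numerically (an illustrative check, not from the original notebook):
print('max drainage_area:           ', grid.at_node['drainage_area'].max())
print('max surface_water__discharge:', grid.at_node['surface_water__discharge'].max())   # includes the injected point source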
from landlab.components import StreamPowerEroder, LinearDiffuser
# Parameters
K = 4.0e-5
D = 0.01
uplift_rate = 0.0001
nrows = 51
ncols = 51
dx = 10.0 # grid spacing in m
slope_gradient = 0.01 # gradient of topographic surface
noise_amplitude = 0.04 # amplitude of random noise
input_runoff = 10000.0 # equivalent to a drainage area of 10,000 dx^2 or 10^6 m2
run_duration = 25.0 / uplift_rate
dt = dx / (K * (dx * dx * input_runoff)**0.5)
num_steps = int(run_duration / dt)
print(str(num_steps) + ' steps.')
# Create a grid, and a field for water input
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
# Have just one edge (south / bottom) be open
grid.set_closed_boundaries_at_grid_edges(True, True, True, False)
# Create an elevation field as a ramp with random noise
topo = grid.add_zeros('topographic__elevation', at='node')
topo[:] = slope_gradient * grid.y_of_node
np.random.seed(0)
topo[grid.core_nodes] += noise_amplitude * np.random.randn(grid.number_of_core_nodes)
# Create components
fa = FlowAccumulator(grid, flow_director='FlowDirectorD8')
sp = StreamPowerEroder(grid, K_sp=K, discharge_field='surface_water__discharge')
ld = LinearDiffuser(grid, linear_diffusivity=D)
runoff = grid.add_ones('water__unit_flux_in', at='node', clobber=True)
top_middle_node = grid.number_of_nodes - int(1.5 * ncols)
runoff[top_middle_node] = input_runoff
for _ in range(num_steps):
    topo[grid.core_nodes] += uplift_rate * dt
    fa.run_one_step()
    ld.run_one_step(dt)
    sp.run_one_step(dt)
imshow_grid(grid, topo)
Explanation: This means that you should use the surface_water__discharge field rather than the drainage_area field, regardless of whether the former is meant to represent discharge (volume per time) or effective drainage area (area).
Combining with a Landscape Evolution Model
Here we'll set up a simple LEM that uses the river input.
End of explanation |
11,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CAS
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
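# Purely hypothetical illustration of the expected call pattern -- placeholder
# names and addresses, not real document metadata:
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Roe", "john.roe@example.org")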
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
11,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming Assignment
Step1: Составление корпуса
Step2: Наша коллекция небольшая, и целиком помещается в оперативную память. Gensim может работать с такими данными и не требует их сохранения на диск в специальном формате. Для этого коллекция должна быть представлена в виде списка списков, каждый внутренний список соответствует отдельному документу и состоит из его слов. Пример коллекции из двух документов
Step3: У объекта dictionary есть полезная переменная dictionary.token2id, позволяющая находить соответствие между ингредиентами и их индексами.
Обучение модели
Вам может понадобиться документация LDA в gensim.
Задание 1. Обучите модель LDA с 40 темами, установив количество проходов по коллекции 5 и оставив остальные параметры по умолчанию.
Затем вызовите метод модели show_topics, указав количество тем 40 и количество токенов 10, и сохраните результат (топы ингредиентов в темах) в отдельную переменную. Если при вызове метода show_topics указать параметр formatted=True, то топы ингредиентов будет удобно выводить на печать, если formatted=False, будет удобно работать со списком программно. Выведите топы на печать, рассмотрите темы, а затем ответьте на вопрос
Step4: Фильтрация словаря
В топах тем гораздо чаще встречаются первые три рассмотренных ингредиента, чем последние три. При этом наличие в рецепте курицы, яиц и грибов яснее дает понять, что мы будем готовить, чем наличие соли, сахара и воды. Таким образом, даже в рецептах есть слова, часто встречающиеся в текстах и не несущие смысловой нагрузки, и поэтому их не желательно видеть в темах. Наиболее простой прием борьбы с такими фоновыми элементами — фильтрация словаря по частоте. Обычно словарь фильтруют с двух сторон
Step5: Задание 2. У объекта dictionary2 есть переменная dfs — это словарь, ключами которого являются id токена, а элементами — число раз, сколько слово встретилось во всей коллекции. Сохраните в отдельный список ингредиенты, которые встретились в коллекции больше 4000 раз. Вызовите метод словаря filter_tokens, подав в качестве первого аргумента полученный список популярных ингредиентов. Вычислите две величины
Step6: Сравнение когерентностей
Задание 3. Постройте еще одну модель по корпусу corpus2 и словарю dictionary2, остальные параметры оставьте такими же, как при первом построении модели. Сохраните новую модель в другую переменную (не перезаписывайте предыдущую модель). Не забудьте про фиксирование seed!
Затем воспользуйтесь методом top_topics модели, чтобы вычислить ее когерентность. Передайте в качестве аргумента соответствующий модели корпус. Метод вернет список кортежей (топ токенов, когерентность), отсортированных по убыванию последней. Вычислите среднюю по всем темам когерентность для каждой из двух моделей и передайте в функцию save_answers3.
Step7: Считается, что когерентность хорошо соотносится с человеческими оценками интерпретируемости тем. Поэтому на больших текстовых коллекциях когерентность обычно повышается, если убрать фоновую лексику. Однако в нашем случае этого не произошло.
Изучение влияния гиперпараметра alpha
В этом разделе мы будем работать со второй моделью, то есть той, которая построена по сокращенному корпусу.
Пока что мы посмотрели только на матрицу темы-слова, теперь давайте посмотрим на матрицу темы-документы. Выведите темы для нулевого (или любого другого) документа из корпуса, воспользовавшись методом get_document_topics второй модели
Step8: Также выведите содержимое переменной .alpha второй модели
Step9: У вас должно получиться, что документ характеризуется небольшим числом тем. Попробуем поменять гиперпараметр alpha, задающий априорное распределение Дирихле для распределений тем в документах.
Задание 4. Обучите третью модель
Step10: Таким образом, гиперпараметр alpha влияет на разреженность распределений тем в документах. Аналогично гиперпараметр eta влияет на разреженность распределений слов в темах.
LDA как способ понижения размерности
Иногда, распределения над темами, найденные с помощью LDA, добавляют в матрицу объекты-признаки как дополнительные, семантические, признаки, и это может улучшить качество решения задачи. Для простоты давайте просто обучим классификатор рецептов на кухни на признаках, полученных из LDA, и измерим точность (accuracy).
Задание 5. Используйте модель, построенную по сокращенной выборке с alpha по умолчанию (вторую модель). Составьте матрицу $\Theta = p(t|d)$ вероятностей тем в документах; вы можете использовать тот же метод get_document_topics, а также вектор правильных ответов y (в том же порядке, в котором рецепты идут в переменной recipes). Создайте объект RandomForestClassifier со 100 деревьями, с помощью функции cross_val_score вычислите среднюю accuracy по трем фолдам (перемешивать данные не нужно) и передайте в функцию save_answers5.
Step11: Для такого большого количества классов это неплохая точность. Вы можете попроовать обучать RandomForest на исходной матрице частот слов, имеющей значительно большую размерность, и увидеть, что accuracy увеличивается на 10–15%. Таким образом, LDA собрал не всю, но достаточно большую часть информации из выборки, в матрице низкого ранга.
LDA — вероятностная модель
Матричное разложение, использующееся в LDA, интерпретируется как следующий процесс генерации документов.
Для документа $d$ длины $n_d$
Step12: Интерпретация построенной модели
Вы можете рассмотреть топы ингредиентов каждой темы. Большиснтво тем сами по себе похожи на рецепты; в некоторых собираются продукты одного вида, например, свежие фрукты или разные виды сыра.
Попробуем эмпирически соотнести наши темы с национальными кухнями (cuisine). Построим матрицу $A$ размера темы $x$ кухни, ее элементы $a_{tc}$ — суммы $p(t|d)$ по всем документам $d$, которые отнесены к кухне $c$. Нормируем матрицу на частоты рецептов по разным кухням, чтобы избежать дисбаланса между кухнями. Следующая функция получает на вход объект модели, объект корпуса и исходные данные и возвращает нормированную матрицу $A$. Ее удобно визуализировать с помощью seaborn. | Python Code:
import json
with open("recipes.json") as f:
recipes = json.load(f)
print(recipes[0])
Explanation: Programming Assignment:
Cooking up LDA from recipes
As you already know, topic modelling assumes that the order of words in a document does not matter for determining its topics; this is the "bag of words" hypothesis. Today we will work with a collection that is somewhat unusual for topic modelling and could be called a "bag of ingredients", because it consists of recipes from different cuisines. Topic models look for words that frequently co-occur in documents and assemble them into topics. We will try to apply this idea to recipes and find culinary "topics". This collection is convenient in that it requires no preprocessing. In addition, the task illustrates quite clearly how topic models work.
Besides the libraries commonly used in this course, the assignment requires the json and gensim modules. The first ships with the Anaconda distribution; the second can be installed with
pip install gensim
Building a model takes some time. On a laptop with an Intel Core i7 processor at 2400 MHz, training one model takes less than 10 minutes.
Loading the data
The collection is given in json format: for each recipe we know its id, its cuisine and the list of ingredients it contains. The data can be loaded with the json module (it ships with Anaconda):
End of explanation
from gensim import corpora, models
import numpy as np
Explanation: Building the corpus
End of explanation
texts = [recipe["ingredients"] for recipe in recipes]
dictionary = corpora.Dictionary(texts) # build the dictionary
corpus = [dictionary.doc2bow(text) for text in texts] # build the corpus of documents
print(texts[2])
print(corpus[2])
Explanation: Our collection is small and fits entirely in RAM. Gensim can work with such data and does not require saving it to disk in a special format. The collection must be represented as a list of lists, where each inner list corresponds to a single document and consists of its words. An example collection of two documents:
[["hello", "world"], ["programming", "in", "python"]]
Let's convert our data to this format, and then create the corpus and dictionary objects the model will work with.
End of explanation
from gensim.models import LdaModel
np.random.seed(76543)
# code for building the model goes here:
lda = LdaModel(corpus, num_topics=40, passes=5)
lda.show_topics(num_topics=40, num_words=10)
top_words = lda.show_topics(num_topics=40, num_words=10, formatted=False)
top_words = [tup[1] for tup in top_words]
print(top_words)
ingrids = ["salt", "sugar", "water", "mushrooms", "chicken", "eggs"]
ids = [dictionary.token2id[x] for x in ingrids]
counts = {"salt": 0, "sugar": 0, "water": 0, "mushrooms": 0, "chicken": 0, "eggs": 0}
for ingrid_list in top_words:
for x in ingrid_list:
for i in range(len(ids)):
if ids[i] == int(x[0]):
counts[ingrids[i]] += 1
def save_answers1(c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs):
with open("cooking_LDA_pa_task1.txt", "w") as fout:
fout.write(" ".join([str(el) for el in [c_salt, c_sugar, c_water, c_mushrooms, c_chicken, c_eggs]]))
save_answers1(counts["salt"], counts["sugar"], counts["water"], counts["mushrooms"], counts["chicken"], counts["eggs"])
Explanation: The dictionary object has a useful attribute dictionary.token2id that maps ingredients to their indices.
Training the model
You may find the gensim LDA documentation useful.
Task 1. Train an LDA model with 40 topics, setting the number of passes over the collection to 5 and leaving the remaining parameters at their defaults.
Then call the model's show_topics method with 40 topics and 10 tokens, and store the result (the top ingredients per topic) in a separate variable. If show_topics is called with formatted=True the ingredient tops are convenient to print; with formatted=False they are convenient to process programmatically. Print the tops, inspect the topics, and then answer the question:
How many times do the ingredients "salt", "sugar", "water", "mushrooms", "chicken" and "eggs" appear among the top-10s of all 40 topics? Do not count compound ingredients such as "hot water".
Pass the 6 numbers to the save_answers1 function and upload the generated file to the form.
Gensim provides no way to fix the random state through method parameters, but the library uses numpy to initialise its matrices. Therefore, according to the library's author, the random state should be fixed with the command written in the next cell. Always insert that random.seed line right before the line of code that builds a model.
End of explanation
import copy
dictionary2 = copy.deepcopy(dictionary)
print(dictionary2.dfs)
Explanation: Filtering the dictionary
The first three ingredients we looked at appear in the topic tops far more often than the last three, even though the presence of chicken, eggs or mushrooms in a recipe says much more about what is being cooked than the presence of salt, sugar or water. So even recipes contain words that occur frequently and carry little meaning, and we would rather not see them in the topics. The simplest way to fight such background elements is to filter the dictionary by frequency. A dictionary is usually filtered from both sides: very rare words are removed (to save memory) and very frequent words are removed (to improve topic interpretability). We will remove only the frequent words.
End of explanation
too_often_tokens = [x[0] for x in dictionary2.dfs.items() if x[1] > 4000]
print(len(too_often_tokens))
dictionary2.filter_tokens(too_often_tokens)
dict_size_before = len(dictionary)
dict_size_after = len(dictionary2)
print(dict_size_before)
print(dict_size_after)
new_corpus = [dictionary2.doc2bow(text) for text in texts]
corpus_size_before = 0
corpus_size_after = 0
for doc1, doc2 in zip(corpus, new_corpus):
corpus_size_before += len(doc1)
corpus_size_after += len(doc2)
print(corpus_size_before)
print(corpus_size_after)
def save_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after):
with open("cooking_LDA_pa_task2.txt", "w") as fout:
fout.write(" ".join([str(el) for el in [dict_size_before, dict_size_after, corpus_size_before, corpus_size_after]]))
save_answers2(dict_size_before, dict_size_after, corpus_size_before, corpus_size_after)
Explanation: Task 2. The dictionary2 object has an attribute dfs: a dict whose keys are token ids and whose values are the number of times the word occurs in the whole collection. Store in a separate list the ingredients that occur more than 4000 times in the collection. Call the dictionary's filter_tokens method, passing the list of popular ingredients as the first argument. Compute two quantities: dict_size_before and dict_size_after, the dictionary size before and after filtering.
Then, using the new dictionary, create a new corpus of documents, corpus2, by analogy with what was done at the beginning of the notebook. Compute two more quantities: corpus_size_before and corpus_size_after, the total number of ingredients in the corpus (for each document count the number of distinct ingredients in it and sum over all documents) before and after filtering.
Pass dict_size_before, dict_size_after, corpus_size_before and corpus_size_after to the save_answers2 function and upload the generated file to the form.
End of explanation
np.random.seed(76543)
lda2 = LdaModel(new_corpus, id2word=dictionary2, num_topics=40, passes=5)
coherensions = lda.top_topics(corpus)
coherensions2 = lda2.top_topics(new_corpus)
coherence = np.array([tup[1] for tup in coherensions]).mean()   # mean coherence over topics
coherence2 = np.array([tup[1] for tup in coherensions2]).mean()
def save_answers3(coherence, coherence2):
with open("cooking_LDA_pa_task3.txt", "w") as fout:
fout.write(" ".join(["%3f"%el for el in [coherence, coherence2]]))
save_answers3(coherence, coherence2)
Explanation: Comparing coherences
Task 3. Build one more model from the corpus2 corpus and the dictionary2 dictionary, keeping the remaining parameters the same as for the first model. Store the new model in a different variable (do not overwrite the previous model). Do not forget to fix the seed!
Then use the model's top_topics method to compute its coherence. Pass the corpus corresponding to the model as the argument. The method returns a list of (topic top, coherence) tuples sorted by decreasing coherence. Compute the mean coherence over all topics for each of the two models and pass both values to save_answers3.
End of explanation
print(lda2.get_document_topics(new_corpus[2]))
Explanation: Coherence is considered to correlate well with human judgements of topic interpretability. On large text collections coherence therefore usually improves when the background vocabulary is removed. In our case, however, this did not happen.
Studying the influence of the alpha hyperparameter
In this section we work with the second model, i.e. the one built on the reduced corpus.
So far we have only looked at the topic-word matrix; now let's look at the topic-document matrix. Print the topics of the zeroth (or any other) document of the corpus using the second model's get_document_topics method:
End of explanation
print(lda2.alpha)
Explanation: Also print the contents of the second model's .alpha attribute:
End of explanation
np.random.seed(76543)
lda3 = LdaModel(corpus=new_corpus, id2word=dictionary2, alpha=1, num_topics=40, passes=5)
print(lda3.get_document_topics(new_corpus[0]))
sum2 = 0
sum3 = 0
for text in new_corpus:
sum2 += len(lda2.get_document_topics(text, minimum_probability=0.01))
sum3 += len(lda3.get_document_topics(text, minimum_probability=0.01))
def save_answers4(count_model2, count_model3):
with open("cooking_LDA_pa_task4.txt", "w") as fout:
fout.write(" ".join([str(el) for el in [count_model2, count_model3]]))
save_answers4(sum2, sum3)
Explanation: You should find that a document is characterised by a small number of topics. Let's try changing the alpha hyperparameter, which sets the Dirichlet prior on the topic distributions of documents.
Task 4. Train a third model: use the reduced corpus (corpus2 and dictionary2) and set alpha=1, passes=5. Do not forget to fix the seed! Print the topics of the zeroth document with the new model; the distribution over topics should come out almost uniform. To confirm that in the second model the documents are described by much sparser distributions than in the third, count the total number of elements greater than 0.01 in the topic-document matrices of both models. In other words, request the topics of every document with minimum_probability=0.01 and sum the lengths of the returned lists. Pass the two sums (first for the model with the default alpha, then for the model with alpha=1) to save_answers4.
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import cross_val_score
X = np.zeros((len(recipes), lda2.num_topics))
for doc_num in range(len(new_corpus)):
for topic_num, topic_proba in lda2.get_document_topics(new_corpus[doc_num]):
X[doc_num][topic_num] = topic_proba
y = np.array([recipe['cuisine'] for recipe in recipes])
print(len(y))
rfc = RandomForestClassifier(n_estimators=100)
rfc.fit(X, y)
cv_score = cross_val_score(rfc, X, y, cv=3).mean()
print(cv_score)
def save_answers5(accuracy):
with open("cooking_LDA_pa_task5.txt", "w") as fout:
fout.write(str(accuracy))
save_answers5(cv_score)
Explanation: So the alpha hyperparameter controls the sparsity of the topic distributions of documents. Likewise, the eta hyperparameter controls the sparsity of the word distributions of topics.
LDA as a dimensionality reduction method
Sometimes the topic distributions found by LDA are added to the feature matrix as extra, semantic, features, and this can improve the quality of a downstream task. For simplicity, let's just train a classifier that predicts the cuisine of a recipe from the LDA features and measure its accuracy.
Task 5. Use the model built on the reduced sample with the default alpha (the second model). Build the matrix $\Theta = p(t|d)$ of topic probabilities in documents; you can use the same get_document_topics method, and also build the vector of true labels y (in the same order in which the recipes appear in the recipes variable). Create a RandomForestClassifier with 100 trees, use cross_val_score to compute the mean accuracy over three folds (no need to shuffle the data) and pass it to save_answers5.
End of explanation
def generate_recipe(model, num_ingredients):
theta = np.random.dirichlet(model.alpha)
for i in range(num_ingredients):
t = np.random.choice(np.arange(model.num_topics), p=theta)
topic = model.show_topic(t, topn=model.num_terms)
topic_distr = [x[1] for x in topic]
terms = [x[0] for x in topic]
w = np.random.choice(terms, p=topic_distr)
print(w)
generate_recipe(lda2, 5)
Explanation: For such a large number of classes this is decent accuracy. You can try training the RandomForest on the original word-count matrix, which has a much higher dimensionality, and see that accuracy increases by 10-15%. So LDA captured not all, but a fairly large part, of the information in the sample within a low-rank matrix.
LDA is a probabilistic model
The matrix factorisation used in LDA is interpreted as the following document generation process.
For a document $d$ of length $n_d$:
1. Draw a distribution over the set of topics from the Dirichlet prior with parameter alpha: $\theta_d \sim Dirichlet(\alpha)$
1. For every word $w = 1, \dots, n_d$:
1. Draw a topic from the discrete distribution $t \sim \theta_{d}$
1. Draw a word from the discrete distribution $w \sim \phi_{t}$.
See Wikipedia for more details.
In the context of our task this means that, using this generative process, we can create new recipes. You can pass the model and a number of ingredients to the function and generate a recipe :)
End of explanation
import pandas
import seaborn
from matplotlib import pyplot as plt
%matplotlib inline
def compute_topic_cuisine_matrix(model, corpus, recipes):
# build the vector of target labels
targets = list(set([recipe["cuisine"] for recipe in recipes]))
# build the matrix
tc_matrix = pandas.DataFrame(data=np.zeros((model.num_topics, len(targets))), columns=targets)
for recipe, bow in zip(recipes, corpus):
recipe_topic = model.get_document_topics(bow)
for t, prob in recipe_topic:
tc_matrix[recipe["cuisine"]][t] += prob
# normalise the matrix
target_sums = pandas.DataFrame(data=np.zeros((1, len(targets))), columns=targets)
for recipe in recipes:
target_sums[recipe["cuisine"]] += 1
return pandas.DataFrame(tc_matrix.values/target_sums.values, columns=tc_matrix.columns)
def plot_matrix(tc_matrix):
plt.figure(figsize=(10, 10))
seaborn.heatmap(tc_matrix, square=True)
plot_matrix(compute_topic_cuisine_matrix(lda2, new_corpus, recipes))
Explanation: Interpreting the trained model
You can inspect the top ingredients of each topic. Most topics look like recipes in themselves; some of them gather products of one kind, e.g. fresh fruit or different kinds of cheese.
Let's try to empirically relate our topics to national cuisines. We build a matrix $A$ of size topics $x$ cuisines, whose elements $a_{tc}$ are the sums of $p(t|d)$ over all documents $d$ assigned to cuisine $c$. We normalise the matrix by the recipe counts of the different cuisines to avoid an imbalance between cuisines. The function below takes the model object, the corpus object and the raw data, and returns the normalised matrix $A$. It is convenient to visualise it with seaborn.
End of explanation |
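As a small follow-up (a sketch under my own assumptions, not part of the assignment): once the normalised matrix A is built, the cuisine most strongly associated with each topic can be read off with idxmax, which often makes the topics easier to label.
tc = compute_topic_cuisine_matrix(lda2, new_corpus, recipes)
print(tc.idxmax(axis=1).head(10))   # most associated cuisine for the first 10 topics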
11,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Crear una serie
Se pueden crear series desde listas, arreglos de numpy y diccionarios
Step2: Usando listas
Step3: Arreglos Numpy
Step4: Diccionarios
Step5: Informacion en Series
Una serie de pandas puede tener varios tipos de objetos
Step6: Usando indices
La clave al usar series es el entender como se utilizan los indices, ya que pandas los utiliza para hacer consultas rapidas de informacion
Step7: Operations are then also done based off of index | Python Code:
# libraries
import numpy as np
import pandas as pd
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Series
The first data type we are going to learn about in pandas is the Series.
A Series is very similar to a NumPy array; the difference is that a Series has axis labels, so we can index it by label instead of just by position number.
End of explanation
labels = ['a','b','c']
my_list = [10,20,30]
arr = np.array([10,20,30])
d = {'a':10,'b':20,'c':30}
Explanation: Creating a Series
A Series can be created from lists, NumPy arrays and dictionaries
End of explanation
pd.Series(data=my_list)
pd.Series(data=my_list,index=labels)
pd.Series(my_list,labels)
Explanation: Using lists
End of explanation
pd.Series(arr)
pd.Series(arr,labels)
Explanation: NumPy arrays
End of explanation
pd.Series(d)
Explanation: Dictionaries
End of explanation
pd.Series(data=labels)
# Even functions
pd.Series([sum,print,len])
Explanation: Data in a Series
A pandas Series can hold several types of objects
End of explanation
ser1 = pd.Series([1,2,3,4],index = ['USA', 'Germany','USSR', 'Japan'])
ser1
ser2 = pd.Series([1,2,5,4],index = ['USA', 'Germany','Italy', 'Japan'])
ser2
ser1['USA']
Explanation: Using the index
The key to using a Series is understanding how its index is used, since pandas uses the index for fast lookups of information
End of explanation
ser1 + ser2
Explanation: Operations are then also done based on the index:
End of explanation |
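Where the indexes do not match (USSR and Italy above), pandas fills in NaN. A small sketch of one way to avoid that, using add with a fill_value (the 0 default here is an arbitrary choice for illustration):
ser1.add(ser2, fill_value=0)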
11,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assembling detector data into images
The X-ray detectors at XFEL are made up of a number of small pieces. To get an image from the data, or analyse it spatially, we need to know where each piece is located.
This example reassembles some commissioning data from LPD, a detector which has 4 quadrants, 16 modules, and 256 tiles.
Elements (especially the quadrants) can be repositioned; talk to the detector group to ensure that you have the right
geometry information for your data.
Step1: Extract the detector images into a single Numpy array
Step2: To show the images, we sometimes need to 'clip' extreme high and low values, otherwise the colour map makes everything else the same colour.
Step3: Let's look at the iamge from a single module. You can see where it's divided up into tiles
Step4: Here's a single tile
Step5: Load the geometry from a file, along with the quadrant positions used here.
In the future, geometry information will be stored in the calibration catalogue.
Step6: Reassemble and show a detector image using the geometry
Step7: Reassemble detector data into a numpy array for further analysis. The areas without data have the special value nan to mark them as missing. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import h5py
from karabo_data import RunDirectory, stack_detector_data
from karabo_data.geometry2 import LPD_1MGeometry
run = RunDirectory('/gpfs/exfel/exp/FXE/201830/p900020/proc/r0221/')
run.info()
# Find a train with some data in
empty = np.asarray([])
for tid, train_data in run.trains():
module_imgs = sum(d.get('image.data', empty).shape[0] for d in train_data.values())
if module_imgs:
print(tid, module_imgs)
break
tid, train_data = run.train_from_id(54861797)
print(tid)
for dev in sorted(train_data.keys()):
print(dev, end='\t')
try:
print(train_data[dev]['image.data'].shape)
except KeyError:
print("No image.data")
Explanation: Assembling detector data into images
The X-ray detectors at XFEL are made up of a number of small pieces. To get an image from the data, or analyse it spatially, we need to know where each piece is located.
This example reassembles some commissioning data from LPD, a detector which has 4 quadrants, 16 modules, and 256 tiles.
Elements (especially the quadrants) can be repositioned; talk to the detector group to ensure that you have the right
geometry information for your data.
End of explanation
modules_data = stack_detector_data(train_data, 'image.data')
modules_data.shape
Explanation: Extract the detector images into a single Numpy array:
End of explanation
def clip(array, min=-10000, max=10000):
x = array.copy()
finite = np.isfinite(x)
# Suppress warnings comparing numbers to nan
with np.errstate(invalid='ignore'):
x[finite & (x < min)] = np.nan
x[finite & (x > max)] = np.nan
return x
plt.figure(figsize=(10, 5))
a = modules_data[5][2]
plt.subplot(1, 2, 1).hist(a[np.isfinite(a)])
a = clip(a, min=-400, max=400)
plt.subplot(1, 2, 2).hist(a[np.isfinite(a)]);
Explanation: To show the images, we sometimes need to 'clip' extreme high and low values, otherwise the colour map makes everything else the same colour.
End of explanation
plt.figure(figsize=(8, 8))
clipped_mod = clip(modules_data[10][2], -400, 500)
plt.imshow(clipped_mod, origin='lower')
Explanation: Let's look at the image from a single module. You can see where it's divided up into tiles:
End of explanation
splitted = LPD_1MGeometry.split_tiles(clipped_mod)
plt.figure(figsize=(8, 8))
plt.imshow(splitted[11])
Explanation: Here's a single tile:
End of explanation
# From March 18; converted to XFEL standard coordinate directions
quadpos = [(11.4, 299), (-11.5, 8), (254.5, -16), (278.5, 275)] # mm
geom = LPD_1MGeometry.from_h5_file_and_quad_positions('lpd_mar_18_axesfixed.h5', quadpos)
Explanation: Load the geometry from a file, along with the quadrant positions used here.
In the future, geometry information will be stored in the calibration catalogue.
End of explanation
geom.plot_data_fast(clip(modules_data[12], max=5000))
Explanation: Reassemble and show a detector image using the geometry:
End of explanation
res, centre = geom.position_modules_fast(modules_data)
print(res.shape)
plt.figure(figsize=(8, 8))
plt.imshow(clip(res[12, 250:750, 450:850], min=-400, max=5000), origin='lower')
Explanation: Reassemble detector data into a numpy array for further analysis. The areas without data have the special value nan to mark them as missing.
End of explanation |
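Because the gaps are marked with nan, any follow-up statistics on the assembled array should use the nan-aware numpy routines (or mask the array first). A minimal sketch, assuming the res array from the cell above:
finite = np.isfinite(res[12])
print("valid pixels:", finite.sum())
print("mean signal: ", np.nanmean(res[12]))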
11,484 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear regresion - part 2
Many variables
Step1: For many variables we will use vectorized implementation
$$X=\left[\begin{array}{cc}
1 & (\vec x^{(1)})^T \
1 & (\vec x^{(2)})^T \
\vdots & \vdots\
1 & (\vec x^{(m)})^T \
\end{array}\right]
= \left[\begin{array}{cccc}
1 & x_1^{(1)} & \cdots & x_n^{(1)} \
1 & x_1^{(2)} & \cdots & x_n^{(2)} \
\vdots & \vdots & \ddots & \vdots\
1 & x_1^{(m)} & \cdots & x_n^{(m)} \
\end{array}\right] $$
$$\vec{y} =
\left[\begin{array}{c}
y^{(1)}\
y^{(2)}\
\vdots\
y^{(m)}\
\end{array}\right]
\quad
\theta = \left[\begin{array}{c}
\theta_0\
\theta_1\
\vdots\
\theta_n\
\end{array}\right]$$
Vectorized implementation is much faster than that one from previous Lecture.
Step2: Cost function
$$J(\theta)=\dfrac{1}{2|\vec y|}\left(X\theta-\vec{y}\right)^T\left(X\theta-\vec{y}\right)$$
Step3: How we count derivatives?
Let's count gradinet | Python Code:
# imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns
from IPython.display import (
display,
Math,
Latex
)
%matplotlib inline
Explanation: Linear regresion - part 2
Many variables
End of explanation
df = pd.read_csv("ex1data1.txt", header=None)
df.columns = columns=['x', 'y']
X = np.matrix(df.x.values[:, np.newaxis])
# adding theta_0
m = len(X)
X = np.concatenate((np.ones((1,m)).T, X), axis=1)
y = np.matrix(df.y.values[:, np.newaxis])
theta = np.matrix([-5, 1.3]).reshape(2, 1)
print 'X', X[:10]
print 'y', y[:10]
print 'theta', theta
Explanation: For many variables we will use vectorized implementation
$$X=\left[\begin{array}{cc}
1 & (\vec x^{(1)})^T \
1 & (\vec x^{(2)})^T \
\vdots & \vdots\
1 & (\vec x^{(m)})^T \
\end{array}\right]
= \left[\begin{array}{cccc}
1 & x_1^{(1)} & \cdots & x_n^{(1)} \
1 & x_1^{(2)} & \cdots & x_n^{(2)} \
\vdots & \vdots & \ddots & \vdots\
1 & x_1^{(m)} & \cdots & x_n^{(m)} \
\end{array}\right] $$
$$\vec{y} =
\left[\begin{array}{c}
y^{(1)}\
y^{(2)}\
\vdots\
y^{(m)}\
\end{array}\right]
\quad
\theta = \left[\begin{array}{c}
\theta_0\
\theta_1\
\vdots\
\theta_n\
\end{array}\right]$$
Vectorized implementation is much faster than the one from the previous lecture.
End of explanation
def JMx(theta, X, y):
m = len(y)
J = 1.0/(2.0*m)*((X*theta-y).T*(X*theta-y))
return J.item()
error = JMx(theta, X, y)
display(Math(r'\Large J(\theta) = %.4f' % error))
Explanation: Cost function
$$J(\theta)=\dfrac{1}{2|\vec y|}\left(X\theta-\vec{y}\right)^T\left(X\theta-\vec{y}\right)$$
End of explanation
from sklearn.linear_model import (
LinearRegression,
SGDRegressor
)
Explanation: How do we compute the derivatives?
Let's compute the gradient:
$$\nabla J(\theta) = \frac{1}{|\vec y|} X^T\left(X\theta-\vec y\right)$$
Gradient Descent (vectorized)
$$ \theta = \theta - \alpha \nabla J(\theta) $$
Assignment 1.
Implement vectorized GD Algorithm
Normal matrix method
We can count $\hat\theta$ using this equation:
$$\theta = (X^TX)^{-1}X^T \vec y$$
Assignment 2.
Implement normal matrix method and check if $\theta$ vector is the same in GD method and normal Matrix method
Use pinv for computing the inverse matrix; it is more numerically stable.
|Gradient Method | Normal matrix|
|----------------|--------------|
|need to choose $\alpha$| no need to choose $\alpha$|
|needs many iterations | no iterations |
|it works for large amount of features (x)| slow for large amount of features (x)
| | we need to count inverse matrix |
Assignment 3.
Use library scikit-learn (normal matrix and gradient method) to fit model & predict y for x = 1, 10, 100
End of explanation |
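A minimal sketch of what Assignments 1 and 2 might look like, assuming the X, y, theta and JMx objects defined above. The step size alpha, the starting theta and the iteration count are my own illustrative choices, not values prescribed by the lecture.
# rough sketch (not the official solution) of vectorized gradient descent
def gradient_descent(X, y, theta, alpha=0.01, n_iter=1000):
    m = len(y)
    history = []
    for _ in range(n_iter):
        grad = 1.0 / m * X.T * (X * theta - y)   # nabla J(theta)
        theta = theta - alpha * grad             # theta := theta - alpha * nabla J(theta)
        history.append(JMx(theta, X, y))
    return theta, history

# normal matrix method: theta = (X^T X)^{-1} X^T y, using pinv for numerical stability
def normal_equation(X, y):
    return np.linalg.pinv(X.T * X) * X.T * y

theta_gd, hist = gradient_descent(X, y, np.matrix([0.0, 0.0]).reshape(2, 1))
theta_ne = normal_equation(X, y)
Both results should agree to within the tolerance of the gradient descent run, which is the check Assignment 2 asks for.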
11,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The problem with perfect phylogenies
Previously, I wrote a blog post, exploring the Gusfield algorithm for building phylogenetic trees from binary traits. While the algorithm works well if you have a clean matrix that just-so happes to form a perfect phylogeny, if you experiment with different matrix permutations of 1s and 0s, you'll quickly find that matrices which create perfect phylogenies are scarce. Indeed, if there are any pairs of columns $[C1,C2]$ that have the values $[0,1]$, $[1,0]$ and $[1,1]$, then a perfect phylogeny cannot be built. Imperfect data sources and noisy data can contribute to the difficulties of building a valid phylogeny. To allow for some slack for running the phylogeny-finding algorithm, one option is to use the imperfect phylogeny algorithm, which is described (and implemented in near-linear time) in a paper by Pe'er et al. (pay-walled, unfortunately). I will explain and implement this algorithm in code in this post. If you're after the mathematical proofs or description, refer to the original paper, or this (more digestable) review.
A motivation for using the imperfect phylogeny algorithm
In modelling phylogenetic trees from binary data, it is unlikely that you'll have absolute certainty in observing data values. In my early experiments of my PhD project, I was utilising the Gusfield algorithm for structural variation (SV) data in multiple tumour biopsies from the same patient. In my case, observing the same structural variation across several tumour samples of the same patient gave me high confidence that the variation existed. However, I was much less confident when I did not see variant in one patient sample, when it was clearly present in others. Thus, my '1's were certain, but my '0's were uncertain. In the case of structural variants, there are several reasons why we might not detect a variant in a particular sample (the depth of coverage may be too low, or the sample may have a low amount of tumour content). In this case, when I constructed my matrix of shared SV events, I would mark the rows of my low-quality samples with 'incomplete/unknown' values to represent this uncertainty. This strategy allowed me to find valid phylogenies where previously they could not be found.
The incomplete phylogeny problem
In our feature matrix, we now allow a third feature type - unknown or incomplete, denoted by '?'. Hence our values can be 0, 1 or ? for any cell. The incomplete phylogeny algorithm will then infer the unknown values by determining whether the sample belongs to the 'clade' associated with the feature - the clade being the evolutionary group with a common ancestor, that the sample/species evolved from.
The algorithm will output a perfect phylogeny tree $T$ (if present), as well as the corresponding completed matrix $M$. The algorithm is composed of the following steps
Step1: We now build our arrays for our $S$s and $C$s, as well as our M matrix. Let's use -1 to represent the 'incomplete' state of our matrix.
Step2: The variable m_pairs (above) displays the graph connections between our $S$s and $C$s. We now get the first set of connections in the graph, and determine the $K$ vector (which is the union of all $S$s and $C$s involved in the graph connection). Below is a drawing of the graph connections between the $S$ and $C$ nodes. I have colour-coded the connections corresponding to contiguous sets of connections - these will correspond to the clades, as we will discover through the algorithm.
Step3: To illustrate the algorithm step by step, we will do one full iteration of step 3.
Step4: Based on the $S$s returned in the connections ($K \cup S'$), we now return the slice of the $M_3$ array, on the rows in $S'$.
Step5: Now we return our $U$ vector, which is defined as any $C$s where there are no $0$s (of the above slice).
Step6: Now any elements that are in the character columns contained in $U$, and in the sample rows contained in $S'$ (i.e. they belong to the same clade), will have any incomplete fields inferred as present. (This is skipping ahead to step 4, but just to demonstrate.)
Step7: Hence in the above example, the matrix $M_3$ cell corresponding to $S3$ and $C2$ has been inferred to be present (i.e. 1).
Finally we can put the whole algorithm together. This example follows the example from this paper, which also provides an illustration of the node connections in the "$\sum$-free graph".
Step8: Hence our final tree looks like | Python Code:
from mgraph import MGraph
Explanation: The problem with perfect phylogenies
Previously, I wrote a blog post, exploring the Gusfield algorithm for building phylogenetic trees from binary traits. While the algorithm works well if you have a clean matrix that just so happens to form a perfect phylogeny, if you experiment with different matrix permutations of 1s and 0s, you'll quickly find that matrices which create perfect phylogenies are scarce. Indeed, if there are any pairs of columns $[C1,C2]$ that have the values $[0,1]$, $[1,0]$ and $[1,1]$, then a perfect phylogeny cannot be built. Imperfect data sources and noisy data can contribute to the difficulties of building a valid phylogeny. To allow for some slack for running the phylogeny-finding algorithm, one option is to use the imperfect phylogeny algorithm, which is described (and implemented in near-linear time) in a paper by Pe'er et al. (pay-walled, unfortunately). I will explain and implement this algorithm in code in this post. If you're after the mathematical proofs or description, refer to the original paper, or this (more digestible) review.
A motivation for using the imperfect phylogeny algorithm
In modelling phylogenetic trees from binary data, it is unlikely that you'll have absolute certainty in observing data values. In my early experiments of my PhD project, I was utilising the Gusfield algorithm for structural variation (SV) data in multiple tumour biopsies from the same patient. In my case, observing the same structural variation across several tumour samples of the same patient gave me high confidence that the variation existed. However, I was much less confident when I did not see variant in one patient sample, when it was clearly present in others. Thus, my '1's were certain, but my '0's were uncertain. In the case of structural variants, there are several reasons why we might not detect a variant in a particular sample (the depth of coverage may be too low, or the sample may have a low amount of tumour content). In this case, when I constructed my matrix of shared SV events, I would mark the rows of my low-quality samples with 'incomplete/unknown' values to represent this uncertainty. This strategy allowed me to find valid phylogenies where previously they could not be found.
The incomplete phylogeny problem
In our feature matrix, we now allow a third feature type - unknown or incomplete, denoted by '?'. Hence our values can be 0, 1 or ? for any cell. The incomplete phylogeny algorithm will then infer the unknown values by determining whether the sample belongs to the 'clade' associated with the feature - the clade being the evolutionary group with a common ancestor, that the sample/species evolved from.
The algorithm will output a perfect phylogeny tree $T$ (if present), as well as the corresponding completed matrix $M$. The algorithm is composed of the following steps:
Remove any columns in our matrix $M$ with no 0s or all 0s.
Construct a graph where 1s represent edges between features (columns) and samples (rows).
Get each set of connections where there are >1 connections (call these K) in the graph $G$, for each K (while $G \neq \emptyset$):
i. get all the samples ($S$ or rows) in $K$, call this $S'$.
ii. for this set of $S'$, get all the features ($C$ or columns) where there are no 0 entries, call this $U$.
iii. if there are no elements in $U$, stop.
iv. otherwise remove $U$ from the graph $G$ (also removing any connections), and add $S'$ to our tree $T$.
For any $C$s that are associated with the clade $S$, set these to 1, otherwise 0.
Return $T$ and $M'$.
The vector $T$ now represents our phylogenetic tree. This represents the clades that are present in our tree. For instance, if $T = {{s1,s2,s3,s4},{s1,s2},{s3,s4}}$, this means that our 'super-clade' has 4 elements, ${s1,s2,s3,s4}$, and two clades consisting of ${s1,s2}$ and ${s3,s4}$, which are more closely related than the other pairs. The tree could be drawn as:
/\
/ \
/ \
/\ /\
/ \ / \
s1 s2 s3 s4
To implement the algorithm, Let's work from the following matrix (already sorted by its binary values, see previous post for more info):
| | C1 | C2 | C3 | C4 | C5 |
|----|----|----|----|----|----|
| S1 | 1 | 1 | 0 | 0 | ? |
| S2 | 0 | ? | 1 | 0 | ? |
| S3 | 1 | ? | 0 | 0 | 1 |
| S4 | 0 | 0 | 1 | 1 | ? |
| S5 | 0 | 1 | 0 | 0 | ? |
We will now transform this matrix into a graph, creating connections between the nodes, $C$s and $S$s. Edges are defined where $C$ and $S$ share a value of '1'. To do this, we first code up a graph implementation, with nodes and edges. One approach can be found here - the base graph code, as well as the $M$ Graph implementation, are in the mgraph.py file in this repository.
End of explanation
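# Note: mgraph.py is not reproduced in this post. As a rough, hypothetical sketch of the idea
# behind MGraph.build_graph (an assumption for illustration, not the actual implementation),
# the bipartite graph can be held as a plain adjacency dict, adding an edge wherever the
# matrix holds a 1:
#
#   def build_bipartite(m, s, c):
#       edges = {}                        # node -> set of connected nodes
#       for i, s_i in enumerate(s):
#           for j, c_j in enumerate(c):
#               if m[i][j] == 1:          # a '1' links sample s_i to character c_j
#                   edges.setdefault(s_i, set()).add(c_j)
#                   edges.setdefault(c_j, set()).add(s_i)
#       return edges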
from mgraph import MGraph
import numpy as np
s = np.array(['s1','s2','s3','s4','s5'])
c = np.array(['c1','c2','c3','c4','c5'])
m = np.array([[ 1, 1, 0, 0, -1],
[ 0, -1, 1, 0, -1],
[ 1, -1, 0, 0, 1],
[ 0, 0, 1, 1, -1],
[ 0, 1, 0, 0, -1]])
# create working matrix M and column C vector copies
m3 = m.copy()
c3 = c.copy()
# remove any columns where there are no 0 entries
mi = np.empty(0,dtype='int')
ncol = len(m3[0])
idxs_to_delete = [i for i in range(ncol) if not np.any(m3[:,i:i+1]==0)]
m3 = np.delete(m3,idxs_to_delete,axis=1)
c3 = np.delete(c3,idxs_to_delete)
m_graph = MGraph()
m_graph.build_graph(m3,s,c3)
m_pairs = m_graph.get_edge_pairs()
print('Nodes in our graph')
print(m_graph.getNodes())
print('\nPairwise connections between our nodes')
print(m_graph.get_edge_pairs())
Explanation: We now build our arrays for our $S$s and $C$s, as well as our M matrix. Let's use -1 to represent the 'incomplete' state of our matrix.
End of explanation
from IPython.display import Image
Image(filename='m3_graph.png')
Explanation: The variable m_pairs (above) displays the graph connections between our $S$s and $C$s. We now get the first set of connections in the graph, and determine the $K$ vector (which is the union of all $S$s and $C$s involved in the graph connection). Below is a drawing of the graph connections between the $S$ and $C$ nodes. I have colour-coded the connections corresponding to contiguous sets of connections - these will correspond to the clades, as we will discover through the algorithm.
End of explanation
def get_k(k,q,m_graph):
'''
return the first K vector with E[K] >= 1
(the K vector contains all elements of a chain of connected
vertices with more than just one connection between them)
'''
if not q:
return k
else:
q1 = q.pop()
k.add(q1)
pairs = m_graph.get_pairs_containing(q1)
pairs = set([node for pair in pairs for node in pair]) #flatten set of elements
for node in pairs:
if node not in k:
q.add(node)
return get_k(k,q,m_graph)
q = set(m_pairs[0]) #pick the first elements as k
print(q)
k = get_k(set(),q,m_graph)
print(k)
Explanation: To illustrate the algorithm step by step, we will do one full iteration of step 3.
End of explanation
s_prime = set(s).intersection(k)
s_indexes = np.array([np.where(s_i==s)[0][0] for s_i in s_prime])
m_tmp = m3.copy()[s_indexes]
m_tmp
Explanation: Based on the $S$s returned in the connections ($K \cup S'$), we now return the slice of the $M_3$ array, on the rows in $S'$.
End of explanation
u = []
c_in_k = set(c).intersection(k)
for c_i in c_in_k:
c_index = np.where(c_i==c3)[0]
m_col = m_tmp[:,c_index:c_index+1]
if np.all(m_col!=0):
cm = np.where(c_i==c3)[0]
u.append(c_i)
print(u)
Explanation: Now we return our $U$ vector, which is defined as any $C$s where there are no $0$s (of the above slice).
End of explanation
m_new = m3.copy()
c_indexes = [np.where(c3==x)[0][0] for x in u]
for c_idx in c_indexes:
m_col = m_new[s_indexes,c_idx:c_idx+1]
if np.any(m_col==-1):
m_col[m_col==-1] = 1
m_new[s_indexes,c_idx:c_idx+1] = m_col
print("Incomplete matrix prior to algorithm step")
print(m3)
print("\nMatrix after first round of algorithm")
print(m_new)
Explanation: Now any elements that are in the character columns contained in $U$, and in the sample rows contained in $S'$ (i.e. they belong to the same clade), will have any incomplete fields inferred as present. (This is skipping ahead to step 4, but just to demonstrate.)
End of explanation
m_graph = MGraph()
m_graph.build_graph(m3,s,c3)
m_pairs = m_graph.get_edge_pairs()
q = set(m_pairs[0]) #pick the first elements as k
k = get_k(set(),q,m_graph)
t = [set(s)] #initialise a tree
while len(m_pairs) > 1:
while len(k) < 3: #
for n in k:
m_graph.delNode(n)
m_pairs = m_graph.get_edge_pairs()
if len(m_pairs) > 1:
q = set(m_pairs[0])
k = get_k(set(),q,m_graph)
else:
break
s_prime = set(s).intersection(k)
s_indexes = np.array([np.where(s_i==s)[0][0] for s_i in s_prime])
m_tmp = m3.copy()[s_indexes]
print('k:')
print(k)
print("S':")
print(s_prime)
u = []
c_in_k = set(c).intersection(k)
for c_i in c_in_k:
c_index = np.where(c_i==c3)[0]
m_col = m_tmp[:,c_index:c_index+1]
if np.all(m_col!=0):
cm = np.where(c_i==c3)[0]
u.append(c_i)
if not u:
break
else:
print('u:')
print(u)
t.append(s_prime)
for n in u:
m_graph.delNode(n)
m_pairs = m_graph.get_edge_pairs()
k = get_k(set(),set(m_pairs[0]),m_graph)
print('Tree:')
print(t)
Explanation: Hence in the above example, the matrix $M_3$ cell corresponding to $S3$ and $C2$ has been inferred to be present (i.e. 1).
Finally we can put the whole algorithm together. This example follows the example from this paper, which also provides an illustration of the node connections in the "$\sum$-free graph".
End of explanation
m_new = m3.copy()
c_indexes = [np.where(c3==x)[0][0] for x in u]
for s_set in t[1:]:
s_prime = set(s).intersection(s_set)
s_indexes = np.array([np.where(s_i==s)[0][0] for s_i in s_prime])
m_tmp = m3.copy()[s_indexes]
for c_i in c3:
c_idx = np.where(c_i==c3)[0]
m_col = m_tmp[:,c_index:c_index+1]
if np.all(m_col!=0) and np.any(m_col==-1):
cm = np.where(c_i==c3)[0]
m_col[m_col==-1] = 1
m_new[s_indexes,c_idx:c_idx+1] = m_col
for i in range(len(c3)):
tcol = m_new[:,i:i+1]
tcol[tcol==-1] = 0
m_new[:,i:i+1] = tcol
print("Incomplete matrix")
print(m3)
print("\nInferred matrix")
print(m_new)
Explanation: Hence our final tree looks like:
/\
/ \
/ /\
/ s5 \
/\ /\
s2 s4 s3 s1
Now for all the clade sets in $T$ that we have inferred (except the super-clade set in $t[0]$), set all $C$s (for the associated clade set) to 1, if the column contains no 0s (i.e. containing only 1 or -1), otherwise set it to 0.
End of explanation |
11,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we mainly utilize extreme gradient boost to improve the prediction model originially proposed in TLE 2016 November machine learning tuotrial. Extreme gradient boost can be viewed as an enhanced version of gradient boost by using a more regularized model formalization to control over-fitting, and XGB usually performs better. Applications of XGB can be found in many Kaggle competitions. Some recommended tutorrials can be found
Our work will be orginized in the follwing order
Step1: Data Preparation and Model Selection
Now we are ready to test the XGB approach, and will use confusion matrix and f1_score, which were imported, as metric for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.
Step2: The accuracy function and accuracy_adjacent function are defined in the following to quatify the prediction correctness.
Step3: Before processing further, we define a functin which will help us create XGBoost models and perform cross-validation.
Step4: General Approach for Parameter Tuning
We are going to preform the steps as follows
Step5: Step 2
Step6: Step 3
Step7: Step 4
Step8: Step 5
Step9: Step 6
Step10: Cross Validation
Next we use our tuned final model to do cross validation on the training data set. One of the wells will be used as test data and the rest will be the training data. Each iteration, a different well is chosen.
Step11: Model from all data set
Step12: Use final model to predict the given test data set | Python Code:
%matplotlib inline
import pandas as pd
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import matplotlib.colors as colors
import xgboost as xgb
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score
from classification_utilities import display_cm, display_adj_cm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import validation_curve
from sklearn.datasets import load_svmlight_files
from sklearn.model_selection import StratifiedKFold
from sklearn.datasets import make_classification
from xgboost.sklearn import XGBClassifier
from scipy.sparse import vstack
seed = 123
np.random.seed(seed)
import pandas as pd
filename = './facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.head(10)
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data.info()
training_data.describe()
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',
'#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_counts = training_data['Facies'].value_counts().sort_index()
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')
sns.heatmap(training_data.corr(), vmax=1.0, square=True)
Explanation: In this notebook, we mainly utilize extreme gradient boost to improve the prediction model originally proposed in the TLE 2016 November machine learning tutorial. Extreme gradient boost can be viewed as an enhanced version of gradient boost, using a more regularized model formalization to control over-fitting, and XGB usually performs better. Applications of XGB can be found in many Kaggle competitions. Some recommended tutorials can be found online.
Our work will be organized in the following order:
•Background
•Exploratory Data Analysis
•Data Prepration and Model Selection
•Final Results
Background
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
•Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
•Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1.Nonmarine sandstone
2.Nonmarine coarse siltstone
3.Nonmarine fine siltstone
4.Marine siltstone and shale
5.Mudstone (limestone)
6.Wackestone (limestone)
7.Dolomite
8.Packstone-grainstone (limestone)
9.Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies/ Label/ Adjacent Facies
1 SS 2
2 CSiS 1,3
3 FSiS 2
4 SiSh 5
5 MS 4,6
6 WS 5,7
7 D 6,8
8 PS 6,7,9
9 BS 7,8
Exploratory Data Analysis
After the background introduction, we start by importing the pandas library for some basic data analysis and manipulation. The matplotlib and seaborn libraries are imported for data visualization.
End of explanation
import xgboost as xgb
X_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 )
Y_train = training_data['Facies' ] - 1
dtrain = xgb.DMatrix(X_train, Y_train)
train = X_train.copy()
train['Facies']=Y_train
train.head()
Explanation: Data Preparation and Model Selection
Now we are ready to test the XGBoost approach. We will use the confusion matrix and f1_score, which were imported, as classification metrics, as well as GridSearchCV, which is an excellent tool for parameter optimization; a minimal preview of the GridSearchCV pattern is sketched below.
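A minimal, hypothetical sketch of that pattern (the names param_grid_demo and clf_demo are illustrative only and not part of the original workflow; the real searches appear further below):
```python
# Minimal, illustrative GridSearchCV pattern; the real parameter searches follow later.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_grid_demo = {'max_depth': [3, 5, 7]}            # candidate values to try
clf_demo = XGBClassifier(n_estimators=50, seed=123)   # base estimator to tune

search_demo = GridSearchCV(clf_demo, param_grid=param_grid_demo,
                           scoring='accuracy', cv=5)
# search_demo.fit(X_train, Y_train)                   # would use the training data built above
# search_demo.best_params_, search_demo.best_score_
```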
End of explanation
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
target='Facies'
Explanation: The accuracy and accuracy_adjacent functions are defined to quantify the prediction correctness; a small usage sketch on a toy confusion matrix is shown below.
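The numbers and the adjacency definition in this toy 3-class confusion matrix are made up purely for demonstration:
```python
# Toy demonstration of the two metrics on a hypothetical 3-class confusion matrix.
import numpy as np

toy_conf = np.array([[5, 2, 0],
                     [1, 6, 1],
                     [0, 2, 4]])
# Treat classes 0/1 and 1/2 as "adjacent" for this toy example only.
toy_adjacent = np.array([[1], [0, 2], [1]], dtype=object)

print("plain accuracy    :", accuracy(toy_conf))
print("adjacent accuracy :", accuracy_adjacent(toy_conf, toy_adjacent))
```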
End of explanation
def modelfit(alg, dtrain, features, useTrainCV=True,
cv_fold=10,early_stopping_rounds = 50):
if useTrainCV:
xgb_param = alg.get_xgb_params()
xgb_param['num_class']=9
xgtrain = xgb.DMatrix(train[features].values,label = train[target].values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=
alg.get_params()['n_estimators'],nfold=cv_fold,
metrics='merror',early_stopping_rounds = early_stopping_rounds)
alg.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
alg.fit(dtrain[features], dtrain[target],eval_metric='merror')
#Predict training set:
dtrain_prediction = alg.predict(dtrain[features])
dtrain_predprob = alg.predict_proba(dtrain[features])[:,1]
    #Print model report
print ("\nModel Report")
print ("Accuracy : %.4g" % accuracy_score(dtrain[target],
dtrain_prediction))
print ("F1 score (Train) : %f" % f1_score(dtrain[target],
dtrain_prediction,average='weighted'))
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar',title='Feature Importances')
plt.ylabel('Feature Importance Score')
features =[x for x in X_train.columns]
features
Explanation: Before proceeding further, we define a function which will help us create XGBoost models and perform cross-validation.
End of explanation
from xgboost import XGBClassifier
xgb1 = XGBClassifier(
learning_rate = 0.1,
n_estimators=1000,
max_depth=5,
min_child_weight=1,
gamma = 0,
subsample=0.8,
colsample_bytree=0.8,
objective='multi:softmax',
nthread =4,
seed = 123,
)
modelfit(xgb1, train, features)
xgb1
Explanation: General Approach for Parameter Tuning
We are going to perform the steps as follows:
1.Choose a relatively high learning rate, e.g., 0.1. Usually somewhere between 0.05 and 0.3 should work for different problems.
2.Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called "cv" which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required.
3.Tune tree-based parameters(max_depth, min_child_weight, gamma, subsample, colsample_bytree) for decided learning rate and number of trees.
4.Tune regularization parameters(lambda, alpha) for xgboost which can help reduce model complexity and enhance performance.
5.Lower the learning rate and decide the optimal parameters.
Step 1: Fix learning rate and number of estimators for tuning tree-based parameters
In order to decide on boosting parameters, we need to set some initial values of other parameters. Let's take the following values:
1.max_depth = 5
2.min_child_weight = 1
3.gamma = 0
4.subsample, colsample_bytree = 0.8 : This is a commonly used starting value.
5.scale_pos_weight = 1
Please note that all of the above are just initial estimates and will be tuned later. Let's take the default learning rate of 0.1 here and check the optimum number of trees using the cv function of XGBoost. The modelfit function defined above will do this for us; a standalone sketch of the underlying xgb.cv call is shown below.
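This stripped-down sketch mirrors what modelfit does internally, assuming the dtrain DMatrix and the xgb1 estimator defined above:
```python
# Illustrative standalone xgb.cv call; modelfit() above wraps the same idea.
cv_params = xgb1.get_xgb_params()
cv_params['num_class'] = 9                    # multi-class facies problem
cv_demo = xgb.cv(cv_params, dtrain,
                 num_boost_round=1000,        # upper limit on the number of trees
                 nfold=10, metrics='merror',
                 early_stopping_rounds=50)
print("suggested n_estimators:", cv_demo.shape[0])
```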
End of explanation
from sklearn.model_selection import GridSearchCV
param_test1={
'max_depth':range(3,10,2),
'min_child_weight':range(1,6,2)
}
gs1 = GridSearchCV(xgb1,param_grid=param_test1,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs1.fit(train[features],train[target])
gs1.grid_scores_, gs1.best_params_,gs1.best_score_
param_test2={
'max_depth':[8,9,10],
'min_child_weight':[1,2]
}
gs2 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=5,
min_child_weight=1, n_estimators=290, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test2,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs2.fit(train[features],train[target])
gs2.grid_scores_, gs2.best_params_,gs2.best_score_
gs2.best_estimator_
Explanation: Step 2: Tune max_depth and min_child_weight
End of explanation
param_test3={
'gamma':[i/10.0 for i in range(0,5)]
}
gs3 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=370, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test3,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs3.fit(train[features],train[target])
gs3.grid_scores_, gs3.best_params_,gs3.best_score_
xgb2 = XGBClassifier(
learning_rate = 0.1,
n_estimators=1000,
max_depth=9,
min_child_weight=1,
gamma = 0.2,
subsample=0.8,
colsample_bytree=0.8,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb2,train,features)
xgb2
Explanation: Step 3: Tune gamma
End of explanation
param_test4={
'subsample':[i/10.0 for i in range(6,10)],
'colsample_bytree':[i/10.0 for i in range(6,10)]
}
gs4 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test4,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs4.fit(train[features],train[target])
gs4.grid_scores_, gs4.best_params_,gs4.best_score_
param_test4b={
'subsample':[i/10.0 for i in range(5,7)],
}
gs4b = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test4b,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs4b.fit(train[features],train[target])
gs4b.grid_scores_, gs4b.best_params_,gs4b.best_score_
Explanation: Step 4: Tune subsample and colsample_bytree
End of explanation
param_test5={
'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
gs5 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.6),param_grid=param_test5,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs5.fit(train[features],train[target])
gs5.grid_scores_, gs5.best_params_,gs5.best_score_
param_test6={
'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]
}
gs6 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.6),param_grid=param_test6,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs6.fit(train[features],train[target])
gs6.grid_scores_, gs6.best_params_,gs6.best_score_
xgb3 = XGBClassifier(
learning_rate = 0.1,
n_estimators=1000,
max_depth=9,
min_child_weight=1,
gamma = 0.2,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.05,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb3,train,features)
xgb3
model = XGBClassifier(base_score=0.5, colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, missing=None, n_estimators=122, nthread=4,
objective='multi:softprob', reg_alpha=0.05, reg_lambda=1,
scale_pos_weight=1, seed=123, silent=True, subsample=0.6)
model.fit(X_train, Y_train)
xgb.plot_importance(model)
Explanation: Step 5: Tuning Regularization Parameters
End of explanation
xgb4 = XGBClassifier(
learning_rate = 0.01,
n_estimators=5000,
max_depth=9,
min_child_weight=1,
gamma = 0.2,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.05,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb4,train,features)
xgb4
Explanation: Step 6: Reducing Learning Rate
End of explanation
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)
# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1=[]
for i in range(len(well_names)):
# Split data for training and testing
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
Y_train = data['Facies' ] - 1
train_X = X_train[X_train['Well Name'] != well_names[i] ]
train_Y = Y_train[X_train['Well Name'] != well_names[i] ]
test_X = X_train[X_train['Well Name'] == well_names[i] ]
test_Y = Y_train[X_train['Well Name'] == well_names[i] ]
train_X = train_X.drop(['Well Name'], axis = 1 )
test_X = test_X.drop(['Well Name'], axis = 1 )
# Final recommended model based on the extensive parameters search
model_final = XGBClassifier(base_score=0.5, colsample_bylevel=1,
colsample_bytree=0.8, gamma=0.2,
learning_rate=0.01, max_delta_step=0, max_depth=9,
min_child_weight=1, missing=None, n_estimators=432, nthread=4,
objective='multi:softmax', reg_alpha=0.05, reg_lambda=1,
scale_pos_weight=1, seed=123, silent=1,
subsample=0.6)
# Train the model based on training data
model_final.fit( train_X , train_Y , eval_metric = 'merror' )
# Predict on the test set
predictions = model_final.predict(test_X)
# Print report
print ("\n------------------------------------------------------")
print ("Validation on the leaving out well " + well_names[i])
conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("-F1 Score: %.6f" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))
f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
print ("\nConfusion Matrix Results")
from classification_utilities import display_cm, display_adj_cm
display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)
print ("\n------------------------------------------------------")
print ("Final Results")
print ("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
Explanation: Cross Validation
Next we use our tuned final model to do cross-validation on the training data set: one of the wells is held out as test data and the rest are used as training data, and in each iteration a different well is chosen. An equivalent way to express this leave-one-well-out split with scikit-learn is sketched below.
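Purely as an alternative formulation (the explicit loop above is what is actually run), the split can be written with scikit-learn's LeaveOneGroupOut:
```python
# Alternative expression of the leave-one-well-out split using scikit-learn.
from sklearn.model_selection import LeaveOneGroupOut

logo = LeaveOneGroupOut()
wells = data['Well Name'].astype(str).values                 # one group label per sample
X_all = data.drop(['Facies', 'Formation', 'Depth', 'Well Name'], axis=1)
y_all = data['Facies'] - 1

for train_idx, test_idx in logo.split(X_all, y_all, groups=wells):
    print("held-out well:", wells[test_idx][0],
          "train size:", len(train_idx), "test size:", len(test_idx))
```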
End of explanation
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)
# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
# Split data for training and testing
X_train_all = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
Y_train_all = data['Facies' ] - 1
X_train_all = X_train_all.drop(['Well Name'], axis = 1)
# Final recommended model based on the extensive parameters search
model_final = XGBClassifier(base_score=0.5, colsample_bylevel=1,
colsample_bytree=0.8, gamma=0.2,
learning_rate=0.01, max_delta_step=0, max_depth=9,
min_child_weight=1, missing=None, n_estimators=432, nthread=4,
objective='multi:softmax', reg_alpha=0.05, reg_lambda=1,
scale_pos_weight=1, seed=123, silent=1,
subsample=0.6)
# Train the model based on training data
model_final.fit(X_train_all , Y_train_all , eval_metric = 'merror' )
# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1=[]
for i in range(len(well_names)):
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
Y_train = data['Facies' ] - 1
train_X = X_train[X_train['Well Name'] != well_names[i] ]
train_Y = Y_train[X_train['Well Name'] != well_names[i] ]
test_X = X_train[X_train['Well Name'] == well_names[i] ]
test_Y = Y_train[X_train['Well Name'] == well_names[i] ]
train_X = train_X.drop(['Well Name'], axis = 1 )
test_X = test_X.drop(['Well Name'], axis = 1 )
#print(test_Y)
predictions = model_final.predict(test_X)
# Print report
print ("\n------------------------------------------------------")
print ("Validation on the leaving out well " + well_names[i])
conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("-F1 Score: %.6f" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))
f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
print ("\nConfusion Matrix Results")
from classification_utilities import display_cm, display_adj_cm
display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)
print ("\n------------------------------------------------------")
print ("Final Results")
print ("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
Explanation: Model trained on the full data set
End of explanation
# Load test data
test_data = pd.read_csv('validation_data_nofacies.csv')
test_data['Well Name'] = test_data['Well Name'].astype('category')
X_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
# Predict facies of unclassified data
Y_predicted = model_final.predict(X_test)
test_data['Facies'] = Y_predicted + 1
# Store the prediction
test_data.to_csv('Prediction3.csv')
test_data
Explanation: Use final model to predict the given test data set
End of explanation |
11,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate a left cerebellum volume source space
Generate a volume source space of the left cerebellum and plot its vertices
relative to the left cortical surface source space and the freesurfer
segmentation file.
Step1: Setup the source spaces
Step2: Plot the positions of each source space
Step3: Compare volume source locations to segmentation file in freeview | Python Code:
# Author: Alan Leggitt <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
from scipy.spatial import ConvexHull
from mayavi import mlab
from mne import setup_source_space, setup_volume_source_space
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
subj = 'sample'
aseg_fname = subjects_dir + '/sample/mri/aseg.mgz'
Explanation: Generate a left cerebellum volume source space
Generate a volume source space of the left cerebellum and plot its vertices
relative to the left cortical surface source space and the freesurfer
segmentation file.
End of explanation
# setup a cortical surface source space and extract left hemisphere
surf = setup_source_space(subj, fname=None, subjects_dir=subjects_dir,
add_dist=False)
lh_surf = surf[0]
# setup a volume source space of the left cerebellum cortex
volume_label = 'Left-Cerebellum-Cortex'
sphere = (0, 0, 0, 120)
lh_cereb = setup_volume_source_space(subj, mri=aseg_fname, sphere=sphere,
volume_label=volume_label,
subjects_dir=subjects_dir)
Explanation: Setup the source spaces
End of explanation
# extract left cortical surface vertices, triangle faces, and surface normals
x1, y1, z1 = lh_surf['rr'].T
faces = lh_surf['use_tris']
normals = lh_surf['nn']
# normalize for mayavi
normals /= np.sum(normals * normals, axis=1)[:, np.newaxis]
# extract left cerebellum cortex source positions
x2, y2, z2 = lh_cereb[0]['rr'][lh_cereb[0]['inuse'].astype(bool)].T
# open a 3d figure in mayavi
mlab.figure(1, bgcolor=(0, 0, 0))
# plot the left cortical surface
mesh = mlab.pipeline.triangular_mesh_source(x1, y1, z1, faces)
mesh.data.point_data.normals = normals
mlab.pipeline.surface(mesh, color=3 * (0.7,))
# plot the convex hull bounding the left cerebellum
hull = ConvexHull(np.c_[x2, y2, z2])
mlab.triangular_mesh(x2, y2, z2, hull.simplices, color=3 * (0.5,), opacity=0.3)
# plot the left cerebellum sources
mlab.points3d(x2, y2, z2, color=(1, 1, 0), scale_factor=0.001)
# adjust view parameters
mlab.view(173.78, 101.75, 0.30, np.array([-0.03, -0.01, 0.03]))
mlab.roll(85)
Explanation: Plot the positions of each source space
End of explanation
# Export source positions to a nifti file
nii_fname = data_path + '/MEG/sample/mne_sample_lh-cerebellum-cortex.nii'
# Combine the source spaces
src = surf + lh_cereb
src.export_volume(nii_fname, mri_resolution=True)
# Uncomment the following lines to display source positions in freeview.
'''
# display image in freeview
from mne.utils import run_subprocess
mri_fname = subjects_dir + '/sample/mri/brain.mgz'
run_subprocess(['freeview', '-v', mri_fname, '-v',
'%s:colormap=lut:opacity=0.5' % aseg_fname, '-v',
'%s:colormap=jet:colorscale=0,2' % nii_fname, '-slice',
'157 75 105'])
'''
Explanation: Compare volume source locations to segmentation file in freeview
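If freeview is not available, a quick programmatic sanity check of the exported volume is possible with nibabel (an optional, assumed dependency; this check is a suggestion and not part of the original example):
```python
# Optional sanity check of the exported source-space volume; assumes nibabel is installed.
import nibabel as nib

src_img = nib.load(nii_fname)
aseg_img = nib.load(aseg_fname)
print("exported volume shape:", src_img.shape)
print("aseg volume shape    :", aseg_img.shape)
print("non-zero source voxels:", int((src_img.get_fdata() > 0).sum()))
```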
End of explanation |
11,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2017 Google LLC.
Step1: Creating and Manipulating Tensors
Learning Objectives
Step2: Vector Addition
You can perform many typical mathematical operations on tensors (TF API). The code below creates the following vectors (1-D tensors), all having exactly six elements
Step3: Printing a tensor returns not only its value, but also its shape (discussed in the next section) and the type of value stored in the tensor. Calling the numpy method of a tensor returns the value of the tensor as a numpy array
Step4: Tensor Shapes
Shapes are used to characterize the size and number of dimensions of a tensor. The shape of a tensor is expressed as list, with the ith element representing the size along dimension i. The length of the list then indicates the rank of the tensor (i.e., the number of dimensions).
For more information, see the TensorFlow documentation.
A few basic examples
Step5: Broadcasting
In mathematics, you can only perform element-wise operations (e.g. add and equals) on tensors of the same shape. In TensorFlow, however, you may perform operations on tensors that would traditionally have been incompatible. TensorFlow supports broadcasting (a concept borrowed from numpy), where the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting
Step6: Exercise #1
Step7: Solution
Click below for a solution.
Step8: Matrix Multiplication
In linear algebra, when multiplying two matrices, the number of columns of the first matrix must
equal the number of rows in the second matrix.
It is valid to multiply a 3x4 matrix by a 4x2 matrix. This will result in a 3x2 matrix.
It is invalid to multiply a 4x2 matrix by a 3x4 matrix.
Step9: Tensor Reshaping
With tensor addition and matrix multiplication each imposing constraints
on operands, TensorFlow programmers must frequently reshape tensors.
You can use the tf.reshape method to reshape a tensor.
For example, you can reshape a 8x2 tensor into a 2x8 tensor or a 4x4 tensor
Step10: You can also use tf.reshape to change the number of dimensions (the "rank") of the tensor.
For example, you could reshape that 8x2 tensor into a 3-D 2x2x4 tensor or a 1-D 16-element tensor.
Step11: Exercise #2
Step12: Solution
Click below for a solution.
Remember, when multiplying two matrices, the number of columns of the first matrix must equal the number of rows in the second matrix.
One possible solution is to reshape a into a 2x3 matrix and reshape b into a a 3x1 matrix, resulting in a 2x1 matrix after multiplication
Step13: An alternative solution would be to reshape a into a 6x1 matrix and b into a 1x3 matrix, resulting in a 6x3 matrix after multiplication.
Variables, Initialization and Assignment
So far, all the operations we performed were on static values (tf.constant); calling numpy() always returned the same result. TensorFlow allows you to define Variable objects, whose values can be changed.
When creating a variable, you can set an initial value explicitly, or you can use an initializer (like a distribution)
Step14: To change the value of a variable, use the assign op
Step15: When assigning a new value to a variable, its shape must be equal to its previous shape
Step16: There are many more topics about variables that we didn't cover here, such as loading and storing. To learn more, see the TensorFlow docs.
Exercise #3
Step17: Solution
Click below for a solution.
We're going to place dice throws inside two separate 10x1 matrices, die1 and die2. The summation of the dice rolls will be stored in dice_sum, then the resulting 10x3 matrix will be created by concatenating the three 10x1 matrices together into a single matrix.
Alternatively, we could have placed dice throws inside a single 10x2 matrix, but adding different columns of the same matrix would be more complicated. We also could have placed dice throws inside two 1-D tensors (vectors), but doing so would require transposing the result. | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2017 Google LLC.
End of explanation
from __future__ import print_function
import tensorflow as tf
try:
tf.contrib.eager.enable_eager_execution()
print("TF imported with eager execution!")
except ValueError:
print("TF already imported with eager execution!")
Explanation: Creating and Manipulating Tensors
Learning Objectives:
* Initialize and assign TensorFlow Variables
* Create and manipulate tensors
* Refresh your memory about addition and multiplication in linear algebra (consult an introduction to matrix addition and multiplication if these topics are new to you)
* Familiarize yourself with basic TensorFlow math and array operations
End of explanation
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
print("primes:", primes
)
ones = tf.ones([6], dtype=tf.int32)
print("ones:", ones)
just_beyond_primes = tf.add(primes, ones)
print("just_beyond_primes:", just_beyond_primes)
twos = tf.constant([2, 2, 2, 2, 2, 2], dtype=tf.int32)
primes_doubled = primes * twos
print("primes_doubled:", primes_doubled)
Explanation: Vector Addition
You can perform many typical mathematical operations on tensors (TF API). The code below creates the following vectors (1-D tensors), all having exactly six elements:
A primes vector containing prime numbers.
A ones vector containing all 1 values.
A vector created by performing element-wise addition over the first two vectors.
A vector created by doubling the elements in the primes vector.
End of explanation
some_matrix = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)
print(some_matrix)
print("\nvalue of some_matrix is:\n", some_matrix.numpy())
Explanation: Printing a tensor returns not only its value, but also its shape (discussed in the next section) and the type of value stored in the tensor. Calling the numpy method of a tensor returns the value of the tensor as a numpy array:
End of explanation
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.numpy())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.numpy())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.numpy())
Explanation: Tensor Shapes
Shapes are used to characterize the size and number of dimensions of a tensor. The shape of a tensor is expressed as list, with the ith element representing the size along dimension i. The length of the list then indicates the rank of the tensor (i.e., the number of dimensions).
For more information, see the TensorFlow documentation.
A few basic examples:
End of explanation
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
print("primes:", primes)
one = tf.constant(1, dtype=tf.int32)
print("one:", one)
just_beyond_primes = tf.add(primes, one)
print("just_beyond_primes:", just_beyond_primes)
two = tf.constant(2, dtype=tf.int32)
primes_doubled = primes * two
print("primes_doubled:", primes_doubled)
Explanation: Broadcasting
In mathematics, you can only perform element-wise operations (e.g. add and equals) on tensors of the same shape. In TensorFlow, however, you may perform operations on tensors that would traditionally have been incompatible. TensorFlow supports broadcasting (a concept borrowed from numpy), where the smaller array in an element-wise operation is enlarged to have the same shape as the larger array. For example, via broadcasting:
If an operand requires a size [6] tensor, a size [1] or a size [] tensor can serve as an operand.
If an operation requires a size [4, 6] tensor, any of the following sizes can serve as an operand:
[1, 6]
[6]
[]
If an operation requires a size [3, 5, 6] tensor, any of the following sizes can serve as an operand:
[1, 5, 6]
[3, 1, 6]
[3, 5, 1]
[1, 1, 1]
[5, 6]
[1, 6]
[6]
[1]
[]
NOTE: When a tensor is broadcast, its entries are conceptually copied. (They are not actually copied for performance reasons. Broadcasting was invented as a performance optimization.)
The full broadcasting ruleset is well described in the easy-to-read numpy broadcasting documentation.
The following code performs the same tensor arithmetic as before, but instead uses scalar values (instead of vectors containing all 1s or all 2s) and broadcasting.
End of explanation
# Write your code for Task 1 here.
Explanation: Exercise #1: Arithmetic over vectors.
Perform vector arithmetic to create a "just_under_primes_squared" vector, where the ith element is equal to the ith element in primes squared, minus 1. For example, the second element would be equal to 3 * 3 - 1 = 8.
Make use of either the tf.multiply or tf.pow ops to square the value of each element in the primes vector.
End of explanation
# Task: Square each element in the primes vector, then subtract 1.
def solution(primes):
primes_squared = tf.multiply(primes, primes)
neg_one = tf.constant(-1, dtype=tf.int32)
just_under_primes_squared = tf.add(primes_squared, neg_one)
return just_under_primes_squared
def alternative_solution(primes):
primes_squared = tf.pow(primes, 2)
one = tf.constant(1, dtype=tf.int32)
just_under_primes_squared = tf.subtract(primes_squared, one)
return just_under_primes_squared
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
just_under_primes_squared = solution(primes)
print("just_under_primes_squared:", just_under_primes_squared)
Explanation: Solution
Click below for a solution.
End of explanation
# A 3x4 matrix (2-d tensor).
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# A 4x2 matrix (2-d tensor).
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`; result is 3x2 matrix.
matrix_multiply_result = tf.matmul(x, y)
print(matrix_multiply_result)
Explanation: Matrix Multiplication
In linear algebra, when multiplying two matrices, the number of columns of the first matrix must
equal the number of rows in the second matrix.
It is valid to multiply a 3x4 matrix by a 4x2 matrix. This will result in a 3x2 matrix.
It is invalid to multiply a 4x2 matrix by a 3x4 matrix.
End of explanation
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant(
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]],
dtype=tf.int32)
reshaped_2x8_matrix = tf.reshape(matrix, [2, 8])
reshaped_4x4_matrix = tf.reshape(matrix, [4, 4])
print("Original matrix (8x2):")
print(matrix.numpy())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.numpy())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.numpy())
Explanation: Tensor Reshaping
With tensor addition and matrix multiplication each imposing constraints
on operands, TensorFlow programmers must frequently reshape tensors.
You can use the tf.reshape method to reshape a tensor.
For example, you can reshape a 8x2 tensor into a 2x8 tensor or a 4x4 tensor:
End of explanation
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant(
[[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12], [13, 14], [15, 16]],
dtype=tf.int32)
reshaped_2x2x4_tensor = tf.reshape(matrix, [2, 2, 4])
one_dimensional_vector = tf.reshape(matrix, [16])
print("Original matrix (8x2):")
print(matrix.numpy())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.numpy())
print("1-D vector:")
print(one_dimensional_vector.numpy())
Explanation: You can also use tf.reshape to change the number of dimensions (the "rank") of the tensor.
For example, you could reshape that 8x2 tensor into a 3-D 2x2x4 tensor or a 1-D 16-element tensor.
End of explanation
# Write your code for Task 2 here.
Explanation: Exercise #2: Reshape two tensors in order to multiply them.
The following two vectors are incompatible for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
Reshape these vectors into compatible operands for matrix multiplication.
Then, invoke a matrix multiplication operation on the reshaped tensors.
End of explanation
# Task: Reshape two tensors in order to multiply them
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
reshaped_a = tf.reshape(a, [2, 3])
reshaped_b = tf.reshape(b, [3, 1])
c = tf.matmul(reshaped_a, reshaped_b)
print("reshaped_a (2x3):")
print(reshaped_a.numpy())
print("reshaped_b (3x1):")
print(reshaped_b.numpy())
print("reshaped_a x reshaped_b (2x1):")
print(c.numpy())
Explanation: Solution
Click below for a solution.
Remember, when multiplying two matrices, the number of columns of the first matrix must equal the number of rows in the second matrix.
One possible solution is to reshape a into a 2x3 matrix and reshape b into a a 3x1 matrix, resulting in a 2x1 matrix after multiplication:
End of explanation
# Create a scalar variable with the initial value 3.
v = tf.contrib.eager.Variable([3])
# Create a vector variable of shape [1, 4], with random initial values,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.contrib.eager.Variable(tf.random_normal([1, 4], mean=1.0, stddev=0.35))
print("v:", v.numpy())
print("w:", w.numpy())
Explanation: An alternative solution would be to reshape a into a 6x1 matrix and b into a 1x3 matrix, resulting in a 6x3 matrix after multiplication.
Variables, Initialization and Assignment
So far, all the operations we performed were on static values (tf.constant); calling numpy() always returned the same result. TensorFlow allows you to define Variable objects, whose values can be changed.
When creating a variable, you can set an initial value explicitly, or you can use an initializer (like a distribution):
End of explanation
v = tf.contrib.eager.Variable([3])
print(v.numpy())
tf.assign(v, [7])
print(v.numpy())
v.assign([5])
print(v.numpy())
Explanation: To change the value of a variable, use the assign op:
End of explanation
v = tf.contrib.eager.Variable([[1, 2, 3], [4, 5, 6]])
print(v.numpy())
try:
print("Assigning [7, 8, 9] to v")
v.assign([7, 8, 9])
except ValueError as e:
print("Exception:", e)
Explanation: When assigning a new value to a variable, its shape must be equal to its previous shape:
End of explanation
# Write your code for Task 3 here.
Explanation: There are many more topics about variables that we didn't cover here, such as loading and storing. To learn more, see the TensorFlow docs.
Exercise #3: Simulate 10 rolls of two dice.
Create a dice simulation, which generates a 10x3 2-D tensor in which:
Columns 1 and 2 each hold one throw of one six-sided die (with values 1–6).
Column 3 holds the sum of Columns 1 and 2 on the same row.
For example, the first row might have the following values:
Column 1 holds 4
Column 2 holds 3
Column 3 holds 7
You'll need to explore the TensorFlow API reference to solve this task.
End of explanation
# Task: Simulate 10 throws of two dice. Store the results in a 10x3 matrix.
die1 = tf.contrib.eager.Variable(
tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))
die2 = tf.contrib.eager.Variable(
tf.random_uniform([10, 1], minval=1, maxval=7, dtype=tf.int32))
dice_sum = tf.add(die1, die2)
resulting_matrix = tf.concat(values=[die1, die2, dice_sum], axis=1)
print(resulting_matrix.numpy())
Explanation: Solution
Click below for a solution.
We're going to place dice throws inside two separate 10x1 matrices, die1 and die2. The summation of the dice rolls will be stored in dice_sum, then the resulting 10x3 matrix will be created by concatenating the three 10x1 matrices together into a single matrix.
Alternatively, we could have placed dice throws inside a single 10x2 matrix, but adding different columns of the same matrix would be more complicated. We also could have placed dice throws inside two 1-D tensors (vectors), but doing so would require transposing the result.
End of explanation |
11,489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 3
Step1: Note
Step2: Now, let's start with the ANTs normalization workflow!
Imports (ANTs)
First, we need to import all the modules we later want to use.
Step3: Experiment parameters (ANTs)
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script. And remember that we decided to run the group analysis without subject sub-01, sub-06 and sub-10 because they are left-handed (see this section).
Step4: Note if you're not using the corresponding docker image, than the template file might not be in your data directory. To get mni_icbm152_nlin_asym_09c, either download it from this website, unpack it and move it to /data/ds000114/derivatives/fmriprep/ or run the following command in a cell
Step5: Specify input & output stream (ANTs)
Specify where the input data can be found & where and how to save the output data.
Step6: Specify Workflow (ANTs)
Create a workflow and connect the interface nodes and the I/O stream to each other.
Step7: Visualize the workflow (ANTs)
It always helps to visualize your workflow.
Step8: Run the Workflow (ANTs)
Now that everything is ready, we can run the ANTs normalization workflow. Change n_procs to the number of jobs/cores you want to use.
Step9: Normalization with SPM12
The normalization with SPM12 is rather straightforward. The only thing we need to do is run the Normalize12 module. So let's start!
Imports (SPM12)
First, we need to import all the modules we later want to use.
Step10: Experiment parameters (SPM12)
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script. And remember that we decided to run the group analysis without subject sub-01, sub-06 and sub-10 because they are left-handed (see this section).
Step11: Specify Nodes (SPM12)
Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.
Step12: Specify input & output stream (SPM12)
Specify where the input data can be found & where and how to save the output data.
Step13: Specify Workflow (SPM12)
Create a workflow and connect the interface nodes and the I/O stream to each other.
Step14: Visualize the workflow (SPM12)
It always helps to visualize your workflow.
Step15: Run the Workflow (SPM12)
Now that everything is ready, we can run the SPM normalization workflow. Change n_procs to the number of jobs/cores you want to use.
Step16: Comparison between ANTs and SPM normalization
Now that we ran the normalization with ANTs and SPM, let us compare their output.
Step17: First, let's compare the normalization of the anatomical images
Step18: And what about the contrast images for Finger > others? | Python Code:
%%bash
datalad get -J 4 -d /data/ds000114 /data/ds000114/derivatives/fmriprep/sub-0[2345789]/anat/*h5
Explanation: Example 3: Normalize data to MNI template
This example covers the normalization of data. Some people prefer to normalize the data during the preprocessing, just before smoothing. I prefer to do the 1st-level analysis completely in subject space and only normalize the contrasts for the 2nd-level analysis. But both approaches are fine.
For the current example, we will take the computed 1st-level contrasts from the previous experiment (again once done with fwhm=4mm and fwhm=8mm) and normalize them into MNI-space. To show two different approaches, we will do the normalization once with ANTs and once with SPM.
Preparation
Before we can start with the ANTs example, we first need to download the already computed deformation field. The data can be found in the derivatives/fmriprep folder of the dataset and can be downloaded with the following datalad command:
End of explanation
!ls /data/ds000114/derivatives/fmriprep/sub-*/anat/*h5
Explanation: Note: This might take a while, as datalad needs to download ~710MB of data
Alternatively: Prepare yourself
We're using the precomputed warp field from fmriprep, as this step otherwise would take up to 10 hours or more for all subjects to complete. If you're nonetheless interested in computing the warp parameters with ANTs yourself, without using fmriprep, either check out the script ANTS_registration.py or even quicker, use RegistrationSynQuick, Nipype's implementation of antsRegistrationSynQuick.sh.
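A minimal sketch of that interface (the file paths below are placeholders, and this step is not needed if you rely on the precomputed fmriprep transforms):
```python
# Minimal RegistrationSynQuick sketch; paths are placeholders and running it
# would take a long time per subject.
from nipype.interfaces.ants import RegistrationSynQuick

reg = RegistrationSynQuick()
reg.inputs.fixed_image = '/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_T1.nii.gz'
reg.inputs.moving_image = '/data/ds000114/derivatives/fmriprep/sub-02/anat/sub-02_t1w_preproc.nii.gz'
reg.inputs.num_threads = 4
reg.inputs.output_prefix = 'ants_sub-02_'
# reg.run()
```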
Normalization with ANTs
The normalization with ANTs requires that you first compute the transformation matrix that would bring the anatomical images of each subject into template space. Depending on your system this might take a few hours per subject. To facilitate this step, the transformation matrix is already computed for the T1 images.
The data for it can be found under:
End of explanation
from os.path import join as opj
from nipype import Workflow, Node, MapNode
from nipype.interfaces.ants import ApplyTransforms
from nipype.interfaces.utility import IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.interfaces.fsl import Info
Explanation: Now, let's start with the ANTs normalization workflow!
Imports (ANTs)
First, we need to import all the modules we later want to use.
End of explanation
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'
# list of subject identifiers (remember we use only right handed subjects)
subject_list = ['02', '03', '04', '05', '07', '08', '09']
# task name
task_name = "fingerfootlips"
# Smoothing widths used during preprocessing
fwhm = [4, 8]
# Template to normalize to
template = '/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_T1.nii.gz'
Explanation: Experiment parameters (ANTs)
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script. And remember that we decided to run the group analysis without subject sub-01, sub-06 and sub-10 because they are left-handed (see this section).
End of explanation
# Apply Transformation - applies the normalization matrix to contrast images
apply2con = MapNode(ApplyTransforms(args='--float',
input_image_type=3,
interpolation='BSpline',
invert_transform_flags=[False],
num_threads=1,
reference_image=template,
terminal_output='file'),
name='apply2con', iterfield=['input_image'])
Explanation: Note if you're not using the corresponding docker image, than the template file might not be in your data directory. To get mni_icbm152_nlin_asym_09c, either download it from this website, unpack it and move it to /data/ds000114/derivatives/fmriprep/ or run the following command in a cell:
```bash
%%bash
curl -L https://files.osf.io/v1/resources/fvuh8/providers/osfstorage/580705089ad5a101f17944a9 \
-o /data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c.tar.gz
tar xf /data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c.tar.gz \
-C /data/ds000114/derivatives/fmriprep/.
rm /data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c.tar.gz
```
Specify Nodes (ANTs)
Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.
End of explanation
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['subject_id', 'fwhm_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('fwhm_id', fwhm)]
# SelectFiles - to grab the data (alternativ to DataGrabber)
templates = {'con': opj(output_dir, '1stLevel',
'sub-{subject_id}/fwhm-{fwhm_id}', '???_00??.nii'),
'transform': opj('/data/ds000114/derivatives/fmriprep/', 'sub-{subject_id}', 'anat',
'sub-{subject_id}_t1w_space-mni152nlin2009casym_warp.h5')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_subject_id_', 'sub-')]
subjFolders = [('_fwhm_id_%ssub-%s' % (f, sub), 'sub-%s_fwhm%s' % (sub, f))
for f in fwhm
for sub in subject_list]
subjFolders += [('_apply2con%s/' % (i), '') for i in range(9)] # number of contrast used in 1stlevel an.
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
Explanation: Specify input & output stream (ANTs)
Specify where the input data can be found & where and how to save the output data.
End of explanation
# Initiation of the ANTs normalization workflow
antsflow = Workflow(name='antsflow')
antsflow.base_dir = opj(experiment_dir, working_dir)
# Connect up the ANTs normalization components
antsflow.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
('fwhm_id', 'fwhm_id')]),
(selectfiles, apply2con, [('con', 'input_image'),
('transform', 'transforms')]),
(apply2con, datasink, [('output_image', 'norm_ants.@con')]),
])
Explanation: Specify Workflow (ANTs)
Create a workflow and connect the interface nodes and the I/O stream to each other.
End of explanation
# Create ANTs normalization graph
antsflow.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename=opj(antsflow.base_dir, 'antsflow', 'graph.png'))
Explanation: Visualize the workflow (ANTs)
It always helps to visualize your workflow.
End of explanation
antsflow.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: Run the Workflow (ANTs)
Now that everything is ready, we can run the ANTs normalization workflow. Change n_procs to the number of jobs/cores you want to use.
End of explanation
from os.path import join as opj
from nipype.interfaces.spm import Normalize12
from nipype.interfaces.utility import IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.algorithms.misc import Gunzip
from nipype import Workflow, Node
Explanation: Normalization with SPM12
The normalization with SPM12 is rather straightforward. The only thing we need to do is run the Normalize12 module. So let's start!
Imports (SPM12)
First, we need to import all the modules we later want to use.
End of explanation
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'
# list of subject identifiers
subject_list = ['02', '03', '04', '05', '07', '08', '09']
# task name
task_name = "fingerfootlips"
# Smoothing widths used during preprocessing
fwhm = [4, 8]
template = '/opt/spm12-r7219/spm12_mcr/spm12/tpm/TPM.nii'
Explanation: Experiment parameters (SPM12)
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script. And remember that we decided to run the group analysis without subject sub-01, sub-06 and sub-10 because they are left-handed (see this section).
End of explanation
# Gunzip - unzip the anatomical image
gunzip = Node(Gunzip(), name="gunzip")
# Normalize - normalizes functional and structural images to the MNI template
normalize = Node(Normalize12(jobtype='estwrite',
tpm=template,
write_voxel_sizes=[1, 1, 1]),
name="normalize")
Explanation: Specify Nodes (SPM12)
Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.
End of explanation
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['subject_id', 'fwhm_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('fwhm_id', fwhm)]
# SelectFiles - to grab the data (alternativ to DataGrabber)
templates = {'con': opj(output_dir, '1stLevel',
'sub-{subject_id}/fwhm-{fwhm_id}', '???_00??.nii'),
'anat': opj('/data/ds000114/derivatives', 'fmriprep', 'sub-{subject_id}',
'anat', 'sub-{subject_id}_t1w_preproc.nii.gz')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_subject_id_', 'sub-')]
subjFolders = [('_fwhm_id_%ssub-%s' % (f, sub), 'sub-%s_fwhm%s' % (sub, f))
for f in fwhm
for sub in subject_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
Explanation: Specify input & output stream (SPM12)
Specify where the input data can be found & where and how to save the output data.
End of explanation
# Specify Normalization-Workflow & Connect Nodes
spmflow = Workflow(name='spmflow')
spmflow.base_dir = opj(experiment_dir, working_dir)
# Connect up SPM normalization components
spmflow.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
('fwhm_id', 'fwhm_id')]),
(selectfiles, normalize, [('con', 'apply_to_files')]),
(selectfiles, gunzip, [('anat', 'in_file')]),
(gunzip, normalize, [('out_file', 'image_to_align')]),
(normalize, datasink, [('normalized_files', 'norm_spm.@files'),
('normalized_image', 'norm_spm.@image'),
]),
])
Explanation: Specify Workflow (SPM12)
Create a workflow and connect the interface nodes and the I/O stream to each other.
End of explanation
# Create SPM normalization graph
spmflow.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename=opj(spmflow.base_dir, 'spmflow', 'graph.png'))
Explanation: Visualize the workflow (SPM12)
It always helps to visualize your workflow.
End of explanation
spmflow.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: Run the Workflow (SPM12)
Now that everything is ready, we can run the SPM normalization workflow. Change n_procs to the number of jobs/cores you want to use.
End of explanation
from nilearn.plotting import plot_stat_map
%matplotlib inline
anatimg = '/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_T1.nii.gz'
Explanation: Comparison between ANTs and SPM normalization
Now that we ran the normalization with ANTs and SPM, let us compare their output.
End of explanation
plot_stat_map(
'/data/ds000114/derivatives/fmriprep/sub-02/anat/sub-02_t1w_space-mni152nlin2009casym_preproc.nii.gz',
title='anatomy - ANTs (normalized to ICBM152)', bg_img=anatimg,
threshold=200, display_mode='ortho', cut_coords=(-50, 0, -10));
plot_stat_map(
'/output/datasink/norm_spm/sub-02_fwhm4/wsub-02_t1w_preproc.nii',
title='anatomy - SPM (normalized to SPM\'s TPM)', bg_img=anatimg,
threshold=200, display_mode='ortho', cut_coords=(-50, 0, -10));
Explanation: First, let's compare the normalization of the anatomical images:
End of explanation
plot_stat_map(
'/output/datasink/norm_ants/sub-02_fwhm8/con_0005_trans.nii', title='contrast5 - fwhm=8 - ANTs',
bg_img=anatimg, threshold=2, vmax=5, display_mode='ortho', cut_coords=(-39, -37, 56));
plot_stat_map(
'/output/datasink/norm_spm/sub-02_fwhm8/wcon_0005.nii', title='contrast5 - fwhm=8 - SPM',
bg_img=anatimg, threshold=2, vmax=5, display_mode='ortho', cut_coords=(-39, -37, 56));
from nilearn.plotting import plot_glass_brain
plot_glass_brain(
'/output/datasink/norm_ants/sub-02_fwhm8/con_0005_trans.nii', colorbar=True,
threshold=3, display_mode='lyrz', black_bg=True, vmax=6, title='contrast5 - fwhm=8 - ANTs')
plot_glass_brain(
'/output/datasink/norm_spm/sub-02_fwhm8/wcon_0005.nii', colorbar=True,
threshold=3, display_mode='lyrz', black_bg=True, vmax=6, title='contrast5 - fwhm=8 - SPM');
Explanation: And what about the contrast images for Finger > others?
End of explanation |
11,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
5. Imaging
Previous
Step1: Import section specific modules
Step2: 5.3 Gridding and Degridding for using the FFT <a id='imaging
Step3: Figure
Step4: Figure
Step5: Figure
Step6: Figure
Step7: Figure
Step9: Figure
Step11: Next comes a simplified imaging step using the FFT and gridding. The visibilities on the irregularly-spaced u,v tracks are resampled onto regular coordinates. This is done by weighting and smearing each measured visibility out onto the regular coordinates in the vacinity of its u,v coordinate. After resampling the inverse FFT is used to transform the measurements in the spatial frequency domain to those in the spacial domain, thereby approximately reconstructing the model sky we started with. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
from IPython.display import Image, display, clear_output
from ipywidgets import HBox, Label, FloatSlider, Layout
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
5. Imaging
Previous: 5.2 Sampling functions and PSFs
Next: 5.4 The Dirty Image and Visibility Weights
Import standard modules:
End of explanation
from IPython.display import Image
import track_simulator
import AA_filter
Explanation: Import section specific modules:
End of explanation
Image(filename="figures/gridding_illustration.png")
Explanation: 5.3 Gridding and Degridding for using the FFT <a id='imaging:sec:gridding'></a>
In the previous section several sampling functions were presented. There the sampling functions were already neatly discretized and displayed as images. Each image was a grid of pixels (all with the same size). Fourier inverting such regularly sampled data is done with a fast Fourier Transform (FFT) algorithm. This is called the "fast" Fourier Transform because it is computationally more efficient that the Direct Fourier Transform (DFT). To give an idea of how fast this algorithm is, if there are $N^2$ pixels in the image, the FFT takes roughly $2N^2\log(N)$ computational operations. In contrast, the complexity of the DFT also depends on the number of visibilities $M$ and takes $N^2M$ steps. Here $M\approx N^2$ and for each pixel we must take $M$ complex exponentiations and multiplications. Specifically, the DFT calculates the intensity of each pixel:
\begin{equation}
I(l,m) = \sum_{k=0}^{M-1}V_k(u,v)e^{2\pi i (lu+mv)}\text{, }V_k\text{ are the M measurements taken by the telescope}
\end{equation}
From this it should be clear that as the number of baselines or observation time is increased, the FFT approach would be far less time-consuming than the the direct approach. Unfortunately radio interferometers don't take measurements at regular intervals, and thus an FFT cannot be used on the observation data directly. Instead the data has to be resampled onto a grid with points spaced at regular intervals before taking the FFT. This resampling process (called gridding) and its inverse (called degridding) is the topic of this section. The big idea here is that we do gridding and degridding because it enables us to create an image faster than if we did the DFT.
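To make the operation counts concrete, the following toy comparison evaluates the direct Fourier sum pixel by pixel and checks it against numpy's inverse FFT; it assumes the visibilities already lie on a regular $N\times N$ grid, which is exactly the assumption that gridding has to enforce:
```python
# Toy comparison of a brute-force DFT image against numpy's inverse FFT on an
# already-regular grid (illustrative only; real measurements are not gridded yet).
import numpy as np

N = 32                                                   # image is N x N pixels
vis_grid = np.random.randn(N, N) + 1j * np.random.randn(N, N)

l = np.fft.fftfreq(N)                                    # normalised l coordinates
m = np.fft.fftfreq(N)                                    # normalised m coordinates
u = np.arange(N)
v = np.arange(N)

dirty_dft = np.zeros((N, N), dtype=complex)              # O(N^2 * M) nested loops
for i in range(N):
    for j in range(N):
        phase = 2j * np.pi * (l[i] * u[:, None] + m[j] * v[None, :])
        dirty_dft[i, j] = np.sum(vis_grid * np.exp(phase))

dirty_fft = np.fft.ifft2(vis_grid) * N * N               # O(N^2 log N), same result
print("max abs difference:", np.max(np.abs(dirty_dft - dirty_fft)))
```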
As you will see later some u,v space image deconvolution algorithms such as the Cotton-Schwab ➞ major-minor cycle algorithm require that sources in image space are reconverted back into the non-regular measurement space. Here an accurate degridding operation is required to "interpolate" regularly sampled visibilities back onto the u,v tracks shown above.
In addition to the issue of resampling when using the FFT transform approach, is the issue of aliasing. The FFT assumes that the input signal (here the spatial frequency domain) is periodic in nature. The resultant image constructed by resampling and inverse FFT therefore repeats at regular intervals: sources near the top of the image are aliased back into the image at the bottom for instance. This introduces the necessity to filter the image with a filter that only passes signal that falls within the field of view being reconstructed. Aliasing is an effect of Nyquist sampling ($\S$ 2.9 ➞) the visibilities based on the grid size. An example of this form of aliasing will be given later on.
The following points will be discussed in this chapter:
1. Image resolution and pixel size
2. Gridding and degridding, along with a discussion on the use of anti-aliasing filters
3. Sample code for degridding a model sky, and gridding and inverting visibilities to form a dirty image.
5.3.1 Image Resolution and Pixel Size
When generating an image from visibilities, using either a direct or fast Fourier transform, two parameters need to be defined: the resolution of each pixel and the extent of the image either as the number of pixels or as the size of the field of view (depending on the particular imager). An image will be a two-dimensional array of size $N_l \times N_m$ and each pixel will have a resolution of $(\Delta \theta_l, \Delta \theta_m)$ if we are making the small angle approximation for the field of view. Recall that the image size is $l' = \cos{\theta_l}$, $m' = \cos{\theta_m}$, the resolution is $\Delta l = \cos{\Delta \theta_l}$, $\Delta m = \cos{\Delta \theta_m}$ and in the small angle approximation $\Delta l \sim \Delta \theta_l$, $\Delta m \sim \Delta \theta_m$. Though, many imagers can create images which break the small angle approximation the notation is retained. There are a number of techniques for representing a point in spherical coordinates, via a non-linear transform, on a two-dimensional plane. In radio interferometry the standard technique is SIN-projection, see <cite data-cite='Greisen1994'>AIPS Memo 27</cite> ⤴ for a detailed discussion of different coordinate projections.
Given the resolution $(\Delta \theta_l, \Delta \theta_m)$ and the desired field of view $(\theta_l, \theta_m)$, the number of pixels in the image (the image size) is
$$N_l = \frac{\theta_l}{\Delta \theta_l}$$
$$N_m = \frac{\theta_m}{\Delta \theta_m}$$
Recall that the uv tracks samples spatial frequency, so therefore the chosen image resolution (cell size) must satisfy the Nyquist relation. For a given interferometer resolution and image size the image domain resolution/grid size $(\Delta \theta_l, \Delta \theta_m)$ is
$$\Delta \theta_l = \frac{1}{2N_l \Delta u} = \frac{1}{2\max{(||\min{u}||,\max{u})}} \text{ radians}$$
$$\Delta \theta_m = \frac{1}{2N_m \Delta v} = \frac{1}{2\max{(||\min{v}||,\max{v})}} \text{ radians}$$
And the number of pixels is unchanged $N_u = N_l$, $N_v = N_m$.
An important note about the number of pixels is that one should try to use values which are powers of 2, i.e. $N_l = 2^j$, $N_m = 2^k$ for some positive integers $j,k$. This is because of how FFTs are implemented: the run-time efficiency of an FFT is best for input lengths which are powers of 2 and worst when the input length is a prime number. For example, the time required to generate a 256 by 256 pixel ($2^8$ by $2^8$) image will be less than that for a 251 by 251 pixel image, even though the resulting image has more pixels. Also note that interferometric images are almost always square by convention.
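As an illustrative sketch only (the helper below is not part of the original toolkit), the two relations above can be wrapped into a small function that returns a Nyquist cell size and a power-of-two pixel count:
import numpy as np
def image_geometry(max_uv_wavelengths, fov_deg):
    cell_rad = 1.0 / (2.0 * max_uv_wavelengths)        # Nyquist cell size in radians
    n_pix = fov_deg / np.rad2deg(cell_rad)             # pixels needed to cover the field
    n_pix = int(2 ** np.ceil(np.log2(n_pix)))          # round up to the next power of 2
    return np.rad2deg(cell_rad) * 3600.0, n_pix        # (cell size in arcsec, pixel count)
print(image_geometry(max_uv_wavelengths=5000.0, fov_deg=1.0))   # roughly 20.6 arcsec cells, 256 pixels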
When using the Fast Fourier Transform for the inversion between uv and image space the image has to be scaled to the correct size using the Similarity property of the FFT:
$$V(au, av) \rightleftharpoons \frac{1}{|a|}I(\frac{l}{a},\frac{m}{a})$$
This implies that we can scale the image to $-0.5N_x\Delta\theta_l \leq l \leq 0.5N_x\Delta\theta_l$ and $-0.5N_y\Delta\theta_m \leq m \leq 0.5N_y\Delta\theta_m$ by scaling the uv tracks with the image size (in radians). This is in contrast to the approach taken when using the DFT: in the direct per-pixel evaluation of the Fourier sum the resolution can be specified directly.
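A short sketch of that scaling (hypothetical names; the same operation appears in the simulator code further down as scaled_uv):
import numpy as np
def scale_uv_to_grid(uv_wavelengths, n_pix, cell_rad):
    # multiplying u,v (in wavelengths) by the image extent in radians expresses them
    # in units of grid cells, so the FFT of the grid spans exactly the chosen field
    return uv_wavelengths * (n_pix * cell_rad)
uv = np.array([[1500.0, -700.0], [4200.0, 300.0]])
print(scale_uv_to_grid(uv, n_pix=256, cell_rad=1.0 / (2 * 5000.0)))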
5.3.2 Gridding and Degridding
As you may suspect there are many ways to interpolate data to and from regularly-spaced coordinates. The most widely-used interpolation technique used in radio imaging programs, such as lwimager, is known as "convolutional-resampling". In this technique each visibility is weighted and "smeared out" onto grid points that lie within a small distance from the original coordinate.
End of explanation
N = 30
dx = 1
a = np.arange(N)
M = 75
tap_pos = (N//2)*dx
conv_hsup = 5*dx
conv_x = np.linspace(-conv_hsup,conv_hsup,1000)
vis_x = np.sort(np.random.rand(M)*N*dx)
vis = 1.5+np.random.rand(M)*0.7 + 0.3
plt.figure(figsize=(13,5))
ax1 = plt.axes()
ax1.axes.get_yaxis().set_visible(False)
for x in a*dx:
plt.plot([x,x],[-1.5,-1.0],'k')
plt.plot(vis_x,vis,'b.')
plt.plot(vis_x,np.ones([M])*-1.25,'g.')
plt.plot([vis_x[0],vis_x[0]],[-1.25,vis[0]],'b--')
plt.plot(tap_pos+conv_x,np.sinc(conv_x),'r')
plt.arrow(tap_pos-conv_hsup, -0.25, 0, 2.5+0.5, head_width=0.0, head_length=0.0, fc='m', ec='m')
plt.arrow(tap_pos+conv_hsup, -0.25, 0, 2.5+0.5, head_width=0.0, head_length=0.0, fc='m', ec='m')
plt.arrow(tap_pos-conv_hsup, -0.25, conv_hsup*2, 0, head_width=0.0, head_length=0.0, fc='m', ec='m')
plt.arrow(tap_pos-conv_hsup, 2.5+0.25, conv_hsup*2, 0, head_width=0.0, head_length=0.0, fc='m', ec='m')
plt.text(tap_pos+0.75, -0.70, "$\sum{\mathscr{V}(x_i)C(a\Delta{x}-x_i)}$", fontsize=11,color='m')
plt.text(tap_pos+conv_hsup+0.15, 0, "$C$", fontsize=13,color='r')
plt.text(0.5, 2.5, "$\mathscr{V}(x_i),x_i\in\mathbb{R}$", fontsize=16,color='b')
plt.arrow(tap_pos, -0.25, 0, -0.45, head_width=0.75, head_length=0.3, fc='m', ec='m')
plt.ylim(-1.75,3.0)
plt.xlim(-0.5*dx,N*dx)
plt.xlabel("Grid position ($a\Delta{x}$)",fontsize=15)
plt.show()
Explanation: Figure: Each observed visibility is centered at some sampling coordinate
in continuous u,v space and is weighted with some function $C(u,v)$, which extends only over a finite “full support” region as illustrated. The result is either binned onto a regularly spaced grid when gridding, or gathered from this grid when degridding. After all of the observed visibilities have been gridded an Inverse Fast Fourier Transform is performed to create an image of the sky. The reverse operations are done when simulating a set of visibility measurements from a model sky.
The value at each grid point, then, is a weighted accumulation of all the nearby visibilities. In one dimension this can be stated as (visually illustrated below):
\begin{equation}
(\forall a \in \{1,2,\dots,N\}) \mathscr{V}(a\Delta{x}) = \sum_{\substack{
i | x_i \geq a\Delta{x}-\text{half support}, \\
x_i \leq a\Delta{x}+\text{half support}}
}{\mathscr{V}(x_i) \, C(a\Delta{x}-x_i)}
\end{equation}
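A minimal 1-D sketch of this accumulation (hypothetical names, with a simple triangle weight standing in for $C$) could look like this:
import numpy as np
def grid_1d(vis, x, n_cells, dx, half_sup=2):
    grid = np.zeros(n_cells, dtype=complex)
    for value, coord in zip(vis, x):
        centre = int(round(coord / dx))                       # nearest grid point
        for a in range(max(centre - half_sup, 0), min(centre + half_sup, n_cells - 1) + 1):
            tap = max(0.0, 1.0 - abs(a * dx - coord) / (half_sup * dx))   # triangle weight C
            grid[a] += value * tap
    return grid
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 30, 75))                           # irregular sample coordinates
vis = rng.normal(size=75) + 1j * rng.normal(size=75)
print(grid_1d(vis, x, n_cells=30, dx=1.0)[:5])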
End of explanation
Image(filename="figures/oversampled_filter_illustration.png")
Explanation: Figure: Here we have illustrated the gridding process in one dimension. Given a continuous visibility function sampled at some non-regular points in $x$ and a convolution function, $C$, the values stored at regular intervals (black bars) are approximately a convolution between the visibility function and the convolution function. The coordinates at which the visibility function is sampled are plotted with green dots in-between the regularly sampled grid positions.
The weighting function, $C$, can be any of a number of functions proposed in the literature. These include linear, Lagrange, sinc (including one of the many window functions), Gaussian, modified B-spline, etc.
You may have noticed that the interpolating function above is remarkably close to that of a discrete convolution. If the resampling were done on data that was regularly sampled and the convolution function evaluated at these regular discrete steps, then the function would just be the ordinary discrete convolution. However, the function as it stands is not quite a convolution by the strictest definition of the word. Gridding and degridding should be thought of as approximations to the discrete convolution. Nevertheless we will use the regular convolution notation in our discussion.
For those coming from a signal processing background it is useful to think of the convolutional gridding and degridding operations in terms of the ordinary upsampling and downsampling operations. In gridding, as with traditional upsampling, the space in-between samples is filled with zero values. The only difference is that with gridding the original measurements are not regularly spaced, as would be the case with upsampling. Just as with upsampling, it is then necessary to assign values to these new zero values in-between the measured values. With gridding the values are smeared out over the grid points within some area of support.
This is a very important point. During the gridding process, because the uv plane is not fully sampled, many of the grid points are assigned a value of zero! Now, there is essentially no chance that a gridded visibility value is actually zero, but since we have not sampled that point in the visibility domain the only option is to assign that pixel some value. Zero is convenient, but not the true value. We will come back to this point in the next chapter on deconvolution ➞, which requires us to include additional knowledge to make an informed guess about what the value of these pixels could be.
With this understanding in hand we can define gridding and degridding more rigorously:
\begin{equation}
\begin{split}
V_\text{gridded}[u,v]&=[(\mathscr{V}(u,v) \, S(u,v))\circ C(u,v)] \, III[u,v]\\
V_\text{degridded}(u,v)&=[V_\text{gridded}[u,v]\circ C(u,v)] \, S(u,v)\\
\end{split}
\end{equation}
In gridding the sampled visibilities are convolved with a convolution function and then discretized onto regular points by the shah (bed-of-nails) function ($\S$ 2.2 ➞). In degridding the opposite is done: the regularly sampled discrete values are convolved and sampled along the sampling tracks in the u,v plane. The convolution function smears (gridding) and gathers (degridding) the visibilities over / from some area of support before discretizing the visibilities to new coordinates. Ideally this function would be computed during the gridding and degridding operations; however, considering that the processing costs of gridding and degridding both scale as $MC_\text{sup}^2$, the function can be too computationally expensive to evaluate for every visibility and is normally pretabulated for a given support size. Additionally it is important to sample this function much more densely than the spacing between grid cells; interferometers take measurements in the spatial frequency domain and any large snapping / rounding operation on the coordinates of the samples will result in a decorrelation of the structural information about the image. The figure below illustrates how values are picked from the oversampled filter.
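As an illustration only (this is not the implementation of the AA_filter class used later in this section), pre-tabulating an oversampled, padded sinc kernel might look roughly like this:
import numpy as np
def tabulate_sinc(half_sup=3, oversample=63):
    full_sup = 2 * half_sup + 1 + 2                     # support plus one padding cell per side
    no_taps = full_sup + (full_sup - 1) * (oversample - 1)
    taps = np.arange(-(no_taps // 2), no_taps // 2 + 1) / float(oversample)
    return np.sinc(taps)                                # evaluated once, indexed during gridding
print(tabulate_sinc().shape)                            # (505,) taps for half_sup=3, oversample=63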
End of explanation
Image(filename="figures/NN_interpolation_aliasing.png", width=512)
Image(filename="figures/AA_kernel_alias_reduction.png", width=512)
Explanation: Figure: Here the indexing for a padded, oversampled filter is illustrated for a 3-cell full-support region (half support of 1 to both sides of the centre value), padded with one value on both sides. The filter is 5x oversampled, as indicated by the spaces between the asterisks. The bars represent the grid resolution ($\Delta{u}$ or $\Delta{v}$). If the measured uv coordinate falls exactly on the nearest grid cell (red dot) then values 6,11 and 16 are selected as interpolation coefficients. If the uv value is slightly offset, for instance $\text{round}(\text{fraction}(u, v)m_\text{oversample factor})$ = 2 (green dot), then 8, 13 and 18 are selected for the 3 interpolation coefficients. In other words: a denser bed of nails is placed over the bed of nails of the grid and the closest set of coefficients for the convolution are selected.
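The index arithmetic described above can be sketched as follows (hypothetical names; compare with the frac_u_offset computation in the degridding code further down):
import numpy as np
def filter_tap_indices(scaled_u, half_sup=3, oversample=63):
    disc_u = int(np.round(scaled_u))                                  # snap to the nearest cell
    frac = int((1 + half_sup + (disc_u - scaled_u)) * oversample)     # fractional-offset index
    return [conv_u * oversample + frac for conv_u in range(-half_sup, half_sup + 1)]
print(filter_tap_indices(10.0))    # on-cell: the central taps of the oversampled kernel
print(filter_tap_indices(10.3))    # offset by 0.3 of a cell: the whole slice shifts by ~19 taps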
More importantly, the alias-reduction properties of the convolution filter being used are essential to the FFT approach. By the convolution theorem the reconstructed image of the radio sky can be stated as follows:
\begin{equation}
I_\text{dirty}[l,m] = ([I(l,m)\circ\text{PSF}(l,m)] \, c(l,m))\circ\mathscr{F}\{III\}[l,m]
\end{equation}
The Fourier transform of the shah function $\mathscr{F}\{III\}[l,m]$ is a series of periodic functions in the image domain. Convolution with these periodic functions replicates the field of view at a period of $M\Delta{\theta_l}$ and $N\Delta{\theta_m}$ for an $M\times N$ pixel image, and it is this aliasing effect that must be stopped. To that end one would hope that the Fourier transform of the convolution filter, $c(l,m)$, maximizes the following ratio:
\begin{equation}
\frac{\int_\text{FOV} \lvert c(l,m) \rvert^2dS}{\int_{-\infty}^\infty \lvert c(l,m) \rvert^2dS}
\end{equation}
Simply stated, it is desirable that the function $c$ is only non-zero over a small central region: the field of view.
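The ratio can be estimated numerically for a candidate 1-D kernel; the sketch below (an assumption-laden illustration, not code from the original) compares a unit box, i.e. nearest-neighbour cell-summing, against a truncated sinc:
import numpy as np
def fov_energy_fraction(kernel_taps, oversample):
    # image-plane power response of a kernel tabulated `oversample` times per cell;
    # the reconstructed field of view corresponds to the central 1/oversample of the band
    power = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(kernel_taps)))) ** 2
    n = power.size
    half_fov_bins = int(round(n / (2.0 * oversample)))
    return power[n // 2 - half_fov_bins: n // 2 + half_fov_bins + 1].sum() / power.sum()
oversample = 15
taps = np.arange(-7 * oversample, 7 * oversample + 1) / float(oversample)
box = np.where(np.abs(taps) <= 0.5, 1.0, 0.0)
print(fov_energy_fraction(box, oversample))             # box: a sizeable share of energy leaks outside the FOV
print(fov_energy_fraction(np.sinc(taps), oversample))   # truncated sinc: most of the energy stays inside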
Both the remarks about the accuracy and the anti-aliasing properties of the filter preclude using a nearest-neighbour approach to interpolating points to and from regular coordinates. Interpolation accuracy takes precedence in degridding, while alias-reduction is important for gridding. Nearest-neighbour interpolation (also known as cell-summing in older literature) simply accumulates the neighbouring points that fall within a rectangular region around the new coordinate, without considering the distance those points are from the new coordinate. The Fourier transform of this box function is an infinite sinc function, which ripples out slowly towards infinity, and doesn't stop much of the aliasing effect. Convolutional gridding/degridding is therefore a more attractive approach, because the distance between the grid point and the measured uv point is taken into account when selecting a set of convolution weights.
The observation about the Fourier transform of the box function leads us to a partial solution for the aliasing problem, in that convolving with an infinite sinc would yield an image tapered by a box function. Unfortunately this is not computationally feasible and instead the best option is to convolve with either a truncated sinc function, or some other function that has a similarly centre-heavy Fourier transform and preferably tapers off reasonably quickly. The images below illustrate the significant improvement from using a truncated sinc function instead of nearest-neighbour interpolation.
End of explanation
half_sup = 6
oversample = 15
full_sup_wo_padding = (half_sup * 2 + 1)
full_sup = full_sup_wo_padding + 2 #+ padding
no_taps = full_sup + (full_sup - 1) * (oversample - 1)
taps = np.arange(-no_taps//2,no_taps//2 + 1)/float(oversample)
#unit box
box = np.where((taps >= -0.5) & (taps <= 0.5),
np.ones([len(taps)]),np.zeros([len(taps)]))
fft_box = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(box))))
#truncated (boxed) sinc
sinc = np.sinc(taps)
fft_sinc = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(sinc))))
#gaussian sinc
alpha_1 = 1.55
alpha_2 = 2.52
gsinc = np.sin(np.pi/alpha_1*(taps+0.00000000001))/(np.pi*(taps+0.00000000001))*np.exp(-(taps/alpha_2)**2)
fft_gsinc = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(gsinc))))
#plot it up
plt.figure(figsize=(7, 5), dpi=80)
l = np.arange(-(no_taps)//2,(no_taps)//2+1) * (1.0/oversample)
a, = plt.plot(2*l, 10.*np.log10(fft_box))
b, = plt.plot(2*l, 10.*np.log10(fft_sinc))
c, = plt.plot(2*l, 10.*np.log10(fft_gsinc))
ax = plt.gca()
ax.set_xlim(0,no_taps//2 * (1.0/oversample))
#ax.set_yscale("log", nonposy='clip')
plt.legend([a,b,c],["Box","Sinc","Gaussian Sinc"])
plt.xlabel("$2\Delta{u}l$")
plt.ylabel("Magnitude of $c(l)$ (dB)")
plt.title("Magnitude of Fourier transforms of several convolution functions")
plt.show()
Explanation: Figure: Above are two synthesized images of a grid of point sources: the first uses cell-summing (nearest-neighbour) interpolation and the second uses convolutional resampling with a simple truncated sinc function. In the first, the sources of this grid sky pattern that fall slightly outside the field of view are aliased back into the field of view. In the second, the aliased energy is limited by the box-like response of the sinc function.
Below, the magnitudes of the sidelobes of the Fourier transforms of several functions are plotted. The sidelobes of the Fourier transform of the box function are significantly higher than those of the truncated and windowed sinc functions.
End of explanation
%run jvla_d_constants
RA = 0.0
DECLINATION = 90.0
global Nx, Ny, uvw, scaled_uv, model_sky, model_regular, max_uv, cell_size_l, cell_size_m
cellsize_slider=FloatSlider(min=0.25, #subnyquist
max=2.5,
value=1.0,
step=0.1,
continuous_update=False)
sampling_field = HBox([Label("Nyquist rate scaling factor (1x=critical sampling)"), cellsize_slider])
imsize_slider=FloatSlider(min=1,
max=3,
value=2.0,
step=0.1,
continuous_update=False)
imsize_field = HBox([Label("Image size scaling factor (1x=1deg)"), imsize_slider])
def interact_plot(key):
global Nx, Ny, uvw, scaled_uv, model_sky, model_regular, max_uv, cell_size_l, cell_size_m
#User defined scaling factors:
clear_output(wait=True)
sampling_tweek_factor = cellsize_slider.value
im_size_tweek_factor = imsize_slider.value
#Set up interferometer sampling pattern
uvw = track_simulator.sim_uv(RA, DECLINATION, 1.5, 60/3600.0, ENU, ARRAY_LATITUDE)
#Work out the required sampling rate (let's just do a square grid for simplicity)
max_uv = np.max(np.abs(uvw[:,0:2]/CENTRE_CHANNEL))
print "Maximum extent in uv: %f" % (max_uv)
cell_size_l = cell_size_m = np.rad2deg((1 / (2 * max_uv)) / sampling_tweek_factor)
print "Cell sizes in l,m: (%f,%f) arcsecs" % (cell_size_l*3600.0, cell_size_m*3600.0)
#arbitrarily choose the field size to be 1.0 square degrees
FIELD_SIZE = 1.0 * im_size_tweek_factor
#Then work out the number of pixels required to create a field of this size, given the nyquist image resolution
Nx = int(np.round(FIELD_SIZE / cell_size_l))
Ny = int(np.round(FIELD_SIZE / cell_size_m))
print "Image size in pixels to cover %.3f square degrees: (%d,%d)" % (FIELD_SIZE, Nx, Ny)
#Setup a random model sky consisting of point sources:
model_sky = np.zeros([Nx,Ny])
for i in range(15):
model_sky[int(np.random.rand()*Nx),int(np.random.rand()*Ny)] = np.random.rand()*5.0
model_regular = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(model_sky)))
#In the DFT we can pick the sampling rate in l and m directly when we evaluate a sum for each pixel.
#However when using the FFT we have to employ the similarity theorem: a multiplication in one domain
#results in a division in the other domain. Thus we scale the invertion such that it ranges from
#-0.5*N*cell_size <= pixel <= 0.5*N*cell_size
scaled_uv = np.copy(uvw[:,0:2])
scaled_uv[:,0] *= np.deg2rad(cell_size_l * Nx)
scaled_uv[:,1] *= np.deg2rad(cell_size_m * Ny)
#Finally plot up the results
plt.figure(figsize=(15, 15))
plt.subplot(131)
plt.title("Model sky")
plt.ticklabel_format(useOffset=False)
plt.imshow(model_sky,cmap="gray", extent=[RA - Nx / 2 * cell_size_l, RA + Nx / 2 * cell_size_l,
DECLINATION - Ny / 2 * cell_size_m, DECLINATION + Ny / 2 * cell_size_m])
plt.xlabel("RA")
plt.ylabel("DEC")
plt.subplot(132)
plt.title("Logscale amplitudes of visibilliy space")
plt.imshow(10*np.log10(np.abs(model_regular+0.000000000001)), extent=[-max_uv/1e4, +max_uv/1e4,
-max_uv/1e4, +max_uv/1e4])
plt.plot(uvw[:,0]/CENTRE_CHANNEL/1e4,
uvw[:,1]/CENTRE_CHANNEL/1e4,
"k.",label="Baselines", markersize=1)
plt.xlabel("uu $(k\lambda)$")
plt.ylabel("vv $(k\lambda)$")
plt.subplot(133)
plt.title("Phase of visibility space")
plt.imshow(np.angle(model_regular), extent=[-max_uv/1e4, +max_uv/1e4,
-max_uv/1e4, +max_uv/1e4])
plt.plot(uvw[:,0]/CENTRE_CHANNEL/1e4,
uvw[:,1]/CENTRE_CHANNEL/1e4,
"k.",label="Baselines", markersize=1)
plt.xlabel("uu $(k\lambda)$")
plt.ylabel("vv $(k\lambda)$")
plt.figure(figsize=(15, 15))
plt.subplot(121)
plt.title("Scaled tracks over log amplitude grid")
plt.imshow(10*np.log10(np.abs(model_regular+0.000000000001)), extent = [0,Nx,0,Ny])
plt.plot(scaled_uv[:,0]/CENTRE_CHANNEL + Nx / 2,
scaled_uv[:,1]/CENTRE_CHANNEL + Ny / 2,
"k.",label="Baselines", markersize=1)
plt.xlabel("$N_x$")
plt.ylabel("$N_y$")
plt.subplot(122)
plt.title("Scaled tracks over phase grid")
plt.imshow(np.angle(model_regular), extent = [0,Nx,0,Ny])
plt.plot(scaled_uv[:,0]/CENTRE_CHANNEL + Nx / 2,
scaled_uv[:,1]/CENTRE_CHANNEL + Ny / 2,
"k.",label="Baselines", markersize=1)
plt.xlabel("$N_x$")
plt.ylabel("$N_y$")
plt.show()
cellsize_slider.observe(interact_plot)
display(sampling_field)
imsize_slider.observe(interact_plot)
display(imsize_field)
interact_plot("")
Explanation: Figure: The magnitudes of the Fourier transforms of various functions. It is desirable that most of the energy of these functions fall within some central region and that the response drops off sharply at the edge of this central region
After Fourier transformation the effects of the convolution function on the image can be mitigated by point-wise dividing the image through by the Fourier transform of the convolution function, $c(l,m)$. This has the effect of flattening the response of the passband, by removing the tapering towards the edges of the image, but raises the amplitude of any aliased sources at the edge of the image.
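A sketch of that correction step (often called grid or taper correction) for a symmetric, separable kernel is given below; the names are hypothetical and real imagers obtain $c(l,m)$ from their own filter machinery:
import numpy as np
def taper_1d(n_pix, kernel_taps, oversample):
    # Fourier transform of the 1-D gridding kernel evaluated at the n_pix image pixels
    m = (np.arange(kernel_taps.size) - kernel_taps.size // 2) / float(oversample)  # kernel offsets in cells
    j = np.arange(n_pix) - n_pix // 2                                              # pixel offsets from centre
    c = (kernel_taps[None, :] * np.cos(2 * np.pi * np.outer(j / float(n_pix), m))).sum(axis=1)
    return c / c.max()
def apply_taper_correction(dirty_image, kernel_taps, oversample):
    cy = taper_1d(dirty_image.shape[0], kernel_taps, oversample)
    cx = taper_1d(dirty_image.shape[1], kernel_taps, oversample)
    return dirty_image / np.outer(cy, cx)    # flattens the passband; aliased sources near the edge are boosted too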
In practice the prolate spheroidal functions are used in imaging programs such as lwimager, but the definition of these functions is beyond the scope of the introductory discussion here and the reader is referred to the work of Donald Rhodes, <cite data-cite='rhodes1970spheroidal'>On the Spheroidal Functions</cite> ⤴, for a detailed discussion of their definition and a proof of their aliasing reduction properties.
It is also worth noting that the convolution functions used in gridding and degridding need not be the same function. In degridding the focus is solidly on the accuracy of the predicted visibility. Here it can be advantageous to minimize the difference between a direct transformation approach and a Fast Fourier Transform approach with degridding, see for instance the discussion by Sze Tan, <cite data-cite='tan1986aperture'>Aperture-synthesis mapping and parameter estimation</cite> ⤴ for further detail.
5.3.3 Example Simulator and Imager
We conclude this section with some sample code to illustrate prediction and imaging using resampling and the FFT. To start off, let's set up the uv tracks as would be seen by the JVLA in D configuration. Recall that image resolution is given by the longest track in u and v and that the baseline is always measured in wavelengths. You can play around with the Nyquist rate and image sizes. Notice how sub-Nyquist sampling adversely affects the angular resolution of your image.
End of explanation
# %load convolutional_degridder.py
import numpy as np
def fft_degrid(model_image, uvw, ref_lda, Nx, Ny, convolution_filter):
    """Convolutional degridder (continuum)
Keyword arguments:
model_image --- Model image
uvw --- interferometer's scaled uvw coordinates
(Prerequisite: these uv points are already scaled by the similarity
theorem, such that -N_x*Cell_l*0.5 <= theta_l <= N_x*Cell_l*0.5 and
-N_y*Cell_m*0.5 <= theta_m <= N_y*Cell_m*0.5)
ref_lda --- array of reference lambdas (size of vis channels)
Nx,Ny --- size of image in pixels
convolution_filter --- pre-instantiated AA_filter anti-aliasing
    filter object
    """
assert model_image.ndim == 3
filter_index = \
np.arange(-convolution_filter.half_sup,convolution_filter.half_sup+1)
model_vis_regular = np.zeros(model_image.shape, dtype=np.complex64)
    for p in range(model_image.shape[0]):
model_vis_regular[p, :, :] = \
np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(model_image[p, :, :])))
vis = \
np.zeros([uvw.shape[0],
ref_lda.shape[0],
model_image.shape[0]],
dtype=np.complex)
    for r in range(uvw.shape[0]):
        for c in range(vis.shape[1]):
scaled_uv = uvw[r,:] / ref_lda[c]
disc_u = int(np.round(scaled_uv[0]))
disc_v = int(np.round(scaled_uv[1]))
frac_u_offset = int((1 + convolution_filter.half_sup +
(-scaled_uv[0] + disc_u)) *
convolution_filter.oversample)
frac_v_offset = int((1 + convolution_filter.half_sup +
(-scaled_uv[1] + disc_v)) *
convolution_filter.oversample)
if (disc_v + Ny // 2 + convolution_filter.half_sup >= Ny or
disc_u + Nx // 2 + convolution_filter.half_sup >= Nx or
disc_v + Ny // 2 - convolution_filter.half_sup < 0 or
disc_u + Nx // 2 - convolution_filter.half_sup < 0):
continue
for conv_v in filter_index:
v_tap = \
convolution_filter.filter_taps[conv_v *
convolution_filter.oversample
+ frac_v_offset]
grid_pos_v = disc_v + conv_v + Ny // 2
for conv_u in filter_index:
u_tap = \
convolution_filter.filter_taps[conv_u *
convolution_filter.oversample
+ frac_u_offset]
conv_weight = v_tap * u_tap
grid_pos_u = disc_u + conv_u + Nx // 2
for p in range(vis.shape[2]):
vis[r, c, p] += \
model_vis_regular[p,
grid_pos_v,
grid_pos_u] * conv_weight
return vis
tabulated_filter = AA_filter.AA_filter(3,63,"sinc")
vis = fft_degrid(model_sky.reshape(1, Ny, Nx), scaled_uv, np.array([CENTRE_CHANNEL]), Nx, Ny, tabulated_filter)
Explanation: Figure: The simulated model sky (l,m space) and its Fourier transform in the visibility space (u,v space). Overlaid on top are the uv tracks for JVLA D. In the bottom plots the scaled tracks are overlaid on the grid predicted from the model. As you can see, if sub-Nyquist rates are chosen, angular resolution is lost because the long baselines fall outside the grid and must be discarded during imaging.
To complete the prediction (also known as the "forward" step) the measurements are resampled onto the u,v tracks of the interferometer using the degridding algorithm discussed above. Measurements are gathered and weighted from the vicinity of each of the points along the sampling track in order to "predict" a value at the u,v coordinate.
End of explanation
# %load convolutional_gridder.py
import numpy as np
def grid_ifft(vis, uvw, ref_lda, Nx, Ny, convolution_filter):
    """Convolutional gridder (continuum)
Keyword arguments:
vis --- Visibilities as sampled by the interferometer
uvw --- interferometer's scaled uvw coordinates
(Prerequisite: these uv points are already scaled by the similarity
theorem, such that -N_x*Cell_l*0.5 <= theta_l <= N_x*Cell_l*0.5 and
-N_y*Cell_m*0.5 <= theta_m <= N_y*Cell_m*0.5)
ref_lda --- array of reference lambdas (size of vis channels)
Nx,Ny --- size of image in pixels
convolution_filter --- pre-instantiated AA_filter anti-aliasing
    filter object
    """
assert vis.shape[1] == ref_lda.shape[0], (vis.shape[1], ref_lda.shape[0])
filter_index = \
np.arange(-convolution_filter.half_sup,convolution_filter.half_sup+1)
# one grid for the resampled visibilities per correlation:
measurement_regular = \
np.zeros([vis.shape[2],Ny,Nx],dtype=np.complex)
# for deconvolution the PSF should be 2x size of the image (see
# Hogbom CLEAN for details), one grid for the sampling function:
sampling_regular = \
np.zeros([2*Ny,2*Nx],dtype=np.complex)
    for r in range(uvw.shape[0]):
        for c in range(vis.shape[1]):
scaled_uv = uvw[r,:] / ref_lda[c]
disc_u = int(np.round(scaled_uv[0]))
disc_v = int(np.round(scaled_uv[1]))
frac_u_offset = int((1 + convolution_filter.half_sup +
(-scaled_uv[0] + disc_u)) *
convolution_filter.oversample)
frac_v_offset = int((1 + convolution_filter.half_sup +
(-scaled_uv[1] + disc_v)) *
convolution_filter.oversample)
disc_u_psf = int(np.round(scaled_uv[0]*2))
disc_v_psf = int(np.round(scaled_uv[1]*2))
frac_u_offset_psf = int((1 + convolution_filter.half_sup +
(-scaled_uv[0]*2 + disc_u_psf)) *
convolution_filter.oversample)
frac_v_offset_psf = int((1 + convolution_filter.half_sup +
(-scaled_uv[1]*2 + disc_v_psf)) *
convolution_filter.oversample)
if (disc_v + Ny // 2 + convolution_filter.half_sup >= Ny or
disc_u + Nx // 2 + convolution_filter.half_sup >= Nx or
disc_v + Ny // 2 - convolution_filter.half_sup < 0 or
disc_u + Nx // 2 - convolution_filter.half_sup < 0):
continue
for conv_v in filter_index:
v_tap = \
convolution_filter.filter_taps[conv_v *
convolution_filter.oversample
+ frac_v_offset]
v_tap_psf = \
convolution_filter.filter_taps[conv_v *
convolution_filter.oversample
+ frac_v_offset_psf]
grid_pos_v = disc_v + conv_v + Ny // 2
grid_pos_v_psf = disc_v_psf + conv_v + Ny
for conv_u in filter_index:
u_tap = \
convolution_filter.filter_taps[conv_u *
convolution_filter.oversample
+ frac_u_offset]
u_tap_psf = \
convolution_filter.filter_taps[conv_u *
convolution_filter.oversample
+ frac_u_offset_psf]
conv_weight = v_tap * u_tap
conv_weight_psf = v_tap_psf * u_tap_psf
grid_pos_u = disc_u + conv_u + Nx // 2
grid_pos_u_psf = disc_u_psf + conv_u + Nx
for p in range(vis.shape[2]):
measurement_regular[p, grid_pos_v, grid_pos_u] += \
vis[r, c, p] * conv_weight
# assuming the PSF is the same for different correlations:
sampling_regular[grid_pos_v_psf, grid_pos_u_psf] += \
(1+0.0j) * conv_weight_psf
dirty = np.zeros(measurement_regular.shape, dtype=measurement_regular.dtype)
psf = np.zeros(sampling_regular.shape, dtype=sampling_regular.dtype)
for p in range(vis.shape[2]):
dirty[p,:,:] = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(measurement_regular[p,:,:])))
psf[:,:] = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(sampling_regular[:,:])))
return dirty,psf
tabulated_filter = AA_filter.AA_filter(3,63,"sinc")
dirty_sky, psf = grid_ifft(vis, scaled_uv, np.array([CENTRE_CHANNEL]), Nx, Ny, tabulated_filter)
#plot it up :-)
plt.figure(figsize=(15, 45))
plt.subplot(311)
plt.title("Model sky")
plt.imshow(model_sky,cmap="gray", extent=[RA - Nx / 2 * cell_size_l, RA + Nx / 2 * cell_size_l,
DECLINATION - Ny / 2 * cell_size_m, DECLINATION + Ny / 2 * cell_size_m])
plt.xlabel("RA")
plt.ylabel("DEC")
plt.subplot(312)
plt.title("PSF")
plt.imshow(np.real(psf[:, :]),cmap="gray", extent=[RA - Nx * 2 / 2 * cell_size_l, RA + Nx * 2 / 2 * cell_size_l,
DECLINATION - Ny * 2 / 2 * cell_size_m, DECLINATION + Ny * 2 / 2 * cell_size_m])
plt.xlabel("RA")
plt.ylabel("DEC")
plt.subplot(313)
plt.title("Dirty map")
plt.imshow(np.real(dirty_sky[0, :, :]),cmap="gray", extent=[RA - Nx / 2 * cell_size_l, RA + Nx / 2 * cell_size_l,
DECLINATION - Ny / 2 * cell_size_m, DECLINATION + Ny / 2 * cell_size_m])
plt.xlabel("RA")
plt.ylabel("DEC")
plt.show()
Explanation: Next comes a simplified imaging step using the FFT and gridding. The visibilities on the irregularly-spaced u,v tracks are resampled onto regular coordinates. This is done by weighting and smearing each measured visibility out onto the regular coordinates in the vicinity of its u,v coordinate. After resampling, the inverse FFT is used to transform the measurements in the spatial frequency domain to those in the spatial domain, thereby approximately reconstructing the model sky we started with.
End of explanation |
11,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Original Voce-Chaboche Model Fitting Example 1
An example of fitting the original Voce-Chaboche model to a set of test data is provided.
Documentation for all the functions used in this example can be found by looking at the docstrings of the functions themselves.
Step1: Run optimization with multiple test data set
This is a simple example for fitting the Voce-Chaboche model to a set of test data.
We only use two backstresses in this model, additional backstresses can be specified by adding pairs of 0.1's to the list of x_0.
E.g., three backstresses would be
x_0 = [200000., 355., 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
Likewise, one backstress can be specified by removing a pair of 0.1's from the list below.
The overall steps to calibrate the model parameters are as follows
Step2: Plot results
After the analysis is finished we can plot the test data versus the fitted model.
Note that we add two dummy parameters to the list of final parameters because the plotting function was written for the updated Voce-Chaboche model that has two additional parameters.
Setting the first of these two additional parameters equal to zero neglects the effects of the updated model.
If we set output_dir='./output/', for example, instead of output_dir='' the uvc_data_plotter function will save pdf's of all the plots instead of displaying them below.
The function uvc_data_multi_plotter is also provided to give more fine-grained control over the plotting process, and can compare multiple analyses. | Python Code:
import RESSPyLab as rpl
import numpy as np
Explanation: Original Voce-Chaboche Model Fitting Example 1
An example of fitting the original Voce-Chaboche model to a set of test data is provided.
Documentation for all the functions used in this example can be found by looking at the docstrings of the functions themselves.
End of explanation
# Specify the true stress-strain to be used in the calibration
data_files = ['example_1.csv', 'example_2.csv']
# Set initial parameters for the Voce-Chaboche model with two backstresses
# [E, \sigma_{y0}, Q_\infty, b, C_1, \gamma_1, C_2, \gamma_2]
x_0 = [200000., 355., 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
# Log files for the parameters at each step, and values of the objective function at each step
x_log = './output/x_log.txt'
fxn_log = './output/fxn_log.txt'
# Run the calibration
# Set filter_data=True if you have NOT already filtered/reduced the data
# We recommend that you filter/reduce the data beforehand (i.e., filter_data=False is recommended)
x_sol = rpl.vc_param_opt(x_0, data_files, x_log, fxn_log, filter_data=False)
Explanation: Run optimization with multiple test data set
This is a simple example for fitting the Voce-Chaboche model to a set of test data.
We only use two backstresses in this model, additional backstresses can be specified by adding pairs of 0.1's to the list of x_0.
E.g., three backstresses would be
x_0 = [200000., 355., 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
Likewise, one backstress can be specified by removing a pair of 0.1's from the list below.
The overall steps to calibrate the model parameters are as follows:
1. Load the set of test data
2. Choose a starting point
3. Set the location to save the analysis history
4. Run the analysis
End of explanation
data = rpl.load_data_set(data_files)
# Added parameters are necessary for plotting the Voce-Chaboche model
x_sol_2 = np.insert(x_sol, 4, [0., 1.])
rpl.uvc_data_plotter(x_sol_2, data, output_dir='', file_name='vc_example_plots', plot_label='Fitted')
Explanation: Plot results
After the analysis is finished we can plot the test data versus the fitted model.
Note that we add two dummy parameters to the list of final parameters because the plotting function was written for the updated Voce-Chaboche model that has two additional parameters.
Setting the first of these two additional parameters equal to zero neglects the effects of the updated model.
If we set output_dir='./output/', for example, instead of output_dir='' the uvc_data_plotter function will save pdf's of all the plots instead of displaying them below.
The function uvc_data_multi_plotter is also provided to give more fine-grained control over the plotting process, and can compare multiple analyses.
End of explanation |
11,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Global Ocean Waves Analysis
As a part of the continuous Marine Data support, this time the Planet OS Team releases the Meteo France Global Ocean Waves Analysis and the Meteo France WAve Model (MFWAM) Global Forecast to the Planet OS Datahub. Coastal communities, businesses, and professionals rely on the quality of marine weather predictions. From major weather events like storms or hurricanes down to a daily check on the best time to go to the beach, reliable marine weather models affect the lives of millions.
The global wave system of Météo-France is based on the wave model MFWAM which is a third generation wave model. MFWAM uses the computing code ECWAM-IFS-38R2 with a dissipation terms developed by Ardhuin et al. (2010). The model MFWAM was upgraded on november 2014 thanks to improvements obtained from the european research project « my wave » (Janssen et al. 2014). The model mean bathymetry is generated by using 2-minute gridded global topography data ETOPO2/NOAA. Native model grid is irregular with decreasing distance in the latitudinal direction close to the poles. At the equator the distance in the latitudinal direction is more or less fixed with grid size 1/10°. The operational model MFWAM is driven by 6-hourly analysis and 3-hourly forecasted winds from the IFS-ECMWF atmospheric system. The wave spectrum is discretized in 24 directions and 30 frequencies starting from 0.035 Hz to 0.58 Hz. The model MFWAM uses the assimilation of altimeters with a time step of 6 hours. The global wave system provides analysis 4 times a day, and a forecast of 5 days at 0
Step1: First we define some functions. make_imgs function makes images for animation and make_anim makes animation using images the first function made.
Step2: API documentation is available at http
Step3: Here we define the Global Ocean Wave Analysis dataset namespace, variable names, period start and end, location, and the name of the animation that will be saved to your computer. We choose a time period when hurricane Dorian was active near the Bahamas. First we use the variables VHM0_SW1 and VMDR_SW1, which describe the primary swell (its spectral significant wave height and mean direction). You can find more variables on the dataset detail page.
Step4: In the following cell we are downloading the data.
Step5: Now we open the file by using xarray.
Step6: We like to use Basemap to plot data on it. Here we define the area. You can find more information and documentation about Basemap here.
Also, we make local folder where we save images. These are the images we will use for animation. No worries, in the end, we will delete the folder from your system.
Step7: Now it is time to make images from every time step. We only show one time step here.
To better understand the images, it is important to know what exactly Significant Primary Swell Wave height is. One way to understand swell waves is to read our blogpost about it. However, we will do a brief explanation here as well.
The Bureau of Meteorology provides a good explanation about different waves.
Wave heights describe the average height of the highest third of the waves (defined as the significant wave height). It is measured by the height difference between the wave crest and the preceding wave trough. Swell waves are the regular, longer period waves generated by distant weather systems. They may travel over thousands of kilometres. There may be several sets of swell waves travelling in different directions, causing crossing swells and a confused sea state. Crossing swells may make boat handling more difficult and pose heightened risk on ocean bars. There may be swell present even if the wind is calm and there are no sea waves.
Most of the models use energy-based wave identification
Step8: This is part where we are making animation.
Step9: Now we download Significant Wave Height data to see the difference in images.
Step10: As explained above, significant wave height is the combined height of the sea and the swell waves. We can see that hurricane is seen much more round and higher height. The reason is that here, wind and swell wave are both taken into account. In the image above, only primary swell waves were shown.
Step11: We have seen the historic data about waves, but let's see what the future brings now. For that we use data from Meteo France WAve Model (MFWAM) Global Forecast. It is using the same model as the analysis.
Step12: We merge analysis and forecast datasets by using xarray concat. It merges two datasets by 'time' dimension and if there's some data conflict, it uses dd2 (analysis) data as it is more precise than forecast. | Python Code:
import os
from dh_py_access import package_api
import dh_py_access.lib.datahub as datahub
import xarray as xr
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
import imageio
import shutil
import datetime
import matplotlib as mpl
mpl.rcParams['font.family'] = 'Avenir Lt Std'
mpl.rcParams.update({'font.size': 25})
print('matplotlib', mpl.__version__)
print ('imageio',imageio.__version__)
print ('xarray',xr.__version__)
print ('numpy',np.__version__)
Explanation: Global Ocean Waves Analysis
As a part of the continuous Marine Data support, this time the Planet OS Team releases the Meteo France Global Ocean Waves Analysis and the Meteo France WAve Model (MFWAM) Global Forecast to the Planet OS Datahub. Coastal communities, businesses, and professionals rely on the quality of marine weather predictions. From major weather events like storms or hurricanes down to a daily check on the best time to go to the beach, reliable marine weather models affect the lives of millions.
The global wave system of Météo-France is based on the wave model MFWAM which is a third generation wave model. MFWAM uses the computing code ECWAM-IFS-38R2 with a dissipation terms developed by Ardhuin et al. (2010). The model MFWAM was upgraded on november 2014 thanks to improvements obtained from the european research project « my wave » (Janssen et al. 2014). The model mean bathymetry is generated by using 2-minute gridded global topography data ETOPO2/NOAA. Native model grid is irregular with decreasing distance in the latitudinal direction close to the poles. At the equator the distance in the latitudinal direction is more or less fixed with grid size 1/10°. The operational model MFWAM is driven by 6-hourly analysis and 3-hourly forecasted winds from the IFS-ECMWF atmospheric system. The wave spectrum is discretized in 24 directions and 30 frequencies starting from 0.035 Hz to 0.58 Hz. The model MFWAM uses the assimilation of altimeters with a time step of 6 hours. The global wave system provides analysis 4 times a day, and a forecast of 5 days at 0:00 UTC. The wave model MFWAM uses the partitioning to split the swell spectrum in primary and secondary swells.
With this release, we hope more app developers and domain experts would jump on board and create value-add applications and extend their customer base in marine navigation, aquaculture, and other related domains. We also hope that businesses could utilize such data in their business analytics to derive valuable insights for operations, planning and risk assessments.
In this notebook we show how to use wave height data from the analysis. As hurricane Dorian was active at the time of writing, we use it as an example.
Let's start with the code now. To prevent issues with version incompatibility, we'll print out the versions of the most important modules. Also, make sure you are using Python 3.
End of explanation
def make_imgs(dd, lonmap,latmap, variable, vmin,vmax,folder,title):
vmin = vmin; vmax = vmax
for k in range(0,len(dd[variable])):
filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png'
fig = plt.figure(figsize = (14,12))
ax = fig.add_subplot(111)
pcm = m.pcolormesh(lonmap,latmap,dd[variable][k].data,vmin = vmin, vmax = vmax,cmap='rainbow')
levs = np.arange(1,7,1)
S1 = m.contour(lonmap,latmap,dd[variable][k].data,levs,colors='black',linewidths=0.4,alpha = 0.6)
plt.clabel(S1,inline=1,inline_spacing=0,fontsize=7,fmt='%1.0f',colors='black')
m.fillcontinents(color='#58606F')
m.drawcoastlines(color='#222933')
m.drawcountries(color='#222933')
m.drawstates(color='#222933')
cbar = plt.colorbar(pcm,fraction=0.035, pad=0.03)
ttl = plt.title(title + '\n ' + str(dd[variable].time[k].data)[:-10],fontweight = 'bold')
ttl.set_position([.5, 1.05])
if not os.path.exists(folder):
os.mkdir(folder)
plt.savefig(filename)
if k == 10:
plt.show()
plt.close()
def make_anim(folder,anim_name):
files = sorted(os.listdir(folder))
fileList = []
for file in files:
if not file.startswith('.'):
complete_path = folder + file
fileList.append(complete_path)
writer = imageio.get_writer(anim_name, fps=4)
for im in fileList:
writer.append_data(imageio.imread(im))
writer.close()
print ('Animation is saved as ' + anim_name + ' under current working directory')
shutil.rmtree(folder)
Explanation: First we define some functions. make_imgs function makes images for animation and make_anim makes animation using images the first function made.
End of explanation
API_key = open('APIKEY').read().strip()
server='api.planetos.com/'
version = 'v1'
Explanation: API documentation is available at http://docs.planetos.com. If you have questions or comments, join the Planet OS Slack community to chat with our development team. For general information on usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation. https://ipython.org/ and http://matplotlib.org/
End of explanation
time_start = '2019-08-30T00:00:00'
time_end = str(datetime.datetime.today().strftime('%Y-%m-%d') + 'T00:00:00')#'2019-09-06T00:00:00'
dataset_key = 'meteo_france_global_ocean_wave_analysis_daily'
variable1 = 'VHM0_SW1'
area = 'bah'
latitude_north = 40; latitude_south = 12
longitude_west = -89; longitude_east = -58
anim_name = variable1 + '_animation_' + str(datetime.datetime.strptime(time_start,'%Y-%m-%dT%H:%M:%S').year) + '.mp4'
Explanation: Here we define the Global Ocean Wave Analysis dataset namespace, variable names, period start and end, location, and the name of the animation that will be saved to your computer. We choose a time period when hurricane Dorian was active near the Bahamas. First we use the variables VHM0_SW1 and VMDR_SW1, which describe the primary swell (its spectral significant wave height and mean direction). You can find more variables on the dataset detail page.
End of explanation
dh=datahub.datahub(server,version,API_key)
package = package_api.package_api(dh,dataset_key,variable1,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area+variable1)
package.make_package()
package.download_package()
Explanation: In the following cell we are downloading the data.
End of explanation
dd1 = xr.open_dataset(package.local_file_name)
Explanation: Now we open the file by using xarray.
End of explanation
dd1['longitude'] = ((dd1.longitude+180) % 360) - 180
m = Basemap(projection='merc', lat_0 = 0, lon_0 = (longitude_east + longitude_west)/2,
resolution = 'l', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(dd1.longitude,dd1.latitude)
lonmap,latmap = m(lons,lats)
folder = './ani/'
if not os.path.exists(folder):
os.mkdir(folder)
Explanation: We like to use Basemap to plot data on it. Here we define the area. You can find more information and documentation about Basemap here.
Also, we make local folder where we save images. These are the images we will use for animation. No worries, in the end, we will delete the folder from your system.
End of explanation
title = 'Significant Primary Swell Wave Height [m]'
vmin = 0; vmax = 6
make_imgs(dd1, lonmap,latmap, variable1, vmin,vmax,folder,title)
Explanation: Now it is time to make images from every time step. We only show one time step here.
To better understand the images, it is important to know what exactly Significant Primary Swell Wave height is. One way to understand swell waves is to read our blogpost about it. However, we will do a brief explanation here as well.
The Bureau of Meteorology provides a good explanation about different waves.
Wave heights describe the average height of the highest third of the waves (defined as the significant wave height). It is measured by the height difference between the wave crest and the preceding wave trough. Swell waves are the regular, longer period waves generated by distant weather systems. They may travel over thousands of kilometres. There may be several sets of swell waves travelling in different directions, causing crossing swells and a confused sea state. Crossing swells may make boat handling more difficult and pose heightened risk on ocean bars. There may be swell present even if the wind is calm and there are no sea waves.
Most of the models use energy-based wave identification: They output a primary and secondary swell to refer to the height and direction of the swell with the highest (and second highest) energy component.
In the first animation we show significant primary swell wave height. But in the second animation we show Significan wave height. It is the combined height of the sea and the swell that mariners experience on open water. So, the first one only show primary swell height, while the second animation is combine height of the sea and swell.
End of explanation
make_anim(folder, anim_name)
Explanation: This is part where we are making animation.
End of explanation
variable2 = 'VHM0'
anim_name2 = variable2 + '_animation_' + str(datetime.datetime.strptime(time_start,'%Y-%m-%dT%H:%M:%S').year) + '.mp4'
package2 = package_api.package_api(dh,dataset_key,variable2,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area + variable2)
package2.make_package()
package2.download_package()
Explanation: Now we download Significant Wave Height data to see the difference in images.
End of explanation
dd2 = xr.open_dataset(package2.local_file_name)
title = 'Significant Wave Height [m]'
vmin = 0; vmax = 6
make_imgs(dd2, lonmap,latmap, variable2, vmin,vmax,folder,title)
make_anim(folder, anim_name2)
Explanation: As explained above, significant wave height is the combined height of the sea and the swell waves. We can see that the hurricane appears much rounder and with greater wave heights. The reason is that here wind waves and swell waves are both taken into account, whereas in the image above only primary swell waves were shown.
End of explanation
dataset_key2 = 'meteofrance_global_ocean_forecast'
reftime_start = str(datetime.datetime.today().strftime('%Y-%m-%d') + 'T00:00:00')#'2019-09-06T00:00:00'
reftime_end = str(datetime.datetime.today().strftime('%Y-%m-%d') + 'T00:00:00')#'2019-09-06T00:00:00'
package3 = package_api.package_api(dh,dataset_key2,variable2,longitude_west,longitude_east,latitude_south,latitude_north,reftime_start=reftime_start,reftime_end=reftime_end,area_name=area+variable2)
package3.make_package()
package3.download_package()
dd3 = xr.open_dataset(package3.local_file_name)
dd3 = dd3.drop('reftime')
Explanation: We have seen the historic data about waves, but let's see what the future brings now. For that we use data from Meteo France WAve Model (MFWAM) Global Forecast. It is using the same model as the analysis.
End of explanation
dd_merged = xr.concat([dd2,dd3],dim='time')
title = 'Significant Wave Height [m]'
anim_name3 = variable2 + '_animation_forecast_analysis_' + str(datetime.datetime.strptime(time_start,'%Y-%m-%dT%H:%M:%S').year) + '.mp4'
vmin = 0; vmax = 6
make_imgs(dd_merged, lonmap,latmap, variable2, vmin,vmax,folder,title)
make_anim(folder, anim_name3)
Explanation: We merge the analysis and forecast datasets by using xarray concat. It merges the two datasets along the 'time' dimension and, if there is a data conflict, it uses the dd2 (analysis) data, as the analysis is more precise than the forecast.
End of explanation |
11,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Face verification
Goals
train a network for face similarity using triplet loss
work data augmentation, generators and hard negative mining
Dataset
We will be using Labeled Faces in the Wild (LFW) dataset available openly at http
Step1: Processing the dataset
This part is similar to previous notebook on siamese nets, you may just run the cells to get the necessary inputs
The dataset consists of folders corresponding to each identity. The folder name is the name of the person.
We map each class (identity) to an integer id, and build mappings as dictionaries name_to_classid and classid_to_name
Set USE_SUBSET to False if you want to use the full dataset (GPU only!)
Step2: In each directory, there is one or more images corresponding to the identity. We map each image path with an integer id, then build a few dictionaries
Step3: The following histogram shows the number of images per class
Step4: The following function builds a large number of positives/negatives pairs (train and test)
Triplet loss
In the triplet loss model, we'll define 3 inputs $(a,+,-)$ for anchor, positive and negative.
Usage and differences with siamese nets
We release the hard constraint that all data of the same class should be squashed to a single point. Rather, image representations can live on a manifold, as long as they are closer to similar-class images than to different-class images
On large datasets, with careful hyperparameters, triplets and more advanced metric learning methods beat siamese nets
Outline
We will build positive pairs, and find a way to sample negatives to obtain triplets
Note that we don't need outputs anymore (positive vs negative), we're just building triplets
Step5: We end up with 1177 different pairs, which we'll append with a random sample (as negative) in the generator
Step6: As you can see, choosing the negatives randomly can be inefficient. For example, it's reasonable to think an old man will be too easy a negative if the anchor is a young woman.
Step7: Triplet Model
The loss of the triplet model is as follows
Step8: Shared Convolutional Network
You may as well build your own
Step9: Triplet Model
Exercise
Build the triplet model, using the skeleton below using the OOP Keras API
First run the 3 inputs through the shared conv
Then compute positive and negative similarities
Then call the triplet loss function using a Lambda layer
Step10: Warning
- You will need to run on GPU if you're on the large dataset
- On the small dataset, the model sometimes takes a few epochs before starting to decrease the loss
- This can be due to the init, learning rate, or too much dropout / augmentation
Step11: Exercise
What do you observe?
Try to make changes to the model / parameters to get a better convergence, you should be able to have much better result than with the ConvNet we gave you
Try to add data augmentation, or increase the size of the training set
You might want to be on GPU for testing several architectures, even on the small set
Step12: Displaying similar images
Step13: Test Recall@k model
for each test class with > 1 image, pick image at random, and compute similarity with all other images
compute recall @k
Step15: Hard Negative Mining
We'll mine negatives based on previous epoch's model. To do so, we'll compute distances with all anchors, and sample among the most similar negatives, but not the too difficult ones
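A rough sketch of that selection rule (hypothetical names; embeddings are assumed L2-normalised so a dot product is a cosine similarity): rank candidate negatives by similarity to the anchor, skip the very hardest ones, and sample from the hard-ish pool that follows.
import numpy as np
def pick_hard_negatives(anchor_emb, negative_embs, n_pick=16, skip_top=32, pool=512):
    sims = negative_embs @ anchor_emb            # similarity of every candidate negative to the anchor
    ranked = np.argsort(-sims)                   # most similar (hardest) candidates first
    candidates = ranked[skip_top:skip_top + pool]
    return np.random.choice(candidates, size=n_pick, replace=False)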
Step16: Note that we are re-creating a HardTripletGenerator at each epoch. By doing so, we re-compute the new hard negatives with the newly updated model. On larger scale this operation can take a lot of time, and could be done every X epochs (X > 1).
Step17: You should see that the train loss is barely improving while the validation loss is decreasing. Remember that we are feeding the hardest triplets to the model!
Step18: Let's run the improved convnet SharedConv2 without negative hardming in order to have a fair comparison | Python Code:
import tensorflow as tf
# If you have a GPU, execute the following lines to restrict the amount of VRAM used:
gpus = tf.config.experimental.list_physical_devices('GPU')
if len(gpus) >= 1:
print("Using GPU {}".format(gpus[0]))
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
else:
print("Using CPU")
import os
import random
import itertools
import tensorflow.keras.backend as K
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Concatenate, Lambda, Dot
from tensorflow.keras.layers import Conv2D, MaxPool2D, GlobalAveragePooling2D, Flatten, Dropout
from tensorflow.keras import optimizers
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
Explanation: Face verification
Goals
train a network for face similarity using triplet loss
work data augmentation, generators and hard negative mining
Dataset
We will be using Labeled Faces in the Wild (LFW) dataset available openly at http://vis-www.cs.umass.edu/lfw/
For computing purposes, we'll only restrict ourselves to a subpart of the dataset. You're welcome to train on the whole dataset on GPU, by changing the PATH in the following cells, and in data download
We will also load pretrained weights
End of explanation
PATH = "lfw/lfw-deepfunneled/"
USE_SUBSET = True
dirs = sorted(os.listdir(PATH))
if USE_SUBSET:
dirs = dirs[:500]
name_to_classid = {d:i for i,d in enumerate(dirs)}
classid_to_name = {v:k for k,v in name_to_classid.items()}
num_classes = len(name_to_classid)
print("number of classes: "+str(num_classes))
Explanation: Processing the dataset
This part is similar to previous notebook on siamese nets, you may just run the cells to get the necessary inputs
The dataset consists of folders corresponding to each identity. The folder name is the name of the person.
We map each class (identity) to an integer id, and build mappings as dictionaries name_to_classid and classid_to_name
Set USE_SUBSET to False if you want to use the full dataset (GPU only!)
End of explanation
# read all directories
img_paths = {c:[directory + "/" + img for img in sorted(os.listdir(PATH+directory))]
for directory,c in name_to_classid.items()}
# retrieve all images
all_images_path = []
for img_list in img_paths.values():
all_images_path += img_list
# map to integers
path_to_id = {v:k for k,v in enumerate(all_images_path)}
id_to_path = {v:k for k,v in path_to_id.items()}
# build mappings between images and class
classid_to_ids = {k:[path_to_id[path] for path in v] for k,v in img_paths.items()}
id_to_classid = {v:c for c,imgs in classid_to_ids.items() for v in imgs}
Explanation: In each directory, there is one or more images corresponding to the identity. We map each image path with an integer id, then build a few dictionaries:
- mappings from imagepath and image id: path_to_id and id_to_path
- mappings from class id to image ids: classid_to_ids and id_to_classid
End of explanation
from skimage.io import imread
from skimage.transform import resize
def resize100(img):
return resize(img, (100, 100), preserve_range=True, mode='reflect', anti_aliasing=True)[20:80,20:80,:]
def open_all_images(id_to_path):
all_imgs = []
for path in id_to_path.values():
all_imgs += [np.expand_dims(resize100(imread(PATH+path)),0)]
return np.vstack(all_imgs)
all_imgs = open_all_images(id_to_path)
mean = np.mean(all_imgs, axis=(0,1,2))
all_imgs -= mean
all_imgs.shape, str(all_imgs.nbytes / 1e6) + "Mo"
Explanation: The following histogram shows the number of images per class: there are many classes with only one image.
These classes are useful as negatives only, as we can't make a positive pair with them.
Now that we have a way to compute the pairs, let's open all the possible images. It will expand all the images into RAM memory. There are more than 1000 images, so 100Mo of RAM will be used, which will not cause any issue.
Note: if you plan on opening more images, you should not open them all at once, and rather build a generator
End of explanation
def build_pos_pairs_for_id(classid, max_num=50):
imgs = classid_to_ids[classid]
if len(imgs) == 1:
return []
pos_pairs = list(itertools.combinations(imgs, 2))
random.shuffle(pos_pairs)
return pos_pairs[:max_num]
def build_positive_pairs(class_id_range):
listX1 = []
listX2 = []
for class_id in class_id_range:
pos = build_pos_pairs_for_id(class_id)
for pair in pos:
listX1 += [pair[0]]
listX2 += [pair[1]]
perm = np.random.permutation(len(listX1))
return np.array(listX1)[perm], np.array(listX2)[perm]
split_num = int(num_classes * 0.8)
Xa_train, Xp_train = build_positive_pairs(range(0, split_num))
Xa_test, Xp_test = build_positive_pairs(range(split_num, num_classes-1))
# Gather the ids of all images that are used for train and test
all_img_train_idx = list(set(Xa_train) | set(Xp_train))
all_img_test_idx = list(set(Xa_test) | set(Xp_test))
Explanation: The following function builds a large number of positive/negative pairs (train and test).
Triplet loss
In the triplet loss model, we'll define 3 inputs $(a,+,-)$ for anchor, positive and negative.
Usage and differences with siamese nets
We relax the hard constraint that all data of the same class should be squashed to a single point. Rather, image representations can live on a manifold, as long as they are closer to same-class images than to different-class images.
On large datasets, with careful hyperparameters, triplets and more advanced metric learning methods beat siamese nets.
Outline
We will build positive pairs, and find a way to sample negatives to obtain triplets
Note that we don't need outputs anymore (positive vs negative), we're just building triplets
End of explanation
Xa_train.shape, Xp_train.shape
from imgaug import augmenters as iaa
seq = iaa.Sequential([
iaa.Fliplr(0.5), # horizontally flip 50% of the images
])
class TripletGenerator(tf.keras.utils.Sequence):
def __init__(self, Xa_train, Xp_train, batch_size, all_imgs, neg_imgs_idx):
self.cur_img_index = 0
self.cur_img_pos_index = 0
self.batch_size = batch_size
self.imgs = all_imgs
self.Xa = Xa_train # Anchors
self.Xp = Xp_train
self.cur_train_index = 0
self.num_samples = Xa_train.shape[0]
self.neg_imgs_idx = neg_imgs_idx
def __len__(self):
return self.num_samples // self.batch_size
def __getitem__(self, batch_index):
low_index = batch_index * self.batch_size
high_index = (batch_index + 1) * self.batch_size
imgs_a = self.Xa[low_index:high_index] # Anchors
imgs_p = self.Xp[low_index:high_index] # Positives
imgs_n = random.sample(self.neg_imgs_idx, imgs_a.shape[0]) # Negatives
imgs_a = seq.augment_images(self.imgs[imgs_a])
imgs_p = seq.augment_images(self.imgs[imgs_p])
imgs_n = seq.augment_images(self.imgs[imgs_n])
        # We also return a null vector as a placeholder for the output, but it won't be needed:
return ([imgs_a, imgs_p, imgs_n], np.zeros(shape=(imgs_a.shape[0])))
batch_size = 128
gen = TripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx)
len(all_img_test_idx), len(gen)
[xa, xp, xn], y = gen[0]
xa.shape, xp.shape, xn.shape
plt.figure(figsize=(16, 9))
for i in range(5):
plt.subplot(3, 5, i + 1)
plt.title("anchor")
plt.imshow((xa[i] + mean) / 255)
plt.axis('off')
for i in range(5):
plt.subplot(3, 5, i + 6)
plt.title("positive")
plt.imshow((xp[i] + mean) / 255)
plt.axis('off')
for i in range(5):
plt.subplot(3, 5, i + 11)
plt.title("negative")
plt.imshow((xn[i] + mean) / 255)
plt.axis('off')
plt.show()
Explanation: We end up with 1177 different positive pairs, to which the generator adds a randomly sampled negative to form each triplet.
End of explanation
gen_test = TripletGenerator(Xa_test, Xp_test, 32, all_imgs, all_img_test_idx)
len(gen_test)
Explanation: As you can see, choosing the negatives randomly can be inefficient. For example, it's reasonable to think that an old man will be too easy a negative if the anchor is a young woman.
End of explanation
# Build a loss which doesn't take into account the y_true, as
# we'll be passing only 0
def identity_loss(y_true, y_pred):
return K.mean(y_pred - 0 * y_true)
# The real loss is here
def cosine_triplet_loss(X, margin=0.5):
positive_sim, negative_sim = X
# batch loss
losses = K.maximum(0.0, negative_sim - positive_sim + margin)
return K.mean(losses)
Explanation: Triplet Model
The loss of the triplet model is as follows:
$$ \max(0, ||x_a - x_p||_2 - ||x_a - x_n||_2 + \alpha)$$
We'll be using cosine similarities instead of Euclidean distances (it seems to work a bit better in that case), so the loss becomes:
$$ \max(0, \cos(x_a, x_n) - \cos(x_a, x_p) + \alpha)$$
End of explanation
class SharedConv(tf.keras.Model):
def __init__(self):
super().__init__(self, name="sharedconv")
self.conv1 = Conv2D(16, 3, activation="relu", padding="same")
self.conv2 = Conv2D(16, 3, activation="relu", padding="same")
self.pool1 = MaxPool2D((2,2)) # 30,30
self.conv3 = Conv2D(32, 3, activation="relu", padding="same")
self.conv4 = Conv2D(32, 3, activation="relu", padding="same")
self.pool2 = MaxPool2D((2,2)) # 15,15
self.conv5 = Conv2D(64, 3, activation="relu", padding="same")
self.conv6 = Conv2D(64, 3, activation="relu", padding="same")
self.pool3 = MaxPool2D((2,2)) # 8,8
self.conv7 = Conv2D(64, 3, activation="relu", padding="same")
self.conv8 = Conv2D(32, 3, activation="relu", padding="same")
self.flatten = Flatten()
self.dropout1 = Dropout(0.2)
self.fc1 = Dense(40, activation="tanh")
self.dropout2 = Dropout(0.2)
self.fc2 = Dense(64)
def call(self, inputs):
x = self.pool1(self.conv2(self.conv1(inputs)))
x = self.pool2(self.conv4(self.conv3(x)))
x = self.pool3(self.conv6(self.conv5(x)))
x = self.flatten(self.conv8(self.conv7(x)))
x = self.fc1(self.dropout1(x))
return self.fc2(self.dropout2(x))
shared_conv = SharedConv()
Explanation: Shared Convolutional Network
You may as well build your own
End of explanation
class TripletNetwork(tf.keras.Model):
def __init__(self, shared_conv):
super().__init__(self, name="tripletnetwork")
# TODO
def call(self, inputs):
pass # TODO
model_triplet = TripletNetwork(shared_conv)
model_triplet.compile(loss=identity_loss, optimizer="rmsprop")
# %load solutions/triplet.py
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model
best_model_fname = "triplet_checkpoint_b2.h5"
best_model_cb = ModelCheckpoint(best_model_fname, monitor='val_loss',
save_best_only=True, verbose=1)
Explanation: Triplet Model
Exercise
Build the triplet model using the skeleton below and the OOP Keras API (one possible solution is sketched right after this cell)
First run the 3 inputs through the shared conv
Then compute positive and negative similarities
Then call the triplet loss function using a Lambda layer
End of explanation
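Below is a minimal sketch of one possible solution, not the official one from solutions/triplet.py; it assumes the Dot and Lambda layers are imported from tensorflow.keras.layers, and mirrors what the TODOs in TripletNetwork are meant to do.
from tensorflow.keras.layers import Dot, Lambda

class TripletNetworkSketch(tf.keras.Model):
    def __init__(self, shared_conv):
        super().__init__(self, name="tripletnetwork")
        self.shared_conv = shared_conv
        # Dot with normalize=True computes the cosine similarity between embeddings
        self.dot = Dot(axes=-1, normalize=True)
        # Wrap the triplet loss so it becomes the model output
        self.loss_layer = Lambda(cosine_triplet_loss)

    def call(self, inputs):
        anchor, positive, negative = inputs
        a = self.shared_conv(anchor)
        p = self.shared_conv(positive)
        n = self.shared_conv(negative)
        pos_sim = self.dot([a, p])
        neg_sim = self.dot([a, n])
        return self.loss_layer([pos_sim, neg_sim])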
history = model_triplet.fit(gen,
epochs=10,
validation_data = gen_test,
callbacks=[best_model_cb])
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.ylim(0, 0.5)
plt.legend(loc='best')
plt.title('Loss');
model_triplet.load_weights("triplet_checkpoint_b2.h5")
Explanation: Warning
- You will need to run on GPU if you're on the large dataset
- On the small dataset, the model sometimes takes a few epochs before starting to decrease the loss
- This can be due to the init, learning rate, or too much dropout / augmentation
End of explanation
# You may load this model
# Trained on triplets but with larger dataset
# Far from perfect !
# model_triplet.load_weights("triplet_pretrained.h5")
Explanation: Exercise
What do you observe?
Try to make changes to the model / parameters to get better convergence; you should be able to get much better results than with the ConvNet we gave you.
Try to add data augmentation (a possible starting point is sketched below), or increase the size of the training set.
You might want to be on GPU for testing several architectures, even on the small set.
End of explanation
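If you want to try the data-augmentation suggestion above, one hypothetical starting point with imgaug is sketched below; the transform choices and value ranges are guesses to tune, not values from the original notebook.
# Hypothetical stronger augmentation pipeline; assigning it to the module-level
# variable `seq` would make the generators pick it up.
stronger_seq = iaa.Sequential([
    iaa.Fliplr(0.5),                                 # horizontal flips
    iaa.Affine(rotate=(-10, 10), scale=(0.9, 1.1)),  # small rotations and zooms
    iaa.Crop(percent=(0, 0.05)),                     # slight random cropping
])
# seq = stronger_seq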
emb = shared_conv.predict(all_imgs)
emb = emb / np.linalg.norm(emb, axis=-1, keepdims=True)
pixelwise = np.reshape(all_imgs, (all_imgs.shape[0], 60*60*3))
def most_sim(idx, topn=5, mode="cosine"):
x = emb[idx]
if mode == "cosine":
x = x / np.linalg.norm(x)
sims = np.dot(emb, x)
ids = np.argsort(sims)[::-1]
return [(id,sims[id]) for id in ids[:topn]]
elif mode == "euclidean":
dists = np.linalg.norm(emb - x, axis=-1)
ids = np.argsort(dists)
return [(id,dists[id]) for id in ids[:topn]]
else:
dists = np.linalg.norm(pixelwise - pixelwise[idx], axis=-1)
ids = np.argsort(dists)
return [(id,dists[id]) for id in ids[:topn]]
def display(img):
img = img.astype('uint8')
plt.imshow(img)
plt.axis('off')
plt.show()
interesting_classes = list(filter(lambda x: len(x[1])>4, classid_to_ids.items()))
class_idx = random.choice(interesting_classes)[0]
print(class_idx)
img_idx = random.choice(classid_to_ids[class_idx])
for id, sim in most_sim(img_idx):
display(all_imgs[id] + mean)
print((classid_to_name[id_to_classid[id]], id, sim))
Explanation: Displaying similar images
End of explanation
test_ids = []
for class_id in range(split_num, num_classes-1):
img_ids = classid_to_ids[class_id]
if len(img_ids) > 1:
test_ids += img_ids
print(len(test_ids))
len([len(classid_to_ids[x]) for x in list(range(split_num, num_classes-1)) if len(classid_to_ids[x])>1])
def recall_k(k=10, mode="embedding"):
num_found = 0
for img_idx in test_ids:
image_class = id_to_classid[img_idx]
found_classes = []
if mode == "embedding":
found_classes = [id_to_classid[x] for (x, score) in most_sim(img_idx, topn=k+1)[1:]]
elif mode == "random":
found_classes = [id_to_classid[x] for x in random.sample(
list(set(all_img_test_idx + all_img_train_idx) - {img_idx}), k)]
elif mode == "image":
found_classes = [id_to_classid[x] for (x, score) in most_sim(img_idx, topn=k+1, mode="image")[1:]]
if image_class in found_classes:
num_found += 1
return num_found / len(test_ids)
recall_k(k=10), recall_k(k=10, mode="random")
Explanation: Test Recall@k model
for each test class with > 1 image, pick an image at random, and compute its similarity with all other images
compute recall@k: is the correct class among the k most similar images?
End of explanation
# Naive way to compute all similarities between all images. May be optimized!
def build_similarities(conv, all_imgs):
embs = conv.predict(all_imgs)
embs = embs / np.linalg.norm(embs, axis=-1, keepdims=True)
all_sims = np.dot(embs, embs.T)
return all_sims
def intersect(a, b):
return list(set(a) & set(b))
def build_negatives(anc_idxs, pos_idxs, similarities, neg_imgs_idx, num_retries=20):
# If no similarities were computed, return a random negative
if similarities is None:
return random.sample(neg_imgs_idx,len(anc_idxs))
final_neg = []
# for each positive pair
for (anc_idx, pos_idx) in zip(anc_idxs, pos_idxs):
anchor_class = id_to_classid[anc_idx]
#positive similarity
sim = similarities[anc_idx, pos_idx]
# find all negatives which are semi(hard)
possible_ids = np.where((similarities[anc_idx] + 0.25) > sim)[0]
possible_ids = intersect(neg_imgs_idx, possible_ids)
appended = False
for iteration in range(num_retries):
if len(possible_ids) == 0:
break
idx_neg = random.choice(possible_ids)
if id_to_classid[idx_neg] != anchor_class:
final_neg.append(idx_neg)
appended = True
break
if not appended:
final_neg.append(random.choice(neg_imgs_idx))
return final_neg
class HardTripletGenerator(tf.keras.utils.Sequence):
def __init__(self, Xa_train, Xp_train, batch_size, all_imgs, neg_imgs_idx, conv):
self.batch_size = batch_size
self.imgs = all_imgs
self.Xa = Xa_train
self.Xp = Xp_train
self.num_samples = Xa_train.shape[0]
self.neg_imgs_idx = neg_imgs_idx
if conv:
print("Pre-computing similarities...", end=" ")
self.similarities = build_similarities(conv, self.imgs)
print("Done!")
else:
self.similarities = None
def __len__(self):
return self.num_samples // self.batch_size
def __getitem__(self, batch_index):
low_index = batch_index * self.batch_size
high_index = (batch_index + 1) * self.batch_size
imgs_a = self.Xa[low_index:high_index]
imgs_p = self.Xp[low_index:high_index]
imgs_n = build_negatives(imgs_a, imgs_p, self.similarities, self.neg_imgs_idx)
imgs_a = seq.augment_images(self.imgs[imgs_a])
imgs_p = seq.augment_images(self.imgs[imgs_p])
imgs_n = seq.augment_images(self.imgs[imgs_n])
return ([imgs_a, imgs_p, imgs_n], np.zeros(shape=(imgs_a.shape[0])))
batch_size = 128
gen_hard = HardTripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx, shared_conv)
len(gen_hard)
[xa, xp, xn], y = gen_hard[0]
xa.shape, xp.shape, xn.shape
plt.figure(figsize=(16, 9))
for i in range(5):
plt.subplot(3, 5, i + 1)
plt.title("anchor")
plt.imshow((xa[i] + mean) / 255)
plt.axis('off')
for i in range(5):
plt.subplot(3, 5, i + 6)
plt.title("positive")
plt.imshow((xp[i] + mean) / 255)
plt.axis('off')
for i in range(5):
plt.subplot(3, 5, i + 11)
plt.title("negative")
plt.imshow((xn[i] + mean) / 255)
plt.axis('off')
plt.show()
class SharedConv2(tf.keras.Model):
Improved version of SharedConv
def __init__(self):
super().__init__(self, name="sharedconv2")
self.conv1 = Conv2D(16, 3, activation="relu", padding="same")
self.conv2 = Conv2D(16, 3, activation="relu", padding="same")
self.pool1 = MaxPool2D((2,2)) # 30,30
self.conv3 = Conv2D(32, 3, activation="relu", padding="same")
self.conv4 = Conv2D(32, 3, activation="relu", padding="same")
self.pool2 = MaxPool2D((2,2)) # 15,15
self.conv5 = Conv2D(64, 3, activation="relu", padding="same")
self.conv6 = Conv2D(64, 3, activation="relu", padding="same")
self.pool3 = MaxPool2D((2,2)) # 8,8
self.conv7 = Conv2D(64, 3, activation="relu", padding="same")
self.conv8 = Conv2D(32, 3, activation="relu", padding="same")
self.flatten = Flatten()
self.dropout1 = Dropout(0.2)
self.fc1 = Dense(64)
def call(self, inputs):
x = self.pool1(self.conv2(self.conv1(inputs)))
x = self.pool2(self.conv4(self.conv3(x)))
x = self.pool3(self.conv6(self.conv5(x)))
x = self.flatten(self.conv8(self.conv7(x)))
return self.fc1(self.dropout1(x))
tf.random.set_seed(1337)
shared_conv2 = SharedConv2()
model_triplet2 = TripletNetwork(shared_conv2)
opt = optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model_triplet2.compile(loss=identity_loss, optimizer=opt)
gen_test = TripletGenerator(Xa_test, Xp_test, 32, all_imgs, all_img_test_idx)
len(gen_test)
# At first epoch we don't generate hard triplets so that our model can learn the easy examples first
gen_hard = HardTripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx, None)
Explanation: Hard Negative Mining
We'll mine negatives based on the previous epoch's model. To do so, we'll compute similarities with all anchors, and sample among the most similar negatives, but not the overly difficult ones.
End of explanation
loss, val_loss = [], []
best_model_fname_hard = "triplet_checkpoint_hard.h5"
best_val_loss = float("inf")
nb_epochs = 10
for epoch in range(nb_epochs):
print("built new hard generator for epoch " + str(epoch))
history = model_triplet2.fit(
gen_hard,
epochs=1,
validation_data = gen_test)
loss.extend(history.history["loss"])
val_loss.extend(history.history["val_loss"])
if val_loss[-1] < best_val_loss:
print("Saving best model")
model_triplet2.save_weights(best_model_fname_hard)
gen_hard = HardTripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx, shared_conv2)
plt.plot(loss, label='train')
plt.plot(val_loss, label='validation')
plt.ylim(0, 0.5)
plt.legend(loc='best')
plt.title('Loss');
Explanation: Note that we are re-creating a HardTripletGenerator at each epoch. By doing so, we re-compute the new hard negatives with the newly updated model. On a larger scale this operation can take a lot of time, and could be done only every X epochs (X > 1).
End of explanation
emb = shared_conv2.predict(all_imgs)
emb = emb / np.linalg.norm(emb, axis=-1, keepdims=True)
recall_k(k=10), recall_k(k=10, mode="random")
Explanation: You should see that the train loss is barely improving while the validation loss is decreasing. Remember that we are feeding the hardest triplets to the model!
End of explanation
shared_conv2_nohard = SharedConv2()
model_triplet2_nohard = TripletNetwork(shared_conv2_nohard)
opt = optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model_triplet2_nohard.compile(loss=identity_loss, optimizer=opt)
gen_nohard = HardTripletGenerator(Xa_train, Xp_train, batch_size, all_imgs, all_img_train_idx, None)
history = model_triplet2_nohard.fit_generator(
generator=gen_nohard,
epochs=10,
validation_data=gen_test)
plt.plot(loss, label='train (hardmining)')
plt.plot(val_loss, label='validation (hardmining)')
plt.plot(history.history["loss"], label="train")
plt.plot(history.history["val_loss"], label="validation")
plt.ylim(0, 0.5)
plt.legend(loc='best')
plt.title('Loss hardmining vs no hardmining');
emb = shared_conv2_nohard.predict(all_imgs)
emb = emb / np.linalg.norm(emb, axis=-1, keepdims=True)
recall_k(k=10), recall_k(k=10, mode="random")
Explanation: Let's run the improved convnet SharedConv2 without negative hard mining in order to have a fair comparison:
End of explanation |
11,494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>2b. Machine Learning using tf.estimator </h1>
In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
Step1: Read data created in the previous chapter.
Step2: <h2> Train and eval input functions to read from Pandas Dataframe </h2>
Step3: Our input function for predictions is the same except we don't provide a label
Step4: Create feature columns for estimator
Step5: <h3> Linear Regression with tf.Estimator framework </h3>
Step6: Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
Step7: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
Step8: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
<h3> Deep Neural Network regression </h3>
Step11: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model.
<h2> Benchmark dataset </h2>
Let's do this on the benchmark dataset. | Python Code:
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
Explanation: <h1>2b. Machine Learning using tf.estimator </h1>
In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
End of explanation
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS)
Explanation: Read data created in the previous chapter.
End of explanation
# TODO: Create an appropriate input_fn to read the training data
def make_train_input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
#ADD CODE HERE
)
# TODO: Create an appropriate input_fn to read the validation data
def make_eval_input_fn(df):
return tf.estimator.inputs.pandas_input_fn(
#ADD CODE HERE
)
Explanation: <h2> Train and eval input functions to read from Pandas Dataframe </h2>
End of explanation
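A possible way to fill in the two TODOs above, based on the TF 1.x tf.estimator.inputs.pandas_input_fn API (the batch size and queue capacity below are reasonable defaults, not the course's official values):
def make_train_input_fn(df, num_epochs):
    return tf.estimator.inputs.pandas_input_fn(
        x=df,
        y=df[LABEL],
        batch_size=128,
        num_epochs=num_epochs,
        shuffle=True,
        queue_capacity=1000
    )

def make_eval_input_fn(df):
    return tf.estimator.inputs.pandas_input_fn(
        x=df,
        y=df[LABEL],
        batch_size=128,
        shuffle=False,
        queue_capacity=1000
    )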
# TODO: Create an appropriate prediction_input_fn
def make_prediction_input_fn(df):
return tf.estimator.inputs.pandas_input_fn(
#ADD CODE HERE
)
Explanation: Our input function for predictions is the same except we don't provide a label
End of explanation
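A possible completion for the prediction input function: identical to the eval version, but with no label.
def make_prediction_input_fn(df):
    return tf.estimator.inputs.pandas_input_fn(
        x=df,
        y=None,
        batch_size=128,
        shuffle=False,
        queue_capacity=1000
    )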
# TODO: Create feature columns
Explanation: Create feature columns for estimator
End of explanation
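A possible completion: one numeric feature column per raw input feature.
def make_feature_cols():
    # FEATURES holds the pickup/dropoff coordinates and the passenger count
    return [tf.feature_column.numeric_column(k) for k in FEATURES]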
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
# TODO: Train a linear regression model
model = #ADD CODE HERE
model.train(#ADD CODE HERE
)
Explanation: <h3> Linear Regression with tf.Estimator framework </h3>
End of explanation
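One way the TODOs above could be filled in, assuming the make_feature_cols helper sketched earlier; the number of epochs is an arbitrary choice.
model = tf.estimator.LinearRegressor(
    feature_columns=make_feature_cols(),
    model_dir=OUTDIR)

model.train(
    input_fn=make_train_input_fn(df_train, num_epochs=10))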
def print_rmse(model, df):
metrics = model.evaluate(input_fn = make_eval_input_fn(df))
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
print_rmse(model, df_valid)
Explanation: Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
End of explanation
# TODO: Predict from the estimator model we trained using test dataset
Explanation: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
End of explanation
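A possible completion: predict() returns a generator of dicts, so we can peek at a few predictions on the test set.
predictions = model.predict(input_fn=make_prediction_input_fn(df_test))
for _ in range(5):
    print(next(predictions))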
# TODO: Copy your LinearRegressor estimator and replace with DNNRegressor. Remember to add a list of hidden units i.e. [32, 8, 2]
Explanation: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
<h3> Deep Neural Network regression </h3>
End of explanation
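A possible completion of the DNNRegressor TODO, reusing the same input and feature-column helpers; the hidden-unit sizes follow the hint in the comment above.
tf.logging.set_verbosity(tf.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors=True)  # start fresh each time

model = tf.estimator.DNNRegressor(
    hidden_units=[32, 8, 2],
    feature_columns=make_feature_cols(),
    model_dir=OUTDIR)

model.train(input_fn=make_train_input_fn(df_train, num_epochs=100))
print_rmse(model, df_valid)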
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
phase: 1 = train 2 = valid
base_query =
SELECT
(tolls_amount + fare_amount) AS fare_amount,
EXTRACT(DAYOFWEEK FROM pickup_datetime) * 1.0 AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) * 1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers,
CONCAT(CAST(pickup_datetime AS STRING), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
if EVERY_N == None:
if phase < 2:
# Training
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) < 2".format(base_query)
else:
# Validation
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) = {1}".format(base_query, phase)
else:
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {1})) = {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, df)
Explanation: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model.
<h2> Benchmark dataset </h2>
Let's do this on the benchmark dataset.
End of explanation |
11,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IPython Widgets
IPython widgets are tools that give us interactivity within our analysis. This is most useful when looking at a complicated plot and trying to figure out how it depends on a single parameter. You could make 20 different plots and vary the parameter a bit each time, or you could use an IPython slider widget. Let's first import the widgets.
Step1: The object we will learn about today is called interact. Let's find out how to use it.
Step5: We see that we need a function with parameters that we want to vary, let's make one. We will examine the lorenz equations. They exhibit chaotic behaviour and are quite beautiful.
Step6: Okay! So now you are ready to analyze the world! Just kidding. Let's make a simpler example. Consider the best fitting straight line through a set of points. When a curve fitter fits a straight line, it tries to minimize the sum of the "errors" from all the data points and the fit line. Mathematically this is represented as
$$\sum_{i=0}^{n}(f(x_i)-y_i)^2$$
Now, $f(x_i)=mx_i+b$. Your task is to write a function that plots a line and prints out the error, make an interact that allows you to vary the m and b parameters, then vary those parameters until you find the smallest error. | Python Code:
import IPython.html.widgets as widg
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
%matplotlib inline
Explanation: IPython Widgets
IPython widgets are tools that give us interactivity within our analysis. This is most useful when looking at a complicated plot and trying to figure out how it depends on a single parameter. You could make 20 different plots and vary the parameter a bit each time, or you could use an IPython slider widget. Let's first import the widgets.
End of explanation
widg.interact?
Explanation: The object we will learn about today is called interact. Let's find out how to use it.
End of explanation
def lorentz_derivs(yvec, t, sigma, rho, beta):
Compute the derivatives for the Lorentz system at yvec(t).
dx = sigma*(yvec[1]-yvec[0])
dy = yvec[0]*(rho-yvec[2])-yvec[1]
dz = yvec[0]*yvec[1]-beta*yvec[2]
return [dx,dy,dz]
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
t = np.linspace(0,max_time, max_time*250)
return odeint(lorentz_derivs, ic, t, args = (sigma, rho, beta)), t
def plot_lorentz(N=1, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
f = plt.figure(figsize=(15, N*8))
np.random.seed(1)
colors = plt.cm.hot(np.linspace(0,1,N))
for n in range(N):
plt.subplot(N,1,n+1)
x0 = np.random.uniform(-15, 15)
y0 = np.random.uniform(-15, 15)
z0 = np.random.uniform(-15, 15)
soln, t = solve_lorentz([x0,y0,z0], max_time, sigma, rho, beta)
plt.plot(soln[:,0], soln[:, 2], color=colors[n])
plot_lorentz()
widg.interact(plot_lorentz, N=1, max_time=(0,10,.1), sigma=(0,10,.1), rho=(0,100, .1), beta=(0,10,.1))
Explanation: We see that we need a function with parameters that we want to vary, so let's make one. We will examine the Lorenz equations. They exhibit chaotic behaviour and are quite beautiful.
End of explanation
#Make a function that takes two parameters m and b and prints the total error and plots the the line and the data.
#Use this x and y into your function to use as the data
x=np.linspace(0,1,10)
y=(np.random.rand(10)+4)*x+5
#Make an interact as above that allows you to vary m and b.
Explanation: Okay! So now you are ready to analyze the world! Just kidding. Let's make a simpler example. Consider the best fitting straight line through a set of points. When a curve fitter fits a straight line, it tries to minimize the sum of the "errors" from all the data points and the fit line. Mathematically this is represented as
$$\sum_{i=0}^{n}(f(x_i)-y_i)^2$$
Now, $f(x_i)=mx_i+b$. Your task is to write a function that plots a line and prints out the error, make an interact that allows you to vary the m and b parameters, then vary those parameters until you find the smallest error.
End of explanation |
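One possible solution sketch for the exercise above (the slider ranges below are arbitrary choices):
def plot_line(m, b):
    # total squared error between the line m*x + b and the data points
    error = np.sum((m * x + b - y) ** 2)
    plt.plot(x, y, 'o', label='data')
    plt.plot(x, m * x + b, '-', label='fit')
    plt.legend(loc='best')
    plt.show()
    print("total squared error:", error)

widg.interact(plot_line, m=(0.0, 10.0, 0.1), b=(0.0, 10.0, 0.1))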
11,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Fibonacci Numbers
The Fibonacci numbers $F_n$ are defined by induction for all $n\in\mathbb{N}$
Step1: It seems that the Fibonacci numbers grow pretty fast. Let us plot these numbers to get a better understanding of their growth.
Step2: It looks like the Fibonacci numbers grow exponentially. Let us confirm this hypothesis by plotting the logarithm of these numbers.
Step3: This plot looks linear and confirms our hypothesis that these numbers grow exponentially.
Computing the Fibonacci numbers took quite long. Let's measure these times and plot them.
Step4: The times seem to grow exponentially.
Step5: The logarithmic plot confirms this. In order to investigate the reason for this exponential growth, we compute the computation tree. This tree shows the recursive invocations of the function.
Step6: The computation tree for the computation of fibonacci(6) shows the reason for the inefficiency
Step7: Now it is even possible to compute the $100,000^\mbox{th}$ Fibonacci number.
def fibonacci(n):
if n <= 1:
return n
return fibonacci(n-1) + fibonacci(n-2)
[ (n,fibonacci(n)) for n in range(19) ]
Explanation: The Fibonacci Numbers
The Fibonacci numbers $F_n$ are defined by induction for all $n\in\mathbb{N}$:
- $F_0 := 0$,
- $F_1 := 1$,
- $F_{n+2} = F_{n+1} + F_n$ for all $n \in \mathbb{N}$.
Given a natural number n, the function fibonacci(n) computes the number $F_n$.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
m = 34
X = []
Y = []
for n in range(m):
X.append(n)
Y.append(fibonacci(n))
sns.set_style('darkgrid')
plt.figure(figsize=(15, 10))
plt.plot(X, Y, 'bo')
plt.xticks(X)
plt.yticks([y * 2e5 for y in range(19)])
plt.xlabel('n')
plt.ylabel('F(n)')
plt.title('The Fibonacci Numbers')
plt.show()
Explanation: It seems that the Fibonacci numbers grow pretty fast. Let us plot these numbers to get a better understanding of their growth.
End of explanation
import math
X = X[1:]
Y = Y[1:]
logY = [math.log(y) for y in Y]
sns.set_style('darkgrid')
plt.figure(figsize=(15, 10))
plt.plot(X, logY, 'bo')
plt.xticks(X)
plt.yticks(list(range(17)))
plt.xlabel('n')
plt.ylabel('ln(F(n))')
plt.title('The Logarithms of the Fibonacci Numbers')
plt.show()
Explanation: It looks like the Fibonacci numbers grow exponentially. Let us confirm this hypothesis by plotting the logarithm of these numbers.
End of explanation
import time
m = 36
Y = []
X = list(range(m))
for n in range(m):
start = time.time()
print(f'fib({n}) = {fibonacci(n)}')
stop = time.time()
print(stop - start)
Y.append(stop - start)
sns.set_style('darkgrid')
plt.figure(figsize=(15, 10))
plt.plot(X, Y, 'bo')
plt.xticks(X)
plt.xlabel('n')
plt.ylabel('time in seconds')
plt.title('Time to Compute the Fibonacci Numbers')
plt.show()
Explanation: This plot looks linear and confirms our hypothesis that these numbers grow exponentially.
Computing the Fibonacci numbers took quite long. Lets measure these times and plot them.
End of explanation
m = 36
Y = []
X = list(range(20, m))
for n in X:
start = time.time()
fibonacci(n)
stop = time.time()
Y.append(math.log(stop - start))
sns.set_style('darkgrid')
plt.figure(figsize=(15, 10))
plt.plot(X, Y, 'bo')
plt.xticks(X)
plt.xlabel('n')
plt.ylabel('time in seconds')
plt.title('Logarithm of the Time to Compute the Fibonacci Numbers')
plt.show()
Explanation: The times seem to grow exponentially.
End of explanation
import graphviz as gv
class ComputationTree:
def __init__(self, arg, value, left=None, right=None):
self.mArg = arg
self.mValue = value
self.mLeft = left
self.mRight = right
def isLeaf(self):
return self.mLeft == None and self.mRight == None
ComputationTree.isLeaf = isLeaf
del isLeaf
def toDot(self):
ComputationTree.sCounter = 0 # static variable of the class ComputationTree
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
NodeDict = {}
self._assignIDs(NodeDict)
for n, t in NodeDict.items():
dot.node(str(n), label='{' + str(t.mArg) + '|' + str(t.mValue) + '}')
if t.mLeft != None and t.mRight != None:
dot.edge(str(n), str(t.mLeft .mID))
dot.edge(str(n), str(t.mRight.mID))
return dot
ComputationTree.toDot = toDot
del toDot
def _assignIDs(self, NodeDict):
ComputationTree.sCounter += 1
self.mID = ComputationTree.sCounter
NodeDict[self.mID] = self
if self.isLeaf():
return
self.mLeft ._assignIDs(NodeDict)
self.mRight._assignIDs(NodeDict)
ComputationTree._assignIDs = _assignIDs
del _assignIDs
def fibonacci_tree(n):
if n <= 1:
return ComputationTree(n, n)
C1 = fibonacci_tree(n-1)
C2 = fibonacci_tree(n-2)
return ComputationTree(n, C1.mValue + C2.mValue, C1, C2)
t = fibonacci_tree(6)
t.toDot()
Explanation: The logarithmic plot confirms this. In order to investigate the reason for this exponential growth, we compute the computation tree. This tree shows the recursive invocations of the function.
End of explanation
def fibonacci_mem(n):
if n <= 1:
return n
L = [0 for k in range(n+1)]
L[0] = 0
L[1] = 1
for k in range(2, n+1):
L[k] = L[k-1] + L[k-2]
return L[n]
Explanation: The computation tree for the computation of fibonacci(6) shows the reason for the inefficiency:
* fibonacci(5) is computed once,
* fibonacci(4) is computed 2 times,
* fibonacci(3) is computed 3 times,
* fibonacci(2) is computed 5 times,
* fibonacci(1) is computed 8 times, and
* fibonacci(0) is computed 5 times.
If we want to compute the Fibonacci numbers efficiently, we must not compute the value fibonacci(n) for a given n more than once. The easiest way to achieve this is by storing the Fibonacci numbers in a list L. In the implementation below, L[n] stores the $n$-th Fibonacci number.
End of explanation
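As a side note on the design choice: since L[k] only depends on the two previous entries, the list is not strictly necessary. A constant-space sketch (not part of the original notebook) looks like this:
def fibonacci_iter(n):
    # keep only the last two Fibonacci numbers
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a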
%%time
x = fibonacci_mem(100000)
x
Explanation: Now it is even possible to compute the $100,000^\mbox{th}$ Fibonacci number.
End of explanation |
11,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The big reset
So I went ahead and cleared the memory.
Step1: The mystery section remains the same.
Step2: All the blocks are empty.
Step3: The 'PresetStyle' settings are empty, too.
Step4: Each of the registry settings are completely blank.
Interesting things to note
Step5: The only difference seems to be two bytes in the mystery section, at offsets 0x07 and 0x08.
Perhaps this has to do with some kind of internal wear levelling or something.
Registration extension
Now that the memory has been cleared, we can hopefully figure out more about the registration settings.
Recording Bank 3, Button 2 as the following settings
Step6: I believe the only real way to get unrecorded settings is to reset the memory, which clears all the values to zero.
This means that the first byte which has a value of 01 for all recorded settings can indeed be used as a flag... along with the FF byte at offset 24, and any other setting that cannot be set to a value of zero, such as the Pitch Bend range, Reverb type, Chorus type, and panel Sustain.
Personally, I think it makes more sense for the first byte to act as the recorded flag, so I think I'll use that. | Python Code:
import sys
sys.path.append('..')
import collections
import mido
from commons import dgxdump
from commons.dumpdata import messages, songdata, regdata, regvalues
old_syx_messages = mido.read_syx_file('../data/syxout5.syx')
clear_syx_messages = mido.read_syx_file('../data/clear_bulk.txt')
o_dump = dgxdump.DgxDump(old_syx_messages)
c_dump = dgxdump.DgxDump(clear_syx_messages)
# songs slices
songslices = collections.OrderedDict([
('songs', slice(0x00, 0x01)),
('mystery', slice(0x01, 0x15D)),
('tracks', slice(0x15D, 0x167)),
('durations', slice(0x167, 0x17B)),
('trackdurations', slice(0x17B, 0x1F3)),
('presetstyle', slice(0x1F3, 0x22F)),
('beginningblocks', slice(0x22F, 0x24D)),
('nextblocks', slice(0x24D, 0x2CF)),
('startmarker', slice(0x2CF, 0x2D5)),
('blockdata', slice(0x2D5, 0x106D5)),
('endmarker', slice(0x106D5, None)),
])
EXPECTED_SIZE = 0x106DB
PRESETSTYLE = b'PresetStyle\0'*5
MARKER = b'PK0001'
def hex_string(data):
return " ".join("{:02X}".format(b) for b in data)
def bin_string(data):
return " ".join("{:08b}".format(b) for b in data)
def line_hex(data, head=None, tail=0):
if head is None:
head = len(data)
tailstart = len(data) - tail
if tailstart <= head:
return (hex_string(data))
else:
return ("{} .. {}".format(hex_string(data[:head]), hex_string(data[tailstart:])))
def song_section(dump, section):
return dump.song_data.data[songslices[section]]
for sec in songslices:
print(sec)
print(line_hex(song_section(o_dump, sec), 32, 4))
print(line_hex(song_section(c_dump, sec), 32, 4))
song_section(o_dump, 'mystery') == song_section(c_dump, 'mystery')
Explanation: The big reset
So I went ahead and cleared the memory.
End of explanation
all(b==0 for b in song_section(c_dump, 'nextblocks'))
all(b==0 for b in song_section(c_dump, 'blockdata'))
Explanation: The mystery section remains the same.
End of explanation
bytes(song_section(c_dump, 'presetstyle'))
Explanation: All the blocks are empty.
End of explanation
print(line_hex(o_dump.reg_data.data, 32, 4))
print(line_hex(c_dump.reg_data.data, 32, 4))
for bank in range(1, 8+1):
for button in range(1, 2+1):
print(bank, button)
print(line_hex(o_dump.reg_data.settings.get_setting(bank, button).data))
print(line_hex(c_dump.reg_data.settings.get_setting(bank, button).data))
Explanation: The 'PresetStyle' settings are empty, too.
End of explanation
for x in range(2, 7):
!diff -qs ../data/backup_experiment/cb1.txt ../data/backup_experiment/cb{x}.txt
!diff -qs ../data/backup_experiment/cb1.txt ../data/clear_bulk.txt
c2_syx_messages = mido.read_syx_file('../data/backup_experiment/cb1.txt')
c2_dump = dgxdump.DgxDump(c2_syx_messages)
c_dump.song_data.data == c2_dump.song_data.data
c_dump.reg_data.data == c2_dump.reg_data.data
for sec in songslices:
c_sec = song_section(c_dump, sec)
c2_sec = song_section(c2_dump, sec)
if c_sec != c2_sec:
print(sec)
print(line_hex(c_sec, 32, 4))
print(line_hex(c2_sec, 32, 4))
for n, (a, b) in enumerate(zip(c_dump.song_data.data, c2_dump.song_data.data)):
if a != b:
print("{0:02X}: {1:02X} {2:02X} ({1:03d} {2:03d})".format(n, a, b))
Explanation: Each of the registry settings are completely blank.
Interesting things to note: the first byte is 0 instead of 1, which probably indicates that the setting is unused.
The bytes that were FF in each recorded setting are 00 here.
Investigating FUNCTION backup
According to the manual (page 49), the following settings can be saved to backup, i.e. persistent memory for startup bu holding the FUNCTION button:
User songs (These are saved when recorded anyway)
Style files (the ones loaded using SmartMedia)
Touch response (ON/OFF)
Registration memory
These function settings:
Tuning
Split point
Touch sensitivity
Style volume
Song volume
Metronome volume
Grade
Demo cancel
Language
Media Select
Panel Sustain.
These backup settings are also cleared with the rest of the memory.
The default values for these settings are as follows:
| setting | default |
|-------------------|--------------|
| Touch response | ON |
| Tuning | 000 |
| Split point | 54 (F#2) |
| Touch sensitivity | 2 (Medium) |
| Style volume | 100 |
| Song volume | 100 |
| Metronome volume | 100 |
| Grade | ON |
| Demo cancel | OFF |
| Language | English |
| Media Select | Flash Memory |
| Panel sustain | OFF |
As an experiment, I changed the values of the function settings:
| setting | new value |
|-------------------|--------------|
| Touch response | ON |
| Tuning | 057 |
| Split point | 112 (E7) |
| Touch sensitivity | 3 (Hard) |
| Style volume | 045 |
| Song volume | 079 |
| Metronome volume | 121 |
| Grade | OFF |
| Demo cancel | ON |
| Language | Japanese |
| Media Select | Smart Media |
| Panel sustain | ON |
and without making a backup:
- took a bulk dump. (cb1.txt),
- then made the backup, took another bulk dump, (cb2.txt),
- restarted with the new settings, took another (cb3.txt),
- reset everything to default without backup (cb4.txt),
- made a backup again and took another dump (cb5.txt),
- then restarted again (cb6.txt).
All of these files were identical to each other, which suggests that these backup settings are not stored any part we can retrieve.
However, there is one thing interesting about these files, in that they differ from the dump I got immediately after resetting the memory (clear_bulk.txt).
End of explanation
r1_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/1reg.syx'))
c2_dump.song_data.data == r1_dump.song_data.data
c2_dump.reg_data.data == r1_dump.reg_data.data
for bank in range(1, 8+1):
for button in range(1, 2+1):
if not all(x == 0 for x in r1_dump.reg_data.settings.get_setting(bank, button).data):
print(bank, button)
line_hex(r1_dump.reg_data.settings.get_setting(3, 2).data)
for bb in [(3, 2), (1, 1)]:
sets = r1_dump.reg_data.settings.get_setting(*bb)
print(line_hex(sets.data))
sets.print_settings()
sets.print_unusual()
Explanation: The only difference seems to be two bytes in the mystery section, at offsets 0x07 and 0x08.
Perhaps this has to do with some kind of internal wear levelling or something.
Registration extension
Now that the memory has been cleared, we can hopefully figure out more about the registration settings.
Recording Bank 3, Button 2 as the following settings:
| setting | value |
|------------------|-------|
| Style | 092 |
| Accompaniment | ON |
| Split point | 053 |
| Main A/B | A |
| Style vol | 050 |
| Main voice | 060 |
| Main Octave | -1 |
| Main Volume | 054 |
| Main Pan | 092 |
| Main Reverb | 078 |
| Main Chorus | 103 |
| Split | ON |
| Split voice | 003 |
| Split Octave | 0 |
| Split Volume | 108 |
| Split Pan | 064 |
| Split Reverb | 032 |
| Split Chorus | 127 |
| Dual | OFF |
| Dual voice | 201 |
| Dual Octave | +2 |
| Dual Volume | 095 |
| Dual Pan | 048 |
| Dual Reverb | 017 |
| Dual Chorus | 082 |
| Pitch bend range | 05 |
| Reverb type | --(Room) |
| Chorus type | --(Celeste) |
| Harmony | OFF |
| Harmony type | 06(Trill1/4) |
| Harmony volume | 085/---* |
| Transpose | +03 |
| Tempo | 080 |
| Panel Sustain | ON |
*This was set using a different Harmony type setting.
End of explanation
r2_dump = dgxdump.DgxDump(mido.read_syx_file('../data/post_clear/2reg.txt'))
sets = r2_dump.reg_data.settings.get_setting(2,2)
sets.print_settings()
sets.print_unusual()
Explanation: I believe the only real way to get unrecorded settings is to reset the memory, which clears all the values to zero.
This means that the first byte which has a value of 01 for all recorded settings can indeed be used as a flag... along with the FF byte at offset 24, and any other setting that cannot be set to a value of zero, such as the Pitch Bend range, Reverb type, Chorus type, and panel Sustain.
Personally, I think it makes more sense for the first byte to act as the recorded flag, so I think I'll use that.
End of explanation |
11,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GEE nested covariance structure simulation study
This notebook is a simulation study that illustrates and evaluates the performance of the GEE nested covariance structure.
A nested covariance structure is based on a nested sequence of groups, or "levels". The top level in the hierarchy is defined by the groups argument to GEE. Subsequent levels are defined by the dep_data argument to GEE.
Step1: Set the number of covariates.
Step2: These parameters define the population variance for each level of grouping.
Step3: Set the number of groups
Step4: Set the number of observations at each level of grouping. Here, everything is balanced, i.e. within a level every group has the same size.
Step5: Calculate the total sample size.
Step6: Construct the design matrix.
Step7: Construct labels showing which group each observation belongs to at each level.
Step8: Simulate the random effects.
Step9: Simulate the response variable.
Step10: Put everything into a dataframe.
Step11: Fit the model.
Step12: The estimated covariance parameters should be similar to groups_var, level1_var, etc. as defined above. | Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
Explanation: GEE nested covariance structure simulation study
This notebook is a simulation study that illustrates and evaluates the performance of the GEE nested covariance structure.
A nested covariance structure is based on a nested sequence of groups, or "levels". The top level in the hierarchy is defined by the groups argument to GEE. Subsequent levels are defined by the dep_data argument to GEE.
End of explanation
p = 5
Explanation: Set the number of covariates.
End of explanation
groups_var = 1
level1_var = 2
level2_var = 3
resid_var = 4
Explanation: These parameters define the population variance for each level of grouping.
End of explanation
n_groups = 100
Explanation: Set the number of groups
End of explanation
group_size = 20
level1_size = 10
level2_size = 5
Explanation: Set the number of observations at each level of grouping. Here, everything is balanced, i.e. within a level every group has the same size.
End of explanation
n = n_groups * group_size * level1_size * level2_size
Explanation: Calculate the total sample size.
End of explanation
xmat = np.random.normal(size=(n, p))
Explanation: Construct the design matrix.
End of explanation
groups_ix = np.kron(np.arange(n // group_size), np.ones(group_size)).astype(int)
level1_ix = np.kron(np.arange(n // level1_size), np.ones(level1_size)).astype(int)
level2_ix = np.kron(np.arange(n // level2_size), np.ones(level2_size)).astype(int)
Explanation: Construct labels showing which group each observation belongs to at each level.
End of explanation
groups_re = np.sqrt(groups_var) * np.random.normal(size=n // group_size)
level1_re = np.sqrt(level1_var) * np.random.normal(size=n // level1_size)
level2_re = np.sqrt(level2_var) * np.random.normal(size=n // level2_size)
Explanation: Simulate the random effects.
End of explanation
y = groups_re[groups_ix] + level1_re[level1_ix] + level2_re[level2_ix]
y += np.sqrt(resid_var) * np.random.normal(size=n)
Explanation: Simulate the response variable.
End of explanation
df = pd.DataFrame(xmat, columns=["x%d" % j for j in range(p)])
df["y"] = y + xmat[:, 0] - xmat[:, 3]
df["groups_ix"] = groups_ix
df["level1_ix"] = level1_ix
df["level2_ix"] = level2_ix
Explanation: Put everything into a dataframe.
End of explanation
cs = sm.cov_struct.Nested()
dep_fml = "0 + level1_ix + level2_ix"
m = sm.GEE.from_formula("y ~ x0 + x1 + x2 + x3 + x4", cov_struct=cs,
dep_data=dep_fml, groups="groups_ix", data=df)
r = m.fit()
Explanation: Fit the model.
End of explanation
r.cov_struct.summary()
Explanation: The estimated covariance parameters should be similar to groups_var, level1_var, etc. as defined above.
End of explanation |
11,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fast Sign Adversary Generation Example
This notebook demonstrates how to find adversarial examples using the symbolic API and integration with NumPy
Reference
Step1: Build Network
note
Step2: Prepare useful data for the network
Step3: Init weight
Step4: Train a network
Step5: Get the perturbation by using the fast sign method, and check how the validation accuracy changes.
See that the validation set was almost entirely correct before the perturbations, but after the perturbations, it is much worse than random guessing.
Step6: Visualize an example after perturbation.
Note that the prediction is consistently incorrect. | Python Code:
%matplotlib inline
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from mxnet.test_utils import get_mnist_iterator
Explanation: Fast Sign Adversary Generation Example
This notebook demonstrates how to find adversarial examples using the symbolic API and integration with NumPy
Reference:
[1] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).
https://arxiv.org/abs/1412.6572
End of explanation
dev = mx.cpu()
batch_size = 100
train_iter, val_iter = get_mnist_iterator(batch_size=batch_size, input_shape = (1,28,28))
# input
data = mx.symbol.Variable('data')
# first conv
conv1 = mx.symbol.Convolution(data=data, kernel=(5,5), num_filter=20)
tanh1 = mx.symbol.Activation(data=conv1, act_type="tanh")
pool1 = mx.symbol.Pooling(data=tanh1, pool_type="max",
kernel=(2,2), stride=(2,2))
# second conv
conv2 = mx.symbol.Convolution(data=pool1, kernel=(5,5), num_filter=50)
tanh2 = mx.symbol.Activation(data=conv2, act_type="tanh")
pool2 = mx.symbol.Pooling(data=tanh2, pool_type="max",
kernel=(2,2), stride=(2,2))
# first fullc
flatten = mx.symbol.Flatten(data=pool2)
fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.symbol.Activation(data=fc1, act_type="tanh")
# second fullc
fc2 = mx.symbol.FullyConnected(data=tanh3, num_hidden=10)
def Softmax(theta):
max_val = np.max(theta, axis=1, keepdims=True)
tmp = theta - max_val
exp = np.exp(tmp)
norm = np.sum(exp, axis=1, keepdims=True)
return exp / norm
def LogLossGrad(alpha, label):
grad = np.copy(alpha)
for i in range(alpha.shape[0]):
grad[i, int(label[i])] -= 1.
return grad
Explanation: Build Network
note: in this network, we will calculate softmax, gradient in numpy
End of explanation
data_shape = (batch_size, 1, 28, 28)
arg_names = fc2.list_arguments() # 'data'
arg_shapes, output_shapes, aux_shapes = fc2.infer_shape(data=data_shape)
arg_arrays = [mx.nd.zeros(shape, ctx=dev) for shape in arg_shapes]
grad_arrays = [mx.nd.zeros(shape, ctx=dev) for shape in arg_shapes]
reqs = ["write" for name in arg_names]
model = fc2.bind(ctx=dev, args=arg_arrays, args_grad = grad_arrays, grad_req=reqs)
arg_map = dict(zip(arg_names, arg_arrays))
grad_map = dict(zip(arg_names, grad_arrays))
data_grad = grad_map["data"]
out_grad = mx.nd.zeros(model.outputs[0].shape, ctx=dev)
Explanation: Prepare useful data for the network
End of explanation
for name in arg_names:
if "weight" in name:
arr = arg_map[name]
arr[:] = mx.rnd.uniform(-0.07, 0.07, arr.shape)
def SGD(weight, grad, lr=0.1, grad_norm=batch_size):
weight[:] -= lr * grad / batch_size
def CalAcc(pred_prob, label):
pred = np.argmax(pred_prob, axis=1)
return np.sum(pred == label) * 1.0
def CalLoss(pred_prob, label):
loss = 0.
for i in range(pred_prob.shape[0]):
loss += -np.log(max(pred_prob[i, int(label[i])], 1e-10))
return loss
Explanation: Init weight
End of explanation
num_round = 4
train_acc = 0.
nbatch = 0
for i in range(num_round):
train_loss = 0.
train_acc = 0.
nbatch = 0
train_iter.reset()
for batch in train_iter:
arg_map["data"][:] = batch.data[0]
model.forward(is_train=True)
theta = model.outputs[0].asnumpy()
alpha = Softmax(theta)
label = batch.label[0].asnumpy()
train_acc += CalAcc(alpha, label) / batch_size
train_loss += CalLoss(alpha, label) / batch_size
losGrad_theta = LogLossGrad(alpha, label)
out_grad[:] = losGrad_theta
model.backward([out_grad])
# data_grad[:] = grad_map["data"]
for name in arg_names:
if name != "data":
SGD(arg_map[name], grad_map[name])
nbatch += 1
#print(np.linalg.norm(data_grad.asnumpy(), 2))
train_acc /= nbatch
train_loss /= nbatch
print("Train Accuracy: %.2f\t Train Loss: %.5f" % (train_acc, train_loss))
Explanation: Train a network
End of explanation
val_iter.reset()
batch = val_iter.next()
data = batch.data[0]
label = batch.label[0]
arg_map["data"][:] = data
model.forward(is_train=True)
theta = model.outputs[0].asnumpy()
alpha = Softmax(theta)
print("Val Batch Accuracy: ", CalAcc(alpha, label.asnumpy()) / batch_size)
#########
grad = LogLossGrad(alpha, label.asnumpy())
out_grad[:] = grad
model.backward([out_grad])
noise = np.sign(data_grad.asnumpy())
arg_map["data"][:] = data.asnumpy() + 0.15 * noise
model.forward(is_train=True)
raw_output = model.outputs[0].asnumpy()
pred = Softmax(raw_output)
print("Val Batch Accuracy after pertubation: ", CalAcc(pred, label.asnumpy()) / batch_size)
Explanation: Get the perturbation by using the fast sign method, and check how the validation accuracy changes.
See that the validation set was almost entirely correct before the perturbations, but after the perturbations, it is much worse than random guessing.
End of explanation
import random as rnd
idx = rnd.randint(0, 99)
images = data.asnumpy() + 0.15 * noise
plt.imshow(images[idx, :].reshape(28,28), cmap=cm.Greys_r)
print("true: %d" % label.asnumpy()[idx])
print("pred: %d" % np.argmax(pred, axis=1)[idx])
Explanation: Visualize an example after perturbation.
Note that the prediction is consistently incorrect.
End of explanation |