Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k) |
---|---|---|
13,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Http Response
Usage
To construct a response packet you have a variety of facilities available.
Previously we saw how to parse HTTP responses using HttpParser. Of course, we can also construct a response packet using the HttpParser class.
Step1: But this is a painful way to construct responses. Hence, other high-level abstractions are available.
For example, the following one-liner will give us the same response packet.
Step2: Notice how okResponse will always add a Content-Length header for you.
You can also customize other headers
Step3: Let's add some content to our response packet
Step4: Note how okResponse automatically added a Content-Length header for us.
Depending upon the --min-compression-length flag, okResponse will also perform compression for the content.
For example, the default value for min compression length is 20.
Step5: You can pass a custom value for the min_compression_length kwarg to okResponse.
Step6: Internally, okResponse uses build_http_response and hence you can also pass any argument accepted by build_http_response. For example, it supports a conn_close argument which will add a Connection: close header.
Step7: Chunked Encoding
You can also send chunked encoded responses.
Step8: If we omit the min_compression_length flag | Python Code:
from proxy.http.parser import HttpParser, httpParserTypes
from proxy.common.constants import HTTP_1_1
response = HttpParser(httpParserTypes.RESPONSE_PARSER)
response.code = b'200'
response.reason = b'OK'
response.version = HTTP_1_1
print(response.build_response())
Explanation: Http Response
Usage
To construct a response packet you have a variety of facilities available.
Previously we saw how to parse HTTP responses using HttpParser. Of course, we can also construct a response packet using the HttpParser class.
End of explanation
from proxy.http.responses import okResponse
print(okResponse().tobytes())
Explanation: But this is a painful way to construct responses. Hence, other high-level abstractions are available.
For example, the following one-liner will give us the same response packet.
End of explanation
response = okResponse(
headers={
b'X-Custom-Header': b'my value',
},
)
print(response.tobytes())
Explanation: Notice how okResponse will always add a Content-Length header for you.
You can also customize other headers
End of explanation
response = okResponse(
content=b'Hello World',
headers={
b'X-Custom-Header': b'my value',
},
)
print(response.tobytes())
Explanation: Let's add some content to our response packet
End of explanation
response = okResponse(
content=b'H' * 21,
headers={
b'X-Custom-Header': b'my value',
},
)
print(response.tobytes())
Explanation: Note how okResponse automatically added a Content-Length header for us.
Depending upon the --min-compression-length flag, okResponse will also perform compression for the content.
For example, the default value for min compression length is 20.
End of explanation
response = okResponse(
content=b'H' * 21,
headers={
b'Host': b'jaxl.com',
},
min_compression_length=21,
)
print(response.tobytes())
Explanation: You can pass a custom value for min_compression_length kwarg to okResponse.
End of explanation
response = okResponse(
content=b'Hello World',
headers={
b'Host': b'jaxl.com',
},
conn_close=True,
)
print(response.tobytes())
Explanation: Internally, okResponse uses build_http_response and hence you can also pass any argument accepted by build_http_response. For example, it supports a conn_close argument which will add a Connection: close header. Simply pass conn_close=True.
End of explanation
from proxy.http.parser import ChunkParser
chunks = ChunkParser.to_chunks(b'Hello World', chunk_size=5)
response = okResponse(
content=chunks,
headers={
b'Transfer-Encoding': b'chunked',
},
# Avoid compressing chunks for demo purposes here
# Ideally you should omit this flag and send
# compressed chunks.
min_compression_length=len(chunks),
)
print(response.tobytes())
Explanation: Chunked Encoding
You can also send chunked encoded responses.
End of explanation
response = okResponse(
content=chunks,
headers={
b'Transfer-Encoding': b'chunked',
},
)
print(response.tobytes())
Explanation: If we omit the min_compression_length flag
End of explanation |
13,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Function
Function basic
Function define
def use
argument define
Step1: return
Marks the end of a function
If a value is given with the return, that value is handed back to the caller as the function terminates
If return is not written explicitly, the function returns after its last line executes
Step2: e.g.) string-format function
Step3: 1st class citizen
Everything is an object!
Numbers, strings, tuples, lists, dictionaries, and so on
Functions are included as well
Being a 1st-class citizen means a function can be assigned to a variable, passed as an argument to another function, and returned from a function
Step4: Recursive Function
A recursive function calls itself internally
Anything that can be written as a recurrence relation can be expressed concisely with recursion
A termination condition is essential (without one, the calls repeat forever)
The code is concise, but the call overhead is large
Step5: Lambda Function
An anonymous function expressed as a single statement
An anonymous function is a simple function that has only a body and no name
Useful when a piece of logic is needed only once in the code and is not worth defining as a named function
Step6: filter, map, reduce
Three representative functions for which lambda is especially useful
They are also basic building blocks of functional programming
filter | Python Code:
def add(num1, num2):
return num1 + num2
result = add(232, 323)
print result
print add('abcd', 'efg')
# It is good when the function name makes its purpose evident.
def test_substraction(a, b):
return a - b
print test_substraction(5, 3)
#parameter (argument)
#any Python object can be passed: int, string, float, list, etc.
def len2(string):
return len(string)
def sum2(nums):
return sum(nums)
print len2('test')
print sum2([1, 2, 3, 4])
#optional parameter
#a default value can be specified; if no argument is passed, the predefined default value is used
def print_hello(nums = 'hello world'):
return 1 + 3 + 4
print_hello('hello world')
print_hello()
def increment_by(a, b = 1):
return a + b
print increment_by(45)
print increment_by(34, 10)
Explanation: Function
Function basic
Function define
def use
argument define
:(colon)
body(code)
function name
End of explanation
def simple():
return 1
a = simple()
print a
def just_return():
a = 1
b = 1
return
c = just_return()
print c
def add_sub(a, b):
return a+b, a-b
add_sub(9, 3)
def c_to_f(c):
f = 1.8 * c + 32
return f
print c_to_f(36.5)
def odd_sum(nums):
odd_sum = 0
for i in nums:
if i % 2 == 1:
odd_sum += i
return odd_sum
def odd_sum2(nums):
return sum([i for i in nums if i % 2 == 1])
a = [1, 2, 3, 7, 8, 10, 11, 13, 15, 29, 100, 201, 300]
print odd_sum(a)
print odd_sum2(a)
def max_val(nums):
mval = nums[0]
for i in nums:
if mval < i:
mval = i
return mval
print max_val(a)
def factorial(n):
mul = 1
for i in range(1, n + 1):
mul *= i
return mul
factorial(6)
Explanation: return
Marks the end of a function
If a value is given with the return, that value is handed back to the caller as the function terminates
If return is not written explicitly, the function returns after its last line executes
End of explanation
str1 = 'Hi my name is {} and the weather is {}'.format('younghyo', 'clear')
print str1
Explanation: e.g.) string-format function
End of explanation
def test1():
print 23
def run_something(func):
func()
test2 = test1
test2()
print test1, type(test1)
run_something(test1)
def bubble_sort():
pass
def quick_sort():
pass
def sort(sort_method):
return sort_method()
sort(bubble_sort)
sort(quick_sort)
Explanation: 1st class citizen
Everything is an object!
Numbers, strings, tuples, lists, dictionaries, and so on
Functions are included as well
Being a 1st-class citizen means a function can be assigned to a variable, passed as an argument to another function, and returned from a function (a short sketch of the returning case follows below)
End of explanation
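As a complement (not part of the original notebook), the explanation above also says a function can be returned from another function; the code cell above only demonstrates assignment and argument passing, so here is a minimal sketch of the returning case:
def make_adder(n):
    # build and return a new function that adds n to its argument
    def adder(x):
        return x + n
    return adder
add5 = make_adder(5)
print(add5(10))  # prints 15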
def factorial(n):
mul = 1
for i in range(2, n+1):
mul *= i
return mul
assert(factorial(5) == 120)
#fibonacci sequence
def fibonacci(n):
fibo = [1, 1]
for i in range(2, n):
fibo.append(fibo[i-1] + fibo[i-2])
return fibo
print fibonacci(5)
#fibonacci recursive
def recursive_fibonacci(n):
if n == 1 or n ==2:
return 1
return recursive_fibonacci(n-1) + recursive_fibonacci(n-2)
print recursive_fibonacci(5)
Explanation: Recursive Function
A recursive function calls itself internally
Anything that can be written as a recurrence relation can be expressed concisely with recursion
A termination condition is essential (without one, the calls repeat forever)
The code is concise, but the call overhead is large
End of explanation
def square(x):
return x ** 2
#lambda x : x ** 2
square2 = lambda x : x ** 2
print square(4), square2(4)
add3 = lambda x, y : x + y
print add3(20, 30)
Explanation: Lambda Function
An anonymous function expressed as a single statement
An anonymous function is a simple function that has only a body and no name
Useful when a piece of logic is needed only once in the code and is not worth defining as a named function
End of explanation
nums = range(2, 100)
print filter(None, nums)
print filter(lambda x : x % 2 == 0, nums)
a = ['apple', 'cat', 'banana', 'hat', 'orange', 'carrot', 'python']
print filter(lambda x : len(x) <=5, a)
print [x for x in a if len(x) <=5]
nums = range(2, 20)
print map(lambda x : x ** 2, nums)
nums = [1, 2, 9, 8, 5, 4, 7, 10, 3]
print reduce(lambda x, y : x + y, nums)
print reduce(lambda x, y : x if x > y else y, nums)
Explanation: filter, map, reduce
Three representative functions for which lambda is especially useful
They are also basic building blocks of functional programming
filter : keeps only the elements that satisfy a given condition
map : transforms each element according to a given expression, producing a new list
reduce : combines the first two elements with the given operation, then repeats with the result and the next element until the last one
End of explanation |
13,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table>
<tr>
<td width=15%><img src="./img/UGA.png"></img></td>
<td><center><h1>Introduction to Python for Data Sciences</h1></center></td>
<td width=15%><a href="http
Step1: Support Vector Machines (SVM) are based on learning a vector $w$ and an intercept $b$ such that the hyperplane $w^T x - b = 0$ separates the data i.e. $a$ belongs to one class if $w^T a - b > 0$ and the other elsewhere.
They were later extended to Kernel methods, where $\kappa(w, a) - b = 0$ is now the separating curve and $\kappa$ is the kernel, typically
Step3: The following illustration can be found in the Python Data Science Handbook by Jake VanderPlas.
Step4: We see clearly that the linear SVM seeks at maximizing the margin between the hyperplane and the two well defined classes from the data.
Non-separable data
In real cases, the data is usually not linearly separable as before.
Step5: Let us use the same linear SVM classifier. Obviously, there are misclassified points, the model is thus learnt not by maximizing the margin (which does not exist anymore) but by minimizing a penalty over misclassified data. This penalty takes the form of an allowance margin controlled by a parameter $C$. The smaller $C$ the more inclusive the margin. Finding a good value for $C$ is up to the data scientist.
Step6: To find out which value of $C$ to use or globally the performance of the classifier, one can use Scikit Learn's classification metrics, for instance the confusion matrix.
Step7: It can also be plotted in a fancier way with seaborn.
Step8: Kernels
When the separation between classes is not linear, kernels may be used to draw separating curves instead of lines. The most popular is the Gaussian rbf.
Step9: Let us compare the linear and rbf training error using the zero one loss (the proportion of misclassified examples).
Step10: Multiple classes
Where there are multiple classes (as in the iris dataset of the Pandas notebook), different strategies can be adopted
Step11: Other classifiers
The main classifiers from Scikit learn are
Step12: One immediate problem here is that the features are not numeric (not floats). Thankfully, Scikit Learn provides encoders to convert categorical (aka nominal, discrete) features to numerical ones.
Step13: Even numerical values were encoded, as we are going to normalize, it is not really important.
The normalization is done by removing the mean and equalizing the variance per feature; in addition, we are going to add an intercept.
Step14: Regression and Feature selection with the Lasso
The lasso problem is finding a regressor $w$ that minimizes
$$ \frac{1}{2 n_{samples}} \|X w - y\|^2_2 + \alpha \|w\|_1 $$
and is popular for prediction as it simultaneously selects features thanks to the $\ell_1$-term. The greater $\alpha$ the fewer features.
Step15: We can observe the regressor $w$ provided by the model, notice the sparsity.
Step16: We can observe which coefficients are put to $0$ and which ones are positively/negatively correlated.
Step17: Let us take a look at our predictions.
Step18: Regularization path
Selecting a good parameter $\alpha$ is the role of the data scientist. For instance, an easy way to do this is the following. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
%matplotlib inline
# we create 40 separable points in R^2 around 2 centers (random_state=6 is a seed so that the set is separable)
X, y = make_blobs(n_samples=40, n_features=2, centers=2 , random_state=6)
print(X[:5,:],y[:5]) # print the first 5 points and labels
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
Explanation: <table>
<tr>
<td width=15%><img src="./img/UGA.png"></img></td>
<td><center><h1>Introduction to Python for Data Sciences</h1></center></td>
<td width=15%><a href="http://www.iutzeler.org" style="font-size: 16px; font-weight: bold">Franck Iutzeler</a> </td>
</tr>
</table>
<br/><br/>
<center><a style="font-size: 40pt; font-weight: bold">Chap. 4 - Scikit Learn </a></center>
<br/><br/>
2- Supervised Learning
In this session, we will investigate some examples of how to deal with popular learning problems using standard algorithms. Many other problems and algorithms exist, so this course is not at all exhaustive.
Classification
End of explanation
from sklearn.svm import SVC # Support vector classifier i.e. Classifier by SVM
modelSVMLinear = SVC(kernel="linear")
modelSVMLinear.fit(X,y)
Explanation: Support Vector Machines (SVM) are based on learning a vector $w$ and an intercept $b$ such that the hyperplane $w^T x - b = 0$ separates the data i.e. $a$ belongs to one class if $w^T a - b > 0$ and the other elsewhere.
They were later extended to Kernel methods, where $\kappa(w, a) - b = 0$ is now the separating curve and $\kappa$ is the kernel, typically:
* linear: $\kappa(x,y)= x^T y$ (original SVM)
* polynomial: $\kappa(x,y)= (x^T y)^d$
* Gaussian radial basis function (rbf): $\kappa(x,y)= \exp( - \gamma \| x - y \|^2 )$ (a small NumPy sketch of this kernel is given right after this explanation)
End of explanation
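As a complement (not part of the original notebook), the rbf formula above can be evaluated directly with NumPy; a minimal sketch, assuming gamma = 1.0:
import numpy as np
def rbf_kernel(x, y, gamma=1.0):
    # Gaussian rbf: kappa(x, y) = exp(-gamma * ||x - y||^2)
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))
print(rbf_kernel([0.0, 0.0], [1.0, 1.0]))  # exp(-2) ~ 0.135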
def plot_svc_decision_function(model, ax=None, plot_support=True):
Plot the decision function for a 2D SVC
if ax is None:
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
P = model.decision_function(xy).reshape(X.shape)
# plot decision boundary and margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
if plot_support:
ax.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, linewidth=1, facecolors='none');
ax.set_xlim(xlim)
ax.set_ylim(ylim)
plt.scatter(X[:, 0], X[:, 1], c=y , cmap=plt.cm.Paired)
plot_svc_decision_function(modelSVMLinear)
Explanation: The following illustration can be found in the Python Data Science Handbook by Jake VanderPlas.
End of explanation
# we create points in R^2 around 2 centers (random_state=48443 is a seed so that the set is *not* separable)
X, y = make_blobs(n_samples=100, n_features=2, centers=2 , random_state=48443)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
Explanation: We see clearly that the linear SVM seeks to maximize the margin between the hyperplane and the two well-defined classes from the data.
Non-separable data
In real cases, the data is usually not linearly separable as before.
End of explanation
from sklearn.model_selection import train_test_split # sklearn > ...
XTrain, XTest, yTrain, yTest = train_test_split(X,y,test_size = 0.5) # split data in two
model1 = SVC(kernel="linear",C=0.01)
model1.fit(XTrain,yTrain)
model2 = SVC(kernel="linear",C=100)
model2.fit(XTrain,yTrain)
plt.scatter(XTrain[:, 0], XTrain[:, 1], c=yTrain , cmap=plt.cm.Paired)
plot_svc_decision_function(model1)
plt.title("C = 0.01")
plt.scatter(XTrain[:, 0], XTrain[:, 1], c=yTrain , cmap=plt.cm.Paired)
plot_svc_decision_function(model2)
plt.title("C = 100")
Explanation: Let us use the same linear SVM classifier. Obviously there are misclassified points; the model is thus learnt not by maximizing the margin (which no longer exists) but by minimizing a penalty over misclassified data. This penalty takes the form of an allowance margin controlled by a parameter $C$. The smaller $C$, the more inclusive the margin. Finding a good value for $C$ is up to the data scientist.
End of explanation
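As a side note (not from the original notebook), one common way to search for a good $C$ is cross-validated grid search; a minimal sketch with scikit-learn's GridSearchCV, where the candidate grid is only an assumption:
from sklearn.model_selection import GridSearchCV
param_grid = {'C': [0.01, 0.1, 1, 10, 100]}  # assumed grid, adjust for your data
search = GridSearchCV(SVC(kernel="linear"), param_grid, cv=5)
search.fit(XTrain, yTrain)
print(search.best_params_, search.best_score_)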
from sklearn.metrics import confusion_matrix
yFit1 = model1.predict(XTest)
yFit2 = model2.predict(XTest)
mat1 = confusion_matrix(yTest, yFit1)
mat2 = confusion_matrix(yTest, yFit2)
print('Model with C = 0.01')
print(mat1)
print("Model with C = 100")
print(mat2)
Explanation: To find out which value of $C$ to use, or more generally to assess the performance of the classifier, one can use Scikit Learn's classification metrics, for instance the confusion matrix.
End of explanation
import seaborn as sns
sns.heatmap(mat1, square=True, annot=True ,cbar=False)
plt.ylabel('true label')
plt.xlabel('predicted label')
Explanation: It can also be plotted in a fancier way with seaborn.
End of explanation
from sklearn.datasets import make_moons
X,y = make_moons(noise=0.1)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
modelLinear = SVC(kernel="linear")
modelLinear.fit(X,y)
modelRbf = SVC(kernel="rbf")
modelRbf.fit(X,y)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plot_svc_decision_function(modelLinear)
plot_svc_decision_function(modelRbf)
plt.title("The two models superposed")
Explanation: Kernels
When the separation between classes is not linear, kernels may be used to draw separating curves instead of lines. The most popular is the Gaussian rbf.
End of explanation
from sklearn.metrics import zero_one_loss
yFitLinear = modelLinear.predict(X)
yFitRbf = modelRbf.predict(X)
print("0/1 loss -- Linear: {:.3f} Rbf: {:.3f}".format(zero_one_loss(y, yFitLinear),zero_one_loss(y, yFitRbf)))
Explanation: Let us compare the linear and rbf training error using the zero one loss (the proportion of misclassified examples).
End of explanation
import pandas as pd
import numpy as np
iris = pd.read_csv('data/iris.csv')
classes = pd.DataFrame(iris["species"])
features = iris.drop(["species","sepal_length","sepal_width"],axis=1)
classes.sample(6)
features.sample(6)
XTrain, XTest, yTrain, yTest = train_test_split(features,classes,test_size = 0.5)
from sklearn.multiclass import OneVsRestClassifier
yPred = OneVsRestClassifier(SVC()).fit(XTrain, yTrain).predict(XTest)
print(yPred) # Note the classes are not number but everything went as expected
class_labels= ['virginica' , 'setosa' , 'versicolor']
sns.heatmap(confusion_matrix(yTest, yPred), square=True, annot=True ,cbar=False, xticklabels= class_labels, yticklabels=class_labels)
plt.ylabel('true label')
plt.xlabel('predicted label')
Explanation: Multiple classes
Where there are multiple classes (as in the iris dataset of the Pandas notebook), different strategies can be adopted:
* Transforming the multiclass problem into a binary one by looking at the one-vs-rest problem (for each class, construct a binary classifier between it and the rest) or the one-vs-one problem (where each pair of classes is considered separately). After this transformation, standard binary classifiers can be used (a short one-vs-one sketch is shown right after this explanation).
* Using dedicated algorithms such as decision trees
The corresponding algorithms can be found in the multiclass module documentation.
We are going to illustrate this by the iris 3-class classification problem using only the 2 petal features (width and length, this is only so that the feature vector is 2D and easy to visualize).
End of explanation
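As a complement (not in the original notebook), the one-vs-one strategy mentioned above can be used with the same pattern; a minimal sketch reusing the iris split built above:
from sklearn.multiclass import OneVsOneClassifier
# one binary SVC per pair of classes (3 classifiers for the 3 iris species)
yPredOvO = OneVsOneClassifier(SVC()).fit(XTrain, yTrain).predict(XTest)
print(yPredOvO[:10])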
import pandas as pd
import numpy as np
student = pd.read_csv('data/student-mat.csv')
student.head()
target = pd.DataFrame(student["G3"])
features = student.drop(["G3"],axis=1)
Explanation: Other classifiers
The main classifiers from Scikit learn are: Linear SVM, RBF SVM (as already seen), Nearest Neighbors, Gaussian Process, Decision Tree, Random Forest, Neural Net, AdaBoost, Naive Bayes, QDA.
Use is:
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
GaussianProcessClassifier(1.0 * RBF(1.0), warm_start=True),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis()]
Regression
Let us consider the problem of predicting real values from a set of features.
We will consider the <a href="http://archive.ics.uci.edu/ml/datasets/Student+Performance">student performance</a> dataset. The goal is to predict the final grade from the other information; the variable descriptions can be found in the documentation.
End of explanation
from sklearn.preprocessing import LabelEncoder
lenc = LabelEncoder()
num_features = features.apply(lenc.fit_transform)
num_features.head()
Explanation: One immediate problem here is that the features are not numeric (not floats). Thankfully, Scikit Learn provides encoders to convert categorical (aka nominal, discrete) features to numerical ones.
End of explanation
from sklearn.preprocessing import StandardScaler, add_dummy_feature
scaler = StandardScaler()
normFeatures = add_dummy_feature(scaler.fit_transform(num_features))
preproData = pd.DataFrame(normFeatures , columns=[ "intercept" ] + list(num_features.columns) )
preproData.describe().T
Explanation: Even the numerical values were encoded; since we are going to normalize, it is not really important.
The normalization is done by removing the mean and equalizing the variance per feature; in addition, we are going to add an intercept.
End of explanation
from sklearn.model_selection import train_test_split # sklearn > ...
from sklearn.linear_model import Lasso
XTrain, XTest, yTrain, yTest = train_test_split(preproData,target,test_size = 0.25)
model = Lasso(alpha=0.1)
model.fit(XTrain,yTrain)
Explanation: Regression and Feature selection with the Lasso
The lasso problem is finding a regressor $w$ that minimizes
$$ \frac{1}{2 n_{samples}} \|X w - y\|^2_2 + \alpha \|w\|_1 $$
and is popular for prediction as it simultaneously selects features thanks to the $\ell_1$-term. The greater $\alpha$ the fewer features.
End of explanation
model.coef_
Explanation: We can observe the regressor $w$ provided by the model; notice the sparsity.
End of explanation
print("Value Feature")
for idx,val in enumerate(model.coef_):
print("{:6.3f} {}".format(val,preproData.columns[idx]))
Explanation: We can observe which coefficients are put to $0$ and which ones are positively/negatively correlated.
End of explanation
targetPred = model.predict(XTest)
print("Predicted True")
for idx,val in enumerate(targetPred):
print("{:4.1f} {:.0f}".format(val,float(yTest.iloc[idx])))
Explanation: Let us take a look at our predictions.
End of explanation
n_test = 15
alpha_tab = np.logspace(-10,1,base=2,num = n_test)
print(alpha_tab)
trainError = np.zeros(n_test)
testError = np.zeros(n_test)
featureNum = np.zeros(n_test)
for idx,alpha in enumerate(alpha_tab):
model = Lasso(alpha=alpha)
model.fit(XTrain,yTrain)
yPredTrain = model.predict(XTrain)
yPredTest = model.predict(XTest)
trainError[idx] = np.linalg.norm(yPredTrain-yTrain["G3"].values)/yTrain.count()
testError[idx] = np.linalg.norm(yPredTest-yTest["G3"].values)/yTest.count()
featureNum[idx] = sum(model.coef_!=0)
alpha_opt = alpha_tab[np.argmin(testError)]
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
plt.subplot(311)
plt.xscale("log")
plt.plot(alpha_tab, trainError,label="train error")
plt.xlim([min(alpha_tab),max(alpha_tab)])
plt.legend()
plt.xticks([])
plt.axvline(x=alpha_opt)
plt.ylabel("error")
plt.subplot(312)
plt.xscale("log")
plt.plot(alpha_tab, testError,'r',label="test error")
plt.xlim([min(alpha_tab),max(alpha_tab)])
#plt.ylim([0.19, 0.21])
plt.legend()
plt.axvline(x=alpha_opt)
plt.xticks([])
plt.ylabel("error")
plt.subplot(313)
plt.xscale("log")
plt.scatter(alpha_tab, featureNum)
plt.xlim([min(alpha_tab),max(alpha_tab)])
plt.ylim([0,28])
plt.axvline(x=alpha_opt)
plt.ylabel("nb. of features")
plt.xlabel("alpha")
Explanation: Regularization path
Selecting a good parameter $\alpha$ is the role of the data scientist. For instance, an easy way to do this is the following.
End of explanation |
13,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This exercise will test your ability to read a data file and understand statistics about the data.
In later exercises, you will apply techniques to filter the data, build a machine learning model, and iteratively improve your model.
The course examples use data from Melbourne. To ensure you can apply these techniques on your own, you will have to apply them to a new dataset (with house prices from Iowa).
Exercises
Run the following cell to set up code-checking, which will verify your work as you go.
Step1: Step 1
Step2: Step 2 | Python Code:
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex2 import *
print("Setup Complete")
Explanation: This exercise will test your ability to read a data file and understand statistics about the data.
In later exercises, you will apply techniques to filter the data, build a machine learning model, and iteratively improve your model.
The course examples use data from Melbourne. To ensure you can apply these techniques on your own, you will have to apply them to a new dataset (with house prices from Iowa).
Exercises
Run the following cell to set up code-checking, which will verify your work as you go.
End of explanation
import pandas as pd
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
# Fill in the line below to read the file into a variable home_data
home_data = ____
# Check your answer
step_1.check()
#%%RM_IF(PROD)%%
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
# Fill in the line below to read the file into a variable home_data
home_data = 0
# Call line below with no argument to check that you've loaded the data correctly
step_1.assert_check_failed()
#%%RM_IF(PROD)%%
# Fill in the line below to read the file into a variable home_data
home_data = pd.DataFrame()
# Call line below with no argument to check that you've loaded the data correctly
step_1.assert_check_failed()
home_data = pd.read_csv(iowa_file_path)
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
Explanation: Step 1: Loading Data
Read the Iowa data file into a Pandas DataFrame called home_data.
End of explanation
# Print summary statistics in next line
____
# What is the average lot size (rounded to nearest integer)?
avg_lot_size = ____
# As of today, how old is the newest home (current year - the date in which it was built)
newest_home_age = ____
# Check your answers
step_2.check()
#step_2.hint()
#step_2.solution()
Explanation: Step 2: Review The Data
Use the command you learned to view summary statistics of the data. Then fill in variables to answer the following questions
End of explanation |
13,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook 5
In this notebook I add additional features to my dataframe that help explain the variation in the price of BTC. Furthermore, multiple regression will control for these features with regards to their effect on BTC.
You will see below that a lot of the data I initially obtained via Quandl has been replaced with data from a website called blockchain.info
Step1: In this notebook, I will get additional features from blockchain.info.
I came across blockchain.info that has a PLETHORA of data regarding bitcoin!
As such, I will be getting all my information from here.
Step2: The above dataframe represents where we started. I will be replacing volume and weighted price moving forward.
I will also be dropping open, high, low, and close, in addition to compound_sent, which represents a univariate metric for sentiment, whereas I am concerned with multivariate metrics.
Step3: Adding Total Number of confirmed transactions per day
Step4: Adding No. of Unique BTC Addresses BTC
Step5: Adding Market Cap
Step6: Adding hash rate
Step7: Adding mempool transaction count
Step8: adding estiamted USD transaction value
Step9: Adding Average market price across all exchanges.
Step10: We can see from above that mkt_price and mkt_cap are perfectly collinear. We will drop mkt_cap. We will also drop neu_sent as it has an extremely large VIF. | Python Code:
from sklearn.model_selection import train_test_split
%run helper_functions.py
%run filters.py
%run plotly_functions.py
%run master_func.py
%run btc_info_df.py
plt.style.use('fivethirtyeight')
%autosave 120
import quandl
from datetime import date
from tabulate import tabulate
from collections import Counter
from IPython.display import Image
import math
import string
%matplotlib inline
plt.rcParams["figure.figsize"] = (15,10)
plt.rcParams["xtick.labelsize"] = 16
plt.rcParams["ytick.labelsize"] = 16
plt.rcParams["axes.labelsize"] = 20
plt.rcParams['legend.fontsize'] = 20
plt.style.use('fivethirtyeight')
pd.set_option('display.max_colwidth', -1)
import plotly.plotly as py
import plotly.graph_objs as go
import spacy
nlp = spacy.load("en")
nltk_stopwords = stopwords.words("english")+["rt", "via","-ยป","--ยป","--","---","-->","<--","->","<-","ยซ--","ยซ","ยซ-","ยป","ยซยป", " โ", "โ"]
punc = '#!"%&\'()*+,-./:;<=>?@[\\]^_`{|}~'
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from patsy import dmatrices
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
Explanation: Notebook 5
In this notebook I add additional features to my dataframe that help explain the variation in the price of BTC. Furthermore, multiple regression will control for these features with regards to their effect on BTC.
You will see below that a lot of the data I initially obtained via Quandl has been replaced with data from a website called blockchain.info
End of explanation
modelling_df = unpickle_object("modelling_df_V1.pkl")
modelling_df.head()
Explanation: In this notebook, I will get additional features from blockchain.info.
I came across blockchain.info that has a PLETHORA of data regarding bitcoin!
As such, I will be getting all my information from here.
End of explanation
modelling_df.drop(["Close","High", "Low", "Open", "compound_sent", "Volume (Currency)", "Weighted Price"], axis=1, inplace=True)
modelling_df.head()
Explanation: The above dataframe represents where we started. I will be replacing volume and weighted price moving forward.
I will also be dropping open, high, low, and close, in addition to compound_sent, which represents a univariate metric for sentiment, whereas I am concerned with multivariate metrics.
End of explanation
tot_num_trans = pd.read_csv("/Users/ibrahimgabr/Downloads/project-5/Data/blockchain.info/total_num_trans_per_day_btc.csv", header=None)
subset_tot_num_trans = clean_blockchain_csv(tot_num_trans, ["date", "tot_num_trans"])
df1 = pd.merge(modelling_df, subset_tot_num_trans, on='date', how="outer")
df1.head()
Explanation: Adding Total Number of confirmed transactions per day
End of explanation
num_unique_addr = pd.read_csv("/Users/ibrahimgabr/Downloads/project-5/Data/blockchain.info/unique_address_btc.csv", header=None)
subset_num_unique_addr = clean_blockchain_csv(num_unique_addr, ['date', "unique_addr"])
df2 = pd.merge(df1, subset_num_unique_addr, on='date', how="outer")
df2.head()
Explanation: Adding No. of Unique BTC Addresses BTC
End of explanation
mkt_cap = pd.read_csv("/Users/ibrahimgabr/Downloads/project-5/Data/blockchain.info/market_cap_btc.csv", header=None)
subset_mkt_cap = clean_blockchain_csv(mkt_cap, ['date', "mkt_cap"])
df3 = pd.merge(df2, subset_mkt_cap, on='date', how="outer")
df3.head()
Explanation: Adding Market Cap
End of explanation
hash_rate = pd.read_csv("/Users/ibrahimgabr/Downloads/project-5/Data/blockchain.info/hash_rate_btc.csv", header=None)
subset_hash_rate = clean_blockchain_csv(hash_rate, ['date', "hash_rate"])
df4 = pd.merge(df3, subset_hash_rate, on='date', how="outer")
df4.head()
Explanation: Adding hash rate
End of explanation
mempool_trans = pd.read_csv("/Users/ibrahimgabr/Downloads/project-5/Data/blockchain.info/mempool_trans_count_btc.csv", header=None)
subset_mempool_trans = clean_blockchain_csv(mempool_trans, ['date', "mempool_trans"])
subset_mempool_trans['date'] = subset_mempool_trans['date'].apply(lambda x: x.date())
subset_mempool_trans = subset_mempool_trans.groupby("date").sum().reset_index()
del subset_mempool_trans['date']
df5 = pd.concat([df4, subset_mempool_trans], axis=1) #couldnt merge for some reason
df5.head()
Explanation: Adding mempool transaction count
End of explanation
est_USD_tans_val = pd.read_csv("/Users/ibrahimgabr/Downloads/project-5/Data/blockchain.info/estimated-transaction-volume-usd.csv", header=None)
subset_est_USD_tans_val = clean_blockchain_csv(est_USD_tans_val, ['date', "USD_trans_val"])
df6 = pd.merge(df5, subset_est_USD_tans_val, on='date', how="outer")
df6.head()
Explanation: Adding estimated USD transaction value
End of explanation
mkt_price = pd.read_csv("/Users/ibrahimgabr/Downloads/project-5/Data/blockchain.info/market_price_btc.csv", header=None)
subset_mkt_price = clean_blockchain_csv(mkt_price, ['date', "mkt_price"])
df7 = pd.merge(df6, subset_mkt_price, on="date", how="outer")
df7.head()
dates_lst = df7['date']
df7.head()
df7.drop(["date"],axis=1, inplace=True)
features = "+".join(df7.columns[:-1])
y, X = dmatrices('mkt_price ~ ' + features, df7, return_type='dataframe')
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif["features"] = X.columns
vif.round(1) #looks like we are doing great!
df7.corr()
Explanation: Adding Average market price across all exchanges.
End of explanation
df7.drop(["neu_sent", "mkt_cap"],axis=1, inplace=True)
features = "+".join(df7.columns[:-1])
y, X = dmatrices('mkt_price ~ ' + features, df7, return_type='dataframe')
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif["features"] = X.columns
vif.round(1) #looks like we are doing great!
df7.corr()
plot_corr_matrix(df7)
df7.head()
df7.shape
# pickle_object(df7, "blockchain_info_df")
data = unpickle_object("blockchain_info_df.pkl")
data['date'] = dates_lst
percentage_missing(data)
data.set_index('date', inplace=True)
data.head()
Explanation: We can see from above that mkt_price and mkt_cap are perfectly collinear. We will drop mkt_cap. We will also drop neu_sent as it has an extremely large VIF.
End of explanation |
13,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Markov switching dynamic regression models
This notebook provides an example of the use of Markov switching models in statsmodels to estimate dynamic regression models with changes in regime. It follows the examples in the Stata Markov switching documentation, which can be found at http
Step1: Federal funds rate with switching intercept
The first example models the federal funds rate as noise around a constant intercept, but where the intercept changes during different regimes. The model is simply
Step2: From the summary output, the mean federal funds rate in the first regime (the "low regime") is estimated to be $3.7$ whereas in the "high regime" it is $9.6$. Below we plot the smoothed probabilities of being in the high regime. The model suggests that the 1980's was a time-period in which a high federal funds rate existed.
Step3: From the estimated transition matrix we can calculate the expected duration of a low regime versus a high regime.
Step4: A low regime is expected to persist for about fourteen years, whereas the high regime is expected to persist for only about five years.
Federal funds rate with switching intercept and lagged dependent variable
The second example augments the previous model to include the lagged value of the federal funds rate.
$$r_t = \mu_{S_t} + r_{t-1} \beta_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$
where $S_t \in {0, 1}$, and the regime transitions according to
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \
1 - p_{00} & 1 - p_{10}
\end{bmatrix}
$$
We will estimate the parameters of this model by maximum likelihood
Step5: There are several things to notice from the summary output
Step6: Finally, the expected durations of each regime have decreased quite a bit.
Step7: Taylor rule with 2 or 3 regimes
We now include two additional exogenous variables - a measure of the output gap and a measure of inflation - to estimate a switching Taylor-type rule with both 2 and 3 regimes to see which fits the data better.
Because the models can be often difficult to estimate, for the 3-regime model we employ a search over starting parameters to improve results, specifying 20 random search repetitions.
Step8: Due to lower information criteria, we might prefer the 3-state model, with an interpretation of low-, medium-, and high-interest rate regimes. The smoothed probabilities of each regime are plotted below.
Step9: Switching variances
We can also accommodate switching variances. In particular, we consider the model
$$
y_t = \mu_{S_t} + y_{t-1} \beta_{S_t} + \varepsilon_t \quad \varepsilon_t \sim N(0, \sigma_{S_t}^2)
$$
We use maximum likelihood to estimate the parameters of this model
Step10: The first regime is a low-variance regime and the second regime is a high-variance regime. Below we plot the probabilities of being in the low-variance regime. Between 2008 and 2012 there does not appear to be a clear indication of one regime guiding the economy. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
# NBER recessions
from pandas_datareader.data import DataReader
from datetime import datetime
usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))
Explanation: Markov switching dynamic regression models
This notebook provides an example of the use of Markov switching models in statsmodels to estimate dynamic regression models with changes in regime. It follows the examples in the Stata Markov switching documentation, which can be found at http://www.stata.com/manuals14/tsmswitch.pdf.
End of explanation
# Get the federal funds rate data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import fedfunds
dta_fedfunds = pd.Series(fedfunds, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
# Plot the data
dta_fedfunds.plot(title='Federal funds rate', figsize=(12,3))
# Fit the model
# (a switching mean is the default of the MarkovRegression model)
mod_fedfunds = sm.tsa.MarkovRegression(dta_fedfunds, k_regimes=2)
res_fedfunds = mod_fedfunds.fit()
res_fedfunds.summary()
Explanation: Federal funds rate with switching intercept
The first example models the federal funds rate as noise around a constant intercept, but where the intercept changes during different regimes. The model is simply:
$$r_t = \mu_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$
where $S_t \in {0, 1}$, and the regime transitions according to
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \
1 - p_{00} & 1 - p_{10}
\end{bmatrix}
$$
We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \sigma^2$.
The data used in this example can be found at https://www.stata-press.com/data/r14/usmacro.
End of explanation
res_fedfunds.smoothed_marginal_probabilities[1].plot(
title='Probability of being in the high regime', figsize=(12,3));
Explanation: From the summary output, the mean federal funds rate in the first regime (the "low regime") is estimated to be $3.7$ whereas in the "high regime" it is $9.6$. Below we plot the smoothed probabilities of being in the high regime. The model suggests that the 1980's was a time-period in which a high federal funds rate existed.
End of explanation
print(res_fedfunds.expected_durations)
Explanation: From the estimated transition matrix we can calculate the expected duration of a low regime versus a high regime.
End of explanation
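As an aside (not part of the original notebook), the expected duration of regime $i$ follows directly from the transition matrix as $1/(1 - p_{ii})$; a minimal illustration with a made-up transition matrix (these are not the fitted values):
import numpy as np
P = np.array([[0.98, 0.05],
              [0.02, 0.95]])  # hypothetical regime transition probabilities
expected_durations = 1.0 / (1.0 - np.diag(P))
print(expected_durations)  # [50. 20.] -> the regimes persist ~50 and ~20 periods on average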
# Fit the model
mod_fedfunds2 = sm.tsa.MarkovRegression(
dta_fedfunds.iloc[1:], k_regimes=2, exog=dta_fedfunds.iloc[:-1])
res_fedfunds2 = mod_fedfunds2.fit()
res_fedfunds2.summary()
Explanation: A low regime is expected to persist for about fourteen years, whereas the high regime is expected to persist for only about five years.
Federal funds rate with switching intercept and lagged dependent variable
The second example augments the previous model to include the lagged value of the federal funds rate.
$$r_t = \mu_{S_t} + r_{t-1} \beta_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$
where $S_t \in {0, 1}$, and the regime transitions according to
$$ P(S_t = s_t | S_{t-1} = s_{t-1}) =
\begin{bmatrix}
p_{00} & p_{10} \
1 - p_{00} & 1 - p_{10}
\end{bmatrix}
$$
We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma^2$.
End of explanation
res_fedfunds2.smoothed_marginal_probabilities[0].plot(
title='Probability of being in the high regime', figsize=(12,3));
Explanation: There are several things to notice from the summary output:
The information criteria have decreased substantially, indicating that this model has a better fit than the previous model.
The interpretation of the regimes, in terms of the intercept, have switched. Now the first regime has the higher intercept and the second regime has a lower intercept.
Examining the smoothed probabilities of the high regime state, we now see quite a bit more variability.
End of explanation
print(res_fedfunds2.expected_durations)
Explanation: Finally, the expected durations of each regime have decreased quite a bit.
End of explanation
# Get the additional data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import ogap, inf
dta_ogap = pd.Series(ogap, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
dta_inf = pd.Series(inf, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
exog = pd.concat((dta_fedfunds.shift(), dta_ogap, dta_inf), axis=1).iloc[4:]
# Fit the 2-regime model
mod_fedfunds3 = sm.tsa.MarkovRegression(
dta_fedfunds.iloc[4:], k_regimes=2, exog=exog)
res_fedfunds3 = mod_fedfunds3.fit()
# Fit the 3-regime model
np.random.seed(12345)
mod_fedfunds4 = sm.tsa.MarkovRegression(
dta_fedfunds.iloc[4:], k_regimes=3, exog=exog)
res_fedfunds4 = mod_fedfunds4.fit(search_reps=20)
res_fedfunds3.summary()
res_fedfunds4.summary()
Explanation: Taylor rule with 2 or 3 regimes
We now include two additional exogenous variables - a measure of the output gap and a measure of inflation - to estimate a switching Taylor-type rule with both 2 and 3 regimes to see which fits the data better.
Because the models can be often difficult to estimate, for the 3-regime model we employ a search over starting parameters to improve results, specifying 20 random search repetitions.
End of explanation
fig, axes = plt.subplots(3, figsize=(10,7))
ax = axes[0]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-interest rate regime')
ax = axes[1]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-interest rate regime')
ax = axes[2]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-interest rate regime')
fig.tight_layout()
Explanation: Due to lower information criteria, we might prefer the 3-state model, with an interpretation of low-, medium-, and high-interest rate regimes. The smoothed probabilities of each regime are plotted below.
End of explanation
# Get the federal funds rate data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import areturns
dta_areturns = pd.Series(areturns, index=pd.date_range('2004-05-04', '2014-5-03', freq='W'))
# Plot the data
dta_areturns.plot(title='Absolute returns, S&P500', figsize=(12,3))
# Fit the model
mod_areturns = sm.tsa.MarkovRegression(
dta_areturns.iloc[1:], k_regimes=2, exog=dta_areturns.iloc[:-1], switching_variance=True)
res_areturns = mod_areturns.fit()
res_areturns.summary()
Explanation: Switching variances
We can also accommodate switching variances. In particular, we consider the model
$$
y_t = \mu_{S_t} + y_{t-1} \beta_{S_t} + \varepsilon_t \quad \varepsilon_t \sim N(0, \sigma_{S_t}^2)
$$
We use maximum likelihood to estimate the parameters of this model: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma_0^2, \sigma_1^2$.
The application is to absolute returns on stocks, where the data can be found at https://www.stata-press.com/data/r14/snp500.
End of explanation
res_areturns.smoothed_marginal_probabilities[0].plot(
title='Probability of being in a low-variance regime', figsize=(12,3));
Explanation: The first regime is a low-variance regime and the second regime is a high-variance regime. Below we plot the probabilities of being in the low-variance regime. Between 2008 and 2012 there does not appear to be a clear indication of one regime guiding the economy.
End of explanation |
13,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Passband Luminosity
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: And we'll add a single light curve dataset so that we can see how passband luminosities affect the resulting synthetic light curve model.
Step3: Lastly, just to make things a bit easier, we'll turn off limb-darkening and irradiation (reflection) and use blackbody atmospheres.
Step4: Relevant Parameters
The 'pblum_ref' parameter exists for each component-dataset pair and it determines how the intensities for that star in that passband should be scaled, i.e. by the pblum provided by that component ('self') or coupled to the pblum provided by another component.
By default the passband luminosities are coupled (see below for explanations of coupled vs decoupled), with the passband luminosity being defined by the primary component in the system.
Step5: The 'pblum' parameter is only relevant for each component-dataset pair in which pblum_ref=='self'. This component will then have its intensities scaled such that they match the value provided by pblum. In general, a pblum of 4pi will result in an out-of-eclipse flux of ~1.
Step6: NOTE
Step7: Now note that only a single pblum parameter is visible.
Step8: Let's see how changing the value of pblum affects the computed light curve. By default, pblum is set to be 4 pi, giving a total flux for the primary star of ~1.
Since the secondary star in the default binary is identical to the primary star, we'd expect an out-of-eclipse flux of the binary to be ~2.
Step9: If we now set pblum to be only 2 pi, we should expect the entire light curve to be scaled in half.
Step10: And if we halve the temperature of the secondary star - the resulting light curve changes to the new sum of fluxes, where the primary star dominates since the secondary star flux is reduced by a factor of 16, so we expect a total out-of-eclipse flux of ~0.5 + ~0.5/16 = ~0.53.
Step11: Let us undo our changes before we look at decoupled luminosities.
Step12: Decoupled Luminosities
The luminosities are decoupled when pblums are provided for the individual components. To accomplish this, all 'pblum_ref' parameters should be set to 'self'.
Step13: Now we see that both pblums are available and can have different values.
Step14: If we set these to 4pi, then we'd expect each star to contribute 1.0 in flux units, meaning the baseline of the light curve should be at approximately 2.0
Step15: Now let's make a significant temperature-ratio by making a very cool secondary star. Since the luminosities are decoupled - this temperature change won't affect the resulting light curve very much (compare this to the case above with coupled luminosities). What is happening here is that even though the secondary star is cooler, its luminosity is being rescaled to the same value as the primary star, so the eclipse depth doesn't change (you would see a similar lack-of-effect if you changed the radii).
Step16: In most cases you will not want decoupled luminosities as they can easily break the self-consistency of your model.
Now we'll just undo our changes before we look at accessing model luminosities.
Step17: Accessing Model Luminosities
Luminosities of the individual stars in a system can be accessed through the mesh (either through creating a mesh dataset or by setting pbmesh=True during run_compute). For stars that have pblum defined (as opposed to coupled to another star in the system), this value should be equivalent to pblum at t0 - and in simple circular cases will probably be equivalent at all times.
Let's create a mesh dataset at a few times and then access the synthetic luminosities.
Step18: Since the luminosities are passband-dependent, they are stored with the same dataset as the light curve (or RV), but with the mesh method, and are available at each of the times at which a mesh was stored.
Step19: Now let's compare the value of the synthetic luminosities to those of the input pblum
Step20: In this case, since our two stars are identical, the synthetic luminosity of the secondary star should be the same as the primary (and the same as pblum@primary).
Step21: However, if we change the temperature of the secondary star again, since the pblums are coupled, we'd expect the synthetic luminosity of the primary to remain fixed but the secondary to decrease.
Step22: Now, we'll just undo our changes before continuing
Step23: Role of Pblum
Let's now look at the intensities in the mesh to see how they're being scaled under-the-hood.
Step24: 'abs_normal_intensities' are the intensities per triangle in absolute units, i.e. W/m^3.
Step25: The values of 'normal_intensities', however, are significantly samller (in this case). These are the intensities in relative units which will eventually be integrated to give us flux for a light curve.
Step26: 'normal_intensities' are scaled from 'abs_normal_intensities' so that the computed luminosity matches the prescribed luminosity (pblum).
Here we compute the luminosity by summing over each triangle's intensity in the normal direction, and multiply it by pi to account for blackbody intensity emitted in all directions in the solid angle, and by the area of that triangle. | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Passband Luminosity
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
Explanation: And we'll add a single light curve dataset so that we can see how passband luminosities affect the resulting synthetic light curve model.
End of explanation
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0,0])
b.set_value_all('atm', 'blackbody')
b.set_value('irrad_method', 'none')
Explanation: Lastly, just to make things a bit easier, we'll turn off limb-darkening and irradiation (reflection) and use blackbody atmospheres.
End of explanation
print b['pblum_ref']
print b['pblum_ref@primary']
Explanation: Relevant Parameters
The 'pblum_ref' parameter exists for each component-dataset pair and it determines how the intensities for that star in that passband should be scaled, i.e. by the pblum provided by that component ('self') or coupled to the pblum provided by another component.
By default the passband luminosities are coupled (see below for explanations of coupled vs decoupled), with the passband luminosity being defined by the primary component in the system.
End of explanation
print b['pblum']
Explanation: The 'pblum' parameter is only relevant for each component-dataset pair in which pblum_ref=='self'. This component will then have its intensities scaled such that they match the value provided by pblum. In general, a pblum of 4pi will result in an out-of-eclipse flux of ~1.
End of explanation
b['pblum_ref@primary'] = 'self'
b['pblum_ref@secondary'] = 'primary'
Explanation: NOTE: other parameters also affect flux-levels, including limb darkening and distance
Coupled Luminosities
Passband luminosities are considered coupled when a single pblum value is provided, while the passband luminosity of the other component(s) is scaled by the same factor. To accomplish this, ONE pblum_ref in the system must be set to 'self' and ALL OTHER pblum_ref parameters must refer to that component. This is the default case, set explicitly by:
End of explanation
print b['pblum']
Explanation: Now note that only a single pblum parameter is visible.
End of explanation
b.run_compute()
axs, artists = b.plot(dataset='lc01')
Explanation: Let's see how changing the value of pblum affects the computed light curve. By default, pblum is set to be 4 pi, giving a total flux for the primary star of ~1.
Since the secondary star in the default binary is identical to the primary star, we'd expect an out-of-eclipse flux of the binary to be ~2.
End of explanation
b['pblum@primary'] = 2 * np.pi
b.run_compute()
axs, artist = b.plot()
Explanation: If we now set pblum to be only 2 pi, we should expect the entire light curve to be scaled in half.
End of explanation
b['teff@secondary'] = 0.5 * b.get_value('teff@primary')
print b['teff']
b.run_compute()
axs, artists = b.plot()
Explanation: And if we halve the temperature of the secondary star - the resulting light curve changes to the new sum of fluxes, where the primary star dominates since the secondary star flux is reduced by a factor of 16, so we expect a total out-of-eclipse flux of ~0.5 + ~0.5/16 = ~0.53.
End of explanation
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
Explanation: Let us undo our changes before we look at decoupled luminosities.
End of explanation
b.set_value_all('pblum_ref', 'self')
Explanation: Decoupled Luminosities
The luminosities are decoupled when pblums are provided for the individual components. To accomplish this, all 'pblum_ref' parameters should be set to 'self'.
End of explanation
print b['pblum']
Explanation: Now we see that both pblums are available and can have different values.
End of explanation
b.set_value_all('pblum', 4*np.pi)
b.run_compute()
axs, artists = b.plot()
Explanation: If we set these to 4pi, then we'd expect each star to contribute 1.0 in flux units, meaning the baseline of the light curve should be at approximately 2.0
End of explanation
print b['teff']
b['teff@secondary'] = 3000
b.run_compute()
axs, artists = b.plot()
Explanation: Now let's make a significant temperature-ratio by making a very cool secondary star. Since the luminosities are decoupled - this temperature change won't affect the resulting light curve very much (compare this to the case above with coupled luminosities). What is happening here is that even though the secondary star is cooler, its luminosity is being rescaled to the same value as the primary star, so the eclipse depth doesn't change (you would see a similar lack-of-effect if you changed the radii).
End of explanation
b.set_value_all('teff', 6000)
b.set_value_all('pblum', 4*np.pi)
b['pblum_ref@primary'] = 'self'
b['pblum_ref@secondary'] = 'primary'
Explanation: In most cases you will not want decoupled luminosities as they can easily break the self-consistency of your model.
Now we'll just undo our changes before we look at accessing model luminosities.
End of explanation
b.add_dataset('mesh', times=np.linspace(0,1,5), dataset='mesh01')
b.run_compute()
Explanation: Accessing Model Luminosities
Luminosities of the individual stars in a system can be accessed through the mesh (either through creating a mesh dataset or by setting pbmesh=True during run_compute). For stars that have pblum defined (as opposed to coupled to another star in the system), this value should be equivalent to pblum at t0 - and in simple circular cases will probably be equivalent at all times.
Let's create a mesh dataset at a few times and then access the synthetic luminosities.
End of explanation
print b.filter(qualifier='pblum', context='model').twigs
Explanation: Since the luminosities are passband-dependent, they are stored with the same dataset as the light curve (or RV), but with the mesh method, and are available at each of the times at which a mesh was stored.
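For example, to list the synthetic luminosity of the primary at every stored mesh time, one can loop over the same np.linspace(0,1,5) that was passed when adding mesh01 above (a sketch reusing only calls already shown in this tutorial):
for t in np.linspace(0, 1, 5):
    print t, b.get_value(qualifier='pblum', time=t, component='primary', kind='mesh', context='model')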
End of explanation
t0 = b.get_value('t0@system')
print b.get_value(qualifier='pblum', time=t0, component='primary', kind='mesh', context='model')
print b.get_value('pblum@primary@dataset')
Explanation: Now let's compare the value of the synthetic luminosities to those of the input pblum
End of explanation
print b.get_value(qualifier='pblum', time=t0, component='primary', kind='mesh', context='model')
print b.get_value(qualifier='pblum', time=t0, component='secondary', kind='mesh', context='model')
Explanation: In this case, since our two stars are identical, the synthetic luminosity of the secondary star should be the same as the primary (and the same as pblum@primary).
End of explanation
b['teff@secondary@component'] = 3000
b.run_compute()
print b.get_value(qualifier='pblum', time=t0, component='primary', kind='mesh', context='model')
print b.get_value(qualifier='pblum', time=t0, component='secondary', kind='mesh', context='model')
Explanation: However, if we change the temperature of the secondary star again, since the pblums are coupled, we'd expect the synthetic luminosity of the primary to remain fixed but the secondary to decrease.
End of explanation
b.set_value_all('teff@component', 6000)
Explanation: Now, we'll just undo our changes before continuing
End of explanation
areas = b.get_value(qualifier='areas', dataset='mesh01', time=t0, component='primary', unit='m^2')
ldint = b.get_value(qualifier='ldint', component='primary', time=t0)
ptfarea = b.get_value(qualifier='ptfarea', component='primary', time=t0)
abs_normal_intensities = b.get_value(qualifier='abs_normal_intensities', dataset='lc01', time=t0, component='primary')
normal_intensities = b.get_value(qualifier='normal_intensities', dataset='lc01', time=t0, component='primary')
Explanation: Role of Pblum
Let's now look at the intensities in the mesh to see how they're being scaled under-the-hood.
End of explanation
np.median(abs_normal_intensities)
Explanation: 'abs_normal_intensities' are the intensities per triangle in absolute units, i.e. W/m^3.
End of explanation
np.median(normal_intensities)
Explanation: The values of 'normal_intensities', however, are significantly smaller (in this case). These are the intensities in relative units which will eventually be integrated to give us flux for a light curve.
End of explanation
pblum = b.get_value(qualifier='pblum', component='primary', context='dataset')
print np.sum(normal_intensities * ldint * np.pi * areas) * ptfarea, pblum
Explanation: 'normal_intensities' are scaled from 'abs_normal_intensities' so that the computed luminosity matches the prescribed luminosity (pblum).
Here we compute the luminosity by summing over each triangle's intensity in the normal direction, and multiply it by pi to account for blackbody intensity emitted in all directions in the solid angle, and by the area of that triangle.
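In equation form, the check above computes L_pblum = pi * ptfarea * sum_i( I_normal,i * ldint * A_i ) over the mesh triangles, which should reproduce the input pblum.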
End of explanation |
13,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP example
Step2: Landau-Zener-Stuckelberg interferometry
Step3: Versions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
from qutip.ui.progressbar import TextProgressBar as ProgressBar
Explanation: QuTiP example: Landau-Zener-Stuckelberg interferometry
J.R. Johansson and P.D. Nation
For more information about QuTiP see http://qutip.org
End of explanation
# set up the parameters and start calculation
delta = 1.0 * 2 * np.pi # qubit sigma_x coefficient
w = 2.0 * 2 * np.pi # driving frequency
T = 2 * np.pi / w # driving period
gamma1 = 0.00001 # relaxation rate
gamma2 = 0.005 # dephasing rate
eps_list = np.linspace(-20.0, 20.0, 101) * 2 * np.pi
A_list = np.linspace( 0.0, 20.0, 101) * 2 * np.pi
# pre-calculate the necessary operators
sx = sigmax(); sz = sigmaz(); sm = destroy(2); sn = num(2)
# collapse operators
c_op_list = [np.sqrt(gamma1) * sm, np.sqrt(gamma2) * sz] # relaxation and dephasing
# ODE settings (for list-str format)
options = Options()
options.rhs_reuse = True
options.atol = 1e-6 # reduce accuracy to speed
options.rtol = 1e-5 # up the calculation a bit
# for function-callback style time-dependence
def hamiltonian_t(t, args):
evaluate the hamiltonian at time t.
H0 = args[0]
H1 = args[1]
w = args[2]
return H0 + H1 * np.sin(w * t)
# perform the calculation for each combination of eps and A, store the result
# in a matrix
def calculate():
p_mat = np.zeros((len(eps_list), len(A_list)))
pbar = ProgressBar(len(eps_list))
for m, eps in enumerate(eps_list):
H0 = - delta/2.0 * sx - eps/2.0 * sz
pbar.update(m)
for n, A in enumerate(A_list):
H1 = (A/2) * sz
# function callback format
#args = (H0, H1, w); H_td = hamiltonian_t
# list-str format
#args = {'w': w}; H_td = [H0, [H1, 'sin(w * t)']]
# list-function format
args = w; H_td = [H0, [H1, lambda t, w: np.sin(w * t)]]
U = propagator(H_td, T, c_op_list, args, options)
rho_ss = propagator_steadystate(U)
p_mat[m,n] = np.real(expect(sn, rho_ss))
return p_mat
p_mat = calculate()
fig, ax = plt.subplots(figsize=(8, 8))
A_mat, eps_mat = np.meshgrid(A_list/(2*np.pi), eps_list/(2*np.pi))
ax.pcolor(eps_mat, A_mat, p_mat)
ax.set_xlabel(r'Bias point $\epsilon$')
ax.set_ylabel(r'Amplitude $A$')
ax.set_title("Steadystate excitation probability\n" +
r'$H = -\frac{1}{2}\Delta\sigma_x -\frac{1}{2}\epsilon\sigma_z - \frac{1}{2}A\sin(\omega t)$' + "\n");
Explanation: Landau-Zener-Stuckelberg interferometry: Steady state of a strongly driven two-level system, using the one-period propagator.
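In other words, with U(T) the superoperator that propagates the density matrix over one full driving period, propagator_steadystate returns the state rho_ss that is left unchanged by it, U(T) rho_ss = rho_ss (the eigenvector of U(T) with eigenvalue 1); the expectation value of the number operator in that state gives the steady-state excitation probability plotted above.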
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Versions
End of explanation |
13,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Home Depot Product Search Relevance
The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.
GraphLab Create
This notebook uses the GraphLab Create machine learning IPython module. You need a personal licence to run this code.
Step1: Load data from CSV files
Step2: Data merging
Step3: Let's explore some data
Let's examine 3 different queries and products
Step4: The search term 'angle bracket' is not contained in the body. 'angle' would be present after stemming; however, 'bracket' is not.
Step5: only 'wood' is present from search term
Step6: 'sheer' and 'courtain' are present and that's all
How many search terms are not present in description and title for ranked 3 documents
Ranked-3 documents are the most relevant searches, but how many search queries don't include the searched term in the description and the title?
Step7: Stemming
Step8: TF-IDF with linear regression | Python Code:
import graphlab as gl
from nltk.stem import *
Explanation: Home Depot Product Search Relevance
The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.
GraphLab Create
This notebook uses the GraphLab Create machine learning IPython module. You need a personal licence to run this code.
End of explanation
train = gl.SFrame.read_csv("../data/train.csv")
test = gl.SFrame.read_csv("../data/test.csv")
desc = gl.SFrame.read_csv("../data/product_descriptions.csv")
Explanation: Load data from CSV files
End of explanation
# merge train with description
train = train.join(desc, on = 'product_uid', how = 'left')
# merge test with description
test = test.join(desc, on = 'product_uid', how = 'left')
Explanation: Data merging
End of explanation
first_doc = train[0]
first_doc
Explanation: Let's explore some data
Let's examine 3 different queries and products:
* first from the training set
* somewhere in the middle of the training set
* the last one from the training set
End of explanation
middle_doc = train[37033]
middle_doc
Explanation: The search term 'angle bracket' is not contained in the body. 'angle' would be present after stemming; however, 'bracket' is not.
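A quick, illustrative check of this claim for the first query (a sketch, not part of the original notebook; it reuses the first_doc row loaded above):
terms = first_doc['search_term'].lower().split()
body = first_doc['product_description'].lower()
print {t: (t in body) for t in terms}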
End of explanation
last_doc = train[-1]
last_doc
Explanation: only 'wood' is present from search term
End of explanation
train['search_term_word_count'] = gl.text_analytics.count_words(train['search_term'])
ranked3doc = train[train['relevance'] == 3]
print ranked3doc.head()
len(ranked3doc)
words_search = gl.text_analytics.tokenize(ranked3doc['search_term'], to_lower = True)
words_description = gl.text_analytics.tokenize(ranked3doc['product_description'], to_lower = True)
words_title = gl.text_analytics.tokenize(ranked3doc['product_title'], to_lower = True)
wordsdiff_desc = []
wordsdiff_title = []
puid = []
search_term = []
ws_count = []
ws_count_used_desc = []
ws_count_used_title = []
for item in xrange(len(ranked3doc)):
ws = words_search[item]
pd = words_description[item]
pt = words_title[item]
diff = set(ws) - set(pd)
if diff is None:
diff = 0
wordsdiff_desc.append(diff)
diff2 = set(ws) - set(pt)
if diff2 is None:
diff2 = 0
wordsdiff_title.append(diff2)
puid.append(ranked3doc[item]['product_uid'])
search_term.append(ranked3doc[item]['search_term'])
ws_count.append(len(ws))
ws_count_used_desc.append(len(ws) - len(diff))
ws_count_used_title.append(len(ws) - len(diff2))
differences = gl.SFrame({"puid" : puid,
"search term": search_term,
"diff desc" : wordsdiff_desc,
"diff title" : wordsdiff_title,
"ws count" : ws_count,
"ws count used desc" : ws_count_used_desc,
"ws count used title" : ws_count_used_title})
differences.sort(['ws count used desc', 'ws count used title'])
print "No terms used in description : " + str(len(differences[differences['ws count used desc'] == 0]))
print "No terms used in title : " + str(len(differences[differences['ws count used title'] == 0]))
print "No terms used in description and title : " + str(len(differences[(differences['ws count used desc'] == 0) &
(differences['ws count used title'] == 0)]))
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: 'sheer' and 'courtain' are present and that's all
How many search terms are not present in description and title for ranked 3 documents
Ranked-3 documents are the most relevant searches, but how many search queries don't include the searched term in the description and the title?
End of explanation
#stemmer = SnowballStemmer("english")
stemmer = PorterStemmer()
def stem(word):
singles = [stemmer.stem(plural) for plural in unicode(word, errors='replace').split()]
text = ' '.join(singles)
return text
print "Starting stemming train search term..."
stemmed = train['search_term'].apply(stem)
train['stem_search_term'] = stemmed
print "Starting stemming train product description..."
stemmed = train['product_description'].apply(stem)
train['stem_product_description'] = stemmed
print "Starting stemming train product title..."
stemmed = train['product_title'].apply(stem)
train['stem_product_title'] = stemmed
print "Starting stemming test search term..."
stemmed = test['search_term'].apply(stem)
test['stem_search_term'] = stemmed
print "Starting stemming test product description..."
stemmed = test['product_description'].apply(stem)
test['stem_product_description'] = stemmed
print "Starting stemming test product title..."
stemmed = test['product_title'].apply(stem)
test['stem_product_title'] = stemmed
Explanation: Stemming
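As a quick illustration of what the Porter stemmer does to typical search terms (a sketch reusing the stemmer object created above; not part of the original analysis):
print [stemmer.stem(w) for w in ['angle', 'brackets', 'wood', 'curtains']]
# e.g. 'brackets' -> 'bracket' and 'curtains' -> 'curtain', so plural and singular queries map to the same stem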
End of explanation
train['search_term_word_count'] = gl.text_analytics.count_words(train['stem_search_term'])
train_search_tfidf = gl.text_analytics.tf_idf(train['search_term_word_count'])
train['search_tfidf'] = train_search_tfidf
train['product_desc_word_count'] = gl.text_analytics.count_words(train['stem_product_description'])
train_desc_tfidf = gl.text_analytics.tf_idf(train['product_desc_word_count'])
train['desc_tfidf'] = train_desc_tfidf
train['product_title_word_count'] = gl.text_analytics.count_words(train['stem_product_title'])
train_title_tfidf = gl.text_analytics.tf_idf(train['product_title_word_count'])
train['title_tfidf'] = train_title_tfidf
train['distance_desc'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf']))
#train['distance_desc_sqrt'] = train['distance_desc'] ** 2
train['distance_title'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf']))
#train['distance_title_sqrt'] = train['distance_title'] ** 3
model1 = gl.linear_regression.create(train, target = 'relevance',
features = ['distance_desc', 'distance_title'],
validation_set = None)
# model1 = gl.linear_regression.create(train, target = 'relevance',
# features = ['distance_desc', 'distance_desc_sqrt', 'distance_title', 'distance_title_sqrt'],
# validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
test['search_term_word_count'] = gl.text_analytics.count_words(test['stem_search_term'])
test_search_tfidf = gl.text_analytics.tf_idf(test['search_term_word_count'])
test['search_tfidf'] = test_search_tfidf
test['product_desc_word_count'] = gl.text_analytics.count_words(test['stem_product_description'])
test_desc_tfidf = gl.text_analytics.tf_idf(test['product_desc_word_count'])
test['desc_tfidf'] = test_desc_tfidf
test['product_title_word_count'] = gl.text_analytics.count_words(test['stem_product_title'])
test_title_tfidf = gl.text_analytics.tf_idf(test['product_title_word_count'])
test['title_tfidf'] = test_title_tfidf
test['distance_desc'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf']))
#test['distance_desc_sqrt'] = test['distance_desc'] ** 2
test['distance_title'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf']))
#test['distance_title_sqrt'] = test['distance_title'] ** 3
'''
predictions_test = model1.predict(test)
test_errors = predictions_test - test['relevance']
RSS_test = sum(test_errors * test_errors)
print RSS_test
'''
predictions_test = model1.predict(test)
predictions_test
submission = gl.SFrame(test['id'])
submission.add_column(predictions_test)
submission.rename({'X1': 'id', 'X2':'relevance'})
submission['relevance'] = submission.apply(lambda x: 3.0 if x['relevance'] > 3.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: 1.0 if x['relevance'] < 1.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: str(x['relevance']))
submission.export_csv('../data/submission2.csv', quote_level = 3)
#gl.canvas.set_target('ipynb')
Explanation: TF-IDF with linear regression
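The only features fed into the linear regression are the cosine distances between the query's TF-IDF vector and the title/description TF-IDF vectors. For intuition, here is a minimal pure-Python version of that distance on sparse dicts; it is illustrative only (the notebook itself relies on gl.distances.cosine), and cosine_distance is just a hypothetical helper name:
import math
def cosine_distance(a, b):
    # 1 - cosine similarity between two sparse tf-idf vectors stored as dicts
    dot = sum(v * b.get(k, 0.0) for k, v in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (norm_a * norm_b) if norm_a and norm_b else 1.0
print cosine_distance({'wood': 1.2, 'screw': 0.8}, {'wood': 0.9, 'glue': 1.1})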
End of explanation |
13,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSE 6040, Fall 2015 [11, Part A]
Step3: Solution 1
This solution first queries the database for the total number of complaints by type. It then uses these data to normalize the counts by city.
The main idea is to use queries to get data frames storing the values you need, and then combine them within Python.
Step4: Solution 2
This second approach computes the total number of complaints by type, but stores it in a view (or virtual table). It then references this virtual table within the city-specific query to normalize the counts.
We mentioned views in the class slides but did not do a specific exercise using them, so it's OK if you did not think of this solution.
Step5: A nice feature of a view is that it is stored in the database and automatically kept up to date.
So, you can create it once and use it any time you need it, even after updates to the data from which the view derives.
Step6: Solution 3
This third solution introduces a new concept, namely, the idea of a subquery.
The basic idea is that, within a SELECT statement, you can reference a table generated "on-the-fly" from another SELECT statement. Notice how this solution basically merges the two queries used in the previous solutions into just a single query.
Step7: Solution 4 (variation of 2)
This next solution is a variation on Solution 2, except instead of creating a view, we create an actual table with the totals.
By storing the table, we can speed up Solution 2 a lot. The downside is that we now have to be careful to maintain this totals table, in the event there are updates to the underlying dataset from which it derives. | Python Code:
import sqlite3 as db
disk_engine = db.connect ('NYC-311-2M.db')
import plotly.plotly as py
py.sign_in ('USERNAME', 'PASSWORD') # Connect!
import pandas as pd
import itertools
import time # To benchmark of these three solutions
import sys # for sys.stdout.flush ()
from plotly.graph_objs import Bar, Layout
def iplot_percent_complaints_by_type_and_city (traces):
return py.iplot({'data': traces,
'layout': Layout(barmode='stack',
xaxis={'tickangle': 40, 'autorange': False, 'range': [-0.5, 16]},
yaxis={'title': 'Percent of Complaints by City'},
margin={'b': 150},
title='Relative Number of 311 Complaints by City')
}, filename='311/relative complaints by city', validate=False)
# Generate a static list of the top 7 cities
query = '''
SELECT City, COUNT(*) AS NumComplaints
FROM data
WHERE City <> 'None'
GROUP BY City COLLATE NOCASE
ORDER BY -NumComplaints
LIMIT 7
'''
TOP_CITIES = pd.read_sql_query (query, disk_engine)['City']
print TOP_CITIES
Explanation: CSE 6040, Fall 2015 [11, Part A]: NYC Follow-up
Recall that you ended Lab 10 with the question, "Given a complaint type, what percentage of such complaints were logged in each area of NYC?"
This follow-up lab gives you several solutions. By inspecting and running these examples, you should be able to see their tradeoffs.
Setup
First, some setup code common to all of the solutions, which sets up the database and connects to plotly.
Be sure to modify the plotly login credentials accordingly.
End of explanation
t1a = time.time ()
# Determine the number of complaints by type
query = '''
SELECT ComplaintType, COUNT(*) AS NumComplaints
FROM data
GROUP BY ComplaintType COLLATE NOCASE
ORDER BY -NumComplaints
'''
df = pd.read_sql_query (query, disk_engine)
t1a = time.time () - t1a
print "[+%gs] Part A" % t1a
print df.head ()
t1b = time.time ()
# Convert this data into a lookup table (dictionary)
total_complaints_by_type = \
dict (zip ([x.capitalize () for x in df.ComplaintType],
df.NumComplaints))
t1b = time.time () - t1b
print "[+%gs] Part B" % t1b
# Print a few entries just as a sanity check
print list (itertools.islice (total_complaints_by_type.items (), 5))
t1c = time.time ()
def capitalize (string_list):
Given a list of strings, returns a new list with standardized
capitalization.
return [s.capitalize () for s in string_list]
def gather (key_list, dictionary):
Given a list of keys, returns a list of corresponding values from a
dictionary.
return [dictionary[key] for key in key_list]
traces1 = []
for city in TOP_CITIES: # Determines the complaint counts by city
print ("[+%gs] Processing %s ..." % (time.time () - t1c, city)) ; sys.stdout.flush ()
query = '''
SELECT ComplaintType, COUNT(*) as NumComplaints
FROM data
WHERE City = "{}" COLLATE NOCASE
GROUP BY ComplaintType COLLATE NOCASE
ORDER BY -NumComplaints
'''.format (city)
df = pd.read_sql_query (query, disk_engine)
# Normalize complaint counts
complaint_types = capitalize (df.ComplaintType)
totals = gather (complaint_types, total_complaints_by_type)
percent_complaints = 100.0 * df.NumComplaints / totals
# Add this city as a new trace
traces1.append (Bar (x=complaint_types,
y=percent_complaints,
name=city.capitalize ()))
t1c = time.time () - t1c
print "[+%gs] Part C" % t1c
# Check it!
print "==> Total time for Solution 1: %gs" % (t1a + t1b + t1c)
iplot_percent_complaints_by_type_and_city (traces1)
Explanation: Solution 1
This solution first queries the database for the total number of complaints by type. It then uses these data to normalize the counts by city.
The main idea is to use queries to get data frames storing the values you need, and then combine them within Python.
End of explanation
t2a = time.time ()
query = '''
CREATE VIEW IF NOT EXISTS TotalComplaintsView AS
SELECT ComplaintType, COUNT(*) AS NumComplaints
FROM data
GROUP BY ComplaintType COLLATE NOCASE
ORDER BY -NumComplaints
'''
c = disk_engine.cursor ()
c.execute (query)
t2a = time.time () - t2a
print "[+%gs] Part A" % t2a
Explanation: Solution 2
This second approach computes the total number of complaints by type, but stores it in a view (or virtual table). It then references this virtual table within the city-specific query to normalize the counts.
We mentioned views in the class slides but did not do a specific exercise using them, so it's OK if you did not think of this solution.
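If you ever need to redefine the view (say, with different columns), drop it first, since CREATE VIEW IF NOT EXISTS will otherwise silently keep the old definition. For example:
c.execute ('DROP VIEW IF EXISTS TotalComplaintsView')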
End of explanation
t2b = time.time ()
traces2 = []
for city in TOP_CITIES: # Determines the complaint counts by city
print ("[+%gs] Processing %s ..." % (time.time () - t2b, city)) ; sys.stdout.flush ()
query = '''
SELECT D.ComplaintType,
(100.0 * COUNT(*) / T.NumComplaints) AS PercentComplaints
FROM data AS D, TotalComplaintsView AS T
WHERE (City = "{}" COLLATE NOCASE)
AND (D.ComplaintType = T.ComplaintType COLLATE NOCASE)
GROUP BY D.ComplaintType COLLATE NOCASE
ORDER BY -T.NumComplaints
'''.format (city)
df = pd.read_sql_query (query, disk_engine)
traces2.append (Bar (x=capitalize (df.ComplaintType),
y=df.PercentComplaints,
name=city.capitalize ()))
t2b = time.time () - t2b
print "[+%gs] Part B" % t2b
print ("==> Total time for Solution 2: %gs" % (t2a + t2b))
iplot_percent_complaints_by_type_and_city (traces2)
Explanation: A nice feature of a view is that it is stored in the database and automatically kept up to date.
So, you can create it once and use it any time you need it, even after updates to the data from which the view derives.
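For instance, after new rows are appended to the data table, re-querying the view picks them up immediately, with no extra bookkeeping:
pd.read_sql_query ('SELECT * FROM TotalComplaintsView LIMIT 5', disk_engine)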
End of explanation
t3 = time.time ()
traces3 = []
for city in TOP_CITIES: # Determines the complaint counts by city
print ("[+%gs] Processing %s ..." % (time.time () - t3, city)) ; sys.stdout.flush ()
query = '''
SELECT D.ComplaintType,
(100.0 * COUNT(*) / T.NumComplaints) AS PercentComplaints
FROM data AS D,
(SELECT ComplaintType, COUNT(*) AS NumComplaints
FROM data
GROUP BY ComplaintType COLLATE NOCASE) AS T
WHERE (City = "{}" COLLATE NOCASE)
AND (D.ComplaintType = T.ComplaintType COLLATE NOCASE)
GROUP BY D.ComplaintType COLLATE NOCASE
ORDER BY -T.NumComplaints
'''.format (city)
df = pd.read_sql_query (query, disk_engine)
traces3.append (Bar (x=capitalize (df.ComplaintType),
y=df.PercentComplaints,
name=city.capitalize ()))
t3 = time.time () - t3
print "[+%gs] Total" % t3
print "==> Total time for Solution 3: %gs" % t3
iplot_percent_complaints_by_type_and_city (traces3)
Explanation: Solution 3
This third solution introduces a new concept, namely, the idea of a subquery.
The basic idea is that, within a SELECT statement, you can reference a table generated "on-the-fly" from another SELECT statement. Notice how this solution basically merges the two queries used in the previous solutions into just a single query.
End of explanation
t4a = time.time ()
query = '''
DROP TABLE IF EXISTS TotalComplaints
'''
c = disk_engine.cursor ()
c.execute (query)
query = '''
CREATE TABLE TotalComplaints AS
SELECT ComplaintType, COUNT(*) AS NumComplaints
FROM data
GROUP BY ComplaintType COLLATE NOCASE
ORDER BY -NumComplaints
'''
c.execute (query)
t4a = time.time () - t4a
print "[+%gs] Part A" % t4a
t4b = time.time ()
traces4 = []
for city in TOP_CITIES: # Determines the complaint counts by city
print ("[+%gs] Processing %s ..." % (time.time () - t4b, city)) ; sys.stdout.flush ()
query = '''
SELECT D.ComplaintType,
(100.0 * COUNT(*) / T.NumComplaints) AS PercentComplaints
FROM data AS D, TotalComplaints AS T
WHERE (City = "{}" COLLATE NOCASE)
AND (D.ComplaintType = T.ComplaintType COLLATE NOCASE)
GROUP BY D.ComplaintType COLLATE NOCASE
ORDER BY -T.NumComplaints
'''.format (city)
df = pd.read_sql_query (query, disk_engine)
traces4.append (Bar (x=capitalize (df.ComplaintType),
y=df.PercentComplaints,
name=city.capitalize ()))
t4b = time.time () - t4b
print "[+%gs] Part B" % t4b
print "==> Total time for Solution 4: %gs" % (t4a + t4b)
iplot_percent_complaints_by_type_and_city (traces4)
Explanation: Solution 4 (variation of 2)
This next solution is a variation on Solution 2, except instead of creating a view, we create an actual table with the totals.
By storing the table, we can speed up Solution 2 a lot. The downside is that we now have to be careful to maintain this totals table, in the event there are updates to the underlying dataset from which it derives.
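A simple way to cope with that (a sketch, not part of the original lab): whenever the data table changes, drop and rebuild the totals table using the same CREATE TABLE ... AS SELECT statement shown above.
c = disk_engine.cursor ()
c.execute ('DROP TABLE IF EXISTS TotalComplaints')
c.execute ('''CREATE TABLE TotalComplaints AS
              SELECT ComplaintType, COUNT(*) AS NumComplaints
              FROM data
              GROUP BY ComplaintType COLLATE NOCASE''')
disk_engine.commit ()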
End of explanation |
13,010 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Machine learning means, as the name suggests, having a machine "learn" from data so that it can make predictions about data.
The machine is, concretely, a mathematical or statistical model.
Learning means adjusting that model's parameters so that they fit the actual data.
There are broadly two approaches to learning:
ๆๅธซๆใๅญฆ็ฟ(Supervised learning)
Step1: Loading the Data
scikit-learn ships with several standard datasets that are commonly used as examples (the iris data, handwritten-digit data, and so on), and they can be loaded as easily as shown below.
Dataset loading utilities
Step2: datasetใฏใไปฅไธใฎๅ
ๅฎนใงๆงๆใใใฆใใพใใ
data
Step3: ้ๅธธใฎใใผใฟ่ชญใฟ่พผใฟใซใฏใPythonใซๆจๆบใงๆญ่ผใใใฆใใcsvใชใฉใไฝฟใใพใใ
Step4: ใชใใใใฎใใใชใใผใฟใฎ่ชญใฟ่พผใฟใใพใ่ชญใฟ่พผใใ ใใผใฟใซๅฏพใใๆไฝใใตใใผใใใใฉใคใใฉใชใจใใฆpandasใใใใพใใ
Arrange the Data
ใใผใฟใฎไธญใฎๅ็นๅพด้ใฏใๅฅใ
ใฎๅนณๅใปๅๆฃใๆใฃใฆใใพใ(ไพ
Step5: โปpreprocessingใซใฏNormalizationใจใใใขใธใฅใผใซใใใใพใใใใใใฏไธ่ฌ็ใซ่จใๆญฃ่ฆๅใ่กใใใใฎใใฎใงใฏใชใใฎใงๆณจๆใใฆใใ ใใใ
ใพใใใใผใฟใฎไธญใซใฏใใญในใใงใใ้
็ฎใๅซใพใใฆใใใใจใใใใพใใ
Step6: ไธ่จใฎgoodใชใฉใฎใใญในใ้
็ฎใฏใๆ็ต็ใซใฏๆฐๅคใซใใชใใจใใผใฟใๅญฆ็ฟใใใใใจใใงใใพใใใ ใใใpreprocessingใๅฉ็จใใใใจใง็ฐกๅใซๆฐๅคใธๅคๆใใใใจใใงใใพใใ
Step7: Feature Extractionใงใฏใใใญในใ/็ปๅใซใคใใฆใใๅผทๅใซ็นๅพด้ใฎๆฐๅคๅ(ใใฏใใซๅ)ใ่กใๆฉ่ฝใใตใใผใใใใฆใใพใใไปฅไธใงใฏใcityใจใใใใญในใใฎ้
็ฎใDubai/London/San Fransiscoใ่กจใ0/1ใฎ็นๅพด้ใธใจๅคๆใใใฆใใพใใ
Step8: preprocessingใซใฏใไปใซใๆฌ ๆๅคใฎไฟฎๆญฃใ่กใImputerใชใฉใใผใฟใฎๆดๅใซๅฝน็ซใคใขใธใฅใผใซใๅซใพใใฆใใพใใ
Dimensionality reduction
ใใผใฟใๅณ็คบใใใใจใฏใใใฎๅพใฎใขใใซใฎ้ธๆใ่กใๆใๅซใใๆงใ
ใชใทใผใณใง้ๅธธใซ้่ฆใงใใ
ใใใใๅ็ดใซ็นๅพด้ใ4ใคใซใชใฃใใ ใใงใใใผใฟใๅณ็คบใใใใจใใงใใชใใชใฃใฆใใพใใพใใ(4ๆฌกๅ
ใฎๅณใซใชใฃใฆใใพใใใ)ใๅ ดๅใซใใฃใฆใฏ้ๅธธใซๅคใใชใใใจใใใใพใ(ใใญในใ่งฃๆใชใฉ)ใ
ใใฎใใใใใผใฟใใชใในใๅฐใชใใๅฟ
่ฆๆๅฐ้ใฎ็นๅพด้ใง่กจ็พใใใใจใ้่ฆใซใชใใพใใใใใ่กใใฎใDimensionality reduction(ๆฌกๅ
ๅ้ค/ๆฌกๅ
ๅง็ธฎ)ใจๅผใฐใใๆๆณใงใใ
ๅ
ทไฝ็ใซใฏใใใผใฟใฎไธญใซ่บซ้ทใจไฝ้ใใใฃใๅ ดๅใใใใใฏไฝใๅคงใใใชใใฐไธกๆนใจใๅขใใ็นๅพด้ใฎใใใใใผใฟใฎ็นๆงใ่กจใไธใงใฏใฉใกใใใฒใจใคใงๅๅใงใใใใฎใใใซ็ๆนใๅขใใใฐ็ๆนใๅขใใใจใใฃใใไบใใซ็ธ้ขใฎใใ็นๅพด้ใๆถใใฆใใใฐๅฟ
่ฆๆๅฐ้ใฎ็นๅพด้ใงใใผใฟใ่กจ็พใใใใจใใงใใใปใปใปใจใใใฎใๅบๆฌ็ใช่ใๆนใงใใ
scikit-learnใงใฏdecompositionใๅฉ็จใใใฎๅฆ็ใ่กใใใจใใงใใพใใไปฅไธใงใฏใTruncatedSVDใซใใฃใฆๆฐๅญใใผใฟใฎ็นๅพด้ใใไธ่จใง่ฟฐในใใจใใไบใใซ็ธ้ขใฎใชใใ2ใคใฎ็นๅพด้ใธใจๅง็ธฎใใฆใใพใใ
Visualize
ๅฎ้ใซใใผใฟใๅณ็คบใใใซใฏใscikit-learnใงใฏใชใmatplotlibใๅฉ็จใใพใใ
ไปฅไธใงใฏใๆๅใฎ2ใคใฎ็นๅพด้ใใใใฏใขใใใใirisใฎใใผใฟใใใญใใใใฆใใพใใ
Step9: Select the Model
ๆฉๆขฐๅญฆ็ฟใซไฝฟใใใขใใซใซใฏ่ฒใ
ใชใใฎใใใใscikit-learnใงใๆงใ
ใชใขใใซใไฝฟใใใใใซใชใฃใฆใใพใใ
ใใ ใใใฎๅไธไฝใฉใใ้ธในใฐ่ฏใใฎใใฏ้ๅธธใซๆฉใพใใๅ้กใงใใ
ไธใคใฎๅบๆบใจใใฆใไปฅไธใฎใใใชใใญใผใใฃใผใใใใใพใใใใใฏใscikit-learnใฎไธญใฎใขใซใดใชใบใ ใใฉใฎใใใชๅบๆบใง้ธๆใใใใใใฎใใๅณ็คบใใใใฎใงใใ
Choosing the right estimator
scikit-learnใซใฏNeural Networkใใชใใใๅณไธญใซใใใใพใใใใๅบๆฌ็ใซใฏSVC/SVRใฎไปฃๆฟใงใใใใใผใฟใๅคใใปใฉ็ฒพๅบฆใๅไธใใพใใ
ใใคใณใใจใใฆใฏไปฅไธใซใชใใพใใ
ๆไฝใงใ50ไปถไปฅไธใฏใใผใฟใ้ใใ
ๅ็ดใชใขใใซใใๅงใใ(ClassificationใชใLinerSVCใRegressionใชใRasso/ElasticNetใชใฉ)
Just lookingใใๅงใใ(ใใผใฟใ่ฆใฆใๅฟ
่ฆใซๅฟใๆฌกๅ
ๅ้คใ่กใ)
ๆฉๆขฐๅญฆ็ฟใงๆญฃใใ็ตๆใๅบใใซใฏใใผใฟใฎๆดๅ(ๅณไธญใงใฏJust lookingใๅ็ซ ใซๅฝใใ้จๅ)ใๆฌ ใใใพใใใใใผใฟๆดๅใใไธใงๅ็ดใชใขใใซใงๆค่จผใใใฆใฟใฆใๅฟ
่ฆใซๅฟใไปใฎใขใใซใ่ฉฆใใฆใใใจใใใฎใๅบๆฌ็ใช้ฒใๆนใซใชใใพใใ
Select Model Features
็นๅพด้ใๅคใๅ ดๅใฏใใฉใฎ็นๅพด้ใใขใใซใซไฝฟใใฎใใ้่ฆใชๅ้กใงใใscikit-learnใซใฏใใฉใฎ็นๅพด้ใไบๆธฌๅคใซๅฏไธใใฆใใใ่ชฟในใใใใฎๆฉ่ฝใใใใพใใ ไปฅไธใงใฏใFeature selectionใๅฉ็จใ็นๅพด้ใใใฃใจใๆ็จใช2ใคใซ็ตใฃใฆใใพใ(k=2)ใ
Step10: Split the Data
ๅญฆ็ฟใซๅฝใใฃใฆใฏใใใผใฟใๅญฆ็ฟ็จ(training set)ใจ่ฉไพก็จ(test set)ใซๅใใฆใใใพใใๅญฆ็ฟใซไฝฟใฃใใใผใฟใซๅฏพใใฆไบๆธฌใใใพใใงใใใฎใฏๅฝใใๅใชใฎใงใๆญฃ็ขบใซ็ฒพๅบฆใๆธฌๅฎใใใใใ่ฉไพก็จใฎใใผใฟใฏๅญฆ็ฟ็จใจใฏๅฅใซใใฆใใใพใใ
ๅ็ดใซๅญฆ็ฟ็จใจ่ฉไพก็จใซ2ๅๅฒใใใฎใงใชใใใใผใฟๅ
จไฝใไฝๅใใซๅๅฒใใ่ฉไพก็จใจใใฆไฝฟใใใผใฟใๅใๆฟใใฆใใใจใใๆนๆณใใใใพใใใใใซใใใๅฐใชใใใผใฟใงใๅน็็ใซๅญฆ็ฟใ่กใใใจใใงใใพใใ
K-FOLD CROSS-VALIDATION, WITH MATLAB CODE
ใใใฏCross Validationใจๅผใฐใใๆๆณใงใใใscikit-learnใงใฏๅ็ดใชๅๅฒใใใใฎCross Validationใพใงใcross-validationใง่กใใใใใซใชใฃใฆใใพใใ
Step11: Cross Validationใๅฉ็จใใ้ใฏใใใผใฟใฎๅๅฒใจๅญฆ็ฟใๅใใใฆ่กใฃใฆใใใcross_val_scoreใๅฉ็จใใใฎใ็ฐกๅใงใใ(ๅพ่ฟฐใใพใ)ใใใผใฟใฎๅๅฒใฎใฟ่กใๅ ดๅใฏKFoldใๅฉ็จใใพใใ
Step12: ใใใงใใผใฟใฎๆบๅใฏๆดใฃใใฎใงใใใใใๅญฆ็ฟใ่กใฃใฆใใใพใใ
Training the Model
ไปฅไธใงใฏใๅ้กใ่กใ้ใซใใๅฉ็จใใใSupport Vector Machineใใใผในใซใใฎๅญฆ็ฟๆนๆณใชใฉใ่งฃ่ชฌใใฆใใใพใใ
Step13: ใขใใซใฎๆง็ฏใฏใใฃใใใใ ใใงใใใพใใงใใใใใฆใๅญฆ็ฟใใใฃใไธ่กใงๆธใพใใใใจใใงใใพใ(ไปฅไธใฎไพใงใฏใๆๅพใฎ1ใใผใฟไปฅๅคใๅญฆ็ฟใใผใฟใจใใฆๆธกใใฆใใพใ)ใ
Step14: ใใใฆใๅใฃใฆใใใๆๅพใฎไธใคใฎใใผใฟใซใคใใฆใใขใใซใไฝฟใฃใฆไบๆธฌใใใฆใฟใพใใ
Step15: ๅฎ้ๅคๅฎใใ็ปๅใฏไปฅไธใงใใใ8ใใจใใไบๆธฌใฏใใใใ็ใๅพใฆใใใฎใงใฏใชใใใจๆใใพใใ
Step16: ไปๅบฆใฏCross Validationใไฝฟใฃใฆใฟใพใใcvใงใฏใใผใฟใฎๅๅฒๆฐ(foldใฎๆฐ)ใๆๅฎใใพใใ
Step17: Search Model Parameters
ไธ่จใงใฏใขใใซใฎใใฉใกใผใฟใๅบๅฎใงๆๅฎใใพใใใ(gamma=0.001ใชใฉ)ใๅฎ้ใฉใใชใใฉใกใผใฟใ่จญๅฎใในใใใฏ้ๅธธใซๆฉใพใใๅ้กใงใใ
ๆ้ฉใชใใฉใกใผใฟใๆขใใใใๅใใฉใกใผใฟใๅใใใ็ฏๅฒใๆฑบใใใใฎ็ตใฟๅใใใ่ฉฆใใฆใใใจใใๆๆณใใใใพใใใใใGrid Searchใจๅผใณใพใใใscikit-learnใงใฏใใใ่กใใใใฎGrid Searchใขใธใฅใผใซใๆไพใใใฆใใพใใ
Step18: Ensemble Lerning
ๅญฆ็ฟใฎ็ตๆใๅช็งใชใขใใซใใงใใใใจใใใใฐใใใงใชใใใจใใใใพใใ
็นๅพด้ใๅขใใใใคใพใๅคๆฌกๅ
ใซใชใใปใฉใใใใใใซ่ฆใใใใฉใกใผใฟใผใฎ็ตใฟๅใใใใฏๅขใใฆใใใฎใงใไฝๅบฆใๅญฆ็ฟใใใฆใฟใชใใจใชใใชใๆ้ฉใจๆใใ็ตๆใซใฏใใฉใ็ใใชใใชใใพใใ
ใใใใๅ้กใซๅฏพๅฟใใใใใ่คๆฐใฎใขใใซใๅฅใ
ใซๅญฆ็ฟใใใๆ็ต็ใซใฏใใใใฎ็ตใฟๅใใใงๆฑบใใใใจใง็ฒพๅบฆใไธใใใจใใๆๆณใใใใพใใ
ใใใใขใณใตใณใใซๅญฆ็ฟใจๅผใฐใใๆๆณใงใใใ่คๆฐใฎใขใใซใใฏใๅใใขใใซใฎใใจใใใใฐ(Bagging)ใใใใ็ฐใชใใขใใซใไฝฟใใใจใใใใพใ(Boosting)ใ
scikit-learnใงใฏใensembleใซใใฃใฆใใฎใขใณใตใณใใซๅญฆ็ฟใ่กใใใจใใงใใพใใไปฅไธใงใฏใBaggingใซใใฃใฆ10ๅใฎใขใใซใไธฆๅใงๅญฆ็ฟใใใใใฎ็ตใฟๅใใใงๆฑบใใใขใใซใไฝๆใใฆใใพใใ
Step19: n_jobsใซใใไธฆๅใงๅญฆ็ฟใใใ้ใฎใใญใปในๆฐใ็ฐกๅใซ่ชฟๆดใใใใจใใงใใ้ซ้ใชๅญฆ็ฟใๅฏ่ฝใงใ
Evaluate Training Result
ๅญฆ็ฟใใ็ตๆใใคใพใใขใใซใฎ็ฒพๅบฆใฏใฉใฎใใใซๆธฌใใฐใใใงใใใใใ
ๅ็ดใซใฏไบๆธฌใจๅฎ้ใฎ2ใคใๆฏ่ผใใใใจใง็ฒพๅบฆใ็ฎๅบใใใใจใใงใใพใใใใ ใใใฎๆๆณใ ใจไพใใฐใ90%ใAใง10%ใBใใจใใใใผใฟใใใฃใๅ ดๅใๅธธใซAใจใใๅ็ดใชใขใใซใ็ใพใใฆใใใจใใฆใใใฎ็ฒพๅบฆใฏ90%ใจใชใฃใฆใใพใใพใใ
ใใใใๅ้กใ้ฒใใใใๅ้ก็ตๆใไปฅไธใฎใใใซใพใจใ่ฉไพกใ่กใๆททๅ่กๅ(confusion matrix)ใใใใพใใ
ใใใงใฏใไปฅไธ3็นใฎ่ฆณ็นใ้่ฆใงใใ
ๆญฃ็ขบๅบฆ(Accuracyใๅใซ็ฒพๅบฆใจใใฃใๅ ดๅใใฎๅค)
Step20: classification_reportใไฝฟ็จใใใใจใงใ็ฐกๅใซไธ่ฆง่กจใๅๅพใงใใพใ(ๅ็ดใซๅคใ ใๅๅพใใใๅ ดๅใฏprecision_recall_fscore_support)ใ
Step21: Store the Model
ๅญฆ็ฟใใใขใใซใฏใใกใคใซใจใใฆๅบๅใไฟๅญใใฆใใใใจใใงใใพใใ
ไปฅไธใฏใๆจๆบใฎpickleใไฝฟใๆๆณใงใใ
Step22: ใใฎใปใใsklearn.externalsใฎjoblibใๅฉ็จใใใกใคใซใซไฟ็ฎกใใใใจใใงใใพใใๅคง่ฆๆจกใชใขใใซใชใฉใซใฏใใกใใฎๆนใใใใงใใใใ | Python Code:
# enable showing matplotlib image inline
%matplotlib inline
Explanation: Introduction
ๆฉๆขฐๅญฆ็ฟใจใฏใใใฎๅใฎ้ใใๆฉๆขฐใใใๅญฆ็ฟใใใใใใจใงใใใใใผใฟใซๅฏพใใฆไบๆธฌใ่กใใใใใซใใใใจใงใใ
ๆฉๆขฐใจใฏใๅ
ทไฝ็ใซใฏๆฐ็ใป็ตฑ่จ็ใชใขใใซใซใชใใพใใ
ๅญฆ็ฟใจใฏใใใฎใขใใซใฎใใฉใกใผใฟใใๅฎ้ใฎใใผใฟใซๆฒฟใใใ่ชฟๆดใใใใจใงใใ
ๅญฆ็ฟใฎๆนๆณใฏๅคงใใๅใใฆ2ใคใใใพใใ
ๆๅธซๆใๅญฆ็ฟ(Supervised learning): ใใผใฟใจใใใใใไบๆธฌใใใในใๅค(ๆญฃ่งฃ)ใไธใใใใจใงๅญฆ็ฟใใใพใใ
ๅ้ก(Classification): ใใผใฟใใใใคใใฎใซใใดใชใซๅ้กใงใใใจใใใใฎใซใใดใชใไบๆธฌใใใพใ(ไพ๏ผๆๆธใใฎๆฐๅญใ0๏ฝ9ใฎไฝใใใงใใใๅคๅฅใใใชใฉ)
ๅๅธฐ(Regression): ใใผใฟใใไบๆธฌใใใ้ฃ็ถ็ใชๅคใไบๆธฌใใพใ(ไพ๏ผๅนด้ฝขใจไฝ้ใใๆ
้ใไบๆธฌใใใชใฉ)ใ
ๆๅธซใชใๅญฆ็ฟ(Unsupervised learning): ใใผใฟใไธใใใใจใงใใใฎ่ฃๅดใซใใๆง้ ใๅญฆ็ฟใใใพใ
ใฏใฉในใฟใชใณใฐ: ไผผใฆใใใใผใฟใใพใจใใใใจใงใใใผใฟใใฉใใใใใฎ้ๅ(ใฏใฉในใฟ)ใใๆงๆใใใใฎใใไบๆธฌใใพใใ
ๅๅธๆจๅฎ๏ผ ใใผใฟใ็ใฟๅบใใฆใใ็ขบ็ๅๅธใฎๆจๅฎใ่กใใพใใ
scikit-learnใฏใPython่ฃฝใฎๆฉๆขฐๅญฆ็ฟใฉใคใใฉใชใงใใ
ใใฎไธญใซใฏๆงใ
ใชใๆฉๆขฐใใๅฎ่ฃ
ใใใฆใใใใใฎใๅญฆ็ฟใใฎใใใฎไป็ตใฟใๅใใฃใฆใใพใใ
ไปฅไธใงใฏใใใฎscikit-learnใๅฉ็จใใใผใฟใๆบๅใใใจใใใใๅฎ้ใซใขใใซใๆง็ฏใๅญฆ็ฟใป่ฉไพกใ่กใใพใงใฎๆ้ ใ่งฃ่ชฌใใพใใ
ใใผใฟใฎๆบๅ
ใใผใฟใฎๆดๅ
ใขใใซใฎ้ธๆ
ใใผใฟใฎๅๅฒ
ใขใใซใฎๅญฆ็ฟ
ใขใใซใฎ่ฉไพก
ใขใใซใฎไฟ็ฎก
็ฐๅขใฎใปใใใขใใใซใคใใฆใฏใไปฅไธใซใพใจใใฆใใใฎใงใๅ่ใใ ใใใ
Pythonใงๆฉๆขฐๅญฆ็ฟใขใใชใฑใผใทใงใณใฎ้็บ็ฐๅขใๆง็ฏใใ
End of explanation
from sklearn import datasets
iris = datasets.load_iris()
digits = datasets.load_digits()
Explanation: Loading the Data
scikit-learnใงใฏใใใไพใจใใฆๅฉ็จใใใใใผใฟใปใใ(irisใฎใใผใฟใๆๆธใๆๅญใฎใใผใฟใชใฉ)ใไปฅไธใฎใใใซ็ฐกๅใซๅๅพใใใใจใใงใใพใใ
Dataset loading utilities
End of explanation
print(iris.keys())
Explanation: datasetใฏใไปฅไธใฎๅ
ๅฎนใงๆงๆใใใฆใใพใใ
data: ใใผใฟๆฌไฝ(ๅธธใซใตใณใใซร็นๅพด้ใฎไบๆฌกๅ
้
ๅใ็ปๅใชใฉใใผใฟ่ชไฝใ2ๆฌกๅ
ใง่กจ็คบใใใๅ ดๅใฏใimagesใใใขใฏใปในใงใใ)
target: ใใผใฟใใไบๆธฌใใใในใๆญฃ่งฃ(ๆๅธซใใผใฟ)
feature_names: ็นๅพด้้
็ฎใฎๅๅ
target_names : ไบๆธฌๅค้
็ฎใฎๅๅ
DESCR: ใใผใฟใฎ่ชฌๆ
End of explanation
import csv
import numpy as np
encoding = "utf-8"
ratings = []
with open("./data/ratings.txt", encoding="utf-8") as f:
content = csv.reader(f, delimiter="\t")
lines = list(content)
ratings = np.array(lines)
print(ratings)
Explanation: ้ๅธธใฎใใผใฟ่ชญใฟ่พผใฟใซใฏใPythonใซๆจๆบใงๆญ่ผใใใฆใใcsvใชใฉใไฝฟใใพใใ
End of explanation
from sklearn import datasets
import numpy as np
from sklearn import preprocessing
iris_data = iris["data"]
scaler = preprocessing.StandardScaler().fit(iris_data)
describe = lambda t, x: (t + ":\n {0}").format({"mean": np.mean(x, axis=0), "std": np.std(x, axis=0)})
# before scaling
print(describe("Before scaling", iris_data))
# scaling
iris_data_scaled = scaler.transform(iris_data)
print(describe("After scaling (mean is almost 0, std = 1)", iris_data_scaled))
# inverse
iris_data_inv = scaler.inverse_transform(iris_data_scaled)
print(describe("Inverse the scaling", iris_data_inv))
Explanation: ใชใใใใฎใใใชใใผใฟใฎ่ชญใฟ่พผใฟใใพใ่ชญใฟ่พผใใ ใใผใฟใซๅฏพใใๆไฝใใตใใผใใใใฉใคใใฉใชใจใใฆpandasใใใใพใใ
Arrange the Data
ใใผใฟใฎไธญใฎๅ็นๅพด้ใฏใๅฅใ
ใฎๅนณๅใปๅๆฃใๆใฃใฆใใพใ(ไพ: ไฝ้ใจ่บซ้ทใงใฏๅนณๅใๅๆฃใ็ฐใชใ)ใ
ใใฎ็ถๆ
ใ ใจๅญฆ็ฟใๅน็็ใซ้ฒใพใชใใใใๅ็นๅพด้ใฎๅนณๅใ0ใปๅๆฃใ1ใซใใใใๆญฃ่ฆๅ(Normalization)ใ่กใใใจใไธ่ฌ็ใงใใ
(ใใใซๅ ใใ็นๅพด้้ใฎ็ธ้ขใๆถใ็ฝ่ฒๅ(Whitening)ใพใง่กใใใจใใใใพใ)ใ
scikit-learnใงใฏpreprocessingใไฝฟ็จใใใใจใงใใฎไฝๆฅญใใจใฆใ็ฐกๅใซ่กใใใจใใงใใพใใไปฅไธใงใฏใStandardScalerใไฝฟใฃใฆๅฆ็ใ่กใฃใฆใใพใใ
End of explanation
print(ratings)
Explanation: โปpreprocessingใซใฏNormalizationใจใใใขใธใฅใผใซใใใใพใใใใใใฏไธ่ฌ็ใซ่จใๆญฃ่ฆๅใ่กใใใใฎใใฎใงใฏใชใใฎใงๆณจๆใใฆใใ ใใใ
ใพใใใใผใฟใฎไธญใซใฏใใญในใใงใใ้
็ฎใๅซใพใใฆใใใใจใใใใพใใ
End of explanation
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(["bad", "nbad", "good", "vgood"])
encoded_rating = le.transform(ratings[:, 1])
print("{0} is encoded to {1}".format(ratings[:, 1], encoded_rating))
Explanation: ไธ่จใฎgoodใชใฉใฎใใญในใ้
็ฎใฏใๆ็ต็ใซใฏๆฐๅคใซใใชใใจใใผใฟใๅญฆ็ฟใใใใใจใใงใใพใใใ ใใใpreprocessingใๅฉ็จใใใใจใง็ฐกๅใซๆฐๅคใธๅคๆใใใใจใใงใใพใใ
End of explanation
from sklearn.feature_extraction import DictVectorizer
measurements = [
{"city": "Dubai", "temperature": 33.},
{"city": "London", "temperature": 12.},
{"city": "San Fransisco", "temperature": 18.},
{"city": "Dubai", "temperature": 32.},
]
vec = DictVectorizer()
vectorized = vec.fit_transform(measurements).toarray()
print(vectorized)
feature_names = vec.get_feature_names()
print(feature_names)
Explanation: Feature Extractionใงใฏใใใญในใ/็ปๅใซใคใใฆใใๅผทๅใซ็นๅพด้ใฎๆฐๅคๅ(ใใฏใใซๅ)ใ่กใๆฉ่ฝใใตใใผใใใใฆใใพใใไปฅไธใงใฏใcityใจใใใใญในใใฎ้
็ฎใDubai/London/San Fransiscoใ่กจใ0/1ใฎ็นๅพด้ใธใจๅคๆใใใฆใใพใใ
End of explanation
import matplotlib.pyplot as plt
features = iris.data[:, :2] # select first 2 feature
label = iris.target
plt.scatter(features[:, 0], features[:, 1], c=label, cmap=plt.cm.Paired)
for i in range(features.shape[1]):
f_data = features[:, i]
if i == 0:
plt.xlabel(iris.feature_names[i])
plt.xlim(f_data.min(), f_data.max())
else:
plt.ylabel(iris.feature_names[i])
plt.ylim(f_data.min(), f_data.max())
plt.title("iris data")
from sklearn import decomposition
digits_data = digits["data"]
show_dimension = lambda dset: len(dset[0])
dimension = 2
digits_recuced = decomposition.TruncatedSVD(n_components=dimension).fit_transform(digits_data)
print("Dimension is reduced from {0} to {1}.".format(show_dimension(digits_data), show_dimension(digits_recuced)))
Explanation: preprocessingใซใฏใไปใซใๆฌ ๆๅคใฎไฟฎๆญฃใ่กใImputerใชใฉใใผใฟใฎๆดๅใซๅฝน็ซใคใขใธใฅใผใซใๅซใพใใฆใใพใใ
Dimensionality reduction
ใใผใฟใๅณ็คบใใใใจใฏใใใฎๅพใฎใขใใซใฎ้ธๆใ่กใๆใๅซใใๆงใ
ใชใทใผใณใง้ๅธธใซ้่ฆใงใใ
ใใใใๅ็ดใซ็นๅพด้ใ4ใคใซใชใฃใใ ใใงใใใผใฟใๅณ็คบใใใใจใใงใใชใใชใฃใฆใใพใใพใใ(4ๆฌกๅ
ใฎๅณใซใชใฃใฆใใพใใใ)ใๅ ดๅใซใใฃใฆใฏ้ๅธธใซๅคใใชใใใจใใใใพใ(ใใญในใ่งฃๆใชใฉ)ใ
ใใฎใใใใใผใฟใใชใในใๅฐใชใใๅฟ
่ฆๆๅฐ้ใฎ็นๅพด้ใง่กจ็พใใใใจใ้่ฆใซใชใใพใใใใใ่กใใฎใDimensionality reduction(ๆฌกๅ
ๅ้ค/ๆฌกๅ
ๅง็ธฎ)ใจๅผใฐใใๆๆณใงใใ
ๅ
ทไฝ็ใซใฏใใใผใฟใฎไธญใซ่บซ้ทใจไฝ้ใใใฃใๅ ดๅใใใใใฏไฝใๅคงใใใชใใฐไธกๆนใจใๅขใใ็นๅพด้ใฎใใใใใผใฟใฎ็นๆงใ่กจใไธใงใฏใฉใกใใใฒใจใคใงๅๅใงใใใใฎใใใซ็ๆนใๅขใใใฐ็ๆนใๅขใใใจใใฃใใไบใใซ็ธ้ขใฎใใ็นๅพด้ใๆถใใฆใใใฐๅฟ
่ฆๆๅฐ้ใฎ็นๅพด้ใงใใผใฟใ่กจ็พใใใใจใใงใใใปใปใปใจใใใฎใๅบๆฌ็ใช่ใๆนใงใใ
scikit-learnใงใฏdecompositionใๅฉ็จใใใฎๅฆ็ใ่กใใใจใใงใใพใใไปฅไธใงใฏใTruncatedSVDใซใใฃใฆๆฐๅญใใผใฟใฎ็นๅพด้ใใไธ่จใง่ฟฐในใใจใใไบใใซ็ธ้ขใฎใชใใ2ใคใฎ็นๅพด้ใธใจๅง็ธฎใใฆใใพใใ
Visualize
ๅฎ้ใซใใผใฟใๅณ็คบใใใซใฏใscikit-learnใงใฏใชใmatplotlibใๅฉ็จใใพใใ
ไปฅไธใงใฏใๆๅใฎ2ใคใฎ็นๅพด้ใใใใฏใขใใใใirisใฎใใผใฟใใใญใใใใฆใใพใใ
End of explanation
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
X, y = iris.data, iris.target
print(X.shape)
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
print(X_new.shape)
Explanation: Select the Model
ๆฉๆขฐๅญฆ็ฟใซไฝฟใใใขใใซใซใฏ่ฒใ
ใชใใฎใใใใscikit-learnใงใๆงใ
ใชใขใใซใไฝฟใใใใใซใชใฃใฆใใพใใ
ใใ ใใใฎๅไธไฝใฉใใ้ธในใฐ่ฏใใฎใใฏ้ๅธธใซๆฉใพใใๅ้กใงใใ
ไธใคใฎๅบๆบใจใใฆใไปฅไธใฎใใใชใใญใผใใฃใผใใใใใพใใใใใฏใscikit-learnใฎไธญใฎใขใซใดใชใบใ ใใฉใฎใใใชๅบๆบใง้ธๆใใใใใใฎใใๅณ็คบใใใใฎใงใใ
Choosing the right estimator
scikit-learnใซใฏNeural Networkใใชใใใๅณไธญใซใใใใพใใใใๅบๆฌ็ใซใฏSVC/SVRใฎไปฃๆฟใงใใใใใผใฟใๅคใใปใฉ็ฒพๅบฆใๅไธใใพใใ
ใใคใณใใจใใฆใฏไปฅไธใซใชใใพใใ
ๆไฝใงใ50ไปถไปฅไธใฏใใผใฟใ้ใใ
ๅ็ดใชใขใใซใใๅงใใ(ClassificationใชใLinerSVCใRegressionใชใRasso/ElasticNetใชใฉ)
Just lookingใใๅงใใ(ใใผใฟใ่ฆใฆใๅฟ
่ฆใซๅฟใๆฌกๅ
ๅ้คใ่กใ)
ๆฉๆขฐๅญฆ็ฟใงๆญฃใใ็ตๆใๅบใใซใฏใใผใฟใฎๆดๅ(ๅณไธญใงใฏJust lookingใๅ็ซ ใซๅฝใใ้จๅ)ใๆฌ ใใใพใใใใใผใฟๆดๅใใไธใงๅ็ดใชใขใใซใงๆค่จผใใใฆใฟใฆใๅฟ
่ฆใซๅฟใไปใฎใขใใซใ่ฉฆใใฆใใใจใใใฎใๅบๆฌ็ใช้ฒใๆนใซใชใใพใใ
Select Model Features
็นๅพด้ใๅคใๅ ดๅใฏใใฉใฎ็นๅพด้ใใขใใซใซไฝฟใใฎใใ้่ฆใชๅ้กใงใใscikit-learnใซใฏใใฉใฎ็นๅพด้ใไบๆธฌๅคใซๅฏไธใใฆใใใ่ชฟในใใใใฎๆฉ่ฝใใใใพใใ ไปฅไธใงใฏใFeature selectionใๅฉ็จใ็นๅพด้ใใใฃใจใๆ็จใช2ใคใซ็ตใฃใฆใใพใ(k=2)ใ
End of explanation
from sklearn.model_selection import train_test_split
test_size = 0.3 # use 30% of data to test the model
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=test_size, random_state=0)
test_data_rate = X_test.shape[0] * 100 / (X_train.shape[0] + X_test.shape[0])
print("test data is {0}% of data".format(test_data_rate))
Explanation: Split the Data
ๅญฆ็ฟใซๅฝใใฃใฆใฏใใใผใฟใๅญฆ็ฟ็จ(training set)ใจ่ฉไพก็จ(test set)ใซๅใใฆใใใพใใๅญฆ็ฟใซไฝฟใฃใใใผใฟใซๅฏพใใฆไบๆธฌใใใพใใงใใใฎใฏๅฝใใๅใชใฎใงใๆญฃ็ขบใซ็ฒพๅบฆใๆธฌๅฎใใใใใ่ฉไพก็จใฎใใผใฟใฏๅญฆ็ฟ็จใจใฏๅฅใซใใฆใใใพใใ
ๅ็ดใซๅญฆ็ฟ็จใจ่ฉไพก็จใซ2ๅๅฒใใใฎใงใชใใใใผใฟๅ
จไฝใไฝๅใใซๅๅฒใใ่ฉไพก็จใจใใฆไฝฟใใใผใฟใๅใๆฟใใฆใใใจใใๆนๆณใใใใพใใใใใซใใใๅฐใชใใใผใฟใงใๅน็็ใซๅญฆ็ฟใ่กใใใจใใงใใพใใ
K-FOLD CROSS-VALIDATION, WITH MATLAB CODE
ใใใฏCross Validationใจๅผใฐใใๆๆณใงใใใscikit-learnใงใฏๅ็ดใชๅๅฒใใใใฎCross Validationใพใงใcross-validationใง่กใใใใใซใชใฃใฆใใพใใ
End of explanation
from sklearn.model_selection import KFold
kf = KFold(n_splits=3) # divide into 3 set
i = 0
for train_index, test_index in kf.split(iris.data):
x_train = iris.data[train_index]
y_train = iris.target[train_index]
x_test = iris.data[test_index]
y_test = iris.target[test_index]
print("{0}: training {1}, test {2}".format(i, len(y_train), len(y_test)))
i += 1
Explanation: Cross Validationใๅฉ็จใใ้ใฏใใใผใฟใฎๅๅฒใจๅญฆ็ฟใๅใใใฆ่กใฃใฆใใใcross_val_scoreใๅฉ็จใใใฎใ็ฐกๅใงใใ(ๅพ่ฟฐใใพใ)ใใใผใฟใฎๅๅฒใฎใฟ่กใๅ ดๅใฏKFoldใๅฉ็จใใพใใ
End of explanation
from sklearn import svm
clf = svm.SVC(gamma=0.001, C=100.)
Explanation: ใใใงใใผใฟใฎๆบๅใฏๆดใฃใใฎใงใใใใใๅญฆ็ฟใ่กใฃใฆใใใพใใ
Training the Model
ไปฅไธใงใฏใๅ้กใ่กใ้ใซใใๅฉ็จใใใSupport Vector Machineใใใผในใซใใฎๅญฆ็ฟๆนๆณใชใฉใ่งฃ่ชฌใใฆใใใพใใ
End of explanation
clf.fit(digits.data[:-1], digits.target[:-1])
Explanation: ใขใใซใฎๆง็ฏใฏใใฃใใใใ ใใงใใใพใใงใใใใใฆใๅญฆ็ฟใใใฃใไธ่กใงๆธใพใใใใจใใงใใพใ(ไปฅไธใฎไพใงใฏใๆๅพใฎ1ใใผใฟไปฅๅคใๅญฆ็ฟใใผใฟใจใใฆๆธกใใฆใใพใ)ใ
End of explanation
clf.predict([digits.data[-1]])
Explanation: ใใใฆใๅใฃใฆใใใๆๅพใฎไธใคใฎใใผใฟใซใคใใฆใใขใใซใไฝฟใฃใฆไบๆธฌใใใฆใฟใพใใ
End of explanation
import matplotlib.pyplot as plt
plt.figure(1, figsize=(3, 3))
plt.imshow(digits.images[-1], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
Explanation: ๅฎ้ๅคๅฎใใ็ปๅใฏไปฅไธใงใใใ8ใใจใใไบๆธฌใฏใใใใ็ใๅพใฆใใใฎใงใฏใชใใใจๆใใพใใ
End of explanation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf, digits.data, digits.target, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Explanation: ไปๅบฆใฏCross Validationใไฝฟใฃใฆใฟใพใใcvใงใฏใใผใฟใฎๅๅฒๆฐ(foldใฎๆฐ)ใๆๅฎใใพใใ
End of explanation
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
candidates = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4], 'C': [1, 10, 100]},
{'kernel': ['linear'], 'C': [1, 10, 100]}]
clf = GridSearchCV(SVC(C=1), candidates, cv=5)
clf.fit(digits.data, digits.target)
print(clf.best_estimator_)
for params, mean_, std_ in zip(clf.cv_results_["params"], clf.cv_results_["mean_test_score"], clf.cv_results_["std_test_score"]):
print("%0.3f (+/-%0.03f) for %r" % (mean_, std_ / 2, params))
Explanation: Search Model Parameters
ไธ่จใงใฏใขใใซใฎใใฉใกใผใฟใๅบๅฎใงๆๅฎใใพใใใ(gamma=0.001ใชใฉ)ใๅฎ้ใฉใใชใใฉใกใผใฟใ่จญๅฎใในใใใฏ้ๅธธใซๆฉใพใใๅ้กใงใใ
ๆ้ฉใชใใฉใกใผใฟใๆขใใใใๅใใฉใกใผใฟใๅใใใ็ฏๅฒใๆฑบใใใใฎ็ตใฟๅใใใ่ฉฆใใฆใใใจใใๆๆณใใใใพใใใใใGrid Searchใจๅผใณใพใใใscikit-learnใงใฏใใใ่กใใใใฎGrid Searchใขใธใฅใผใซใๆไพใใใฆใใพใใ
End of explanation
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC
base_clf = svm.SVC()
bagging_clf = BaggingClassifier(base_estimator=clf, n_estimators=10, max_samples=0.9, max_features=2, n_jobs=4)
scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
Explanation: Ensemble Lerning
ๅญฆ็ฟใฎ็ตๆใๅช็งใชใขใใซใใงใใใใจใใใใฐใใใงใชใใใจใใใใพใใ
็นๅพด้ใๅขใใใใคใพใๅคๆฌกๅ
ใซใชใใปใฉใใใใใใซ่ฆใใใใฉใกใผใฟใผใฎ็ตใฟๅใใใใฏๅขใใฆใใใฎใงใไฝๅบฆใๅญฆ็ฟใใใฆใฟใชใใจใชใใชใๆ้ฉใจๆใใ็ตๆใซใฏใใฉใ็ใใชใใชใใพใใ
ใใใใๅ้กใซๅฏพๅฟใใใใใ่คๆฐใฎใขใใซใๅฅใ
ใซๅญฆ็ฟใใใๆ็ต็ใซใฏใใใใฎ็ตใฟๅใใใงๆฑบใใใใจใง็ฒพๅบฆใไธใใใจใใๆๆณใใใใพใใ
ใใใใขใณใตใณใใซๅญฆ็ฟใจๅผใฐใใๆๆณใงใใใ่คๆฐใฎใขใใซใใฏใๅใใขใใซใฎใใจใใใใฐ(Bagging)ใใใใ็ฐใชใใขใใซใไฝฟใใใจใใใใพใ(Boosting)ใ
scikit-learnใงใฏใensembleใซใใฃใฆใใฎใขใณใตใณใใซๅญฆ็ฟใ่กใใใจใใงใใพใใไปฅไธใงใฏใBaggingใซใใฃใฆ10ๅใฎใขใใซใไธฆๅใงๅญฆ็ฟใใใใใฎ็ตใฟๅใใใงๆฑบใใใขใใซใไฝๆใใฆใใพใใ
End of explanation
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
actual = [1, 0, 1, 1, 0, 0, 0, 1]
predict = [0, 0, 1, 1, 0, 1, 1, 1]
c_mx = confusion_matrix(actual, predict)
print(c_mx)
# calculate each score
print(precision_score(actual, predict))
print(recall_score(actual, predict))
print(accuracy_score(actual, predict))
print(f1_score(actual, predict))
Explanation: n_jobsใซใใไธฆๅใงๅญฆ็ฟใใใ้ใฎใใญใปในๆฐใ็ฐกๅใซ่ชฟๆดใใใใจใใงใใ้ซ้ใชๅญฆ็ฟใๅฏ่ฝใงใ
Evaluate Training Result
ๅญฆ็ฟใใ็ตๆใใคใพใใขใใซใฎ็ฒพๅบฆใฏใฉใฎใใใซๆธฌใใฐใใใงใใใใใ
ๅ็ดใซใฏไบๆธฌใจๅฎ้ใฎ2ใคใๆฏ่ผใใใใจใง็ฒพๅบฆใ็ฎๅบใใใใจใใงใใพใใใใ ใใใฎๆๆณใ ใจไพใใฐใ90%ใAใง10%ใBใใจใใใใผใฟใใใฃใๅ ดๅใๅธธใซAใจใใๅ็ดใชใขใใซใ็ใพใใฆใใใจใใฆใใใฎ็ฒพๅบฆใฏ90%ใจใชใฃใฆใใพใใพใใ
ใใใใๅ้กใ้ฒใใใใๅ้ก็ตๆใไปฅไธใฎใใใซใพใจใ่ฉไพกใ่กใๆททๅ่กๅ(confusion matrix)ใใใใพใใ
ใใใงใฏใไปฅไธ3็นใฎ่ฆณ็นใ้่ฆใงใใ
ๆญฃ็ขบๅบฆ(Accuracyใๅใซ็ฒพๅบฆใจใใฃใๅ ดๅใใฎๅค): ๅ
จใใผใฟใฎใใกใไบๆธฌ=ๅฎ้ใ ใฃใใใฎใฎๅฒๅ
้ฉๅ็(Precision): ็ใจไบๆธฌใใใใฎใซใคใใฆใๅฎ้็ใ ใฃใใใฎใฎๅฒๅ
ๅ็พ็(Recall): ๅฎ้็ใ ใฃใใใฎใซใคใใฆใไบๆธฌใฆ็ใ ใฃใใใฎใฎๅฒๅ
ๅ
ใปใฉใฎไพใ ใจใๅฝใจใฏไบๆธฌใใชใใใๅ็พ็ใฏๅธธใซ1ใซใชใใพใใใใใฎๅไบๆธฌใใใใฎใฎใใก้้ใใ ใฃใใใฎใๅขใใใใใใใผใฟใๅขใใใซใคใ้ฉๅ็ใๆชๅใใพใใ้ฉๅ็ใไธใใๅ ดๅใฏ้ใซใปใผ้้ใใชใใใฎไปฅๅคใไบๆธฌใใชใใใฐ่ฏใใงใใใใใฎๅๆฌๅฝใฏ็ใ ใฃใใใฎใๅขใใใใๅ็พ็ใไธใใใใจใซใชใใพใใ
ใใฎใใใซ้ฉๅ็ใจๅ็พ็ใฏใใฌใผใใชใใฎ้ขไฟใซใใใ่ฏใใขใใซใจใฏๅบๆฌ็ใซใฏใใฎ2ใคใฎๅคใฎใใฉใณในใๅใใฆใใใใฎใซใชใใพใใใใฎใใฉใณในใ่ฉไพกใใๅคใจใใฆFๅคใใใใใใใใฎๅคใๅณใใใจใใขใใซใฎ็ฒพๅบฆใๆธฌใไธใง้่ฆใซใชใใพใใ
scikit-learnใงใฏใmetricsใๅฉ็จใใใใจใงใใใใฎๅคใ็ฐกๅใซๅ็
งใใใใจใใงใใพใใ
End of explanation
from sklearn.metrics import classification_report
actual = [0, 1, 2, 2, 2]
predict = [0, 0, 2, 2, 1]
target_names = ["class 0", "class 1", "class 2"]
print(classification_report(actual, predict, target_names=target_names))
Explanation: classification_reportใไฝฟ็จใใใใจใงใ็ฐกๅใซไธ่ฆง่กจใๅๅพใงใใพใ(ๅ็ดใซๅคใ ใๅๅพใใใๅ ดๅใฏprecision_recall_fscore_support)ใ
End of explanation
from sklearn import svm
from sklearn import datasets
clf = svm.SVC()
iris = datasets.load_iris()
X, y = iris.data, iris.target
clf.fit(X, y)
import pickle
s = pickle.dumps(clf) #serialize model data
clf2 = pickle.loads(s) #load serialized model data
Explanation: Store the Model
ๅญฆ็ฟใใใขใใซใฏใใกใคใซใจใใฆๅบๅใไฟๅญใใฆใใใใจใใงใใพใใ
ไปฅไธใฏใๆจๆบใฎpickleใไฝฟใๆๆณใงใใ
End of explanation
from sklearn.externals import joblib
joblib.dump(clf, "data/model.pkl")
clf = joblib.load("data/model.pkl")
Explanation: ใใฎใปใใsklearn.externalsใฎjoblibใๅฉ็จใใใกใคใซใซไฟ็ฎกใใใใจใใงใใพใใๅคง่ฆๆจกใชใขใใซใชใฉใซใฏใใกใใฎๆนใใใใงใใใใ
End of explanation |
13,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Click the button to launch this notebook in Binder
Step1: Alternately, to install from the latest version on pypi, uncomment and run the cell below
Step2: Sandhi Splitting
Splitting sandhis in a long phrase/sentence to obtain the constituent words can be done in just a few lines of code.
First, let's import the Parser class that is used for most of the tasks.
Step3: The Parser object supports various options for controlling the parsing, as well as the input and output formats. Here, let us specify that we want output in Devanagari (default is SLP1). The other options available can be seen here
Step4: As an example, let us try a long phrase from the เคเคฎเฅเคชเฅเคฐเคพเคฎเคพเคฏเคฃเคฎเฅ of เคญเฅเคเค เฅค We will ask the parser to find at most 10 splits.
Step5: As we can see, the parser did a decent job of splitting this long phrase, though it does some over splitting. Hopefully, this should point a student in the correct direction.
Vakya Analysis
Next, let us use the parser for analyzing a sentence and understanding the relationships among the words. We will use a simple sentence to illustrate the parser's capabilities.
Step6: We can now split the sentence to convert it to the parser's internal representation. Since we know that there is no sandhi in this sentence, we can pass pre_segmented=True to indicate this to the parser, and retain just the first split.
Step7: For visualization, the parses can be converted to the GraphViz DOT format.
Step8: We can convert this representation to a picture using any tool that supports the DOT format. Let's use [image-charts.com] which exposes a REST API for generating charts. | Python Code:
# !pip install git+https://github.com/kmadathil/sanskrit_parser
Explanation: Click the button to launch this notebook in Binder:
Sanskrit Parser Examples
The sanskrit_parser module supports 3 different usages, in order of increasing complexity:
1. tags - Morphological analysis of a word
2. sandhi - Sandhi split of a phrase
3. vakya - Morpho-syntactic analysis of a sentence (after Sandhi split)
In this notebook, we will see how to use the API to perform the latter two tasks - sandhi splitting and vakya analysis in python code.
Command line usage of the scripts is very similar and is documented here
Installation
Sanskrit Parser can be easily installed using pip.
If we are running on Binder, we can skip this step. If not, please uncomment and run one of the cells below.
To directly install from the github repo to get the latest version of the package:
End of explanation
# !pip install sanskrit_parser
Explanation: Alternately, to install from the latest version on pypi, uncomment and run the cell below
End of explanation
from sanskrit_parser import Parser
Explanation: Sandhi Splitting
Splitting sandhis in a long phrase/sentence to obtain the constituent words can be done in just a few lines of code.
First, let's import the Parser class that is used for most of the tasks.
End of explanation
parser = Parser(output_encoding='Devanagari')
Explanation: The Parser object supports various options for controlling the parsing, as well as the input and output formats. Here, let us specify that we want output in Devanagari (default is SLP1). The other options available can be seen here
End of explanation
text = 'เคคเคธเฅเคฎเคพเคคเฅเคธเคฎเคธเฅเคคเคเฅเคทเคคเฅเคฐเคตเคฐเฅเคเคเคฐเฅเคตเคชเคพเคเคจเคตเคฐเคฟเคทเฅเค เคงเคพเคฐเคพเคชเคฐเคถเฅเคตเคงเคญเคฐเคฃเคญเฅเคทเคฃเคตเฅเคทเคญเคพเคฐเฅเคเคตเคญเคเฅเคเคพเคฆเคชเคฐเคฟเคเฅเคเคฟเคจเฅเคจเคคเคฐเคถเฅเคฐเฅเคฏเคถเคพเคฒเคฟเคจเคฟ'
splits = parser.split(text, limit=10)
for split in splits:
print(f'{split}')
Explanation: As an example, let us try a long phrase from the เคเคฎเฅเคชเฅเคฐเคพเคฎเคพเคฏเคฃเคฎเฅ of เคญเฅเคเค เฅค We will ask the parser to find at most 10 splits.
End of explanation
sentence = 'เคฆเฅเคตเคฆเคคเฅเคคเค เคเฅเคฐเคพเคฎเค เคเคเฅเคเคคเคฟ'
Explanation: As we can see, the parser did a decent job of splitting this long phrase, though it does some over splitting. Hopefully, this should point a student in the correct direction.
Vakya Analysis
Next, let us use the parser for analyzing a sentence and understanding the relationships among the words. We will use a simple sentence to illustrate the parser's capabilities.
End of explanation
split = parser.split(sentence, pre_segmented=True)[0]
print(f'{split}')
parses = list(split.parse(limit=2))
for i, parse in enumerate(parses):
print(f'Parse {i}')
print(f'{parse}')
Explanation: We can now split the sentence to convert it to the parser's internal representation. Since we know that there is no sandhi in this sentence, we can pass pre_segmented=True to indicate this to the parser, and retain just the first split.
End of explanation
print(parses[0].to_dot())
Explanation: For visualization, the parses can be converted to the GraphViz DOT format.
End of explanation
from urllib.parse import urlencode
from IPython.display import Image
q = urlencode({'cht': 'gv:dot', 'chl': parses[0].to_dot()})
url = f"https://image-charts.com/chart?{q}"
Image(url=url)
Explanation: We can convert this representation to a picture using any tool that supports the DOT format. Let's use [image-charts.com] which exposes a REST API for generating charts.
End of explanation |
13,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(Interactive) Plotting using Matplotlib and Seaborn
Matplotlib is a basic plotting library for Python inspired by MATLAB. Seaborn is built on top of it with integrated analysis, specialized plots, and pretty good integration with Pandas.
Also see the full gallery of Seaborn or Matplotlib.
Step1: Scatterplot
Step2: Histogram
Step3: Box Plots
Step4: Log Scale
Step5: unbalanced with outliers what about log scale?
Step6: Grouping / Coloring Plots
grouped by color?
Step7: TASK
create a scatterplot where
* x = lifeExp
* y = gdpPerCap
* color = continent
* size = pop
label the axis appropiately and use a log scale for gdp
Step8: Interactive plots
simple interaction is possible with IPython by default. That means whenever the user changes some parameter the visualization is recreated on the server side and send to the client.
Step9: custom build widgets
Step10: TASK
make the plot from before interactive, such that you can slide the year | Python Code:
#disable some annoying warning
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
#plots the figures in place instead of a new window
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
#use a standard dataset of heterogenous data
cars = pd.read_csv('data/mtcars.csv')
cars.head()
Explanation: (Interactive) Plotting using Matplotlib and Seaborn
Matplotlib is a basic plotting library for Python inspired by MATLAB. Seaborn is built on top of it with integrated analysis, specialized plots, and pretty good integration with Pandas.
Also see the full gallery of Seaborn or Matplotlib.
End of explanation
plt.scatter(x=cars['mpg'],y=cars['wt'])
plt.xlabel('miles per gallon')
plt.ylabel('weight')
plt.title('MPG vs WT')
plt.show()
#integrated in pandas, too
cars.plot(x='mpg',y='wt',kind='scatter')
cars.plot(kind='scatter', x='mpg',y='wt',c='hp',s=cars['cyl']*20,alpha=0.5)
#what if we plot everything?
cars.plot()
Explanation: Scatterplot
End of explanation
cars['mpg'].hist(bins=5)
plt.hist(cars['mpg'],bins=5)
plt.title('miles per gallon')
#seaborn not just a histogram but also an kernel density enstimation and better default settings
sns.distplot(cars['mpg'],bins=5)
Explanation: Histogram
End of explanation
#box plots
cars['mpg'].plot(kind='box')
cars.boxplot('mpg')
#group by gear
cars.boxplot('mpg', by='gear')
# load gapminder again and select 2007
gap = pd.read_csv('data/gapminder-unfiltered.tsv',index_col=0, sep='\t')
gap2007 = gap[gap.year == 2007]
gap2007.columns
Explanation: Box Plots
End of explanation
gap2007.plot(kind='scatter', x='lifeExp',y='gdpPercap')
Explanation: Log Scale
End of explanation
gap2007.plot(kind='scatter', x='lifeExp',y='gdpPercap')
plt.yscale('log')
Explanation: unbalanced with outliers what about log scale?
End of explanation
#create a color palette
colors = sns.color_palette()
sns.palplot(colors)
#for each group create an own plot an overlay them
for (name, group),color in zip(gap2007.groupby('continent'),colors):
plt.scatter(x=group['lifeExp'],y=group['gdpPercap'],label=name, c=color,s=30)
plt.yscale('log')
plt.legend()
#playing with categories ... seaborn is pretty good with it
plt.figure(figsize=(40,20))
plt.subplot(121)
sns.boxplot(x='continent',y='gdpPercap',data=gap)
plt.subplot(122)
sns.violinplot(x='continent',y='gdpPercap',data=gap2007)
# or with linear regression
anscombe = sns.load_dataset("anscombe")
sns.lmplot('x','y',col='dataset',hue='dataset', data=anscombe, col_wrap=2)
#g = sns.FacetGrid(anscombe, col="dataset", size=4, aspect=1)
#g.map(sns.regplot, "x", "y")
# or with structured heatmaps
#compute the correlations and take a look at them
corrmat = gap.corr()
# draw a clustered heatmap using seaborn
sns.clustermap(corrmat, square=True)
Explanation: Grouping / Coloring Plots
grouped by color?
End of explanation
#for each group create an own plot an overlay them
pop_max = gap2007['pop'].max()
for (name, group),color in zip(gap2007.groupby('continent'),colors):
plt.scatter(x=group['lifeExp'],y=group['gdpPercap'],label=name, c=color,s=(group['pop']/pop_max)*400)
plt.yscale('log')
plt.title('Life Expectancy vs GDP')
plt.xlabel('Life Expectancy')
plt.ylabel('GDP Per Cap')
plt.legend()
Explanation: TASK
create a scatterplot where
* x = lifeExp
* y = gdpPerCap
* color = continent
* size = pop
label the axis appropiately and use a log scale for gdp
End of explanation
from IPython.html.widgets import interact, interact_manual
@interact(text='Hello', slider=(0,10),check=True,categories=['red','green','blue'])
def react(text, slider,check,categories):
print(text,slider*10,check,categories)
@interact_manual(text='Hello', slider=(0,10),check=True,categories=['red','green','blue'])
def react(text, slider,check,categories):
print(text,slider*10,check,categories)
@interact(bins=(5, 25, 5),color=['red','green','orange','blue'])
def show_distplot(bins,color):
cars['mpg'].hist(bins=bins, color=color)
Explanation: Interactive plots
simple interaction is possible with IPython by default. That means whenever the user changes some parameter the visualization is recreated on the server side and sent to the client.
End of explanation
#hard core
from IPython.html import widgets
[widget for widget in dir(widgets) if not widget.endswith('Widget') and widget[0] == widget[0].upper() and widget[0] != '_']
@interact(bins=widgets.FloatTextWidget(value=5))
def show_distplot(bins):
cars['mpg'].hist(bins=bins)
text_widget = widgets.Textarea(value='Hello', description='text area')
slider_widget = widgets.BoundedFloatText(5,min=0,max=10, description='slider area')
check_widget = widgets.Checkbox(True,description="CheckboxWidget")
toggle = widgets.RadioButtons(options=['red','green','blue'], description="RadioButtonsWidget")
@interact(text=text_widget, slider=slider_widget,check=check_widget,categories=toggle)
def react(text, slider,check,categories):
print(text,slider*10,check,categories)
b = widgets.Button(description="Update")
checkbox = widgets.Checkbox(description="CheckboxWidget")
tab1_children = [b,
checkbox,
widgets.Dropdown(options=['A','B'], description="DropdownWidget"),
widgets.RadioButtons(options=['A','B'], description="RadioButtonsWidget"),
widgets.Select(options=['A','B'], description="SelectWidget"),
widgets.Text(description="TextWidget"),
widgets.Textarea(description="TextareaWidget"),
widgets.ToggleButton(description="ToggleButtonWidget"),
widgets.ToggleButtons(options=["Value 1", "Value2"], description="ToggleButtonsWidget"),
]
tab2_children = [widgets.BoundedFloatText(description="BoundedFloatTextWidget"),
widgets.BoundedIntText(description="BoundedIntTextWidget"),
widgets.FloatSlider(description="FloatSliderWidget"),
widgets.FloatText(description="FloatTextWidget"),
widgets.IntSlider(description="IntSliderWidget"),
widgets.IntText(description="IntTextWidget"),
]
tab1 = widgets.Box(children=tab1_children)
tab2 = widgets.Box(children=tab2_children)
i = widgets.Accordion(children=[tab1, tab2])
i.set_title(0,"Basic Widgets")
i.set_title(1,"Numbers Input")
from IPython.display import display
def button_clicked(bb):
print(checkbox.value)
#TODO update plot
b.on_click(button_clicked)
display(i)
Explanation: custom-built widgets:
http://nbviewer.ipython.org/github/ipython/ipython/blob/3.x/examples/Interactive%20Widgets/Widget%20List.ipynb
End of explanation
pop_max = gap['pop'].max()
@interact(year=(gap.year.min(), gap.year.max()))
def plot_gapminder(year):
gapyear = gap[gap.year == year]
for (name, group),color in zip(gapyear.groupby('continent'),colors):
plt.scatter(x=group['lifeExp'],y=group['gdpPercap'],label=name, c=color,s=(group['pop']/pop_max)*400)
plt.yscale('log')
plt.title('Life Expectancy vs GDP')
plt.xlabel('Life Expectancy')
plt.ylabel('GDP Per Cap')
plt.xlim(gap.gdpPercap.min(),gap.gdpPercap.max())
plt.xlim(gap.lifeExp.min(),gap.lifeExp.max())
plt.legend()
Explanation: TASK
make the plot from before interactive, such that you can slide the year
End of explanation |
13,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Below is an engineering mechanics problem that can be solved with Python. Follow along to see how to solve the problem with code.
Problem
Given
Step1: Assume the aluminum thickness is 2 mm and find the stress in the steel and aluminum
Next we will compute the stress in the steel and the stress in the aluminum due to the applied moment $M$, assuming that the aluminum top and bottom layers are each $2 \ mm$ thick.
Step2: Calculate the stress in the aluminum
The maximum stress in the aluminum, $\sigma$ is dependent on the bending moment $M$, the distance from the neutral axis $c$ and the moment of inertia I according to the following equation
Step3: Calculate the stress in the steel
Maximum stress in the steel, $\sigma$ is dependent on the bending moment $M$, the distance from the neutral axis $c$ and the moment of inertia I according to the the same equation we used to calculate the stress in the aluminum.
$$ \sigma = n\frac{Mc}{I} $$
In case of the c, $c$ the distance from the neutral axis that we are calculating the stress for is $c = h/2-h_a$, so the equation above becomes
Step4: Code the calculation of stress using a for loop
Now we can find the stress in aluminum and the stress in steel for a range of aluminum thickness using a for loop. Each time through the loop we will use the aluminum thickness to calculate the stress in steel and aluminum.
Step5: Determine the maximum steel stress and which aluminum thickness this occurs at
Next we will code a way to find the maximum steel stress and what aluminum thickness this occurs at | Python Code:
h = 40
b = 60
ha = 2
hs = h - 2*ha
Ea = 75*10**3 #Elastic modulus in MPa
Es = 200*10**3 #Elastic modulus in MPa
M = 1500*10**3 # N mm
Explanation: Below is an engineering mechanics problem that can be solved with Python. Follow along to see how to solve the problem with code.
Problem
Given:
Two aluminum strips and a strip of steel are securely bonded together to form a a lamellar composite part. The width of the part is $b=60 \ mm$ and the thickness of the part $h = 40 \ mm$. The modulus of elasticity for the steel in the composite is $200 \ GPa$ and the modulus of elasticity of the aluminum is $75 GPa$. The bending moment $M$ of the part is $1500 N m$
Find:
(a) If the thickness of the aluminum top and bottom plates is varied from $ a = 0 \ mm$ to $a = 20 \ mm$ in $2 \ mm$ increments (the composite part keeps the same thickness), what is the stress in the steel and what is the stress in the aluminum?
(b) What is the largest stress that can occur in steel if aluminum thickness is varied from $a=0 \ mm$ to $a = 40/2 \ mm$ and how thick is the aluminum when this maximum steel stress occurs?
Solution
Start the solution: install Python
We are going to use Python to code the solution to this problem. If you don't already have Python installed on your computer, I recommend installing the Anaconda distribution of Python. See this post to learn how to install Anaconda on your computer.
I am coding this solution in a Jupyter Notebook. Once you install Anaconda, you can open a Jupyter notebook from the Windows Start Menu or the Anaconda Prompt. See this post to learn about 3 ways to open a Jupyter notebook.
Alternatively, instead of using a Jupyter notebook, you could code your solution in a .py file.
Alright. Let's get coding....
Define variables based on the composite dimensions and material properties
Based on parameters given in the problem, we can define the following variables in Python.
$h = 40 \ mm$ part thickness (height)
$b = 60 \ mm$ part width
$h_a = 2 \ mm$ thickness (height) of aluminum that we'll start the problem with
$h_s = h - 2h_a $ thickness of the steel
$E_a = 75 \ GPa$ elastic modulus of the aluminum
$E_s = 200 \ GPa$ elastic modulus of the steel
$M = 1500 \ N \ m$ applied moment
End of explanation
n = Es/Ea
ha = 2
I = (1/12)*b*h**3 + (1/12)*(2*(1/2)*(n*b-b))*(h-2*ha)**3
print(f"The moment of inertia I = {I} mm4")
Explanation: Assume the aluminum thickness is 2 mm and find the stress in the steel and aluminum
Next we will compute the stress in the steel and the stress in the aluminum due to the applied moment $M$, assuming that the aluminum top and bottom layers are each $2 \ mm$ thick.
End of explanation
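As a quick cross-check on the transformed-section formula used above, the same moment of inertia can be written as the full aluminum rectangle plus (n - 1) extra widths of the steel core; a minimal sketch reusing b, h, n, and ha from the cells above:
# equivalent form of the transformed-section moment of inertia
I_check = (1/12)*b*h**3 + (1/12)*(n - 1)*b*(h - 2*ha)**3
print(I_check)  # should match the I printed above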
sa = M*(h/2)/I
print(f"The stress in the aluminum sigma a = {sa} MPa")
Explanation: Calculate the stress in the aluminum
The maximum stress in the aluminum, $\sigma$ is dependent on the bending moment $M$, the distance from the neutral axis $c$ and the moment of inertia I according to the following equation:
$$ \sigma = \frac{Mc}{I} $$
In case of the aluminum, $c$ the distance from the neutral axis that we are calculating the stress for is $c = h/2$, so the equation above becomes:
$$ \sigma_a = \frac{M(h/2)}{I} $$
We can code this into Python pretty easily.
End of explanation
ss = n*M*(h/2-ha)/I
print(f"The stress in the steel sigma s = {ss} MPa")
Explanation: Calculate the stress in the steel
Maximum stress in the steel, $\sigma$, depends on the bending moment $M$, the distance from the neutral axis $c$, and the moment of inertia $I$ according to the same equation we used to calculate the stress in the aluminum.
$$ \sigma = n\frac{Mc}{I} $$
In the case of the steel, $c$, the distance from the neutral axis that we are calculating the stress for, is $c = h/2-h_a$, so the equation above becomes:
$$ \sigma_s = n\frac{M(h/2-h_a)}{I} $$
Like before, we can code this into Python.
End of explanation
for ha in range(0,22,2):
print(f"For aluminum thickness h_a = {ha} mm")
I = (1/12)*b*h**3 + (1/12)*(2*(1/2)*(n*b-b))*(h-2*ha)**3
sa = M*(h/2)/I
print(f"The stress in the aluminum sigma a = {sa} MPa")
ss = n*M*(h/2-ha)/I
print(f"The stress in the steel sigma s = {ss} MPa")
Explanation: Code the calculation of stress using a for loop
Now we can find the stress in aluminum and the stress in steel for a range of aluminum thickness using a for loop. Each time through the loop we will use the aluminum thickness to calculate the stress in steel and aluminum.
End of explanation
import numpy as np
SS = []
for ha in np.arange(0,22,0.1):
I = (1/12)*b*h**3 + (1/12)*(2*(1/2)*(n*b-b))*(h-2*ha)**3
ss = n*M*(h/2-ha)/I
SS.append([ss,ha])
maxs = max(SS)
print(f"The maximum stress in the steel is {maxs[0]} MPa at an aluminum thickness of {maxs[1]} mm")
Explanation: Determine the maximum steel stress and which aluminum thickness this occurs at
Next we will code a way to find the maximum steel stress and what aluminum thickness this occurs at
End of explanation |
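The same sweep can also be done without an explicit Python loop by letting numpy broadcast over an array of thicknesses; a minimal sketch reusing b, h, n, and M from above:
import numpy as np
ha_arr = np.arange(0, 22, 0.1)
I_arr = (1/12)*b*h**3 + (1/12)*(n - 1)*b*(h - 2*ha_arr)**3
ss_arr = n*M*(h/2 - ha_arr)/I_arr
idx = np.argmax(ss_arr)
print(f"Max steel stress is {ss_arr[idx]:.1f} MPa at an aluminum thickness of {ha_arr[idx]:.1f} mm")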
13,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup
Let's setup our environment. We'll pull in the the usual gis suspects and setup a leaflet map, read our API keys from a json file, and setup our Planet client
Step1: Make a slippy map to get GeoJSON
The planet API allows you to query using a geojson which is a special flavor of json.
We use geojson to define Areas of Interest, or AOIs in satellite speak.
We are going to create a slippy map using leaflet and apply the Planet 2017 Q1 mosaic as the basemap. This requires our api key.
We are going to add a special draw handler that shoves a draw region into a object so we get the geojson.
If you don't want to do this, or need a fixed query try geojson.io
To install and run
Step2: Querying the Planet API.
First we'll grab our geojson area of interest (AOI) and use it to construct a query.
We'll then build a search to search that area looking for PSScene3Band
We have lots of products
Step3: Cleanup
The data we got back is good, but we need some more information
We got back big scenes, but we only care about our area of interest. The scene may not cover the whole area of interest.
We can use the Shapely library to quickly figure out how much each scene overlaps our AOI
We will convert our AOI and the geometry of each scene to calculate overlap using a shapely call.
The returned acquisition, publish, and update times are strings; we'll convert them to datetime objects so we can search.
Step4: Filtering our search using pandas.
Using our dataframe we will filter the scenes to just what we want.
First we want scenes with less than 10% clouds.
Second we want standard quality images. Test images may not be high quality.
Third well only look for scenes since January.
Finally we will create a new data frame with our queries and print the results.
Step5: Visualizing scene foot prints overlap with our AOI
We know these scenes intersect with our AOI, but we aren't quite sure about the geometry.
We are going to plot our scene footprints and original AOI on our slippy map.
To do this we create GeoJson objects with properties.
Step6: Let's see what we got.
The API returns a handy thumbnail link.
Let's tell jupyter to show it.
You may need to login to planet explorer to have auth.
If this is the case just print the urls and paste them into your browser.
Step11: Product Activation and Downloading
There are two things we need to know, the satellite type (asset) and image type (product).
Full resolution uncompressed satellite images are big and there are lots of ways to view them.
For this reason Planet generally keeps images in their native format and only processes them on customer requests. There is some caching of processed scenes, but this is the exception not the rule.
All images must be activated prior to downloading and this can take some time based on demand.
Additionally we need to determine what sort of product we want to download. Generally speaking there are three kinds of scenes
Step12: Scenes ACTIVATE!
Given our good scenes list we will convert the data frame "id" column into a list and activate every item in that list.
For this example we are going to default to using a 3Band visual product but I have included some four band methods to help you out.
Activation usually takes about 5-15 minutes so get some coffee.
Step13: Download Scenes
In this section we will see if our scenes have been activated.
If they are activated the client object will have its status flag set to active.
Once that is done we will then save the scenes to the local directory.
A smart engineer would set a path variable to store these files and check if the asset has already been downloaded prior to downloading
Step18: Loading Images
There are a variety of ways to load tif data, including Rasterio, GDAL, OpenCV, and SKImage.
Today we are going to use rasterio and load each channel into a numpy array.
Since the visual 3Band products are rotated we can also open a mask layer for processing.
Step19: Read Images and Use Matplotlib to show them.
Step20: Quick Histogram
Next up we'll plot the histogram of the image.
A histogram is just a plot of the number of pixels with a specific intensity for a given color.
Step21: Decomposing Channels
We can also decompose the channels of the image.
Sometimes it is useful to work just in a single channel.
Other times channels can be used to do useful things, like filter out clouds.
Step22: But all of these scenes are big, and we want downtown San Francisco
We can clip all of the scenes to the AOI we selected at the start of the notebook
First we'll dump the geojson to a file.
Since geospatial data is "big" we often work with files and get stuff out of memory ASAP.
For each of our scenes we'll create a 'clip' file.
We will use a tool called GDAL to clip the scene to our AOI
GDAL stands for Geospatial Data Abstraction Library
GDAL is a C++ library that is often run from the command line, but it does have SWIG bindings.
Step23: Awesome, Let's take a look at what we got.
Step24: Hrm... that's not right.
You'll notice that a lot of these scenes don't fill our AOI.
A lot of these images were taken roughly at the same time.
We should try to merge these scenes together to make one big scene.
This process is called mosaicking, and GDAL can help.
We will call GDAL from the command line using subprocess to do this for us.
Step25: Let's take a look.... looks much better
Step26: Now let's pull it all together to do something interesting.
First we'll download and activate all of our target scenes from the past few years.
Then we'll clip them using GDAL to the small AOI we selected above.
Finally we'll export them and use that data to make a mosaic.
We'll use ImageMagick to convert our tifs to gifs, and our multiple gifs to an animated gif. | Python Code:
# See requirements.txt to set up your dev environment.
import sys
import os
import json
import scipy
import urllib
import datetime
import urllib3
import rasterio
import subprocess
import numpy as np
import pandas as pd
import seaborn as sns
from osgeo import gdal
from planet import api
from planet.api import filters
from traitlets import link
from shapely.geometry import mapping, shape
from IPython.display import display, Image, HTML
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
urllib3.disable_warnings()
from ipyleaflet import (
Map,
Marker,
TileLayer, ImageOverlay,
Polyline, Polygon, Rectangle, Circle, CircleMarker,
GeoJSON,
DrawControl
)
%matplotlib inline
# will pick up api_key via environment variable PL_API_KEY
# but can be specified using `api_key` named argument
api_keys = json.load(open("apikeys.json",'r'))
client = api.ClientV1(api_key=api_keys["PLANET_API_KEY"])
Explanation: Setup
Let's set up our environment. We'll pull in the usual GIS suspects, set up a leaflet map, read our API keys from a json file, and set up our Planet client.
End of explanation
# Basemap Mosaic (v1 API)
mosaicsSeries = 'global_quarterly_2017q1_mosaic'
# Planet tile server base URL (Planet Explorer Mosaics Tiles)
mosaicsTilesURL_base = 'https://tiles0.planet.com/experimental/mosaics/planet-tiles/' + mosaicsSeries + '/gmap/{z}/{x}/{y}.png'
# Planet tile server url
mosaicsTilesURL = mosaicsTilesURL_base + '?api_key=' + api_keys["PLANET_API_KEY"]
# Map Settings
# Define colors
colors = {'blue': "#009da5"}
center = [37.774929,-122.419416]
# Define initial map zoom level
zoom = 11
# Set Map Tiles URL
planetMapTiles = TileLayer(url= mosaicsTilesURL)
# Create the map
m = Map(
center=center,
zoom=zoom,
default_tiles = planetMapTiles # Uncomment to use Planet.com basemap
)
# Define the draw tool type options
polygon = {'shapeOptions': {'color': colors['blue']}}
rectangle = {'shapeOptions': {'color': colors['blue']}}
# Create the draw controls
# @see https://github.com/ellisonbg/ipyleaflet/blob/master/ipyleaflet/leaflet.py#L293
dc = DrawControl(
polygon = polygon,
rectangle = rectangle
)
# Initialize an action counter variable
actionCount = 0
AOIs = {}
# Register the draw controls handler
def handle_draw(self, action, geo_json):
# Increment the action counter
global actionCount
actionCount += 1
# Remove the `style` property from the GeoJSON
geo_json['properties'] = {}
# Convert geo_json output to a string and prettify (indent & replace ' with ")
geojsonStr = json.dumps(geo_json, indent=2).replace("'", '"')
AOIs[actionCount] = json.loads(geojsonStr)
# Attach the draw handler to the draw controls `on_draw` event
dc.on_draw(handle_draw)
m.add_control(dc)
m
Explanation: Make a slippy map to get GeoJSON
The planet API allows you to query using a geojson which is a special flavor of json.
We use geojson to define Areas of Interest, or AOIs in satellite speak.
We are going to create a slippy map using leaflet and apply the Planet 2017 Q1 mosaic as the basemap. This requires our api key.
We are going to add a special draw handler that shoves a drawn region into an object so we get the geojson.
If you don't want to do this, or need a fixed query try geojson.io
To install and run:
$ pip install ipyleaflet
$ jupyter nbextension enable --py --sys-prefix ipyleaflet
$ jupyter nbextension enable --py --sys-prefix widgetsnbextension
More information
End of explanation
print AOIs[1]
myAOI = AOIs[1]["geometry"]
# build a query using the AOI and
# a cloud_cover filter that excludes 'cloud free' scenes
old = datetime.datetime(year=2017,month=1,day=1)
query = filters.and_filter(
filters.geom_filter(myAOI),
filters.range_filter('cloud_cover', lt=30),
filters.date_range('acquired', gt=old)
)
# build a request for only PlanetScope imagery
request = filters.build_search_request(
query, item_types=['PSScene3Band'] #"REOrthoTile"
)
# if you don't have an API key configured, this will raise an exception
result = client.quick_search(request)
scenes = []
planet_map = {}
for item in result.items_iter(limit=500):
planet_map[item['id']]=item
props = item['properties']
props["id"] = item['id']
props["geometry"] = item["geometry"]
props["thumbnail"] = item["_links"]["thumbnail"]
scenes.append(props)
scenes = pd.DataFrame(data=scenes)
display(scenes)
print scenes['satellite_id']
print len(scenes)
Explanation: Querying the Planet API.
First we'll grab our geojson area of interest (AOI) and use it to construct a query.
We'll then build a search to search that area looking for PSScene3Band
We have lots of products: RapidEye, PlanetScope (PS) 3 and 4 band, LandSat, and Sentinel are all possible.
Once we have our query, we'll do the search. We will then iterate over the results, slurp up the data, and put them in a pandas data frame for easy sorting.
We'll print the first few so we're sure it works.
End of explanation
# now let's clean up the datetime stuff
# make a shapely shape from our aoi
sf = shape(myAOI)
footprints = []
overlaps = []
# go through the geometry from our api call, convert to a shape and calculate overlap area.
# also save the shape for safe keeping
for footprint in scenes["geometry"].tolist():
s = shape(footprint)
footprints.append(s)
overlap = 100.0*(sf.intersection(s).area / sf.area)
overlaps.append(overlap)
# take our lists and add them back to our dataframe
scenes['overlap'] = pd.Series(overlaps, index=scenes.index)
scenes['footprint'] = pd.Series(footprints, index=scenes.index)
# now make sure pandas knows about our date/time columns.
scenes["acquired"] = pd.to_datetime(scenes["acquired"])
scenes["published"] = pd.to_datetime(scenes["published"])
scenes["updated"] = pd.to_datetime(scenes["updated"])
scenes.head()
Explanation: Cleanup
The data we got back is good, but we need some more information
We got back big scenes, but we only care about our area of interest. The scene may not cover the whole area of interest.
We can use the Shapely library to quickly figure out how much each scene overlaps our AOI
We will convert our AOI and the geometry of each scene to calculate overlap using a shapely call.
The returned acquisition, publish, and update times are strings; we'll convert them to datetime objects so we can search.
End of explanation
# Now let's get it down to just good, recent, clear scenes
clear = scenes['cloud_cover']<0.05
good = scenes['quality_category']=="standard"
recent = scenes["acquired"] > datetime.date(year=2017,month=1,day=1)
partial_coverage = scenes["overlap"] > 60
good_scenes = scenes[(good&clear&recent&partial_coverage)]
display(good_scenes)
print len(good_scenes)
# Now let's get it down to just good, recent, clear scenes
clear = scenes['cloud_cover']<0.05
good = scenes['quality_category']=="standard"
all_time = scenes["acquired"] > datetime.date(year=2017,month=1,day=1)
full_coverage = scenes["overlap"] >= 60
all_scenes = scenes[(clear&all_time&full_coverage)]
display(all_scenes)
print all_scenes['satellite_id']
print len(all_scenes)
Explanation: Filtering our search using pandas.
Using our dataframe we will filter the scenes to just what we want.
First we want scenes with less than 5% clouds.
Second we want standard quality images. Test images may not be high quality.
Third, we'll only look for scenes since January.
Finally we will create a new data frame with our queries and print the results.
End of explanation
# first create a list of colors
colors = ["#ff0000","#00ff00","#0000ff","#ffff00","#ff00ff","#00ffff"]
# grab our scenes from the geometry/footprint geojson
footprints = good_scenes["geometry"].tolist()
# for each footprint/color combo
for footprint,color in zip(footprints,colors):
# create the leaflet object
feat = {'geometry':footprint,"properties":{
'style':{'color': color,'fillColor': color,'fillOpacity': 0.2,'weight': 1}},
'type':u"Feature"}
# convert to geojson
gjson = GeoJSON(data=feat)
# add it our map
m.add_layer(gjson)
# now we will draw our original AOI on top
feat = {'geometry':myAOI,"properties":{
'style':{'color': "#FFFFFF",'fillColor': "#FFFFFF",'fillOpacity': 0.5,'weight': 1}},
'type':u"Feature"}
gjson = GeoJSON(data=feat)
m.add_layer(gjson)
m
Explanation: Visualizing scene foot prints overlap with our AOI
We know these scenes intersect with our AOI, but we aren't quite sure about the geometry.
We are going to plot our scene footprints and original AOI on our slippy map.
To do this we create GeoJson objects with properties.
End of explanation
imgs = []
# loop through our thumbnails and add display them
for img in good_scenes["thumbnail"].tolist():
imgs.append(Image(url=img))
print img
display(*imgs)
Explanation: Let's see what we got.
The API returns a handy thumbnail link.
Let's tell jupyter to show it.
You may need to login to planet explorer to have auth.
If this is the case just print the urls and paste them into your browser.
End of explanation
def get_products(client, scene_id, asset_type='PSScene3Band'):
Ask the client to return the available products for a
given scene and asset type. Returns a list of product
strings
out = client.get_assets_by_id(asset_type,scene_id)
temp = out.get()
return temp.keys()
def activate_product(client, scene_id, asset_type="PSScene3Band",product="analytic"):
Activate a product given a scene, an asset type, and a product.
On success return the return value of the API call and an activation object
temp = client.get_assets_by_id(asset_type,scene_id)
products = temp.get()
if( product in products.keys() ):
return client.activate(products[product]),products[product]
else:
return None
def download_and_save(client,product):
Given a client and a product activation object download the asset.
This will save the tiff file in the local directory and return its
file name.
out = client.download(product)
fp = out.get_body()
fp.write()
return fp.name
def scenes_are_active(scene_list):
Check if all of the resources in a given list of
scene activation objects are ready for downloading.
retVal = True
for scene in scene_list:
if scene["status"] != "active":
print "{} is not ready.".format(scene)
return False
return True
Explanation: Product Activation and Downloading
There are two things we need to know, the satellite type (asset) and image type (product).
Full resolution uncompressed satellite images are big and there are lots of ways to view them.
For this reason Planet generally keeps images in their native format and only processes them on customer requests. There is some caching of processed scenes, but this is the exception not the rule.
All images must be activated prior to downloading and this can take some time based on demand.
Additionally we need to determine what sort of product we want to download. Generally speaking there are three kinds of scenes:
Analytic - multi-band full resolution images that have not been processed. These are like raw files for DSLR cameras.
Visual - these are color corrected rectified tifs. If you are just starting out this is your best call.
UDM - Usable data mask. This mask can be used to find bad pixels and columns and to mask out areas with clouds.
End of explanation
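Because activation is asynchronous, a common pattern is to poll the asset status until it flips to active before downloading. A rough sketch built on the same client calls used in the helper functions above; the function name and the 30-second interval are my own choices, not part of the Planet client:
import time
def wait_until_active(client, scene_id, asset_type="PSScene3Band", product="visual", interval=30):
    # re-fetch the asset record until its status becomes "active"
    while True:
        assets = client.get_assets_by_id(asset_type, scene_id).get()
        if assets[product]["status"] == "active":
            return assets[product]
        time.sleep(interval)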
to_get = good_scenes["id"].tolist()
activated = []
# for each scene to get
for scene in to_get:
# get the product
product_types = get_products(client,scene)
for p in product_types:
# if there is a visual product
if p == "visual": # p == "basic_analytic_dn"
print "Activating {0} for scene {1}".format(p,scene)
# activate the product
_,product = activate_product(client,scene,product=p)
activated.append(product)
Explanation: Scenes ACTIVATE!
Given our good scenes list we will convert the data frame "id" column into a list and activate every item in that list.
For this example we are going to default to using a 3Band visual product but I have included some four band methods to help you out.
Activation usually takes about 5-15 minutes so get some coffee.
End of explanation
tiff_files = []
asset_type = "_3B_Visual"
# check if our scenes have been activated
if True: #scenes_are_active(activated):
for to_download,name in zip(activated,to_get):
# create the product name
name = name + asset_type + ".tif"
# if the product exists locally
if( os.path.isfile(name) ):
# do nothing
print "We have scene {0} already, skipping...".format(name)
tiff_files.append(name)
elif to_download["status"] == "active":
# otherwise download the product
print "Downloading {0}....".format(name)
fname = download_and_save(client,to_download)
tiff_files.append(fname)
print "Download done."
else:
print "Could not download, still activating"
else:
print "Scenes aren't ready yet"
print tiff_files
Explanation: Download Scenes
In this section we will see if our scenes have been activated.
If they are activated the client object will have its status flag set to active.
Once that is done we will then save the scenes to the local directory.
A smart engineer would set a path variable to store these files and check if the asset has already been downloaded prior to downloading
End of explanation
def load_image4(filename):
Return a 4D (r, g, b, nir) numpy array with the data in the specified TIFF filename.
path = os.path.abspath(os.path.join('./', filename))
if os.path.exists(path):
with rasterio.open(path) as src:
b, g, r, nir = src.read()
return np.dstack([r, g, b, nir])
def load_image3(filename):
Return a 3D (r, g, b) numpy array with the data in the specified TIFF filename.
path = os.path.abspath(os.path.join('./', filename))
if os.path.exists(path):
with rasterio.open(path) as src:
b,g,r,mask = src.read()
return np.dstack([b, g, r])
def get_mask(filename):
Return a 1D mask numpy array with the data in the specified TIFF filename.
path = os.path.abspath(os.path.join('./', filename))
if os.path.exists(path):
with rasterio.open(path) as src:
b,g,r,mask = src.read()
return np.dstack([mask])
def rgbir_to_rgb(img_4band):
Convert an RGBIR image to RGB
return img_4band[:,:,:3]
Explanation: Loading Images
There are a variety of ways to load tif data, including Rasterio, GDAL, OpenCV, and SKImage.
Today we are going to use rasterio and load each channel into a numpy array.
Since the visual 3Band products are rotated we can also open a mask layer for processing.
End of explanation
img_files = []
masks = []
# load the images and masks
for fname in tiff_files[0:2]:
img_files.append(load_image3(fname))
masks.append(get_mask(fname))
i = 0
# use matplotlib to display the map
for img,name in zip(img_files,tiff_files):
plt.figure(i,figsize=(18,36))
plt.imshow(img)
plt.title(name)
i+=1
Explanation: Read Images and Use Matplotlib to show them.
End of explanation
import numpy.ma as ma
def plot_hist4(img_4band,title=""):
# Plot a four band histogram
r, g, b, nir = img_4band[:, :, 0], img_4band[:, :, 1], img_4band[:, :, 2], img_4band[:, :, 3]
for slice_, name, color in ((r,'r', 'red'),(g,'g', 'green'),(b,'b', 'blue'), (nir, 'nir', 'magenta')):
plt.hist(slice_.ravel(), bins=100,
range=[0,img_4band.max()],
label=name, color=color, histtype='step')
plt.title(title)
plt.legend()
def plot_hist3(img_3band,mask,title=""):
# plot a three band histogram
r, g, b = img_3band[:, :, 0], img_3band[:, :, 1], img_3band[:, :, 2]
r = ma.masked_array(r,mask=mask)
g = ma.masked_array(g,mask=mask)
b = ma.masked_array(b,mask=mask)
for slice_, name, color in ((r,'r', 'red'),(g,'g', 'green'),(b,'b', 'blue')):
plt.hist(slice_.ravel(), bins=25,
range=[0,img_3band.max()],
label=name, color=color, histtype='step')
plt.title(title)
plt.legend()
i = 0
for img,name,mask in zip(img_files,tiff_files,masks):
plt.figure(i,figsize=(18,6))
plot_hist3(img,mask=mask,title=name)
i+=1
Explanation: Quick Histogram
Next up we'll plot the histogram of the image.
A histogram is just a plot of the number of pixels with a specific intensity for a given color.
End of explanation
def plot_bands4(img,title="",i=0):
fig = plt.figure(i)
fig.set_size_inches(24, 3)
r, g, b, nir = img[:, :, 0], img[:, :, 1], img[:, :, 2], img[:, :, 3]
fig.suptitle(title)
for i, (x, c) in enumerate(((r, 'r'), (g, 'g'), (b, 'b'), (nir, 'near-ir'))):
a = fig.add_subplot(1, 4, i+1)
a.set_title(c)
plt.imshow(x)
def plot_bands3(img,title="",i=0):
fig = plt.figure(i)
fig.set_size_inches(24, 5)
r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]
fig.suptitle(title)
for i, (x, c) in enumerate(((r, 'r'), (g, 'g'), (b, 'b'))):
a = fig.add_subplot(1, 4, i+1)
a.set_title(c)
plt.imshow(x)
plot_bands3(img_files[0],title=tiff_files[0],i=0)
Explanation: Decomposing Channels
We can also decompose the channels of the image.
Sometimes it is useful to work just in a single channel.
Other times channels can be used to do useful things, like filter out clouds.
End of explanation
aoi_file ="sanfrancisco.geojson"
# write our input AOI to a geojson file.
with open(aoi_file,"w") as f:
f.write(json.dumps(myAOI))
# create our full input and output names
clip_names = [os.path.abspath(tiff[:-4]+"_clip"+".tif") for tiff in tiff_files]
full_tif_files = [os.path.abspath("./"+tiff) for tiff in tiff_files]
for in_file,out_file in zip(tiff_files,clip_names):
commands = ["gdalwarp",
"-t_srs", "EPSG:3857",
"-cutline", aoi_file,
"-crop_to_cutline",
"-tap",
"-tr", "3", "3",
"-overwrite"]
subprocess.call(["rm",out_file])
commands.append(in_file)
commands.append(out_file)
print " ".join(commands)
subprocess.call(commands)
Explanation: But all of these scenes are big, and we want downtown San Francisco
We can clip all of the scenes to the AOI we selected at the start of the notebook
First we'll dump the geojson to a file.
Since geospatial data is "big" we often work with files and get stuff out of memory ASAP.
For each of our scenes we'll create a 'clip' file.
We will use a tool called GDAL to clip the scene to our AOI
GDAL stands for Geospatial Data Abstraction Library
GDAL is a C++ library that is often run from the command line, but it does have SWIG bindings.
End of explanation
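If you would rather stay in Python than shell out to gdalwarp, rasterio's mask module can do a similar crop; a minimal sketch that assumes the scene and the AOI are already in the same coordinate reference system (the gdalwarp call above also reprojects, which this sketch skips):
import rasterio
from rasterio.mask import mask
with rasterio.open(tiff_files[0]) as src:
    # crop the raster to the AOI polygon
    clipped, clip_transform = mask(src, [myAOI], crop=True)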
clip_img_files = [load_image3(fname) for fname in clip_names]
i = 0
for img,name in zip(clip_img_files,clip_names)[0:2]:
plt.figure(i,figsize=(6,12))
plt.imshow(img)
plt.title(name)
i+=1
Explanation: Awesome, Let's take a look at what we got.
End of explanation
subprocess.call(["rm","merged.tif"])
commands = ["gdalwarp",
"-t_srs", "EPSG:3857",
"-cutline", aoi_file,
"-crop_to_cutline",
"-tap",
"-tr", "3", "3",
"-overwrite"]
output_mosaic = "merged.tif"
for tiff in tiff_files[0:2]:
commands.append(tiff)
commands.append(output_mosaic)
print " ".join(commands)
subprocess.call(commands)
Explanation: Hrm... that's not right.
You'll notice that a lot of these scenes don't fill our AOI.
A lot of these images were taken roughly at the same time.
We should try to merge these scenes together to make one big scene.
This process is called mosaicking, and GDAL can help.
We will call GDAL from the command line using subprocess to do this for us.
End of explanation
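rasterio also ships a merge helper that mosaics rasters in memory, which avoids the subprocess call; a minimal sketch over the first two scenes, assuming they share a coordinate reference system:
import rasterio
from rasterio.merge import merge
sources = [rasterio.open(f) for f in tiff_files[0:2]]
mosaic, mosaic_transform = merge(sources)  # numpy array plus affine transform
for s in sources:
    s.close()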
merged = load_image3("./merged.tif")
plt.figure(i,figsize=(6,12))
plt.imshow(merged)
plt.title("merged")
Explanation: Let's take a look.... looks much better
End of explanation
# Activate
to_get = all_scenes["id"].tolist()
activated = []
for scene in to_get:
product_types = get_products(client,scene)
for p in product_types:
if p == "visual": # p == "basic_analytic_dn"
print "Activating {0} for scene {1}".format(p,scene)
_,product = activate_product(client,scene,product=p)
activated.append(product)
# Download
tiff_files = []
asset_type = "_3B_Visual"
if True:#scenes_are_active(activated):
for to_download,name in zip(activated,to_get):
name = name + asset_type + ".tif"
if( os.path.isfile(name) ):
print "We have scene {0} already, skipping...".format(name)
tiff_files.append(name)
elif to_download["status"] == "active":
print "Downloading {0}....".format(name)
fname = download_and_save(client,to_download)
tiff_files.append(fname)
print "Download done."
else:
print "Could not download, still activating"
else:
print "Scenes aren't ready yet"
tiff_files = sorted(tiff_files)
clip_names = []
# Create a list of tif file names.
for tiff in tiff_files:
clip_names.append(os.path.abspath(tiff[:-4]+"_clip"+".tif"))
full_tif_files = []
for tiff in tiff_files:
full_tif_files.append(os.path.abspath("./"+tiff))
# Run GDAL to crop our file down.
for in_file,out_file in zip(tiff_files,clip_names):
commands = ["gdalwarp",
"-t_srs", "EPSG:3857",
"-cutline", aoi_file,
"-crop_to_cutline",
"-tap",
"-tr", "3", "3",
"-overwrite"]
subprocess.call(["rm",out_file])
commands.append(in_file)
commands.append(out_file)
print " ".join(commands)
subprocess.call(commands)
temp_names = []
i = 0
# use image magic convert to gif
for in_file in clip_names:
temp_name = "img{num:04d}.png".format(num=i)
command = ["convert", in_file, "-sample", "30x30%",temp_name]
temp_names.append(temp_name)
i += 1
subprocess.call(command)
magic = "SanFrancisco.gif"
last_call = ["convert","-delay", "10","-loop","0", "img*.png",magic]
subprocess.call(last_call)
print "done!"
Explanation: Now let's pull it all together to do something interesting.
First we'll download and activate all of our target scenes from the past few years.
Then we'll clip them using GDAL to the small AOI we selected above.
Finally we'll export them and use that data to make a mosaic.
We'll use ImageMagick to convert our tifs to gifs, and our multiple gifs to an animated gif.
End of explanation |
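If ImageMagick is not available, the animated gif step can also be done in Python with imageio; a minimal sketch over the downsampled pngs produced above, with an arbitrary frame duration and output name:
import imageio
frames = [imageio.imread(f) for f in sorted(temp_names)]
imageio.mimsave("SanFrancisco_py.gif", frames, duration=0.1)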
13,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deciphering a puzzle from a company's hiring page I came across
A friend asked me to look at a puzzle at the hiring page of a company he was applying to. Here is my attempt to the problem. Haven't seen the problem before, so I am not sure if I solved it completely, but sharing my attempt here.
Information Provided
A zipped folder containing a text file and 2 images with following content.
A string of characters (A, T, C and G)
<img src="CipherAndKey/DNAFragment.png" width="500">
A wheel with some decoding information
<img src="CipherAndKey/KEY-01.jpg" width="500">
A key that indicates use of enigma
<img src="CipherAndKey/KEY-02.jpg" width="500">
Step 1. Decoding codons
Looking at the string, we can immediately observe a message encrypted in language of nucleotides (A, T, C and G). I happen to know a sequence of three DNA or RNA nucleotides corresponds to a specific amino acid.
Step1: The provided string's length is a multiple of three. Seems promising. Let's build a hashmap out of the wheel provided. What seems apparent is that the outermost letter is encoded by a sequence of letters from center to periphery (or vice versa).
Step2: Now that we have a hashmap. Lets try to decode the string provided. | Python Code:
GivenString = "GGCTACTAACATGCCTTTCAACTTCCAGGGTTACTGTCAGGGTACTTATGCTCGCATTTACAAGGGCCCTACTCACTGTCAGAAGGGCTTTGGTCTTCAGGGCAATTCAAAAGAGAACCTACCGATCAATCCATCAGAGAACGAGCTTGGATGTGATACCCCTCACGCAGAAACGGCAGTTTGCATGTGGCGCGACAAAGCACCGCTTACGGAATGGATGTCGGGTGTCCGGGATACACTACTGGCTATAACATTCTGTATCAAGGCTCGGGTCGTATGGGTTAGGATGAGGTAGATCTAGAAGCTTTTCTTCCAGGGCCTCTTTTCTACGAGCGAGCCGCGATACACTCAGAATTGCAGCGTACCCTTTCTTGGATTTAACCAAATAAGCATTACCATCTAAACCATAGATATCAATAAACTGAAATGACACCTGCCGTCGGTTCTACCCGATGCGAATGGTTGAATCCGACGCAATTGACGATTCCAGGCCGGGGCACTATGCAGACGCAAAACCAAAATTTGTAATACATGCGCGGGATGCGGTGTAGAGATCTTTAGACGCTAGCCGCTCGGAAAATGTGTTGGTTTTGCCCGTCAGGGCCCTACTCAGGGTCATGGTAATATTGATTAGACGGGTCCCTGCTTCTCGGGCCACCTGGGTTAGGGTTCCAGCCCGTCCAATAGCTAGGTTTTTGATTATAATGCGGGTTACCCGGGTTATACTGACACTGCCCGCTACTATAAAATTCTCCACTGTCATCGTAACCCGAATCTCTTTGATTCTCACTTACTGTCGACGGCTCGGTTCTTCCAAGCACTGCA"
print(len(GivenString))
Explanation: Deciphering a puzzle from a company's hiring page I came across
A friend asked me to look at a puzzle at the hiring page of a company he was applying to. Here is my attempt at the problem. I haven't seen the problem before, so I am not sure if I solved it completely, but I am sharing my attempt here.
Information Provided
A zipped folder containing a text file and 2 images with following content.
A string of characters (A, T, C and G)
<img src="CipherAndKey/DNAFragment.png" width="500">
A wheel with some decoding information
<img src="CipherAndKey/KEY-01.jpg" width="500">
A key that indicates use of enigma
<img src="CipherAndKey/KEY-02.jpg" width="500">
Step 1. Decoding codons
Looking at the string, we can immediately observe a message encrypted in the language of nucleotides (A, T, C and G). I happen to know that a sequence of three DNA or RNA nucleotides corresponds to a specific amino acid.
End of explanation
import string
charlist = [chr(9658), chr(9608)]+ \
list(string.ascii_uppercase)+ \
list([str(x) for x in range(10)])+ \
list("~!@#$%^&*()-+{}\/<>,.?:;` ")
x = 'TCAG'
symbollist = [k+j+i for k in x for j in x for i in x]
mapping = dict(zip(symbollist, charlist))
Explanation: The provided string's length is a multiple of three. Seems promising. Let's build a hashmap out of the wheel provided. What seems apparent is that the outermost letter is encoded by a sequence of letters from center to periphery (or vice versa).
End of explanation
DecodedString = ''
for i in range(0,len(GivenString),3):
decode = mapping[GivenString[i:i+3]]
DecodedString += decode
DecodedString
Explanation: Now that we have a hashmap, let's try to decode the string provided.
End of explanation |
13,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
Step1: Note
Step2: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
Step3: To shut the window showing the simulation, use env.close().
If you ran the simulation above, we can look at the rewards
Step4: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation
Step5: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
Step6: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent
Step7: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
Step8: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
Step9: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
Step10: Testing
Let's checkout how our trained agent plays the game. | Python Code:
import gym
import tensorflow as tf
import numpy as np
Explanation: Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
End of explanation
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
Explanation: Note: Make sure you have OpenAI Gym cloned into the same directory with this notebook. I've included gym as a submodule, so you can run git submodule update --init --recursive to pull the contents into the gym repo.
End of explanation
env.reset()
rewards = []
for _ in range(100):
env.render()
state, reward, done, info = env.step(env.action_space.sample()) # take a random action
rewards.append(reward)
if done:
rewards = []
env.reset()
env.render(close=True)
env.reset()
Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
End of explanation
print(rewards[-20:])
print(sum(rewards))
print(len(rewards))
Explanation: To shut the window showing the simulation, use env.close().
If you ran the simulation above, we can look at the rewards:
End of explanation
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Before we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
End of explanation
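To make the training target concrete, here is a tiny worked example of the Bellman target the loss above is built around, with made-up numbers:
import numpy as np
gamma = 0.99
reward = 1.0
next_Qs = np.array([0.7, 1.3])               # hypothetical Q(s', a') values for the two actions
target = reward + gamma * np.max(next_Qs)    # 1.0 + 0.99 * 1.3 = 2.287
print(target)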
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
Explanation: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
End of explanation
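A quick usage sketch of the Memory buffer defined above, with dummy transitions, to show the add/sample round trip:
mem = Memory(max_size=5)
for i in range(8):
    mem.add((i, 0, 1.0, i + 1))   # (state, action, reward, next_state)
print(len(mem.buffer))            # 5: the three oldest transitions were pushed out
print(mem.sample(2))              # two random transitions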
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
Explanation: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
Initialize the memory $D$
Initialize the action-value network $Q$ with random weights
For episode = 1, $M$ do
For $t$, $T$ do
With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
endfor
endfor
Hyperparameters
One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.
End of explanation
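It can help to look at the exploration schedule before training; the probability used in the loop below decays exponentially with the step counter. A minimal sketch with the parameters just defined:
import numpy as np
import matplotlib.pyplot as plt
steps = np.arange(0, 50000)
eps = explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * steps)
plt.plot(steps, eps)
plt.xlabel('step')
plt.ylabel('exploration probability')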
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
Explanation: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
End of explanation
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
Explanation: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
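The mainQN object is assumed to be an instance of a Q-network class defined earlier in the original notebook; a minimal TensorFlow 1.x sketch that is consistent with the attributes used above (inputs_, actions_, targetQs_, output, loss, opt) could look like this (layer sizes and learning rate are assumptions):
import tensorflow as tf

class QNetwork:
    def __init__(self, state_size=4, action_size=2, hidden_size=64,
                 learning_rate=1e-4, name='QNetwork'):
        with tf.variable_scope(name):
            # Placeholders for states, chosen actions and TD targets
            self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
            self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
            self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')

            # Two hidden ReLU layers, linear output layer with one Q-value per action
            h1 = tf.layers.dense(self.inputs_, hidden_size, activation=tf.nn.relu)
            h2 = tf.layers.dense(h1, hidden_size, activation=tf.nn.relu)
            self.output = tf.layers.dense(h2, action_size, activation=None)

            # Q-value of the action that was actually taken in each sampled transition
            one_hot_actions = tf.one_hot(self.actions_, action_size)
            Q = tf.reduce_sum(self.output * one_hot_actions, axis=1)

            # Mean squared TD error and its optimizer
            self.loss = tf.reduce_mean(tf.square(self.targetQs_ - Q))
            self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)

# mainQN = QNetwork(name='main')   # hypothetical instantiation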
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
Explanation: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
End of explanation
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
Explanation: Testing
Let's check out how our trained agent plays the game.
End of explanation |
13,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing a covariance matrix
Many methods in MNE, including source estimation and some classification
algorithms, require covariance estimations from the recordings.
In this tutorial we cover the basics of sensor covariance computations and
construct a noise covariance matrix that can be used when computing the
minimum-norm inverse solution. For more information, see BABDEEEB.
Step1: Source estimation methods such as MNE require an estimate of the noise in the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see BABDEEEB.
Step2: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the same
as the end of the recording, see
Step3: Now that you have the covariance matrix in an MNE-Python object you can
save it to a file with
Step4: Note that this method also attenuates any activity in your
source estimates that resembles the baseline, whether you like it or not.
Step5: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
Step6: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization), especially if only few samples are available.
Unfortunately it is not easy to tell the effective number of samples, hence,
to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in [1]_. For this the 'auto' option can be used. With this
option cross-validation will be used to learn the optimal regularization
Step7: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the global field power
(GFP) is 1 (calculation of the GFP should take into account the true degrees
of freedom, e.g. ddof=3 with 2 active SSP vectors)
Step8: This plot displays both the whitened evoked signals for each channel and
the whitened GFP. The numbers in the GFP panel represent the estimated rank
of the data, which amounts to the effective degrees of freedom by which the
squared sum across sensors is divided when computing the whitened GFP.
The whitened GFP also helps detecting spurious late evoked components which
can be the consequence of over- or under-regularization.
Note that if data have been processed using signal space separation
(SSS) [2],
gradiometers and magnetometers will be displayed jointly because both are
reconstructed from the same SSS basis vectors with the same numerical rank.
This also implies that both sensor types are not any longer statistically
independent.
These methods for evaluation can be used to assess model violations.
Additional
introductory materials can be found here <https
Step9: This will plot the whitened evoked for the optimal estimator and display the
GFPs for all estimators as separate lines in the related panel.
Finally, let's have a look at the difference between empty room and
event related covariance. | Python Code:
import os.path as op
import mne
from mne.datasets import sample
Explanation: Computing a covariance matrix
Many methods in MNE, including source estimation and some classification
algorithms, require covariance estimations from the recordings.
In this tutorial we cover the basics of sensor covariance computations and
construct a noise covariance matrix that can be used when computing the
minimum-norm inverse solution. For more information, see BABDEEEB.
End of explanation
data_path = sample.data_path()
raw_empty_room_fname = op.join(
data_path, 'MEG', 'sample', 'ernoise_raw.fif')
raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname)
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(raw_fname)
raw.set_eeg_reference('average', projection=True)
raw.info['bads'] += ['EEG 053'] # bads + 1 more
Explanation: Source estimation methods such as MNE require an estimate of the noise in the
recordings. In this tutorial we cover the basics of noise covariance and
construct a noise covariance matrix that can be used when computing the
inverse solution. For more information, see BABDEEEB.
End of explanation
raw_empty_room.info['bads'] = [
bb for bb in raw.info['bads'] if 'EEG' not in bb]
raw_empty_room.add_proj(
[pp.copy() for pp in raw.info['projs'] if 'EEG' not in pp['desc']])
noise_cov = mne.compute_raw_covariance(
raw_empty_room, tmin=0, tmax=None)
Explanation: The definition of noise depends on the paradigm. In MEG it is quite common
to use empty room measurements for the estimation of sensor noise. However if
you are dealing with evoked responses, you might want to also consider
resting state brain activity as noise.
First we compute the noise using empty room recording. Note that you can also
use only a part of the recording with tmin and tmax arguments. That can be
useful if you use resting state as a noise baseline. Here we use the whole
empty room recording to compute the noise covariance (tmax=None is the same
as the end of the recording, see :func:mne.compute_raw_covariance).
Keep in mind that you want to match your empty room dataset to your
actual MEG data, processing-wise. Ensure that filters
are all the same and if you use ICA, apply it to your empty-room and subject
data equivalently. In this case we did not filter the data and
we don't use ICA. However, we do have bad channels and projections in
the MEG data, and, hence, we want to make sure they get stored in the
covariance object.
End of explanation
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5,
baseline=(-0.2, 0.0), decim=3, # we'll decimate for speed
verbose='error') # and ignore the warning about aliasing
Explanation: Now that you have the covariance matrix in an MNE-Python object you can
save it to a file with :func:mne.write_cov. Later you can read it back
using :func:mne.read_cov.
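For example (the filename is illustrative; covariance files conventionally end in -cov.fif):
mne.write_cov('ernoise-cov.fif', noise_cov)
noise_cov = mne.read_cov('ernoise-cov.fif')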
You can also use the pre-stimulus baseline to estimate the noise covariance.
First we have to construct the epochs. When computing the covariance, you
should use baseline correction when constructing the epochs. Otherwise the
covariance matrix will be inaccurate. In MNE this is done by default, but
just to be sure, we define it here manually.
End of explanation
noise_cov_baseline = mne.compute_covariance(epochs, tmax=0)
Explanation: Note that this method also attenuates any activity in your
source estimates that resembles the baseline, whether you like it or not.
End of explanation
noise_cov.plot(raw_empty_room.info, proj=True)
noise_cov_baseline.plot(epochs.info, proj=True)
Explanation: Plot the covariance matrices
Try setting proj to False to see the effect. Notice that the projectors in
epochs are already applied, so proj parameter has no effect.
End of explanation
noise_cov_reg = mne.compute_covariance(epochs, tmax=0., method='auto')
Explanation: How should I regularize the covariance matrix?
The estimated covariance can be numerically
unstable and tends to induce correlations between estimated source amplitudes
and the number of samples available. The MNE manual therefore suggests to
regularize the noise covariance matrix (see
cov_regularization), especially if only few samples are available.
Unfortunately it is not easy to tell the effective number of samples, hence,
to choose the appropriate regularization.
In MNE-Python, regularization is done using advanced regularization methods
described in [1]_. For this the 'auto' option can be used. With this
option cross-validation will be used to learn the optimal regularization:
End of explanation
evoked = epochs.average()
evoked.plot_white(noise_cov_reg, time_unit='s')
Explanation: This procedure evaluates the noise covariance quantitatively by how well it
whitens the data using the
negative log-likelihood of unseen data. The final result can also be visually
inspected.
Under the assumption that the baseline does not contain a systematic signal
(time-locked to the event of interest), the whitened baseline signal should
follow a multivariate Gaussian distribution, i.e.,
whitened baseline signals should be between -1.96 and 1.96 at a given time
sample.
Based on the same reasoning, the expected value for the global field power
(GFP) is 1 (calculation of the GFP should take into account the true degrees
of freedom, e.g. ddof=3 with 2 active SSP vectors):
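In other words (a conceptual sketch only, not the exact code behind plot_white): if W holds the whitened signals with shape (n_channels, n_times) and rank is the effective number of degrees of freedom, the plotted whitened GFP is essentially the squared sum across sensors divided by that rank, and should hover around 1 during the baseline:
import numpy as np

def whitened_gfp(W, rank):
    # W: whitened data (n_channels, n_times); rank: effective degrees of freedom
    return np.sum(W ** 2, axis=0) / rank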
End of explanation
noise_covs = mne.compute_covariance(
epochs, tmax=0., method=('empirical', 'shrunk'), return_estimators=True)
evoked.plot_white(noise_covs, time_unit='s')
Explanation: This plot displays both the whitened evoked signals for each channel and
the whitened GFP. The numbers in the GFP panel represent the estimated rank
of the data, which amounts to the effective degrees of freedom by which the
squared sum across sensors is divided when computing the whitened GFP.
The whitened GFP also helps detecting spurious late evoked components which
can be the consequence of over- or under-regularization.
Note that if data have been processed using signal space separation
(SSS) [2],
gradiometers and magnetometers will be displayed jointly because both are
reconstructed from the same SSS basis vectors with the same numerical rank.
This also implies that both sensor types are not any longer statistically
independent.
These methods for evaluation can be used to assess model violations.
Additional
introductory materials can be found here <https://goo.gl/ElWrxe>.
For expert use cases or debugging the alternative estimators can also be
compared (see
sphx_glr_auto_examples_visualization_plot_evoked_whitening.py) and
sphx_glr_auto_examples_inverse_plot_covariance_whitening_dspm.py):
End of explanation
evoked_meg = evoked.copy().pick_types(meg=True, eeg=False)
noise_cov_meg = mne.pick_channels_cov(noise_cov_baseline, evoked_meg.ch_names)
noise_cov['method'] = 'empty_room'
noise_cov_meg['method'] = 'baseline'
evoked_meg.plot_white([noise_cov_meg, noise_cov], time_unit='s')
Explanation: This will plot the whitened evoked for the optimal estimator and display the
GFPs for all estimators as separate lines in the related panel.
Finally, let's have a look at the difference between empty room and
event related covariance.
End of explanation |
13,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DTM Example
In this example we will present a sample usage of the DTM wrapper. Prior to using this you need to compile the DTM code yourself or use one of the binaries.
This tutorial is on Windows. Running it on Linux and OSX is the same.
In this example we will use a small already processed corpus. To see how to get a dataset to this stage please take a look at Gensim Tutorials
Step1: First we will set up logging
Step2: Now lets load a set of documents
Step3: This corpus contains 10 documents. Now lets say we would like to model this with DTM.
To do this we have to define the time steps
each document belongs to. In this case the first 3 documents were collected at the same time, while the last 7 were collected
a month later, and we wish to see how the topics change from month to month.
For this we will define the time_seq, which contains the time slice definition.
Step4: A simple corpus wrapper to load a premade corpus. You can use this with your own data.
Step5: So now we have to generate the path to DTM executable, here I have already set an ENV variable for the DTM_HOME
Step6: That is basically all we need to be able to invoke the Training.
If initialize_lda=True then DTM will create an LDA model first and store it in initial-lda-ss.dat.
If you already have initial-lda-ss.dat in the DTM folder then you can save time and re-use it with initialize_lda=False. If the file is missing then DTM will exit with an error.
Step7: If everything worked we should be able to print out the topics
Step8: Document-Topic proportions
Next, we'll attempt to find the Document-Topic proportions. We will use the gamma class variable of the model to do the same. Gamma is a matrix such that gamma[5,10] is the proportion of the 10th topic in document 5.
To find, say, the topic proportions in Document 1, we do the following
Step9: DIM Example
The DTM wrapper in Gensim also has the capacity to run in Document Influence Model mode. The Model is described in this paper. What it allows you to do is find the 'influence' of a certain document on a particular topic. It is primarily used in identifying the scientific impact of research papers through the capability of that document's keywords influencing a topic.
'Influence' can be naively thought of like this - if more of a particular document's words appear in subsequent evolution of a topic, that document is understood to have influenced that topic more.
To run it in this mode, we now call DtmModel again, but with the model parameter set as fixed.
Note that running it in this mode will also generate the DTM topics similar to running plain DTM, but with added information on document influence.
Step10: The main difference between the DTM and DIM models is the addition of Influence files for each time-slice, which are exposed through the influences_time variable.
To find, say, the influence of Document 2 on Topic 2 in Time-Slice 1, we do the following | Python Code:
import logging
import os
from gensim import corpora, utils
from gensim.models.wrappers.dtmmodel import DtmModel
import numpy as np
if not os.environ.get('DTM_PATH', None):
raise ValueError("SKIP: You need to set the DTM path")
Explanation: DTM Example
In this example we will present a sample usage of the DTM wrapper. Prior to using this you need to compile the DTM code yourself or use one of the binaries.
This tutorial is on Windows. Running it on Linux and OSX is the same.
In this example we will use a small already processed corpus. To see how to get a dataset to this stage please take a look at Gensim Tutorials
End of explanation
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logging.debug("test")
Explanation: First we will set up logging
End of explanation
documents = [[u'senior', u'studios', u'studios', u'studios', u'creators', u'award', u'mobile', u'currently', u'challenges', u'senior', u'summary', u'senior', u'motivated', u'creative', u'senior', u'performs', u'engineering', u'tasks', u'infrastructure', u'focusing', u'primarily', u'programming', u'interaction', u'designers', u'engineers', u'leadership', u'teams', u'teams', u'crews', u'responsibilities', u'engineering', u'quality', u'functional', u'functional', u'teams', u'organizing', u'prioritizing', u'technical', u'decisions', u'engineering', u'participates', u'participates', u'reviews', u'participates', u'hiring', u'conducting', u'interviews', u'feedback', u'departments', u'define', u'focusing', u'engineering', u'teams', u'crews', u'facilitate', u'engineering', u'departments', u'deadlines', u'milestones', u'typically', u'spends', u'designing', u'developing', u'updating', u'bugs', u'mentoring', u'engineers', u'define', u'schedules', u'milestones', u'participating', u'reviews', u'interviews', u'sized', u'teams', u'interacts', u'disciplines', u'knowledge', u'skills', u'knowledge', u'knowledge', u'xcode', u'scripting', u'debugging', u'skills', u'skills', u'knowledge', u'disciplines', u'animation', u'networking', u'expertise', u'competencies', u'oral', u'skills', u'management', u'skills', u'proven', u'effectively', u'teams', u'deadline', u'environment', u'bachelor', u'minimum', u'shipped', u'leadership', u'teams', u'location', u'resumes', u'jobs', u'candidates', u'openings', u'jobs'], [u'maryland', u'client', u'producers', u'electricity', u'operates', u'storage', u'utility', u'retail', u'customers', u'engineering', u'consultant', u'maryland', u'summary', u'technical', u'technology', u'departments', u'expertise', u'maximizing', u'output', u'reduces', u'operating', u'participates', u'areas', u'engineering', u'conducts', u'testing', u'solve', u'supports', u'environmental', u'understands', u'objectives', u'operates', u'responsibilities', u'handles', u'complex', u'engineering', u'aspects', u'monitors', u'quality', u'proficiency', u'optimization', u'recommendations', u'supports', u'personnel', u'troubleshooting', u'commissioning', u'startup', u'shutdown', u'supports', u'procedure', u'operating', u'units', u'develops', u'simulations', u'troubleshooting', u'tests', u'enhancing', u'solving', u'develops', u'estimates', u'schedules', u'scopes', u'understands', u'technical', u'management', u'utilize', u'routine', u'conducts', u'hazards', u'utilizing', u'hazard', u'operability', u'methodologies', u'participates', u'startup', u'reviews', u'pssr', u'participate', u'teams', u'participate', u'regulatory', u'audits', u'define', u'scopes', u'budgets', u'schedules', u'technical', u'management', u'environmental', u'awareness', u'interfacing', u'personnel', u'interacts', u'regulatory', u'departments', u'input', u'objectives', u'identifying', u'introducing', u'concepts', u'solutions', u'peers', u'customers', u'coworkers', u'knowledge', u'skills', u'engineering', u'quality', u'engineering', u'commissioning', u'startup', u'knowledge', u'simulators', u'technologies', u'knowledge', u'engineering', u'techniques', u'disciplines', u'leadership', u'skills', u'proven', u'engineers', u'oral', u'skills', u'technical', u'skills', u'analytically', u'solve', u'complex', u'interpret', u'proficiency', u'simulation', u'knowledge', u'applications', u'manipulate', u'applications', u'engineering', u'calculations', u'programs', u'matlab', u'excel', u'independently', u'environment', u'proven', u'skills', u'effectively', u'multiple', 
u'tasks', u'planning', u'organizational', u'management', u'skills', u'rigzone', u'jobs', u'developer', u'exceptional', u'strategies', u'junction', u'exceptional', u'strategies', u'solutions', u'solutions', u'biggest', u'insurers', u'operates', u'investment'], [u'vegas', u'tasks', u'electrical', u'contracting', u'expertise', u'virtually', u'electrical', u'developments', u'institutional', u'utilities', u'technical', u'experts', u'relationships', u'credibility', u'contractors', u'utility', u'customers', u'customer', u'relationships', u'consistently', u'innovations', u'profile', u'construct', u'envision', u'dynamic', u'complex', u'electrical', u'management', u'grad', u'internship', u'electrical', u'engineering', u'infrastructures', u'engineers', u'documented', u'management', u'engineering', u'quality', u'engineering', u'electrical', u'engineers', u'complex', u'distribution', u'grounding', u'estimation', u'testing', u'procedures', u'voltage', u'engineering', u'troubleshooting', u'installation', u'documentation', u'bsee', u'certification', u'electrical', u'voltage', u'cabling', u'electrical', u'engineering', u'candidates', u'electrical', u'internships', u'oral', u'skills', u'organizational', u'prioritization', u'skills', u'skills', u'excel', u'cadd', u'calculation', u'autocad', u'mathcad', u'skills', u'skills', u'customer', u'relationships', u'solving', u'ethic', u'motivation', u'tasks', u'budget', u'affirmative', u'diversity', u'workforce', u'gender', u'orientation', u'disability', u'disabled', u'veteran', u'vietnam', u'veteran', u'qualifying', u'veteran', u'diverse', u'candidates', u'respond', u'developing', u'workplace', u'reflects', u'diversity', u'communities', u'reviews', u'electrical', u'contracting', u'southwest', u'electrical', u'contractors'], [u'intern', u'electrical', u'engineering', u'idexx', u'laboratories', u'validating', u'idexx', u'integrated', u'hardware', u'entails', u'planning', u'debug', u'validation', u'engineers', u'validation', u'methodologies', u'healthcare', u'platforms', u'brightest', u'solve', u'challenges', u'innovation', u'technology', u'idexx', u'intern', u'idexx', u'interns', u'supplement', u'interns', u'teams', u'roles', u'competitive', u'interns', u'idexx', u'interns', u'participate', u'internships', u'mentors', u'seminars', u'topics', u'leadership', u'workshops', u'relevant', u'planning', u'topics', u'intern', u'presentations', u'mixers', u'applicants', u'ineligible', u'laboratory', u'compliant', u'idexx', u'laboratories', u'healthcare', u'innovation', u'practicing', u'veterinarians', u'diagnostic', u'technology', u'idexx', u'enhance', u'veterinarians', u'efficiency', u'economically', u'idexx', u'worldwide', u'diagnostic', u'tests', u'tests', u'quality', u'headquartered', u'idexx', u'laboratories', u'employs', u'customers', u'qualifications', u'applicants', u'idexx', u'interns', u'potential', u'demonstrated', u'portfolio', u'recommendation', u'resumes', u'marketing', u'location', u'americas', u'verification', u'validation', u'schedule', u'overtime', u'idexx', u'laboratories', u'reviews', u'idexx', u'laboratories', u'nasdaq', u'healthcare', u'innovation', u'practicing', u'veterinarians'], [u'location', u'duration', u'temp', u'verification', u'validation', u'tester', u'verification', u'validation', u'middleware', u'specifically', u'testing', u'applications', u'clinical', u'laboratory', u'regulated', u'environment', u'responsibilities', u'complex', u'hardware', u'testing', u'clinical', u'analyzers', u'laboratory', u'graphical', u'interfaces', u'complex', 
u'sample', u'sequencing', u'protocols', u'developers', u'correction', u'tracking', u'tool', u'timely', u'troubleshoot', u'testing', u'functional', u'manual', u'automated', u'participate', u'ongoing', u'testing', u'coverage', u'planning', u'documentation', u'testing', u'validation', u'corrections', u'monitor', u'implementation', u'recurrence', u'operating', u'statistical', u'quality', u'testing', u'global', u'multi', u'teams', u'travel', u'skills', u'concepts', u'waterfall', u'agile', u'methodologies', u'debugging', u'skills', u'complex', u'automated', u'instrumentation', u'environment', u'hardware', u'mechanical', u'components', u'tracking', u'lifecycle', u'management', u'quality', u'organize', u'define', u'priorities', u'organize', u'supervision', u'aggressive', u'deadlines', u'ambiguity', u'analyze', u'complex', u'situations', u'concepts', u'technologies', u'verbal', u'skills', u'effectively', u'technical', u'clinical', u'diverse', u'strategy', u'clinical', u'chemistry', u'analyzer', u'laboratory', u'middleware', u'basic', u'automated', u'testing', u'biomedical', u'engineering', u'technologists', u'laboratory', u'technology', u'availability', u'click', u'attach'], [u'scientist', u'linux', u'asrc', u'scientist', u'linux', u'asrc', u'technology', u'solutions', u'subsidiary', u'asrc', u'engineering', u'technology', u'contracts', u'multiple', u'agencies', u'scientists', u'engineers', u'management', u'personnel', u'allows', u'solutions', u'complex', u'aeronautics', u'aviation', u'management', u'aviation', u'engineering', u'hughes', u'technical', u'technical', u'aviation', u'evaluation', u'engineering', u'management', u'technical', u'terminal', u'surveillance', u'programs', u'currently', u'scientist', u'travel', u'responsibilities', u'develops', u'technology', u'modifies', u'technical', u'complex', u'reviews', u'draft', u'conformity', u'completeness', u'testing', u'interface', u'hardware', u'regression', u'impact', u'reliability', u'maintainability', u'factors', u'standardization', u'skills', u'travel', u'programming', u'linux', u'environment', u'cisco', u'knowledge', u'terminal', u'environment', u'clearance', u'clearance', u'input', u'output', u'digital', u'automatic', u'terminal', u'management', u'controller', u'termination', u'testing', u'evaluating', u'policies', u'procedure', u'interface', u'installation', u'verification', u'certification', u'core', u'avionic', u'programs', u'knowledge', u'procedural', u'testing', u'interfacing', u'hardware', u'regression', u'impact', u'reliability', u'maintainability', u'factors', u'standardization', u'missions', u'asrc', u'subsidiaries', u'affirmative', u'employers', u'applicants', u'disability', u'veteran', u'technology', u'location', u'airport', u'bachelor', u'schedule', u'travel', u'contributor', u'management', u'asrc', u'reviews'], [u'technical', u'solarcity', u'niche', u'vegas', u'overview', u'resolving', u'customer', u'clients', u'expanding', u'engineers', u'developers', u'responsibilities', u'knowledge', u'planning', u'adapt', u'dynamic', u'environment', u'inventive', u'creative', u'solarcity', u'lifecycle', u'responsibilities', u'technical', u'analyzing', u'diagnosing', u'troubleshooting', u'customers', u'ticketing', u'console', u'escalate', u'knowledge', u'engineering', u'timely', u'basic', u'phone', u'functionality', u'customer', u'tracking', u'knowledgebase', u'rotation', u'configure', u'deployment', u'sccm', u'technical', u'deployment', u'deploy', u'hardware', u'solarcity', u'bachelor', u'knowledge', u'dell', u'laptops', u'analytical', 
u'troubleshooting', u'solving', u'skills', u'knowledge', u'databases', u'preferably', u'server', u'preferably', u'monitoring', u'suites', u'documentation', u'procedures', u'knowledge', u'entries', u'verbal', u'skills', u'customer', u'skills', u'competitive', u'solar', u'package', u'insurance', u'vacation', u'savings', u'referral', u'eligibility', u'equity', u'performers', u'solarcity', u'affirmative', u'diversity', u'workplace', u'applicants', u'orientation', u'disability', u'veteran', u'careerrookie'], [u'embedded', u'exelis', u'junction', u'exelis', u'embedded', u'acquisition', u'networking', u'capabilities', u'classified', u'customer', u'motivated', u'develops', u'tests', u'innovative', u'solutions', u'minimal', u'supervision', u'paced', u'environment', u'enjoys', u'assignments', u'interact', u'multi', u'disciplined', u'challenging', u'focused', u'embedded', u'developments', u'spanning', u'engineering', u'lifecycle', u'specification', u'enhancement', u'applications', u'embedded', u'freescale', u'applications', u'android', u'platforms', u'interface', u'customers', u'developers', u'refine', u'specifications', u'architectures', u'java', u'programming', u'scripts', u'python', u'debug', u'debugging', u'emulators', u'regression', u'revisions', u'specialized', u'setups', u'capabilities', u'subversion', u'technical', u'documentation', u'multiple', u'engineering', u'techexpousa', u'reviews'], [u'modeler', u'semantic', u'modeling', u'models', u'skills', u'ontology', u'resource', u'framework', u'schema', u'technologies', u'hadoop', u'warehouse', u'oracle', u'relational', u'artifacts', u'models', u'dictionaries', u'models', u'interface', u'specifications', u'documentation', u'harmonization', u'mappings', u'aligned', u'coordinate', u'technical', u'peer', u'reviews', u'stakeholder', u'communities', u'impact', u'domains', u'relationships', u'interdependencies', u'models', u'define', u'analyze', u'legacy', u'models', u'corporate', u'databases', u'architectural', u'alignment', u'customer', u'expertise', u'harmonization', u'modeling', u'modeling', u'consulting', u'stakeholders', u'quality', u'models', u'storage', u'agile', u'specifically', u'focus', u'modeling', u'qualifications', u'bachelors', u'accredited', u'modeler', u'encompass', u'evaluation', u'skills', u'knowledge', u'modeling', u'techniques', u'resource', u'framework', u'schema', u'technologies', u'unified', u'modeling', u'technologies', u'schemas', u'ontologies', u'sybase', u'knowledge', u'skills', u'interpersonal', u'skills', u'customers', u'clearance', u'applicants', u'eligibility', u'classified', u'clearance', u'polygraph', u'techexpousa', u'solutions', u'partnership', u'solutions', u'integration'], [u'technologies', u'junction', u'develops', u'maintains', u'enhances', u'complex', u'diverse', u'intensive', u'analytics', u'algorithm', u'manipulation', u'management', u'documented', u'individually', u'reviews', u'tests', u'components', u'adherence', u'resolves', u'utilizes', u'methodologies', u'environment', u'input', u'components', u'hardware', u'offs', u'reuse', u'cots', u'gots', u'synthesis', u'components', u'tasks', u'individually', u'analyzes', u'modifies', u'debugs', u'corrects', u'integrates', u'operating', u'environments', u'develops', u'queries', u'databases', u'repositories', u'recommendations', u'improving', u'documentation', u'develops', u'implements', u'algorithms', u'functional', u'assists', u'developing', u'executing', u'procedures', u'components', u'reviews', u'documentation', u'solutions', u'analyzing', u'conferring', u'users', 
u'engineers', u'analyzing', u'investigating', u'areas', u'adapt', u'hardware', u'mathematical', u'models', u'predict', u'outcome', u'implement', u'complex', u'database', u'repository', u'interfaces', u'queries', u'bachelors', u'accredited', u'substituted', u'bachelors', u'firewalls', u'ipsec', u'vpns', u'technology', u'administering', u'servers', u'apache', u'jboss', u'tomcat', u'developing', u'interfaces', u'firefox', u'internet', u'explorer', u'operating', u'mainframe', u'linux', u'solaris', u'virtual', u'scripting', u'programming', u'oriented', u'programming', u'ajax', u'script', u'procedures', u'cobol', u'cognos', u'fusion', u'focus', u'html', u'java', u'java', u'script', u'jquery', u'perl', u'visual', u'basic', u'powershell', u'cots', u'cots', u'oracle', u'apex', u'integration', u'competitive', u'package', u'bonus', u'corporate', u'equity', u'tuition', u'reimbursement', u'referral', u'bonus', u'holidays', u'insurance', u'flexible', u'disability', u'insurance', u'technologies', u'disability', u'accommodation', u'recruiter', u'techexpousa']]
Explanation: Now lets load a set of documents
End of explanation
time_seq = [3, 7] # first 3 documents are from time slice one
# and the other 7 are from the second time slice.
Explanation: This corpus contains 10 documents. Now lets say we would like to model this with DTM.
To do this we have to define the time steps
each document belongs to. In this case the first 3 documents were collected at the same time, while the last 7 were collected
a month later, and we wish to see how the topics change from month to month.
For this we will define the time_seq, which contains the time slice definition.
End of explanation
class DTMcorpus(corpora.textcorpus.TextCorpus):
def get_texts(self):
return self.input
def __len__(self):
return len(self.input)
corpus = DTMcorpus(documents)
Explanation: A simple corpus wrapper to load a premade corpus. You can use this with your own data.
End of explanation
# path to dtm home folder
dtm_home = os.environ.get('DTM_HOME', "dtm-master")
# path to the binary. on my PC the executable file is dtm-master/bin/dtm
dtm_path = os.path.join(dtm_home, 'bin', 'dtm') if dtm_home else None
# you can also copy the path down directly. Change this variable to your DTM executable before running.
dtm_path = "/home/bhargav/dtm/main"
Explanation: So now we have to generate the path to DTM executable, here I have already set an ENV variable for the DTM_HOME
End of explanation
model = DtmModel(dtm_path, corpus, time_seq, num_topics=2,
id2word=corpus.dictionary, initialize_lda=True)
Explanation: That is basically all we need to be able to invoke the Training.
If initialize_lda=True then DTM will create an LDA model first and store it in initial-lda-ss.dat.
If you already have initial-lda-ss.dat in the DTM folder then you can save time and re-use it with initialize_lda=False. If the file is missing then DTM will exit with an error.
End of explanation
topics = model.show_topic(topicid=1, time=1, num_words=10)
topics
Explanation: If everything worked we should be able to print out the topics
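For example, to inspect every topic in both time slices with the same call (the model above was trained with num_topics=2 and two time slices):
for time in range(2):
    for topic_id in range(2):
        print("Time slice %d, topic %d:" % (time, topic_id))
        print(model.show_topic(topicid=topic_id, time=time, num_words=5))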
End of explanation
doc_number = 1
num_topics = 2
for i in range(0, num_topics):
print ("Distribution of Topic %d %f" % (i, model.gamma_[doc_number, i]))
Explanation: Document-Topic proportions
Next, we'll attempt to find the Document-Topic proportions. We will use the gamma class variable of the model to do the same. Gamma is a matrix such that gamma[5,10] is the proportion of the 10th topic in document 5.
To find, say, the topic proportions in Document 1, we do the following:
End of explanation
model = DtmModel(dtm_path, corpus, time_seq, num_topics=2,
id2word=corpus.dictionary, initialize_lda=True, model='fixed')
Explanation: DIM Example
The DTM wrapper in Gensim also has the capacity to run in Document Influence Model mode. The Model is described in this paper. What it allows you to do is find the 'influence' of a certain document on a particular topic. It is primarily used in identifying the scientific impact of research papers through the capability of that document's keywords influencing a topic.
'Influence' can be naively thought of like this - if more of a particular document's words appear in subsequent evolution of a topic, that document is understood to have influenced that topic more.
To run it in this mode, we now call DtmModel again, but with the model parameter set as fixed.
Note that running it in this mode will also generate the DTM topics similar to running plain DTM, but with added information on document influence.
End of explanation
document_no = 1 #document 2
topic_no = 1 #topic number 2
time_slice = 0 #time slice 1
model.influences_time[time_slice][document_no][topic_no]
Explanation: The main difference between the DTM and DIM models is the addition of Influence files for each time-slice, which are exposed through the influences_time variable.
To find, say, the influence of Document 2 on Topic 2 in Time-Slice 1, we do the following:
End of explanation |
13,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Embracing web standards
One of the main reasons we were able to develop the current notebook web application
was the decision to embrace web technology.
By being a pure web application built on HTML, Javascript and CSS, the Notebook gets
every improvement in web technology for free. Thus, as browser support for different
media extends, the notebook web app should remain compatible without modification.
The same is true for the performance of the user interface as the speed of Javascript VMs increases.
The other advantage of using only web technology is that the code of the interface is fully accessible to the end user, and modifiable live.
Even if this task is not always easy, we strive to keep our code as accessible and reusable as possible.
This should allow with minimum effort to develop small extensions that customize the behavior of the web interface.
Tampering with the Notebook app
The first tools that are available to you, and that you should be aware of, are the browser "developer tools". The exact naming can change across browsers, and might require the installation of extensions. But basically they allow you to inspect/modify the DOM, and interact with the javascript code that runs the frontend.
In Chrome and safari Developper tools are in the menu [Put mmenu name in english here]
In firefox you might need to install Firebug
others ?
Those will be your best friends to debug and try different approach for your extensions.
Injecting JS
using magics
The above tools can be tedious for editing long javascript files. Fortunately, we provide the %%javascript magic. This allows you to quickly inject javascript into the notebook. Still, javascript injected this way will not survive reloading. Hence it is a good tool for testing and refining a script.
You might see here and there people modifying css and injecting js into the notebook by reading files and publishing their contents into the notebook.
Not only does this often break the flow of the notebook and leave it in a state where it cannot be re-executed cleanly, it also means that you need to re-run those cells in every notebook each time you want to update the code.
This can still be useful in some cases, like the %autosave magic that allows you to control the time between saves. But that can be replaced by a Javascript dropdown menu to select the save interval.
Step1: custom.js
To inject Javascript we provide an entry point
Step2: and custom js is in
Step3: Note that custom.js is meant to be modified by the user. When writing a script, you can define it in a separate file and add a line of configuration into custom.js that will fetch and execute the file.
Warning
Step4: Registering a preset
This function can now be part of many preset of the CellToolBar.
Step5: You should now have access to two presets | Python Code:
## you can inspect the autosave code to see what it does.
%autosave??
Explanation: Embracing web standards
One of the main reasons we were able to develop the current notebook web application
was the decision to embrace web technology.
By being a pure web application built on HTML, Javascript and CSS, the Notebook gets
every improvement in web technology for free. Thus, as browser support for different
media extends, the notebook web app should remain compatible without modification.
The same is true for the performance of the user interface as the speed of Javascript VMs increases.
The other advantage of using only web technology is that the code of the interface is fully accessible to the end user, and modifiable live.
Even if this task is not always easy, we strive to keep our code as accessible and reusable as possible.
This should allow with minimum effort to develop small extensions that customize the behavior of the web interface.
Tampering with the Notebook app
The first tools that are available to you, and that you should be aware of, are the browser "developer tools". The exact naming can change across browsers, and might require the installation of extensions. But basically they allow you to inspect/modify the DOM, and interact with the javascript code that runs the frontend.
In Chrome and safari Developper tools are in the menu [Put mmenu name in english here]
In firefox you might need to install Firebug
others ?
Those will be your best friends to debug and try different approach for your extensions.
Injecting JS
using magics
The above tools can be tedious for editing long javascript files. Fortunately, we provide the %%javascript magic. This allows you to quickly inject javascript into the notebook. Still, javascript injected this way will not survive reloading. Hence it is a good tool for testing and refining a script.
You might see here and there people modifying css and injecting js into the notebook by reading files and publishing their contents into the notebook.
Not only does this often break the flow of the notebook and leave it in a state where it cannot be re-executed cleanly, it also means that you need to re-run those cells in every notebook each time you want to update the code.
This can still be useful in some cases, like the %autosave magic that allows you to control the time between saves. But that can be replaced by a Javascript dropdown menu to select the save interval.
End of explanation
profile_dir = ! ipython locate
profile_dir = profile_dir[0]
profile_dir
Explanation: custom.js
To inject Javascript we provide an entry point: custom.js, which allows the user to execute and load other resources into the notebook.
Javascript code in custom.js will be executed when the notebook app starts and can then be used to customise almost anything in the UI and in the behavior of the notebook.
custom.js can be found in the IPython profile dir, so you can have different UI modifications on a per-profile basis, as well as share your modifications with others.
Because we like you....
You have been provided with an already existing profile folder with this tutorial...
start the notebook from the root of the tutorial directory with :
bash
$ ipython notebook --ProfileDir.location=./profile_euroscipy
but back to theory
End of explanation
import os.path
custom_js_path = os.path.join(profile_dir,'profile_default','static','custom','custom.js')
# my custom js
with open(custom_js_path) as f:
for l in f:
print l,
Explanation: and custom js is in
End of explanation
%%javascript
var CellToolbar = IPython.CellToolbar
var toggle = function(div, cell) {
var button_container = $(div)
// let's create a button that show the current value of the metadata
var button = $('<button/>').addClass('btn btn-mini').text(String(cell.metadata.foo));
// On click, change the metadata value and update the button label
button.click(function(){
var v = cell.metadata.foo;
cell.metadata.foo = !v;
button.text(String(!v));
})
// add the button to the DOM div.
button_container.append(button);
}
// now we register the callback under the name foo to give the
// user the ability to use it later
CellToolbar.register_callback('tuto.foo', toggle);
Explanation: Note that custom.js is meant to be modified by the user. When writing a script, you can define it in a separate file and add a line of configuration into custom.js that will fetch and execute the file.
Warning : even if modifications of custom.js take effect immediately after a browser refresh (except if the browser cache is aggressive), creating a file in the static/ directory needs a server restart.
Exercise :
Create a custom.js in the right location with the following content:
javascript
alert("hello world from custom.js")
Restart your server and open any notebook.
Be greeted by custom.js
Have a look at default custom.js, to see it's content and some more explanation.
For the quick ones :
We've seen above that you can change the autosave rate by using a magic. This is typically something I don't want to type every time, and that I don't like to embed into my workflow and documents (readers don't care what my autosave time is), so let's build an extension that allows us to do it.
Create a dropdow elemement in the toolbar (DOM IPython.toolbar.element), you will need
IPython.notebook.set_autosave_interval(miliseconds)
know that 1min = 60 sec, and 1 sec = 1000 ms
```javascript
var label = jQuery('<label/>').text('AutoScroll Limit:');
var select = jQuery('<select/>')
//.append(jQuery('<option/>').attr('value', '2').text('2min (default)'))
.append(jQuery('<option/>').attr('value', undefined).text('disabled'))
// TODO:
//the_toolbar_element.append(label)
//the_toolbar_element.append(select);
select.change(function() {
var val = jQuery(this).val() // val will be the value in [2]
// TODO
// this will be called when dropdown changes
});
var time_m = [1,5,10,15,30];
for (var i=0; i < time_m.length; i++) {
var ts = time_m[i];
//[2] ____ this will be val on [1]
// |
// v
select.append($('<option/>').attr('value', ts).text(thr+'min'));
// this will fill up the dropdown select with
// 1 min
// 5 min
// 10 min
// 10 min
// ...
}
```
A non interactive example first
I like my cython to be nicely highlighted
javascript
IPython.config.cell_magic_highlight['magic_text/x-cython'] = {}
IPython.config.cell_magic_highlight['magic_text/x-cython'].reg = [/^%%cython/]
text/x-cython is the name of CodeMirror mode name, magic_ prefix will just patch the mode so that the first line that contains a magic does not screw up the highlighting. regis a list or regular expression that will trigger the change of mode.
Get more docs
Sadly you will have to read the js source files (but there are lots of comments) and/or build the javascript documentation using yuidoc.
If you have node and yui-doc installed:
bash
$ cd ~/ipython/IPython/html/static/notebook/js/
$ yuidoc . --server
warn: (yuidoc): Failed to extract port, setting to the default :3000
info: (yuidoc): Starting [email protected] using [email protected] with [email protected]
info: (yuidoc): Scanning for yuidoc.json file.
info: (yuidoc): Starting YUIDoc with the following options:
info: (yuidoc):
{ port: 3000,
nocode: false,
paths: [ '.' ],
server: true,
outdir: './out' }
info: (yuidoc): Scanning for yuidoc.json file.
info: (server): Starting server: http://127.0.0.1:3000
and browse http://127.0.0.1:3000 to get docs
Some convenience methods
By browsing the doc you will see that we have some convenience methods that avoid re-inventing the UI every time:
javascript
IPython.toolbar.add_buttons_group([
{
'label' : 'run qtconsole',
'icon' : 'icon-terminal', // select your icon from
// http://fortawesome.github.io/Font-Awesome/icons/
'callback': function(){IPython.notebook.kernel.execute('%qtconsole')}
}
// add more button here if needed.
]);
with a lot of icons you can select from.
Cell Metadata
The most requested feature is generally to be able to distinguish individual cells in the notebook, or to run specific actions with them.
To do so, you can either use IPython.notebook.get_selected_cell(), or rely on CellToolbar. This allows you to register a set of actions and graphical elements that will be attached to individual cells.
Cell Toolbar
You can see some examples of what can be done by toggling the Cell Toolbar selector in the toolbar on top of the notebook. It provides two default presets, Default and slideshow. Default allows you to edit the metadata attached to each cell manually.
First we define a function that takes as first parameter an element on the DOM into which to inject a UI element, and as second parameter the cell this element will be registered with. Then we need to register that function and give it a name.
Register a callback
End of explanation
%%javascript
IPython.CellToolbar.register_preset('Tutorial 1',['tuto.foo','default.rawedit'])
IPython.CellToolbar.register_preset('Tutorial 2',['slideshow.select','tuto.foo'])
Explanation: Registering a preset
This function can now be part of many preset of the CellToolBar.
End of explanation
%load soln/celldiff.js
Explanation: You should now have access to two presets :
Tutorial 1
Tutorial 2
And check that the buttons you define share state when you toggle presets.
Check moreover that the metadata of the cell is modified when you click the button, and that when saved and reloaded the metadata is still available.
Exercise:
Try to wrap the all code in a file, put this file in {profile}/static/custom/<a-name>.js, and add
require(['custom/<a-name>']);
in custom.js to have this script automatically loaded in all your notebooks.
require is provided by a javascript library that allows you to express dependencies. For a simple extension like the previous one we directly mutate the global namespace, but for more complex extensions you could pass a callback to the require([...], <callback>) call, to allow the user to pass configuration information to your plugin.
In Python lang,
javascript
require(['a/b', 'c/d'], function( e, f){
e.something()
f.something()
})
could be read as
python
import a.b as e
import c.d as f
e.something()
f.something()
See for example @damianavila "ZenMode" plugin :
```javascript
// read that as
// import custom.zenmode.main as zenmode
require(['custom/zenmode/main'],function(zenmode){
zenmode.background('images/back12.jpg');
})
```
For the quickest
Try to use the following to bind a dropdown list to cell.metadata.difficulty.select.
It should be able to take the 4 following values :
<None>
Easy
Medium
Hard
We will use it to customise the output of the converted notebook depending of the tag on each cell
End of explanation |
13,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load $\delta$a$\delta$i
I have not installed dadi globally on huluvu. Instead, I left it in my Downloads directory '/home/claudius/Downloads/dadi'. In order for Python to find that module, I need to add that directory to the PYTHONPATH variable.
Step1: Load data
Step2: I have turned the 1D folded SFS's from realSFS into $\delta$a$\delta$i format by hand according to the description in section 3.1 of the manual.
Note, that the last line, indicating the mask, has length 37, but the folded spectrum has length 19. Dadi wants to mask counts from invariable sites. For an unfolded spectrum, i. e. polarised with respect to an inferred ancestral allele at each site, the first and the last count classes would correspond to invariable sites. In a folded spectrum, i. e. with counts of the minor allele at each site, the last count class corresponds to SNP's with minor sample allele frequency of $n/2$ (with even sample size).
Step3: According to the number of segregating sites, this spectrum should have good power to distinguish between alternative demographic models (see Adams2004). However, the noise in the data is extreme, as can be seen below, which might compromise this power and maybe even lead to false inferences.
Plot the data
Step4: Built-in 1D models
Step5: standard neutral model
Step6: The snm function does not take parameters to optimize. I can therefore get the expected model directly. The snm function does not take a fold argument. I am therefore going to calculate an unfolded expected spectrum and then fold.
Step7: What's happening in the 18th count class?
Step8: I am going to fold manually now.
Step9: When the sample size is even, the highest sample frequency class corresponds to just one unfolded class (18). This has been added to itself and those SNP's are counted twice at the moment. I need to divide this class by 2 to get the correct count for this folded class.
Step10: The folded expected spectrum is correct. Also, see figure 4.5 in Wakeley2009.
How to fold an unfolded spectrum
Step11: $\theta$ and implied $N_{ref}$
Step12: This theta estimate is a little bit higher than what I estimated with curve fitting in Fist_Steps_with_dadi.ipynb, which was 10198.849.
What effective ancestral population size would that imply?
According to section 4.4 in the dadi manual
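In code, that back-calculation is just a rearrangement of the usual relation $\theta = 4 N_{ref} \mu L$; the mutation rate, sequence length and theta values below are placeholders, not the values used in this analysis:
mu = 3e-9          # assumed per-site mutation rate per generation (placeholder)
L = 1.0e6          # assumed number of sites the SFS was estimated from (placeholder)
theta_est = 1.0e4  # placeholder for the theta estimate obtained above
N_ref = theta_est / (4 * mu * L)
print(N_ref)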
Step13: This effective population size is consistent with those reported in Lynch2016 for other insect species.
Begin Digression
Step14: End Digression
Step15: The lower plot is for the scaled Poisson residuals.
$$
residuals = (model - data)/\sqrt{model}
$$
The model is the expected counts in each frequency class. If these counts are Poisson distributed, then their variance is equal to their expectation. The differences between model and data are therefore scaled by the expected standard deviation of the model counts.
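In numpy terms this is simply (with model and data standing for the arrays of expected and observed counts):
import numpy as np

def scaled_poisson_residuals(model, data):
    # deviation of the observed counts from the model, in units of the model's standard deviation
    return (model - data) / np.sqrt(model)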
The observed counts deviate by up to 30 standard deviations from the model!
What could be done about this?
The greatest deviations are seen for the first two frequency classes, the ones that should provide the greatest amount of information (Fu1994) for theta and therefore probably also other parameters. Toni has suggested that the doubleton class is inflated due to "miscalling" heterozygotes as homozygotes. When they contain a singleton they will be "called" as homozygote and therefore contribute to the doubleton count. This is aggravated by the fact that the sequenced individuals are all male which only possess one X chromosome. The X chromosome is the fourth largest of the 9 chromosomes of these grasshoppers (8 autosomes + X) (see Gosalvez1988, fig. 2). That is, about 1/9th of the sequenced RAD loci are haploid but ANGSD assumes all loci to be diploid. The genotype likelihoods it calculates are all referring to diploid genotypes.
I think one potential reason for the extreme deviations is that the genotype likelihoods are generally biased toward homozygote genotypes (i. e. also for autosomal loci) due to PCR duplicates (see eq. 1 in Nielsen2012). So, one potential improvement would be to remove PCR duplicates.
Another potential improvement could be found by subsampling 8/9th to 8/10th of the contigs in the SAF files and estimating an SFS from these. Given enough subsamples, one should eventually be found that maximally excludes loci from the X chromosome. This subsample is expected to produce the least squared deviations from an expected SFS under the standard neutral model. However, one could argue that this attempt to exclude problematic loci could also inadvertently remove loci that strongly deviate from neutral expectations due to non-neutral evolution, again reducing power to detect deviations from the standard neutral model. I think one could also just apply the selection criterion of the second MAF class to be lower than the first and just save all contig subsamples and SFS's that fulfill that criterioin, since that should be true for all demographic scenarios.
Exponential growth
Creating a folded spectrum exactly how dadi expects it
As seen above in the folded model spectrum, dadi just masks out entries that are not sensible in a folded spectrum, but keeps the length of the spectrum the same as the unfolded one. That way the sample size (i. e. number of chromosomes) is determined correctly. Let's create a correct folded spectrum object for ery.
Step16: Now, the reported sample size is correct and we have a Spectrum object that dadi can handle correctly.
To fold or not to fold by ANGSD
Does estimating an unfolded spectrum with ANGSD and then folding yield a sensible folded SFS when the sites are not polarised with respect to an ancestral allele but with respect to the reference allele? Matteo Fumagalli thinks that this is sensible.
Load SFS folded by ANGSD
Step17: Load unfolded SFS
Step18: I have copied the unfolded SFS into the current directory.
Step19: The sizes of the residuals (scaled by the Poisson standard deviations) indicate that the two versions of the folded SFS of ery are significantly different.
Now, what does the parallelus data say?
Step20: The unfolded spectrum folded by dadi seems to be a bit better behaved than the one folded by ANGSD. I really wonder whether folding in ANGSD is needed.
The folded 2D spectrum from ANGSD is a 19 x 19 matrix. This is not a format that dadi can understand.
Step21: See this thread on the dadi forum.
Exponential growth model
Step22: Parallelised $\delta$a$\delta$i
I need to run the simulation with different starting values to check convergence.
I would like to do these runs in parallel. I have 12 cores available on huluvu.
Step23: I now have connections to 11 engines. I started the engines with ipcluster start -n 11 & in the terminal.
Step24: import variables to namespace of engines
Step25: import dadi on all engines
Step26: create parallel function to run dadi
Step27:
Step28: Unfortunately, parallelisation is not as straightforward as it should be.
Step29: Except for the last iteration, the two parameter estimates seem to have converged.
Step30: What is the log likelihood of the model given these two different parameter sets?
Step31: The lower log-likelihood for the last set of parameters inferred indicates that the optimisation got trapped in a local minimum in the last run of the optimisation.
What the majority of the parameter sets seem to indicate is that at about time $0.007 \times 2 N_{ref}$ generations in the past the ancestral population started to shrink exponentially, reaching a population size of about $0.14 \times N_{ref}$ at present.
Step32: Two epoch model
Step33: This model specifies a stepwise change in population size some time ago. It assumes that the population size has stayed constant since the change.
Step34: This model does not converge on a set of parameter values.
Step35: Both parameters seem to be correlated. With the available data, it may not be possible to distinguish between a moderate reduction in population size a long time ago (topright in the above figure) and a drastic reduction in population size a short time ago (bottomleft in the above figure).
Bottleneck then exponential growth
Step36: This model has three parameters. $\nu_B$ is the ratio of the population size (with respect to the ancestral population size $N_{ref}$) after the first stepwise change at time T in the past. The population is then assumed to undergo exponential growth/decline to a ratio of population size $\nu_F$ at present.
Step37: There is no convergence of parameter estimates. The parameter combinations stand for vastly different demographic scenarios. Most seem to suggest a population increase (up to 100 times the ancestral population size), followed by an exponential decrease to about the ancestral population size.
Three epochs
Step38: This model tries to estimate three parameters. The population is expected to undergo a stepwise population size change (bottleneck) at time TF + TB. At time TF it is expected to recover immediately to the current population size.
import sys
sys.path
sys.path.insert(0, '/home/claudius/Downloads/dadi')
sys.path
import dadi
import pylab
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
%matplotlib inline
Explanation: Load $\delta$a$\delta$i
I have not installed dadi globally on huluvu. Instead, I left it in my Downloads directory '/home/claudius/Downloads/dadi'. In order for Python to find that module, I need to add that directory to the PYTHONPATH variable.
End of explanation
% ll dadiExercises/
% cat dadiExercises/ERY.FOLDED.sfs.dadi_format
Explanation: Load data
End of explanation
fs_ery = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')
%pdoc dadi.Spectrum.from_file
fs_ery
ns = fs_ery.sample_sizes
ns
fs_ery.pop_ids = ['ery'] # must be an array, otherwise leads to error later on
# the number of segregating sites in the spectrum
fs_ery.sum()
Explanation: I have turned the 1D folded SFS's from realSFS into $\delta$a$\delta$i format by hand according to the description in section 3.1 of the manual.
Note that the last line, indicating the mask, has length 37, but the folded spectrum has length 19. Dadi wants to mask counts from invariable sites. For an unfolded spectrum, i. e. polarised with respect to an inferred ancestral allele at each site, the first and the last count classes would correspond to invariable sites. In a folded spectrum, i. e. with counts of the minor allele at each site, the last count class corresponds to SNP's with a minor sample allele frequency of $n/2$ (with even sample size).
End of explanation
%pdoc dadi.Plotting.plot_1d_fs
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
dadi.Plotting.plot_1d_fs(fs_ery, show=False)
Explanation: According to the number of segregating sites, this spectrum should have good power to distinguish between alternative demographic models (see Adams2004). However, the noise in the data is extreme, as can be seen below, which might compromise this power and maybe even lead to false inferences.
Plot the data
End of explanation
# show modules within dadi
dir(dadi)
dir(dadi.Demographics1D)
# show the source of the 'Demographics1D' method
%psource dadi.Demographics1D
Explanation: Built-in 1D models
End of explanation
# create link to method
func = dadi.Demographics1D.snm
# make the extrapolating version of the demographic model function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# setting the smallest grid size slightly larger than the largest population sample size
pts_l = [40, 50, 60]
Explanation: standard neutral model
End of explanation
# calculate unfolded AFS under standard neutral model (up to a scaling factor theta)
model = func_ex(0, ns, pts_l)
model
dadi.Plotting.plot_1d_fs(model.fold()[:19], show=False)
Explanation: The snm function does not take parameters to optimize. I can therefore get directly the expected model. The snm function does not take a fold argument. I am therefore going to calculated an unfolded expected spectrum and then fold.
End of explanation
# get the source of the fold method, which is part of the Spectrum object
%psource dadi.Spectrum.fold
# get the docstring of the Spectrum object
%pdoc dadi.Spectrum
# retrieve the spectrum array from the Spectrum object
model.data
Explanation: What's happening in the 18th count class?
End of explanation
# reverse spectrum and add to itself
model_fold = model.data + model.data[::-1]
model_fold
# discard all count classes >n/2
model_fold = model_fold[:19]
model_fold
Explanation: I am going to fold manually now.
End of explanation
# divide highest sample frequency class by 2
model_fold[18] = model_fold[18]/2.0
model_fold
# create dadi Spectrum object from array, need to specify custom mask
model_folded = dadi.Spectrum(data=model_fold, mask_corners=False, mask= [1] + [0]*18)
model_folded
dadi.Plotting.plot_1d_fs(model_folded)
Explanation: When the sample size is even, then highest sample frequency class corresponds to just one unfolded class (18). This has been added to itself and those SNP's are counted twice at the moment. I need to divide this class by 2 to get the correct count for this folded class.
End of explanation
# fold the unfolded model
model_folded = model.fold()
#model_folded = model_folded[:(ns[0]+1)]
model_folded.pop_ids = ['ery'] # be sure to give an array, not a scalar string
model_folded
ll_model_folded = dadi.Inference.ll_multinom(model_folded, fs_ery)
print 'The log composite likelihood of the observed ery spectrum given a standard neutral model is {0:.3f}.'.format(ll_model_folded)
Explanation: The folded expected spectrum is correct. Also, see figure 4.5 in Wakeley2009.
How to fold an unfolded spectrum
End of explanation
theta = dadi.Inference.optimal_sfs_scaling(model_folded, fs_ery)
print 'The optimal value of theta is {0:.3f}.'.format(theta)
Explanation: $\theta$ and implied $N_{ref}$
End of explanation
mu = 3e-9
L = fs_ery.data.sum() # this sums over all entries in the spectrum, including masked ones, i. e. also contains invariable sites
print "The total sequence length is " + str(L)
N_ref = theta/L/mu/4
print "The effective ancestral population size (in number of diploid individuals) implied by this theta is: {0}.".format(int(N_ref))
Explanation: This theta estimate is a little bit higher than what I estimated with curve fitting in Fist_Steps_with_dadi.ipynb, which was 10198.849.
What effective ancestral population size would that imply?
According to section 4.4 in the dadi manual:
$$
\theta = 4 N_{ref} \mu_{L} \qquad \text{L: sequence length}
$$
Let's assume the mutation rate per nucleotide site per generation is $3\times 10^{-9}$ (see e. g. Liu2017). Then
$$
\mu_{L} = \mu_{site} \times L
$$
So
$$
\theta = 4 N_{ref} \mu_{site} \times L
$$
and
$$
N_{ref} = \frac{\theta}{4 \mu_{site} L}
$$
End of explanation
x = pylab.arange(0, 100)
y = 0.5**(x)
pylab.plot(x, y)
x[:10] * y[:10]
sum(x * y)
Explanation: This effective population size is consistent with those reported in Lynch2016 for other insect species.
Begin Digression:
End of explanation
model_folded * theta
pylab.semilogy(model_folded * theta, "bo-", label='SNM')
pylab.plot(fs_ery, "ro-", label='ery')
pylab.legend()
%psource dadi.Plotting.plot_1d_comp_Poisson
# compare model prediction and data visually with dadi function
dadi.Plotting.plot_1d_comp_multinom(model_folded[:19], fs_ery[:19], residual='linear')
Explanation: End Digression
End of explanation
fs_ery
# make copy of spectrum array
data_abc = fs_ery.data.copy()
# resize the array to the unfolded length
data_abc.resize((37,))
data_abc
fs_ery_ext = dadi.Spectrum(data_abc)
fs_ery_ext
fs_ery_ext.fold()
fs_ery_ext = fs_ery_ext.fold()
fs_ery_ext.pop_ids = ['ery']
fs_ery_ext
fs_ery_ext.sample_sizes
Explanation: The lower plot is for the scaled Poisson residuals.
$$
residuals = (model - data)/\sqrt{model}
$$
The model is the expected counts in each frequency class. If these counts are Poisson distributed, then their variance is equal to their expectation. The differences between model and data are therefore scaled by the expected standard deviation of the model counts.
The observed counts deviate by up to 30 standard deviations from the model!
What could be done about this?
The greatest deviations are seen for the first two frequency classes, the ones that should provide the greatest amount of information (Fu1994) for theta and therefore probably also other parameters. Toni has suggested that the doubleton class is inflated due to "miscalling" heterozygotes as homozygotes. When they contain a singleton they will be "called" as homozygote and therefore contribute to the doubleton count. This is aggravated by the fact that the sequenced individuals are all male which only possess one X chromosome. The X chromosome is the fourth largest of the 9 chromosomes of these grasshoppers (8 autosomes + X) (see Gosalvez1988, fig. 2). That is, about 1/9th of the sequenced RAD loci are haploid but ANGSD assumes all loci to be diploid. The genotype likelihoods it calculates are all referring to diploid genotypes.
I think one potential reason for the extreme deviations is that the genotype likelihoods are generally biased toward homozygote genotypes (i. e. also for autosomal loci) due to PCR duplicates (see eq. 1 in Nielsen2012). So, one potential improvement would be to remove PCR duplicates.
Another potential improvement could be found by subsampling 8/9th to 8/10th of the contigs in the SAF files and estimating an SFS from each subsample. Given enough subsamples, one should eventually be found that maximally excludes loci from the X chromosome. This subsample is expected to produce the smallest squared deviations from an expected SFS under the standard neutral model. However, one could argue that this attempt to exclude problematic loci could also inadvertently remove loci that strongly deviate from neutral expectations due to non-neutral evolution, again reducing power to detect deviations from the standard neutral model. I think one could also just apply the selection criterion that the second MAF class be lower than the first and save all contig subsamples and SFS's that fulfill that criterion, since that should be true under all demographic scenarios.
Exponential growth
Creating a folded spectrum exactly how dadi expects it
As seen above in the folded model spectrum, dadi just masks out entries that are not sensical in a folded spectrum, but keeps the length of the spectrum the same as the unfolded. That way the sample size (i. e. number of chromosomes) is determined correctly. Let's create a correct folded spectrum object for ery.
End of explanation
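# Before building the corrected Spectrum below, a minimal numerical sketch of the
# scaled Poisson residuals defined above (an illustration only; it re-uses
# model_folded, theta and fs_ery from earlier cells, the same quantity the
# comparison plot above displays).
expected = model_folded * theta
scaled_residuals = (expected - fs_ery) / pylab.sqrt(expected)
print scaled_residuals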
% cat dadiExercises/ERY.FOLDED.sfs.dadi_format
# load the spectrum that was created from folded SAF's
fs_ery_folded_by_Angsd = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')
fs_ery_folded_by_Angsd
# extract unmasked entries of the SFS
m = fs_ery_folded_by_Angsd.mask
fs_ery_folded_by_Angsd[m == False]
Explanation: Now, the reported sample size is correct and we have a Spectrum object that dadi can handle correctly.
To fold or not to fold by ANGSD
Does estimating an unfolded spectrum with ANGSD and then folding yield a sensible folded SFS when the sites are not polarised with respect to an ancestral allele but with respect to the reference allele? Matteo Fumagalli thinks that this is sensible.
Load SFS folded by ANGSD
End of explanation
% ll ../ANGSD/SFS/ERY/
Explanation: Load unfolded SFS
End of explanation
% ll
% cat ERY.unfolded.sfs
# load unfolded spectrum
fs_ery_unfolded_by_ANGSD = dadi.Spectrum.from_file('ERY.unfolded.sfs')
fs_ery_unfolded_by_ANGSD
# fold unfolded spectrum
fs_ery_unfolded_by_Angsd_folded = fs_ery_unfolded_by_ANGSD.fold()
fs_ery_unfolded_by_Angsd_folded
# plot the two spectra
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
pylab.plot(fs_ery_folded_by_Angsd, 'ro-', label='folded by ANGSD')
pylab.plot(fs_ery_unfolded_by_Angsd_folded, 'bo-', label='folded by DADI')
pylab.legend()
pylab.savefig('ery_fold_comp.png')
%psource dadi.Plotting.plot_1d_comp_Poisson
dadi.Plotting.plot_1d_comp_Poisson(fs_ery_folded_by_Angsd[:19], fs_ery_unfolded_by_Angsd_folded[:19], \
residual='linear')
Explanation: I have copied the unfolded SFS into the current directory.
End of explanation
% ll dadiExercises/
% cat dadiExercises/PAR.FOLDED.sfs.dadi_format
# load the spectrum folded by ANGSD
fs_par_folded_by_Angsd = dadi.Spectrum.from_file('dadiExercises/PAR.FOLDED.sfs.dadi_format')
fs_par_folded_by_Angsd
% cat PAR.unfolded.sfs
# load spectrum that has been created from unfolded SAF's
fs_par_unfolded_by_Angsd = dadi.Spectrum.from_file('PAR.unfolded.sfs')
fs_par_unfolded_by_Angsd
fs_par_unfolded_by_Angsd_folded = fs_par_unfolded_by_Angsd.fold()
fs_par_unfolded_by_Angsd_folded
dadi.Plotting.plot_1d_comp_Poisson(fs_par_folded_by_Angsd[:19], fs_par_unfolded_by_Angsd_folded[:19], \
residual='linear')
#pylab.subplot(2,1,1)
pylab.plot(fs_par_folded_by_Angsd[:19], 'ro-', label='folded by ANGSD')
#pylab.subplot(2,1,2)
pylab.plot(fs_par_unfolded_by_Angsd_folded, 'bo-', label='folded by DADI')
pylab.legend()
pylab.savefig('par_fold_comp.png')
Explanation: The sizes of the residuals (scaled by the Poisson standard deviations) indicate that the two versions of the folded SFS of ery are significantly different.
Now, what does the parallelus data say?
End of explanation
%psource dadi.Spectrum.from_data_dict
Explanation: The unfolded spectrum folded by dadi seems to be a bit better behaved than the one folded by ANGSD. I really wonder whether folding in ANGSD is needed.
The folded 2D spectrum from ANGSD is a 19 x 19 matrix. This is not a format that dadi can understand.
End of explanation
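# Hypothetical sketch, only to make dadi's expected 2D format explicit (this is
# not a conversion of the ANGSD output): for 18 diploid individuals per population
# dadi wants a (2*18 + 1) x (2*18 + 1) Spectrum, i.e. 37 x 37, not 19 x 19.
import numpy
empty_2d = dadi.Spectrum(numpy.zeros((37, 37)))
print empty_2d.sample_sizes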
# show the source of the 'Demographics1D' method
%psource dadi.Demographics1D.growth
# create link to function that specifies a simple growth or decline model
func = dadi.Demographics1D.growth
# create extrapolating version of the function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# set lower and upper bounds to nu and T
upper_bound = [100, 3]
lower_bound = [1e-2, 0]
# set starting value
p0 = [1, 1] # corresponds to constant population size
%pdoc dadi.Misc.perturb_params
# perturb starting values by up to a factor of 2
p0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
p0
%psource dadi.Inference.optimize_log
# run optimisation of paramters
popt = dadi.Inference.optimize_log(p0=p0, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
popt
Explanation: See this thread on the dadi forum.
Exponential growth model
End of explanation
from ipyparallel import Client
cl = Client()
cl.ids
Explanation: Parallelised $\delta$a$\delta$i
I need to run the simulation with different starting values to check convergence.
I would like to do these runs in parallel. I have 12 cores available on huluvu.
End of explanation
# create load balanced view of the engines
lbview = cl.load_balanced_view()
lbview.block
# create direct view of all engines
dview = cl[:]
Explanation: I now have connections to 11 engines. I started the engines with ipcluster start -n 11 & in the terminal.
End of explanation
# set starting value for all engines
dview['p0'] = [1, 1]
dview['p0']
# set lower and upper bounds to nu and T for all engines
dview['upper_bound'] = [100, 3]
dview['lower_bound'] = [1e-2, 0]
dview['fs_ery'] = fs_ery
cl[0]['fs_ery']
dview['func_ex'] = func_ex
dview['pts_l'] = pts_l
Explanation: import variables to namespace of engines
End of explanation
with dview.sync_imports():
import sys
dview.execute('sys.path.insert(0, \'/home/claudius/Downloads/dadi\')')
cl[0]['sys.path']
with dview.sync_imports():
import dadi
Explanation: import dadi on all engines
End of explanation
@lbview.parallel(block=True)
def run_dadi(x): # for the function to be called with map, it needs to have one input variable
# perturb starting values by up to a factor of 2
p1 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of paramters
popt = dadi.Inference.optimize_log(p0=p1, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
return popt
run_dadi.map(range(20))
popt
# set starting value
p0 = [1, 1]
# perturb starting values by up to a factor of 2
p0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of paramters
popt = dadi.Inference.optimize_log(p0=p0, data=fs_ery_ext, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
popt
Explanation: create parallel function to run dadi
End of explanation
def exp_growth(x):
p0 = [1, 1]
# perturb starting values by up to a factor of 2
p0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of paramters
popt = dadi.Inference.optimize_log(p0=p0, data=fs_ery_ext, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
return popt
popt = map(exp_growth, range(10))
# this will run a few minutes
# popt
import ipyparallel as ipp
c = ipp.Client()
c.ids
%%time
dview = c[:]
popt = dview.map_sync(exp_growth, range(10))
Explanation:
End of explanation
popt
Explanation: Unfortunately, parallelisation is not as straightforward as it should be.
End of explanation
ns = fs_ery_ext.sample_sizes
ns
print popt[0]
print popt[9]
Explanation: Except for the last iteration, the two parameter estimates seem to have converged.
End of explanation
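# Quick numerical check of the convergence noted above (a sketch; assumes `popt`
# still holds the ten [nu, T] arrays returned by the parallel optimisation runs).
import numpy
popt_arr = numpy.array(popt)
print popt_arr.mean(axis=0)
print popt_arr.std(axis=0)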
model_one = func_ex(popt[0], ns, pts_l)
ll_model_one = dadi.Inference.ll_multinom(model_one, fs_ery_ext)
ll_model_one
model_two = func_ex(popt[9], ns, pts_l)
ll_model_two = dadi.Inference.ll_multinom(model_two, fs_ery_ext)
ll_model_two
Explanation: What is the log likelihood of the model given these two different parameter sets?
End of explanation
print 'The model suggests that exponential decline in population size started {0:.0f} generations ago.'.format(popt[0][1] * 2 * N_ref)
Explanation: The lower log-likelihood for the last set of parameters inferred indicates that the optimisation got trapped in a local minimum in the last run of the optimisation.
What the majority of the parameter sets seem to indicate is that at about time $0.007 \times 2 N_{ref}$ generations in the past the ancestral population started to shrink exponentially, reaching a population size of about $0.14 \times N_{ref}$ at present.
End of explanation
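# Sketch: put the inferred ratio nu on an absolute scale, analogous to the
# conversion of T above (assumes popt[0] = [nu, T] and the N_ref estimated earlier).
nu_hat = popt[0][0]
print 'The implied current effective population size is about {0:.0f} diploid individuals.'.format(nu_hat * N_ref)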
dir(dadi.Demographics1D)
%psource dadi.Demographics1D.two_epoch
Explanation: Two epoch model
End of explanation
func = dadi.Demographics1D.two_epoch
func_ex = dadi.Numerics.make_extrap_log_func(func)
upper_bound = [10, 3]
lower_bound = [1e-3, 0]
pts_l = [40, 50, 60]
def stepwise_pop_change(x):
# set initial values
p0 = [1, 1]
# perturb initial parameter values randomly by up to 2 * fold
p0 = dadi.Misc.perturb_params(p0, fold=1.5, \
upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation
popt = dadi.Inference.optimize_log(p0, fs_ery_ext, func_ex, pts_l, \
upper_bound=upper_bound, lower_bound=lower_bound,
verbose=0, maxiter=10)
return popt
stepwise_pop_change(1)
stepwise_pop_change(1)
popt = map(stepwise_pop_change, range(10))
popt
Explanation: This model specifies a stepwise change in population size some time ago. It assumes that the population size has stayed constant since the change.
End of explanation
nu = [i[0] for i in popt]
nu
T = [i[1] for i in popt]
T
pylab.rcParams['font.size'] = 14.0
pylab.loglog(nu, T, 'bo')
pylab.xlabel(r'$\nu$')
pylab.ylabel('T')
Explanation: This model does not converge on a set of parameter values.
End of explanation
%psource dadi.Demographics1D
Explanation: Both parameters seem to be correlated. With the available data, it may not be possible to distinguish between a moderate reduction in population size a long time ago (topright in the above figure) and a drastic reduction in population size a short time ago (bottomleft in the above figure).
Bottleneck then exponential growth
End of explanation
func = dadi.Demographics1D.bottlegrowth
func_ex = dadi.Numerics.make_extrap_log_func(func)
upper_bound = [100, 100, 3]
lower_bound = [1e-3, 1e-3, 0]
pts_l = [40, 50, 60]
def bottleneck_growth(x):
p0 = [1, 1, 1] # corresponds to constant population size
# perturb initial parameter values randomly by up to 2 * fold
p0 = dadi.Misc.perturb_params(p0, fold=1.5, \
upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation
popt = dadi.Inference.optimize_log(p0, fs_ery_ext, func_ex, pts_l, \
upper_bound=upper_bound, lower_bound=lower_bound,
verbose=0, maxiter=10)
return popt
%%time
popt = map(bottleneck_growth, range(10))
popt
Explanation: This model has three parameters. $\nu_B$ is the ratio of the population size (with respect to the ancestral population size $N_{ref}$) after the first stepwise change at time T in the past. The population is then assumed to undergo exponential growth/decline to a ratio of population size $\nu_F$ at present.
End of explanation
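# One way to compare the rather different parameter sets returned above is by their
# log likelihood (a sketch; assumes `popt`, `ns` and the bottlegrowth `func_ex`
# from the cells above).
for p in popt:
    m = func_ex(p, ns, pts_l)
    print p, dadi.Inference.ll_multinom(m, fs_ery_ext)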
func = dadi.Demographics1D.three_epoch
func_ex = dadi.Numerics.make_extrap_log_func(func)
%psource dadi.Demographics1D.three_epoch
Explanation: There is no convergence of parameter estimates. The parameter combinations stand for vastly different demographic scenarios. Most seem to suggest a population increase (up to 100 times the ancestral population size), followed by an exponential decrease to about the ancestral population size.
Three epochs
End of explanation
upper_bound = [100, 100, 3, 3]
lower_bound = [1e-3, 1e-3, 0, 0]
pts_l = [40, 50, 60]
def opt_three_epochs(x):
p0 = [1, 1, 1, 1] # corresponds to constant population size
# perturb initial parameter values randomly by up to 2 * fold
p0 = dadi.Misc.perturb_params(p0, fold=1.5, \
upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation
popt = dadi.Inference.optimize_log(p0, fs_ery_ext, func_ex, pts_l, \
upper_bound=upper_bound, lower_bound=lower_bound,
verbose=0, maxiter=10)
return popt
%%time
popt = map(opt_three_epochs, range(10))
popt
Explanation: This model tries to estimate three parameters. The population is expected to undergo a stepwise population size change (bottleneck) at time TF + TB. At time TF it is expected to recover immediately to the current population size.
End of explanation |
13,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
13,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked - neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
Step12: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step36: Project 4
Step37: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
Step38: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step39: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step40: Project 5
Step41: Run the following cell to recreate the network and train it once again.
Step42: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step43: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
Step44: Project 6
Step45: Run the following cell to train your network with a small polarity cutoff.
Step46: And run the following cell to test its performance. It should be
Step47: Run the following cell to train your network with a much larger polarity cutoff.
Step48: And run the following cell to test its performance.
Step49: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
End of explanation
len(reviews)
reviews[0]
labels[0]
Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
End of explanation
from collections import Counter
import numpy as np
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
Explanation: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
End of explanation
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
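# One possible way to fill in the counting loop described above (a sketch; Andrew's
# own solution may differ in detail, but the counts should come out the same).
for i in range(len(reviews)):
    words = reviews[i].split(' ')
    if labels[i] == 'POSITIVE':
        for word in words:
            positive_counts[word] += 1
            total_counts[word] += 1
    else:
        for word in words:
            negative_counts[word] += 1
            total_counts[word] += 1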
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator; that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
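# A sketch of one possible implementation, treating words used at least 100 times
# as "common" and applying the hint formula above.
for term, cnt in list(total_counts.most_common()):
    if cnt >= 100:
        pos_neg_ratios[term] = positive_counts[term] / float(negative_counts[term] + 1)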
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
# TODO: Convert ratios to logs
Explanation: Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews - like "amazing" - have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews - like "terrible" - have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews - like "the" - have values very close to 1. A perfectly neutral word - one that was used in exactly the same number of positive reviews as negative reviews - would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert their values using the following formulas:
For any positive words, convert the ratio using np.log(ratio)
For any negative words, convert the ratio using -np.log(1/(ratio + 0.01))
That second equation may look strange, but what it's doing is dividing one by a very small number, which will produce a larger positive number. Then, it takes the log of that, which produces numbers similar to the ones for the positive words. Finally, we negate the values by adding that minus sign up front. In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
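# One way to apply the two log formulas given above (a sketch; it rewrites the
# values stored in pos_neg_ratios in place).
for word, ratio in list(pos_neg_ratios.most_common()):
    if ratio > 1:
        pos_neg_ratios[word] = np.log(ratio)
    else:
        pos_neg_ratios[word] = -np.log(1 / (ratio + 0.01))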
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked - neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = None
Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary.
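One way to build that set, assuming reviews is the list of review strings loaded earlier (splitting on single spaces, as elsewhere in this notebook):
# collect every distinct word that appears in any review
vocab = set(word for review in reviews for word in review.split(" "))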
End of explanation
vocab_size = len(vocab)
print(vocab_size)
Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
End of explanation
from IPython.display import Image
Image(filename='sentiment_network_2.png')
Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
End of explanation
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = None
Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
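For example, a one-line sketch (assuming numpy is imported as np and vocab_size is defined as above):
# a 1 x vocab_size row vector, one slot per vocabulary word
layer_0 = np.zeros((1, vocab_size))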
End of explanation
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
Explanation: Run the following cell. It should display (1, 74074)
End of explanation
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
End of explanation
def update_input_layer(review):
Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
Explanation: TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
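One possible completion, mirroring the solution code shown later in this notebook:
def update_input_layer(review):
    global layer_0
    # clear out the previous state
    layer_0 *= 0
    # count how many times each word appears in this review
    for word in review.split(" "):
        layer_0[0][word2index[word]] += 1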
End of explanation
update_input_layer(reviews[0])
layer_0
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
def get_target_for_label(label):
Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
# TODO: Your code here
Explanation: TODO: Complete the implementation of get_target_for_label. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively.
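A sketch of one way to write it:
def get_target_for_label(label):
    # POSITIVE maps to 1, NEGATIVE maps to 0
    return 1 if label == "POSITIVE" else 0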
End of explanation
labels[0]
get_target_for_label(labels[0])
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
labels[1]
get_target_for_label(labels[1])
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
Create a SentimentNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = None
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = None
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
pass
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
pass
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
pass
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
pass
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
Returns a POSITIVE or NEGATIVE prediction for the given review.
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
pass
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
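As a starting point, here is a sketch of the two sigmoid helpers the TODOs ask for, shown as plain functions (inside the class they take self as their first argument):
def sigmoid(x):
    # standard logistic function
    return 1 / (1 + np.exp(-x))

def sigmoid_output_2_derivative(output):
    # derivative of the sigmoid, written in terms of the sigmoid's own output
    return output * (1 - output)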
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.
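For example, the modified method might look like this sketch (shown with the self-qualified names used inside the class):
def update_input_layer(self, review):
    # clear out the previous state
    self.layer_0 *= 0
    # record whether each word appears at all, rather than how many times
    for word in review.split(" "):
        if word in self.word2index:
            self.layer_0[0][self.word2index[word]] = 1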
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
Explanation: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
You no longer need a separate input layer, so remove any mention of self.layer_0
You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
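To illustrate the pre-processing and the sparse hidden-layer update described above outside the class, here is a standalone sketch (the helper names are only for this illustration, not part of the class):
def reviews_to_index_lists(reviews_raw, word2index):
    # turn each raw review into the list of vocabulary indices it contains
    index_lists = []
    for review in reviews_raw:
        indices = set()
        for word in review.split(" "):
            if word in word2index:
                indices.add(word2index[word])
        index_lists.append(list(indices))
    return index_lists

def hidden_layer_for_indices(indices, weights_0_1, hidden_nodes):
    # add only the weight rows for words that actually occur in the review
    layer_1 = np.zeros((1, hidden_nodes))
    for index in indices:
        layer_1 += weights_0_1[index]
    return layer_1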
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
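A sketch of the filtering idea (variable names here are illustrative; inside pre_process_data, total_counts and pos_neg_ratios would be rebuilt from the training reviews, and min_count / polarity_cutoff are the new parameters):
review_vocab = set()
for word, count in total_counts.most_common():
    if count > min_count:
        # only apply the polarity cutoff to words that have a computed ratio
        if word in pos_neg_ratios:
            if abs(pos_neg_ratios[word]) >= polarity_cutoff:
                review_vocab.add(word)
        else:
            review_vocab.add(word)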
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance. It should be
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance.
End of explanation
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
End of explanation |
13,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Symbolic Calculator
This file shows how a simple symbolic calculator can be implemented using Ply.
Specification of the Scanner
Step1: The token Number specifies a fully featured floating point number.
Step2: The token IDENTIFIER specifies the name of a variable.
Step3: The token ASSIGN_OP specifies the assignment operator.
Step4: Specification of the Parser
Step5: The start variable of our grammar is statement.
Step6: There are two grammar rules for stmnts
Step7: An expr is a sequence of prods that are combined with the operators + and -.
The corresponding grammar rules are
Step8: A prod is a sequence of factors that are combined with the operators * and /.
The corresponding grammar rules are
Step9: A factor can is either an expression in parenthesis, a number, or an identifier.
factor
Step10: Setting the optional argument write_tables to False <B style="color
Step11: Let's look at the action table that is generated. | Python Code:
import ply.lex as lex
tokens = [ 'NUMBER', 'IDENTIFIER', 'ASSIGN_OP' ]
Explanation: A Simple Symbolic Calculator
This file shows how a simple symbolic calculator can be implemented using Ply.
Specification of the Scanner
End of explanation
def t_NUMBER(t):
r'0|[1-9][0-9]*(\.[0-9]+)?(e[+-]?([1-9][0-9]*))?'
t.value = float(t.value)
return t
Explanation: The token Number specifies a fully featured floating point number.
End of explanation
def t_IDENTIFIER(t):
r'[a-zA-Z][a-zA-Z0-9_]*'
return t
Explanation: The token IDENTIFIER specifies the name of a variable.
End of explanation
def t_ASSIGN_OP(t):
r':='
return t
literals = ['+', '-', '*', '/', '(', ')', ';']
t_ignore = ' \t'
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count('\n')
def t_error(t):
print(f"Illegal character '{t.value[0]}'")
t.lexer.skip(1)
__file__ = 'main'
lexer = lex.lex()
Explanation: The token ASSIGN_OP specifies the assignment operator.
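To sanity-check the scanner, you can feed it a small input and print the resulting token stream (a quick illustration only; the input string is made up):
lexer.input('x := 2 + 3.5 * (7 - 1);')
for tok in iter(lexer.token, None):
    print(tok.type, tok.value)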
End of explanation
import ply.yacc as yacc
Explanation: Specification of the Parser
End of explanation
start = 'stmnt'
Explanation: The start variable of our grammar is statement.
End of explanation
def p_stmnt_assign(p):
"stmnt : IDENTIFIER ASSIGN_OP expr ';'"
Names2Values[p[1]] = p[3]
def p_stmnt_expr(p):
"stmnt : expr ';'"
print(p[1])
Explanation: There are two grammar rules for stmnts:
stmnt : IDENTIFIER ":=" expr ";"
| expr ';'
;
- If a stmnt is an assignment, the expression on the right hand side of the assignment operator is
evaluated and the value is stored in the dictionary Names2Values. The key used in this dictionary
is the name of the variable on the left hand side of the assignment operator.
- If a stmnt is an expression, the expression is evaluated and the result of this evaluation is printed.
It is <b>very important</b> that in the grammar rules below the : is surrounded by space characters, for otherwise Ply will throw mysterious error messages at us!
Below, Names2Values is a dictionary mapping variable names to their values. It will be defined later.
End of explanation
def p_expr_plus(p):
"expr : expr '+' prod"
p[0] = p[1] + p[3]
def p_expr_minus(p):
"expr : expr '-' prod"
p[0] = p[1] - p[3]
def p_expr_prod(p):
"expr : prod"
p[0] = p[1]
Explanation: An expr is a sequence of prods that are combined with the operators + and -.
The corresponding grammar rules are:
expr : expr '+' prod
| expr '-' prod
| prod
;
End of explanation
def p_prod_mult(p):
"prod : prod '*' factor"
p[0] = p[1] * p[3]
def p_prod_div(p):
"prod : prod '/' factor"
p[0] = p[1] / p[3]
def p_prod_factor(p):
"prod : factor"
p[0] = p[1]
Explanation: A prod is a sequence of factors that are combined with the operators * and /.
The corresponding grammar rules are:
prod : prod '*' factor
| prod '/' factor
| factor
;
End of explanation
def p_factor_group(p):
"factor : '(' expr ')'"
p[0] = p[2]
def p_factor_number(p):
"factor : NUMBER"
p[0] = p[1]
def p_factor_id(p):
"factor : IDENTIFIER"
p[0] = Names2Values.get(p[1], float('nan'))
def p_error(p):
if p:
print(f'Syntax error at {p.value} in line {p.lexer.lineno}.')
else:
print('Syntax error at end of input.')
Explanation: A factor is either an expression in parentheses, a number, or an identifier.
factor : '(' expr ')'
| NUMBER
| IDENTIFIER
;
End of explanation
parser = yacc.yacc(write_tables=False, debug=True)
Explanation: Setting the optional argument write_tables to False <B style="color:red">is required</B> to prevent an obscure bug where the parser generator tries to read an empty parse table.
End of explanation
!cat parser.out
Names2Values = {}
def main():
while True:
s = input('calc > ')
if s == '':
break
yacc.parse(s)
main()
Explanation: Let's look at the action table that is generated.
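Besides inspecting the action table, you can drive the finished parser directly instead of going through the interactive main loop, for example (illustrative inputs only):
yacc.parse('x := 2 + 3 * 4;')   # stores 14.0 under the name x
yacc.parse('x / 7;')            # prints 2.0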
End of explanation |
13,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CSIR-CSIRO
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:54
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
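For a property with cardinality 0.N such as this one, the pattern suggested by the template is to call DOC.set_value once per applicable choice; purely as an illustration (the actual fluxes depend on the model being documented):
# DOC.set_value("water")
# DOC.set_value("energy")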
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
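For an ENUM property with cardinality 1.1, exactly one of the choices listed in the cell above is passed verbatim as a string; an illustrative completion (the particular choice is arbitrary, not a recommendation) would be:
DOC.set_value("Explicit diffusion")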
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
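For an ENUM with cardinality 1.N several of the listed choices may apply. Assuming the notebook accepts one DOC.set_value call per selected choice, which the VALUE(S) wording above suggests but does not state, an illustrative completion would be:
DOC.set_value("snow interception")
DOC.set_value("snow melting")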
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintenance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
13,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Widget List
Step1: Numeric widgets
There are many widgets distributed with ipywidgets that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing Float with Int in the widget name, you can find the Integer equivalent.
IntSlider
The slider is displayed with a specified, initial value. Lower and upper bounds are defined by min and max, and the value can be incremented according to the step parameter.
The slider's label is defined by the description parameter
The slider's orientation is either 'horizontal' (default) or 'vertical'
readout displays the current value of the slider next to it. The options are True (default) or False
readout_format specifies the format function used to represent slider value. The default is '.2f'
Step2: FloatSlider
Step3: An example of sliders displayed vertically.
Step4: FloatLogSlider
The FloatLogSlider has a log scale, which makes it easy to have a slider that covers a wide range of positive magnitudes. The min and max refer to the minimum and maximum exponents of the base, and the value refers to the actual value of the slider.
Step5: IntRangeSlider
Step6: FloatRangeSlider
Step7: IntProgress
Step8: FloatProgress
Step9: The numerical text boxes that impose some limit on the data (range, integer-only) impose that restriction when the user presses enter.
BoundedIntText
Step10: BoundedFloatText
Step11: IntText
Step12: FloatText
Step13: Boolean widgets
There are three widgets that are designed to display a boolean value.
ToggleButton
Step14: Checkbox
value specifies the value of the checkbox
indent parameter places an indented checkbox, aligned with other controls. Options are True (default) or False
Step15: Valid
The valid widget provides a read-only indicator.
Step16: Selection widgets
There are several widgets that can be used to display single selection lists, and two that can be used to select multiple values. All inherit from the same base class. You can specify the enumeration of selectable options by passing a list (options are either (label, value) pairs, or simply values for which the labels are derived by calling str).
<div class="alert alert-info">
Changes in *ipywidgets 8*
Step17: The following is also valid, displaying the words 'One', 'Two', 'Three' as the dropdown choices but returning the values 1, 2, 3.
Step18: RadioButtons
Step19: With dynamic layout and very long labels
Step20: Select
Step21: SelectionSlider
Step22: SelectionRangeSlider
The value, index, and label keys are 2-tuples of the min and max values selected. The options must be nonempty.
Step23: ToggleButtons
Step24: SelectMultiple
Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys.
Step25: String widgets
There are several widgets that can be used to display a string value. The Text, Textarea, and Combobox widgets accept input. The HTML and HTMLMath widgets display a string as HTML (HTMLMath also renders math). The Label widget can be used to construct a custom control label.
Text
Step26: Textarea
Step27: Combobox
Step28: Password
The Password widget hides user input on the screen. This widget is not a secure way to collect sensitive information because
Step29: Label
The Label widget is useful if you need to build a custom description next to a control using similar styling to the built-in control descriptions.
Step30: HTML
Step31: HTML Math
Step32: Image
Step33: Button
Step34: The icon attribute can be used to define an icon; see the fontawesome page for available icons.
A callback function foo can be registered using button.on_click(foo). The function foo will be called when the button is clicked with the button instance as its single argument.
Output
The Output widget can capture and display stdout, stderr and rich output generated by IPython. For detailed documentation, see the output widget examples.
Play (Animation) widget
The Play widget is useful for performing animations by iterating over a sequence of integers at a certain speed. The value of the slider below is linked to the player.
Step35: Tags input widget
The TagsInput widget is useful for selecting or creating a list of tags. You can drag and drop tags to reorder them, limit them to a set of allowed values, or even prevent making duplicate tags.
Step36: Date picker
For a list of browsers that support the date picker widget, see the MDN article for the HTML date input field.
Step37: Time picker
For a list of browsers that support the time picker widget, see the MDN article for the HTML time input field.
Step38: Datetime picker
For a list of browsers that support the datetime picker widget, see the MDN article for the HTML datetime-local input field. For the browsers that do not support the datetime-local input, we try to fall back on displaying separate date and time inputs.
Time zones
There are two points worth noting with regard to time zones for datetimes
Step39: Color picker
Step40: File Upload
The FileUpload widget allows uploading any type of file(s) into memory in the kernel.
Step41: The upload widget exposes a value attribute that contains the files uploaded. The value attribute is a tuple with a dictionary for each uploaded file. For instance
Step42: Container/Layout widgets
These widgets are used to hold other widgets, called children. Each has a children property that may be set either when the widget is created or later.
Box
Step43: HBox
Step44: VBox
Step45: GridBox
This box uses the HTML Grid specification to lay out its children in a two-dimensional grid. The example below lays out the 8 items inside in 3 columns and as many rows as needed to accommodate the items.
Step46: Accordion
Step47: Tabs
In this example the children are set after the tab is created. Titles for the tabs are set in the same way they are for Accordion.
Step48: Stacked
The Stacked widget can have multiple child widgets, as Tab and Accordion can, but only shows one at a time depending on the value of selected_index
Step49: This can be used in combination with another selection-based widget to show different widgets depending on the selection
Step50: Accordion, Tab, and Stacked use selected_index, not value
Unlike the rest of the widgets discussed earlier, the container widgets Accordion and Tab update their selected_index attribute when the user changes which accordion or tab is selected. That means that you can both see what the user is doing and programmatically set what the user sees by setting the value of selected_index.
Setting selected_index = None closes all of the accordions or deselects all tabs.
In the cells below try displaying or setting the selected_index of the tab and/or accordion.
Step51: Nesting tabs and accordions
Tabs and accordions can be nested as deeply as you want. If you have a few minutes, try nesting a few accordions or putting an accordion inside a tab or a tab inside an accordion.
The example below makes a couple of tabs with accordion children in one of them | Python Code:
import ipywidgets as widgets
Explanation: Widget List
End of explanation
widgets.IntSlider(
value=7,
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
Explanation: Numeric widgets
There are many widgets distributed with ipywidgets that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing Float with Int in the widget name, you can find the Integer equivalent.
IntSlider
The slider is displayed with a specified, initial value. Lower and upper bounds are defined by min and max, and the value can be incremented according to the step parameter.
The slider's label is defined by the description parameter
The slider's orientation is either 'horizontal' (default) or 'vertical'
readout displays the current value of the slider next to it. The options are True (default) or False
readout_format specifies the format function used to represent slider value. The default is '.2f'
End of explanation
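The slider can also be driven from code; the sketch below (the handler name is illustrative) shows how its value can be read, set, and observed:
int_slider = widgets.IntSlider(value=7, min=0, max=10, description='Test:')
int_slider.value = 3  # the current value can be read and set programmatically
def on_value_change(change):
    print(change['new'])  # the new value after each change
int_slider.observe(on_value_change, names='value')
int_slider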
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
Explanation: FloatSlider
End of explanation
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='vertical',
readout=True,
readout_format='.1f',
)
Explanation: An example of sliders displayed vertically.
End of explanation
widgets.FloatLogSlider(
value=10,
base=10,
min=-10, # min exponent of base
max=10, # max exponent of base
step=0.2, # exponent step
description='Log Slider'
)
Explanation: FloatLogSlider
The FloatLogSlider has a log scale, which makes it easy to have a slider that covers a wide range of positive magnitudes. The min and max refer to the minimum and maximum exponents of the base, and the value refers to the actual value of the slider.
End of explanation
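To make the distinction between the exponent bounds and the value concrete, a small sketch (the numbers are illustrative):
log_slider = widgets.FloatLogSlider(base=10, min=-10, max=10, step=0.2, value=1e3)
# value holds the actual number (1000.0 here); min and max bound the exponent,
# so this slider can represent anything from 10**-10 to 10**10
log_slider.value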
widgets.IntRangeSlider(
value=[5, 7],
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d',
)
Explanation: IntRangeSlider
End of explanation
widgets.FloatRangeSlider(
value=[5, 7.5],
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
Explanation: FloatRangeSlider
End of explanation
widgets.IntProgress(
value=7,
min=0,
max=10,
description='Loading:',
bar_style='', # 'success', 'info', 'warning', 'danger' or ''
style={'bar_color': 'maroon'},
orientation='horizontal'
)
Explanation: IntProgress
End of explanation
widgets.FloatProgress(
value=7.5,
min=0,
max=10.0,
description='Loading:',
bar_style='info',
style={'bar_color': '#ffff00'},
orientation='horizontal'
)
Explanation: FloatProgress
End of explanation
widgets.BoundedIntText(
value=7,
min=0,
max=10,
step=1,
description='Text:',
disabled=False
)
Explanation: The numerical text boxes that impose some limit on the data (range, integer-only) impose that restriction when the user presses enter.
BoundedIntText
End of explanation
widgets.BoundedFloatText(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Text:',
disabled=False
)
Explanation: BoundedFloatText
End of explanation
widgets.IntText(
value=7,
description='Any:',
disabled=False
)
Explanation: IntText
End of explanation
widgets.FloatText(
value=7.5,
description='Any:',
disabled=False
)
Explanation: FloatText
End of explanation
widgets.ToggleButton(
value=False,
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check' # (FontAwesome names without the `fa-` prefix)
)
Explanation: Boolean widgets
There are three widgets that are designed to display a boolean value.
ToggleButton
End of explanation
widgets.Checkbox(
value=False,
description='Check me',
disabled=False,
indent=False
)
Explanation: Checkbox
value specifies the value of the checkbox
indent parameter places an indented checkbox, aligned with other controls. Options are True (default) or False
End of explanation
widgets.Valid(
value=False,
description='Valid!',
)
Explanation: Valid
The valid widget provides a read-only indicator.
End of explanation
widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
disabled=False,
)
Explanation: Selection widgets
There are several widgets that can be used to display single selection lists, and two that can be used to select multiple values. All inherit from the same base class. You can specify the enumeration of selectable options by passing a list (options are either (label, value) pairs, or simply values for which the labels are derived by calling str).
<div class="alert alert-info">
Changes in *ipywidgets 8*:
Selection widgets no longer accept a dictionary of options. Pass a list of key-value pairs instead.
</div>
Dropdown
End of explanation
widgets.Dropdown(
options=[('One', 1), ('Two', 2), ('Three', 3)],
value=2,
description='Number:',
)
Explanation: The following is also valid, displaying the words 'One', 'Two', 'Three' as the dropdown choices but returning the values 1, 2, 3.
End of explanation
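The selected entry can be inspected through the standard selection-widget attributes value, label and index; a short sketch (the initial selection is arbitrary):
dropdown = widgets.Dropdown(options=[('One', 1), ('Two', 2), ('Three', 3)], value=2)
print(dropdown.value)  # 2, the value paired with the selected label
print(dropdown.label)  # 'Two', the label currently displayed
print(dropdown.index)  # 1, the position of the selection in the options list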
widgets.RadioButtons(
options=['pepperoni', 'pineapple', 'anchovies'],
# value='pineapple', # Defaults to 'pineapple'
# layout={'width': 'max-content'}, # If the items' names are long
description='Pizza topping:',
disabled=False
)
Explanation: RadioButtons
End of explanation
widgets.Box(
[
widgets.Label(value='Pizza topping with a very long label:'),
widgets.RadioButtons(
options=[
'pepperoni',
'pineapple',
'anchovies',
'and the long name that will fit fine and the long name that will fit fine and the long name that will fit fine '
],
layout={'width': 'max-content'}
)
]
)
Explanation: With dynamic layout and very long labels
End of explanation
widgets.Select(
options=['Linux', 'Windows', 'macOS'],
value='macOS',
# rows=10,
description='OS:',
disabled=False
)
Explanation: Select
End of explanation
widgets.SelectionSlider(
options=['scrambled', 'sunny side up', 'poached', 'over easy'],
value='sunny side up',
description='I like my eggs ...',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True
)
Explanation: SelectionSlider
End of explanation
import datetime
dates = [datetime.date(2015, i, 1) for i in range(1, 13)]
options = [(i.strftime('%b'), i) for i in dates]
widgets.SelectionRangeSlider(
options=options,
index=(0, 11),
description='Months (2015)',
disabled=False
)
Explanation: SelectionRangeSlider
The value, index, and label keys are 2-tuples of the min and max values selected. The options must be nonempty.
End of explanation
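A short sketch of reading those 2-tuples back, reusing the options list built above (the label strings assume the '%b' month formatting, which is locale-dependent):
range_slider = widgets.SelectionRangeSlider(options=options, index=(0, 11), description='Months (2015)')
print(range_slider.index)  # (0, 11), the positions of the two handles
print(range_slider.label)  # ('Jan', 'Dec') for an English locale
print(range_slider.value)  # (datetime.date(2015, 1, 1), datetime.date(2015, 12, 1))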
widgets.ToggleButtons(
options=['Slow', 'Regular', 'Fast'],
description='Speed:',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltips=['Description of slow', 'Description of regular', 'Description of fast'],
# icons=['check'] * 3
)
Explanation: ToggleButtons
End of explanation
widgets.SelectMultiple(
options=['Apples', 'Oranges', 'Pears'],
value=['Oranges'],
#rows=10,
description='Fruits',
disabled=False
)
Explanation: SelectMultiple
Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys.
End of explanation
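The current selection is always exposed as a tuple of values; a small sketch:
multi = widgets.SelectMultiple(options=['Apples', 'Oranges', 'Pears'], value=['Oranges', 'Pears'])
print(multi.value)  # ('Oranges', 'Pears'), a tuple even when only one item is selected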
widgets.Text(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
Explanation: String widgets
There are several widgets that can be used to display a string value. The Text, Textarea, and Combobox widgets accept input. The HTML and HTMLMath widgets display a string as HTML (HTMLMath also renders math). The Label widget can be used to construct a custom control label.
Text
End of explanation
widgets.Textarea(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
Explanation: Textarea
End of explanation
widgets.Combobox(
# value='John',
placeholder='Choose Someone',
options=['Paul', 'John', 'George', 'Ringo'],
description='Combobox:',
ensure_option=True,
disabled=False
)
Explanation: Combobox
End of explanation
widgets.Password(
value='password',
placeholder='Enter password',
description='Password:',
disabled=False
)
Explanation: Password
The Password widget hides user input on the screen. This widget is not a secure way to collect sensitive information because:
- The contents of the Password widget are transmitted unencrypted.
- If the widget state is saved in the notebook, the contents of the Password widget are stored as plain text.
End of explanation
widgets.HBox([widgets.Label(value="The $m$ in $E=mc^2$:"), widgets.FloatSlider()])
Explanation: Label
The Label widget is useful if you need to build a custom description next to a control using similar styling to the built-in control descriptions.
End of explanation
widgets.HTML(
value="Hello <b>World</b>",
placeholder='Some HTML',
description='Some HTML',
)
Explanation: HTML
End of explanation
widgets.HTMLMath(
value=r"Some math and <i>HTML</i>: \(x^2\) and $$\frac{x+1}{x-1}$$",
placeholder='Some HTML',
description='Some HTML',
)
Explanation: HTML Math
End of explanation
with open("images/WidgetArch.png", "rb") as file:  # context manager closes the file after reading
    image = file.read()
widgets.Image(
value=image,
format='png',
width=300,
height=400,
)
Explanation: Image
End of explanation
button = widgets.Button(
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me',
icon='check' # (FontAwesome names without the `fa-` prefix)
)
button
Explanation: Button
End of explanation
play = widgets.Play(
value=50,
min=0,
max=100,
step=1,
interval=500,
description="Press play",
disabled=False
)
slider = widgets.IntSlider()
widgets.jslink((play, 'value'), (slider, 'value'))
widgets.HBox([play, slider])
Explanation: The icon attribute can be used to define an icon; see the fontawesome page for available icons.
A callback function foo can be registered using button.on_click(foo). The function foo will be called when the button is clicked with the button instance as its single argument.
Output
The Output widget can capture and display stdout, stderr and rich output generated by IPython. For detailed documentation, see the output widget examples.
Play (Animation) widget
The Play widget is useful for performing animations by iterating over a sequence of integers at a certain speed. The value of the slider below is linked to the player.
End of explanation
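As a minimal sketch of the callback mechanism described above (the function name `on_button_clicked` is only illustrative), a Button can be wired to an Output widget so that anything the callback prints is captured and displayed:
```python
out = widgets.Output()
btn = widgets.Button(description='Click me', icon='check')

@out.capture()                  # route prints from the callback into `out`
def on_button_clicked(b):       # b is the Button instance that was clicked
    print(f'Button "{b.description}" was clicked')

btn.on_click(on_button_clicked)
widgets.VBox([btn, out])
```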
tags = widgets.TagsInput(
value=['pizza', 'fries'],
allowed_tags=['pizza', 'fries', 'tomatoes', 'steak'],
allow_duplicates=False
)
tags
color_tags = widgets.ColorsInput(
value=['red', '#2f6d30'],
# allowed_tags=['red', 'blue', 'green'],
# allow_duplicates=False
)
color_tags
Explanation: Tags input widget
The TagsInput widget is useful for selecting/creating a list of tags. You can drag and drop tags to reorder them, limit them to a set of allowed values, or even prevent making duplicate tags.
End of explanation
widgets.DatePicker(
description='Pick a Date',
disabled=False
)
Explanation: Date picker
For a list of browsers that support the date picker widget, see the MDN article for the HTML date input field.
End of explanation
widgets.TimePicker(
description='Pick a Time',
disabled=False
)
Explanation: Time picker
For a list of browsers that support the time picker widget, see the MDN article for the HTML time input field.
End of explanation
widgets.DatetimePicker(
description='Pick a Time',
disabled=False
)
Explanation: Datetime picker
For a list of browsers that support the datetime picker widget, see the MDN article for the HTML datetime-local input field. For the browsers that do not support the datetime-local input, we try to fall back on displaying separate date and time inputs.
Time zones
There are two points worth noting with regard to time zones for datetimes:
- The browser always picks datetimes using its timezone.
- The kernel always gets the datetimes in the default system timezone of the kernel (see https://docs.python.org/3/library/datetime.html#datetime.datetime.astimezone with None as the argument).
This means that if the kernel and browser have different timezones, the default string serialization of the timezones might differ, but they will still represent the same point in time.
End of explanation
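A small, pure-standard-library sketch of that last point: the same instant serializes differently in two time zones but still compares equal.
```python
from datetime import datetime, timezone, timedelta

instant_utc = datetime(2021, 6, 1, 12, 0, tzinfo=timezone.utc)
instant_plus2 = instant_utc.astimezone(timezone(timedelta(hours=2)))
print(instant_utc.isoformat())       # 2021-06-01T12:00:00+00:00
print(instant_plus2.isoformat())     # 2021-06-01T14:00:00+02:00
print(instant_utc == instant_plus2)  # True -- the same point in time
```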
widgets.ColorPicker(
concise=False,
description='Pick a color',
value='blue',
disabled=False
)
Explanation: Color picker
End of explanation
widgets.FileUpload(
accept='', # Accepted file extension e.g. '.txt', '.pdf', 'image/*', 'image/*,.pdf'
multiple=False # True to accept multiple files upload else False
)
Explanation: File Upload
The FileUpload widget lets you upload any type of file(s) into memory in the kernel.
End of explanation
widgets.Controller(
index=0,
)
Explanation: The upload widget exposes a value attribute that contains the files uploaded. The value attribute is a tuple with a dictionary for each uploaded file. For instance:
```python
uploader = widgets.FileUpload()
display(uploader)
# upload something...
# once a file is uploaded, use the .value attribute to retrieve the content:
uploader.value
=> (
=> {
=> 'name': 'example.txt',
=> 'type': 'text/plain',
=> 'size': 36,
=> 'last_modified': datetime.datetime(2020, 1, 9, 15, 58, 43, 321000, tzinfo=datetime.timezone.utc),
=> 'content': <memory at 0x10c1b37c8>
=> },
=> )
```
Entries in the dictionary can be accessed either as items, as one would any dictionary, or as attributes:
```
uploaded_file = uploader.value[0]
uploaded_file["size"]
=> 36
uploaded_file.size
=> 36
```
The contents of the file uploaded are in the value of the content key. They are a memory view:
```python
uploaded_file.content
=> <memory at 0x10c1b37c8>
```
You can extract the content to bytes:
```python
uploaded_file.content.tobytes()
=> b'This is the content of example.txt.\n'
```
If the file is a text file, you can get the contents as a string by decoding it:
```python
import codecs
codecs.decode(uploaded_file.content, encoding="utf-8")
=> 'This is the content of example.txt.\n'
```
You can save the uploaded file to the filesystem from the kernel:
```python
with open("./saved-output.txt", "wb") as fp:
    fp.write(uploaded_file.content)
```
To convert the uploaded file into a Pandas dataframe, you can use a BytesIO object:
```python
import io
import pandas as pd
pd.read_csv(io.BytesIO(uploaded_file.content))
```
If the uploaded file is an image, you can visualize it with an image widget:
```python
widgets.Image(value=uploaded_file.content.tobytes())
```
<div class="alert alert-info">
Changes in *ipywidgets 8*:
The `FileUpload` changed significantly in ipywidgets 8:
- The `.value` traitlet is now a list of dictionaries, rather than a dictionary mapping the uploaded name to the content. To retrieve the original form, use `{f["name"]: f.content.tobytes() for f in uploader.value}`.
- The `.data` traitlet has been removed. To retrieve it, use `[f.content.tobytes() for f in uploader.value]`.
- The `.metadata` traitlet has been removed. To retrieve it, use `[{k: v for k, v in f.items() if k != "content"} for f in w.value]`.
</div>
<div class="alert alert-warning">
Warning: When using the `FileUpload` Widget, uploaded file content might be saved in the notebook if widget state is saved.
</div>
Controller
The Controller allows a game controller to be used as an input device.
End of explanation
items = [widgets.Label(str(i)) for i in range(4)]
widgets.Box(items)
Explanation: Container/Layout widgets
These widgets are used to hold other widgets, called children. Each has a children property that may be set either when the widget is created or later.
Box
End of explanation
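A short sketch of the second option, assigning `children` after the container has been created:
```python
box = widgets.Box()
box.children = [widgets.Label('added'), widgets.Label('later')]
box
```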
items = [widgets.Label(str(i)) for i in range(4)]
widgets.HBox(items)
Explanation: HBox
End of explanation
items = [widgets.Label(str(i)) for i in range(4)]
left_box = widgets.VBox([items[0], items[1]])
right_box = widgets.VBox([items[2], items[3]])
widgets.HBox([left_box, right_box])
Explanation: VBox
End of explanation
items = [widgets.Label(str(i)) for i in range(8)]
widgets.GridBox(items, layout=widgets.Layout(grid_template_columns="repeat(3, 100px)"))
Explanation: GridBox
This box uses the HTML Grid specification to lay out its children in a two-dimensional grid. The example below lays out the 8 items in 3 columns and as many rows as needed to accommodate the items.
End of explanation
accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()], titles=('Slider', 'Text'))
accordion
Explanation: Accordion
End of explanation
tab_contents = ['P0', 'P1', 'P2', 'P3', 'P4']
children = [widgets.Text(description=name) for name in tab_contents]
tab = widgets.Tab()
tab.children = children
tab.titles = [str(i) for i in range(len(children))]
tab
Explanation: Tabs
In this example the children are set after the tab is created. Titles for the tabs are set in the same way they are for Accordion.
End of explanation
button = widgets.Button(description='Click here')
slider = widgets.IntSlider()
stacked = widgets.Stacked([button, slider])
stacked # will show only the button
Explanation: Stacked
The Stacked widget can have multiple children widgets as for Tab and Accordion, but only shows one at a time depending on the value of selected_index:
End of explanation
dropdown = widgets.Dropdown(options=['button', 'slider'])
widgets.jslink((dropdown, 'index'), (stacked, 'selected_index'))
widgets.VBox([dropdown, stacked])
Explanation: This can be used in combination with another selection-based widget to show different widgets depending on the selection:
End of explanation
tab.selected_index = 3
accordion.selected_index = None
Explanation: Accordion, Tab, and Stacked use selected_index, not value
Unlike the rest of the widgets discussed earlier, the container widgets Accordion and Tab update their selected_index attribute when the user changes which accordion or tab is selected. That means that you can both see what the user is doing and programmatically set what the user sees by setting the value of selected_index.
Setting selected_index = None closes all of the accordions or deselects all tabs.
In the cells below try displaying or setting the selected_index of the tab and/or accordion.
End of explanation
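If you also want to be notified when the user changes the selection, one possible approach (assuming the `tab` widget created above; the handler name is illustrative) is to observe the `selected_index` trait:
```python
def on_tab_change(change):
    # change['old'] and change['new'] hold the previous and current index
    print(f"Tab switched from {change['old']} to {change['new']}")

tab.observe(on_tab_change, names='selected_index')
```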
tab_nest = widgets.Tab()
tab_nest.children = [accordion, accordion]
tab_nest.titles = ('An accordion', 'Copy of the accordion')
tab_nest
Explanation: Nesting tabs and accordions
Tabs and accordions can be nested as deeply as you want. If you have a few minutes, try nesting a few accordions or putting an accordion inside a tab or a tab inside an accordion.
The example below makes a couple of tabs with an accordion as the child of each one
End of explanation |
13,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source alignment and coordinate frames
The aim of this tutorial is to show how to visually assess that the data are
well aligned in space for computing the forward solution, and understand
the different coordinate frames involved in this process.
Step1: Understanding coordinate frames
For M/EEG source imaging, there are three coordinate frames (further
explained in the next section) that we must bring into alignment using two 3D
transformation matrices <rotation and translation matrix_>_
that define how to rotate and translate points in one coordinate frame
to their equivalent locations in another.
Step2: Coordinate frame definitions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. raw
Step3: It is quite clear that the MRI surfaces (head, brain) are not well aligned
to the head digitization points (dots).
A good example
Here is the same plot, this time with the trans properly defined
(using a precomputed matrix).
Step4: Defining the headโMRI trans using the GUI
You can try creating the headโMRI transform yourself using
Step5: Alignment without MRI
The surface alignments above are possible if you have the surfaces available
from Freesurfer. | Python Code:
import os.path as op
import numpy as np
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
trans_fname = op.join(data_path, 'MEG', 'sample',
'sample_audvis_raw-trans.fif')
raw = mne.io.read_raw_fif(raw_fname)
trans = mne.read_trans(trans_fname)
src = mne.read_source_spaces(op.join(subjects_dir, 'sample', 'bem',
'sample-oct-6-src.fif'))
Explanation: Source alignment and coordinate frames
The aim of this tutorial is to show how to visually assess that the data are
well aligned in space for computing the forward solution, and understand
the different coordinate frames involved in this process.
:depth: 2
Let's start out by loading some data.
End of explanation
fig = mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
subjects_dir=subjects_dir, surfaces='head-dense',
show_axes=True, dig=True, eeg=[], meg='sensors',
coord_frame='meg')
mne.viz.set_3d_view(fig, 45, 90, distance=0.6, focalpoint=(0., 0., 0.))
print('Distance from head origin to MEG origin: %0.1f mm'
% (1000 * np.linalg.norm(raw.info['dev_head_t']['trans'][:3, 3])))
print('Distance from head origin to MRI origin: %0.1f mm'
% (1000 * np.linalg.norm(trans['trans'][:3, 3])))
dists = mne.dig_mri_distances(raw.info, trans, 'sample',
subjects_dir=subjects_dir)
print('Distance from %s digitized points to head surface: %0.1f mm'
% (len(dists), 1000 * np.mean(dists)))
Explanation: Understanding coordinate frames
For M/EEG source imaging, there are three coordinate frames (further
explained in the next section) that we must bring into alignment using two 3D
transformation matrices <rotation and translation matrix_>_
that define how to rotate and translate points in one coordinate frame
to their equivalent locations in another.
:func:mne.viz.plot_alignment is a very useful function for inspecting
these transformations, and the resulting alignment of EEG sensors, MEG
sensors, brain sources, and conductor models. If the subjects_dir and
subject parameters are provided, the function automatically looks for the
Freesurfer MRI surfaces to show from the subject's folder.
We can use the show_axes argument to see the various coordinate frames
given our transformation matrices. These are shown by axis arrows for each
coordinate frame:
shortest arrow is (R)ight/X
medium is forward/(A)nterior/Y
longest is up/(S)uperior/Z
i.e., a RAS coordinate system in each case. We can also set
the coord_frame argument to choose which coordinate
frame the camera should initially be aligned with.
Let's take a look:
End of explanation
mne.viz.plot_alignment(raw.info, trans=None, subject='sample', src=src,
subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
Explanation: Coordinate frame definitions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. raw:: html
<style>
.pink {color:DarkSalmon; font-weight:bold}
.blue {color:DeepSkyBlue; font-weight:bold}
.gray {color:Gray; font-weight:bold}
.magenta {color:Magenta; font-weight:bold}
.purple {color:Indigo; font-weight:bold}
.green {color:LimeGreen; font-weight:bold}
.red {color:Red; font-weight:bold}
</style>
.. role:: pink
.. role:: blue
.. role:: gray
.. role:: magenta
.. role:: purple
.. role:: green
.. role:: red
Neuromag/Elekta/MEGIN head coordinate frame ("head", :pink:pink axes)
The head coordinate frame is defined through the coordinates of
anatomical landmarks on the subject's head: Usually the Nasion (NAS),
and the left and right preauricular points (LPA and RPA).
Different MEG manufacturers may have different definitions of the
coordinate head frame. A good overview can be seen in the
FieldTrip FAQ on coordinate systems.
For Neuromag/Elekta/MEGIN, the head coordinate frame is defined by the
intersection of
the line between the LPA (:red:red sphere) and RPA
(:purple:purple sphere), and
the line perpendicular to this LPA-RPA line that goes through
the Nasion (:green:green sphere).
The axes are oriented as X originโRPA, Y originโNAS,
Z originโupward (orthogonal to X and Y).
.. note:: The required 3D coordinates for defining the head coordinate
frame (NAS, LPA, RPA) are measured at a stage separate from
the MEG data recording. There exist numerous devices to
perform such measurements, usually called "digitizers". For
example, see the devices by the company Polhemus_.
MEG device coordinate frame ("meg", :blue:blue axes)
The MEG device coordinate frame is defined by the respective MEG
manufacturers. All MEG data is acquired with respect to this coordinate
frame. To account for the anatomy and position of the subject's head, we
use so-called head position indicator (HPI) coils. The HPI coils are
placed at known locations on the scalp of the subject and emit
high-frequency magnetic fields used to coregister the head coordinate
frame with the device coordinate frame.
From the Neuromag/Elekta/MEGIN user manual:
The origin of the device coordinate system is located at the center
of the posterior spherical section of the helmet with X axis going
from left to right and Y axis pointing front. The Z axis is, again
normal to the plane with positive direction up.
.. note:: The HPI coils are shown as :magenta:magenta spheres.
Coregistration happens at the beginning of the recording and
the data is stored in raw.info['dev_head_t'].
MRI coordinate frame ("mri", :gray:gray axes)
Defined by Freesurfer, the MRI (surface RAS) origin is at the
center of a 256ร256ร256 1mm anisotropic volume (may not be in the center
of the head).
.. note:: We typically align the MRI coordinate frame to the head
coordinate frame through a rotation and translation matrix_,
that we refer to in MNE as trans.
A bad example
Let's try using trans=None, which (incorrectly!) equates the MRI
and head coordinate frames.
End of explanation
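Before fixing the alignment, it can help to peek at what `trans` actually holds: a 4 x 4 affine combining the rotation and translation mentioned above. This sketch simply reuses the `trans` object loaded earlier:
```python
print(trans)                   # summary of the head -> MRI Transform
print(trans['trans'].shape)    # (4, 4) affine matrix
print(trans['trans'][:3, 3])   # translation component in meters
```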
mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
src=src, subjects_dir=subjects_dir, dig=True,
surfaces=['head-dense', 'white'], coord_frame='meg')
Explanation: It is quite clear that the MRI surfaces (head, brain) are not well aligned
to the head digitization points (dots).
A good example
Here is the same plot, this time with the trans properly defined
(using a precomputed matrix).
End of explanation
# mne.gui.coregistration(subject='sample', subjects_dir=subjects_dir)
Explanation: Defining the headโMRI trans using the GUI
You can try creating the headโMRI transform yourself using
:func:mne.gui.coregistration.
First you must load the digitization data from the raw file
(Head Shape Source). The MRI data is already loaded if you provide the
subject and subjects_dir. Toggle Always Show Head Points to see
the digitization points.
To set the landmarks, toggle Edit radio button in MRI Fiducials.
Set the landmarks by clicking the radio button (LPA, Nasion, RPA) and then
clicking the corresponding point in the image.
After doing this for all the landmarks, toggle Lock radio button. You
can omit outlier points, so that they don't interfere with the finetuning.
.. note:: You can save the fiducials to a file and pass
mri_fiducials=True to plot them in
:func:mne.viz.plot_alignment. The fiducials are saved to the
subject's bem folder by default.
* Click Fit Head Shape. This will align the digitization points to the
head surface. Sometimes the fitting algorithm doesn't find the correct
alignment immediately. You can try first fitting using LPA/RPA or fiducials
and then align according to the digitization. You can also finetune
manually with the controls on the right side of the panel.
* Click Save As... (lower right corner of the panel), set the filename
and read it with :func:mne.read_trans.
For more information, see step by step instructions
in these slides
<https://www.slideshare.net/mne-python/mnepython-coregistration>_.
Uncomment the following line to align the data yourself.
End of explanation
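Once you have saved your own transform from the GUI, a quick numeric check of the result might look like the sketch below; it reuses the helper shown earlier, and the commented filename is only illustrative.
```python
# my_trans = mne.read_trans('my-trans.fif')  # the path you chose in "Save As..."
my_trans = trans  # here we simply reuse the precomputed transform
dists = mne.dig_mri_distances(raw.info, my_trans, 'sample',
                              subjects_dir=subjects_dir)
print(f'Mean distance from digitized points to head surface: '
      f'{1000 * dists.mean():.1f} mm')
```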
sphere = mne.make_sphere_model(info=raw.info, r0='auto', head_radius='auto')
src = mne.setup_volume_source_space(sphere=sphere, pos=10.)
mne.viz.plot_alignment(
raw.info, eeg='projected', bem=sphere, src=src, dig=True,
surfaces=['brain', 'outer_skin'], coord_frame='meg', show_axes=True)
Explanation: Alignment without MRI
The surface alignments above are possible if you have the surfaces available
from Freesurfer. :func:mne.viz.plot_alignment automatically searches for
the correct surfaces from the provided subjects_dir. Another option is
to use a spherical conductor model <eeg_sphere_model>. It is
passed through bem parameter.
End of explanation |
13,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to get the dates of public holidays for a given period for analysis in Pandas?
Pandas has a way of working with dates and calendars in general. For example, you can filter data according to a prepared calendar and perform so-called seasonality analysis. In this article I focus on how to obtain a list of US public holidays for a given time period.
Holiday functionality in pandas is provided by Holidays / Holiday Calendars, which builds on the AbstractHolidayCalendar class from the pandas.holiday module. Among other things, the pandas.holiday module contains some predefined types of US holidays.
To create a list of US holidays, I simply extend the AbstractHolidayCalendar class with my own list of holidays in a class attribute named rules. If I want to add a custom holiday that is not included in the pandas.holiday module by default, I can create it with the pandas.holiday.Holiday class.
Example of building a list of US holidays
Step1: Other alternatives
US holidays on the internet
Besides the pandas.holiday module, you can also download a table of data with the requests library and pandas.read_html, for example from the CME website https | Python Code:
import datetime as dt
from pandas.tseries.holiday import AbstractHolidayCalendar, Holiday, nearest_workday, \
USMartinLutherKingJr, USPresidentsDay, GoodFriday, USMemorialDay, \
USLaborDay, USThanksgivingDay
class USTradingCalendar(AbstractHolidayCalendar):
rules = [
Holiday('NewYearsDay', month=1, day=1, observance=nearest_workday),
USMartinLutherKingJr,
USPresidentsDay,
GoodFriday,
USMemorialDay,
Holiday('USIndependenceDay', month=7, day=4, observance=nearest_workday),
USLaborDay,
USThanksgivingDay,
Holiday('Christmas', month=12, day=25, observance=nearest_workday)
]
def get_trading_close_holidays(date_from, date_to):
inst = USTradingCalendar()
return inst.holidays(date_from, date_to)
date_from = dt.datetime(2015, 1, 1)
date_to = dt.datetime(2020, 12, 31)
holidays = get_trading_close_holidays(date_from, date_to)
holidays
Explanation: How to get the dates of public holidays for a given period for analysis in Pandas?
Pandas has a way of working with dates and calendars in general. For example, you can filter data according to a prepared calendar and perform so-called seasonality analysis. In this article I focus on how to obtain a list of US public holidays for a given time period.
Holiday functionality in pandas is provided by Holidays / Holiday Calendars, which builds on the AbstractHolidayCalendar class from the pandas.holiday module. Among other things, the pandas.holiday module contains some predefined types of US holidays.
To create a list of US holidays, I simply extend the AbstractHolidayCalendar class with my own list of holidays in a class attribute named rules. If I want to add a custom holiday that is not included in the pandas.holiday module by default, I can create it with the pandas.holiday.Holiday class.
Example of building a list of US holidays:
End of explanation
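One possible follow-up, shown here only as a sketch: the same calendar can drive a custom business-day frequency, so date ranges skip both weekends and the holidays defined in USTradingCalendar.
```python
import pandas as pd
from pandas.tseries.offsets import CustomBusinessDay

us_trading_day = CustomBusinessDay(calendar=USTradingCalendar())
trading_days = pd.date_range('2020-12-20', '2021-01-05', freq=us_trading_day)
print(trading_days)  # Christmas and New Year's Day are skipped
```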
CME_HOLIDAY_CALENDAR_URL = 'https://www.cmegroup.com/tools-information/holiday-calendar.html'
import pandas as pd
import requests
r = requests.get(CME_HOLIDAY_CALENDAR_URL)
holiday_calendar = pd.read_html(r.text)[0]
holiday_calendar
dates = holiday_calendar['Includes the following dates:']
dates
Explanation: Other alternatives
US holidays on the internet
Besides the pandas.holiday module, you can also download a table of data with the requests library and pandas.read_html, for example from the CME website https://www.cmegroup.com/tools-information/holiday-calendar.html, or from Wikipedia - https://en.wikipedia.org/wiki/Public_holidays_in_the_United_States. Unfortunately, data downloaded this way still has to be cleaned up and re-parsed manually.
Example of getting US public holidays from CME:
End of explanation |
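As a hedged sketch of the manual re-parsing step mentioned above (assuming the column holds human-readable date strings), pandas.to_datetime can do most of the work:
```python
parsed_dates = pd.to_datetime(dates, errors='coerce')  # unparseable rows become NaT
print(parsed_dates.dropna())
```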
13,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sheet 2. Python Exercises
This notebook is meant for you to keep practising with Python and ODEs. In this sheet we will focus on linear algebra and diffusion.
As always, the foundations are in the class slides, the explanations and the notebooks Convección con Algebra Lineal and Difusión-1D.
We start, as usual, by loading the modules we are going to use
Step1: So far we have worked with vectors, applying them to solving the 1-D convection equation. We will now see how to solve the diffusion equation and how to apply matrices and linear algebra operations to solve an implicit scheme.
Exercise 1
Write a diagonal matrix D that has elements equal to 2 above and below the main diagonal
Step2: Now compute the transpose and the inverse of the matrix D
Step3: Exercise 2
Follow the steps in the Convección con Algebra Lineal notebook to assemble the system $u^{n+1} = D \cdot u^{n}$. Apply the upwind algorithm ( $u_i^{n+1} = u_i^n - \mathrm{Co} (u_{i}^n-u_{i-1}^n)$ ) to obtain a solution. What happens? Is it the same solution we obtained using loops?
Step4: So far we have not worried about the boundary conditions. But, as you can see, they have a big influence on the final result. The boundary conditions correspond to the ends of our matrix D. Try to implement periodic boundary conditions in this problem.
Step5: Hint
Step6: Challenge | Python Code:
import numpy as np
import matplotlib.pyplot as plt  # This is another way of importing the pyplot submodule!
# Just as valid as the one we have seen in class
%matplotlib inline
Explanation: Sheet 2. Python Exercises
This notebook is meant for you to keep practising with Python and ODEs. In this sheet we will focus on linear algebra and diffusion.
As always, the foundations are in the class slides, the explanations and the notebooks Convección con Algebra Lineal and Difusión-1D.
We start, as usual, by loading the modules we are going to use:
End of explanation
# Write your code here
Explanation: So far we have worked with vectors, applying them to solving the 1-D convection equation. We will now see how to solve the diffusion equation and how to apply matrices and linear algebra operations to solve an implicit scheme.
Exercise 1
Write a diagonal matrix D that has elements equal to 2 above and below the main diagonal
End of explanation
# Write your code here
Explanation: Now compute the transpose and the inverse of the matrix D
End of explanation
# Write your code here
Explanation: Exercise 2
Follow the steps in the Convección con Algebra Lineal notebook to assemble the system $u^{n+1} = D \cdot u^{n}$. Apply the upwind algorithm ( $u_i^{n+1} = u_i^n - \mathrm{Co} (u_{i}^n-u_{i-1}^n)$ ) to obtain a solution. What happens? Is it the same solution we obtained using loops?
End of explanation
# Write your code here
Explanation: So far we have not worried about the boundary conditions. But, as you can see, they have a big influence on the final result. The boundary conditions correspond to the ends of our matrix D. Try to implement periodic boundary conditions in this problem.
End of explanation
# Write your code here
Explanation: Hint: periodic boundary conditions imply that $u_{n_x} = u_0$
Exercise 3
Apparently, using matrices instead of loops does not make much of a difference, and the complexity increases. Why use them, then? Well, because there are certain schemes that can only be solved this way: implicit methods.
In an implicit method we use the data corresponding to the next time step. Therefore, we have to solve a system of the type $u^{n+1} \cdot D' = u^n$, where the matrix $D'$ is different from D and the notation is kept only for convenience.
Repeat the solution of the previous exercise, but now using an implicit method. What changes do you observe?
End of explanation
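This is not the solution to the exercise, just a reminder of the NumPy building block it needs: a linear system A x = b is solved with np.linalg.solve (the matrix below is only an illustrative example).
```python
A = np.eye(5) + 2 * np.eye(5, k=1) + 2 * np.eye(5, k=-1)  # example system matrix
b = np.ones(5)
x = np.linalg.solve(A, b)   # solves A @ x = b without forming the inverse explicitly
print(x)
```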
# Write your code here
Explanation: Challenge: The Crank-Nicolson method is a numerical method widely used for solving ODEs. Look up information about the method (here, for example) and try to implement it following what you have learned so far.
End of explanation |
13,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Auto-generating Epochs metadata
This tutorial shows how to auto-generate metadata for ~mne.Epochs, based on
events via mne.epochs.make_metadata.
We are going to use data from the erp-core-dataset (derived from
Step1: Creating metadata from events
The basics of make_metadata
Now it's time to think about the time windows to use for epoching and
metadata generation. It is important to understand that these time windows
need not be the same! That is, the automatically generated metadata might
include information about events from only a fraction of the epochs duration;
or it might include events that occurred well outside a given epoch.
Let us look at a concrete example. In the Flankers task of the ERP CORE
dataset, participants were required to respond to visual stimuli by pressing
a button. We're interested in looking at the visual evoked responses (ERPs)
of trials with correct responses. Assume that based on literature
studies, we decide that responses later than 1500 ms after stimulus onset are
to be considered invalid, because they don't capture the neuronal processes
of interest here. We can approach this in the following way with the help of
mne.epochs.make_metadata
Step2: Specifying time-locked events
We can see that the generated table has 802 rows, each one corresponding to
an individual event in all_events. The first column, event_name,
contains the name of the respective event around which the metadata of that
specific column was generated โ we'll call that the "time-locked event",
because we'll assign it time point zero.
The names of the remaining columns correspond to the event names specified in
the all_event_id dictionary. These columns contain floats; the values
represent the latency of that specific event in seconds, relative to
the time-locked event (the one mentioned in the event_name column).
For events that didn't occur within the given time window, you'll see
a value of NaN, simply indicating that no event latency could be
extracted.
Now, there's a problem here. We want investigate the visual ERPs only,
conditional on responses. But the metadata that was just created contains
one row for every event, including responses. While we could create
epochs for all events, allowing us to pass those metadata, and later subset
the created events, there's a more elegant way to handle things
Step3: Keeping only the first events of a group
The metadata now contains 400 rows โ one per stimulation โ and the same
number of columns as before. Great!
We have two types of responses in our data
Step4: We're facing a similar issue with the stimulus events, and now there are not
only two, but four different types
Step5: This can easily lead to confusion during later stages of processing, so let's
create a column for the first stimulus โ which will always be the time-locked
stimulus, as our time interval starts at 0 seconds. We can pass a list of
strings to keep_first.
Step6: Adding new columns to describe stimulation side and response correctness
Perfect! Now it's time to define which responses were correct and incorrect.
We first add a column encoding the side of stimulation, and then simply
check whether the response matches the stimulation side, and add this result
to another column.
Step7: Creating Epochs with metadata, and visualizing ERPs
It's finally time to create our epochs! We set the metadata directly on
instantiation via the metadata parameter. Also it is important to
remember to pass events and event_id as returned from
~mne.epochs.make_metadata, as we only created metadata for a subset of
our original events by passing row_events. Otherwise, the length
of the metadata and the number of epochs would not match and MNE-Python
would raise an error.
Step8: Lastly, let's visualize the ERPs evoked by the visual stimulation, once for
all trials with correct responses, and once for all trials with correct
responses and a response time greater than 0.5 seconds
(i.e., slow responses).
Step9: Aside from the fact that the data for the (much fewer) slow responses looks
noisier โย which is entirely to be expected โย not much of an ERP difference
can be seen.
Applying the knowledge
Step10: Exactly like in the previous example, create new columns stimulus_side
and response_correct.
Step11: Now it's already time to epoch the data! When deciding upon the epochs
duration for this specific analysis, we need to ensure we see quite a bit of
signal from before and after the motor response. We also must be aware of
the fact that motor-/muscle-related signals will most likely be present
before the response button trigger pulse appears in our data, so the time
period close to the response event should not be used for baseline
correction. But at the same time, we don't want to use a baseline
period that extends too far away from the button event. The following values
seem to work quite well.
Step12: Let's do a final sanity check
Step13: Bummer! It seems the very first two responses were recorded before the
first stimulus appeared
Step14: Time to calculate the ERPs for correct and incorrect responses.
For visualization, we'll only look at sensor FCz, which is known to show
the ERN nicely in the given paradigm. We'll also create a topoplot to get an
impression of the average scalp potentials measured in the first 100 ms after
an incorrect response.
Step15: We can see a strong negative deflection immediately after incorrect
responses, compared to correct responses. The topoplot, too, leaves no doubt | Python Code:
from pathlib import Path
import matplotlib.pyplot as plt
import mne
data_dir = Path(mne.datasets.erp_core.data_path())
infile = data_dir / 'ERP-CORE_Subject-001_Task-Flankers_eeg.fif'
raw = mne.io.read_raw(infile, preload=True)
raw.filter(l_freq=0.1, h_freq=40)
raw.plot(start=60)
# extract events
all_events, all_event_id = mne.events_from_annotations(raw)
Explanation: Auto-generating Epochs metadata
This tutorial shows how to auto-generate metadata for ~mne.Epochs, based on
events via mne.epochs.make_metadata.
We are going to use data from the erp-core-dataset (derived from
:footcite:Kappenman2021). This is EEG data from a single participant
performing an active visual task (Eriksen flanker task).
<div class="alert alert-info"><h4>Note</h4><p>If you wish to skip the introductory parts of this tutorial, you may jump
straight to `tut-autogenerate-metadata-ern` after completing the data
import and event creation in the
`tut-autogenerate-metadata-preparation` section.</p></div>
This tutorial is loosely divided into two parts:
We will first focus on producing ERP time-locked to the visual
stimulation, conditional on response correctness and response time in
order to familiarize ourselves with the ~mne.epochs.make_metadata
function.
After that, we will calculate ERPs time-locked to the responses โ again,
conditional on response correctness โ to visualize the error-related
negativity (ERN), i.e. the ERP component associated with incorrect
behavioral responses.
Preparation
Let's start by reading, filtering, and producing a simple visualization of the
raw data. The data is pretty clean and contains very few blinks, so there's no
need to apply sophisticated preprocessing and data cleaning procedures.
We will also convert the ~mne.Annotations contained in this dataset to events
by calling mne.events_from_annotations.
End of explanation
# metadata for each epoch shall include events from the range: [0.0, 1.5] s,
# i.e. starting with stimulus onset and expanding beyond the end of the epoch
metadata_tmin, metadata_tmax = 0.0, 1.5
# auto-create metadata
# this also returns a new events array and an event_id dictionary. we'll see
# later why this is important
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'])
# let's look at what we got!
metadata
Explanation: Creating metadata from events
The basics of make_metadata
Now it's time to think about the time windows to use for epoching and
metadata generation. It is important to understand that these time windows
need not be the same! That is, the automatically generated metadata might
include information about events from only a fraction of the epochs duration;
or it might include events that occurred well outside a given epoch.
Let us look at a concrete example. In the Flankers task of the ERP CORE
dataset, participants were required to respond to visual stimuli by pressing
a button. We're interested in looking at the visual evoked responses (ERPs)
of trials with correct responses. Assume that based on literature
studies, we decide that responses later than 1500 ms after stimulus onset are
to be considered invalid, because they don't capture the neuronal processes
of interest here. We can approach this in the following way with the help of
mne.epochs.make_metadata:
End of explanation
row_events = ['stimulus/compatible/target_left',
'stimulus/compatible/target_right',
'stimulus/incompatible/target_left',
'stimulus/incompatible/target_right']
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events)
metadata
Explanation: Specifying time-locked events
We can see that the generated table has 802 rows, each one corresponding to
an individual event in all_events. The first column, event_name,
contains the name of the respective event around which the metadata of that
specific column was generated โ we'll call that the "time-locked event",
because we'll assign it time point zero.
The names of the remaining columns correspond to the event names specified in
the all_event_id dictionary. These columns contain floats; the values
represent the latency of that specific event in seconds, relative to
the time-locked event (the one mentioned in the event_name column).
For events that didn't occur within the given time window, you'll see
a value of NaN, simply indicating that no event latency could be
extracted.
Now, there's a problem here. We want to investigate the visual ERPs only,
conditional on responses. But the metadata that was just created contains
one row for every event, including responses. While we could create
epochs for all events, allowing us to pass those metadata, and later subset
the created events, there's a more elegant way to handle things:
~mne.epochs.make_metadata has a row_events parameter that
allows us to specify for which events to create metadata rows, while
still creating columns for all events in the event_id dictionary.
Because the metadata, then, only pertains to a subset of our original events,
it's important to keep the returned events and event_id around for
later use when we're actually going to create our epochs, to ensure that
metadata, events, and event descriptions stay in sync.
End of explanation
keep_first = 'response'
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events,
keep_first=keep_first)
# visualize response times regardless of side
metadata['response'].plot.hist(bins=50, title='Response Times')
# the "first_response" column contains only "left" and "right" entries, derived
# from the initial event named "response/left" and "response/right"
print(metadata['first_response'])
Explanation: Keeping only the first events of a group
The metadata now contains 400 rows โ one per stimulation โ and the same
number of columns as before. Great!
We have two types of responses in our data: response/left and
response/right. We would like to map those to "correct" and "incorrect".
To make this easier, we can ask ~mne.epochs.make_metadata to generate an
entirely new column that refers to the first response observed during the
given time interval. This works by passing a subset of the
:term:hierarchical event descriptors (HEDs, inspired by
:footcite:BigdelyShamloEtAl2013) used to name events via the keep_first
parameter. For example, in the case of the HEDs response/left and
response/right, we could pass keep_first='response' to generate a new
column, response, containing the latency of the respective event. This
value pertains only the first (or, in this specific example: the only)
response, regardless of side (left or right). To indicate which event
type (here: response side) was matched, a second column is added:
first_response. The values in this column are the event types without the
string used for matching, as it is already encoded as the column name, i.e.
in our example, we expect it to only contain 'left' and 'right'.
End of explanation
metadata.loc[metadata['stimulus/compatible/target_left'].notna() &
metadata['stimulus/compatible/target_right'].notna(),
:]
Explanation: We're facing a similar issue with the stimulus events, and now there are not
only two, but four different types: stimulus/compatible/target_left,
stimulus/compatible/target_right, stimulus/incompatible/target_left,
and stimulus/incompatible/target_right. Even more, because in the present
paradigm stimuli were presented in rapid succession, sometimes multiple
stimulus events occurred within the 1.5 second time window we're using to
generate our metadata. See for example:
End of explanation
keep_first = ['stimulus', 'response']
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events,
keep_first=keep_first)
# all times of the time-locked events should be zero
assert all(metadata['stimulus'] == 0)
# the values in the new "first_stimulus" and "first_response" columns indicate
# which events were selected via "keep_first"
metadata[['first_stimulus', 'first_response']]
Explanation: This can easily lead to confusion during later stages of processing, so let's
create a column for the first stimulus โ which will always be the time-locked
stimulus, as our time interval starts at 0 seconds. We can pass a list of
strings to keep_first.
End of explanation
# left-side stimulation
metadata.loc[metadata['first_stimulus'].isin(['compatible/target_left',
'incompatible/target_left']),
'stimulus_side'] = 'left'
# right-side stimulation
metadata.loc[metadata['first_stimulus'].isin(['compatible/target_right',
'incompatible/target_right']),
'stimulus_side'] = 'right'
# first assume all responses were incorrect, then mark those as correct where
# the stimulation side matches the response side
metadata['response_correct'] = False
metadata.loc[metadata['stimulus_side'] == metadata['first_response'],
'response_correct'] = True
correct_response_count = metadata['response_correct'].sum()
print(f'Correct responses: {correct_response_count}\n'
f'Incorrect responses: {len(metadata) - correct_response_count}')
Explanation: Adding new columns to describe stimulation side and response correctness
Perfect! Now it's time to define which responses were correct and incorrect.
We first add a column encoding the side of stimulation, and then simply
check whether the response matches the stimulation side, and add this result
to another column.
End of explanation
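As an optional sanity check (a sketch using plain pandas on the generated metadata), a cross-tabulation of stimulation side against response side shows where the errors come from:
```python
import pandas as pd
print(pd.crosstab(metadata['stimulus_side'], metadata['first_response']))
```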
epochs_tmin, epochs_tmax = -0.1, 0.4 # epochs range: [-0.1, 0.4] s
reject = {'eeg': 250e-6} # exclude epochs with strong artifacts
epochs = mne.Epochs(raw=raw, tmin=epochs_tmin, tmax=epochs_tmax,
events=events, event_id=event_id, metadata=metadata,
reject=reject, preload=True)
Explanation: Creating Epochs with metadata, and visualizing ERPs
It's finally time to create our epochs! We set the metadata directly on
instantiation via the metadata parameter. Also it is important to
remember to pass events and event_id as returned from
~mne.epochs.make_metadata, as we only created metadata for a subset of
our original events by passing row_events. Otherwise, the length
of the metadata and the number of epochs would not match and MNE-Python
would raise an error.
End of explanation
vis_erp = epochs['response_correct'].average()
vis_erp_slow = epochs['(not response_correct) & '
'(response > 0.3)'].average()
fig, ax = plt.subplots(2, figsize=(6, 6))
vis_erp.plot(gfp=True, spatial_colors=True, axes=ax[0])
vis_erp_slow.plot(gfp=True, spatial_colors=True, axes=ax[1])
ax[0].set_title('Visual ERPs โ All Correct Responses')
ax[1].set_title('Visual ERPs โ Slow Correct Responses')
fig.tight_layout()
fig
Explanation: Lastly, let's visualize the ERPs evoked by the visual stimulation, once for
all trials with correct responses, and once for all trials with correct
responses and a response time greater than 0.5 seconds
(i.e., slow responses).
End of explanation
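The selection strings used above are ordinary pandas query expressions evaluated against epochs.metadata, so the equivalent DataFrame-level query looks like this (a sketch):
```python
slow_meta = epochs.metadata.query('(not response_correct) and response > 0.3')
print(len(slow_meta), 'epochs match the slow-response query')
```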
metadata_tmin, metadata_tmax = -1.5, 0
row_events = ['response/left', 'response/right']
keep_last = ['stimulus', 'response']
metadata, events, event_id = mne.epochs.make_metadata(
events=all_events, event_id=all_event_id,
tmin=metadata_tmin, tmax=metadata_tmax, sfreq=raw.info['sfreq'],
row_events=row_events,
keep_last=keep_last)
Explanation: Aside from the fact that the data for the (much fewer) slow responses looks
noisier โย which is entirely to be expected โย not much of an ERP difference
can be seen.
Applying the knowledge: visualizing the ERN component
In the following analysis, we will use the same dataset as above, but
we'll time-lock our epochs to the response events, not to the stimulus
onset. Comparing ERPs associated with correct and incorrect behavioral
responses, we should be able to see the error-related negativity (ERN) in
the difference wave.
Since we want to time-lock our analysis to responses, for the automated
metadata generation we'll consider events occurring up to 1500 ms before
the response trigger.
We only wish to consider the last stimulus and response in each time
window: Remember that we're dealing with rapid stimulus presentations in
this paradigm; taking the last response โย at time point zero โ and the last
stimulus โ the one closest to the response โ ensures we actually create
the right stimulus-response pairings. We can achieve this by passing the
keep_last parameter, which works exactly like keep_first we got to
know above, only that it keeps the last occurrences of the specified
events and stores them in columns whose names start with last_.
End of explanation
# left-side stimulation
metadata.loc[metadata['last_stimulus'].isin(['compatible/target_left',
'incompatible/target_left']),
'stimulus_side'] = 'left'
# right-side stimulation
metadata.loc[metadata['last_stimulus'].isin(['compatible/target_right',
'incompatible/target_right']),
'stimulus_side'] = 'right'
# first assume all responses were incorrect, then mark those as correct where
# the stimulation side matches the response side
metadata['response_correct'] = False
metadata.loc[metadata['stimulus_side'] == metadata['last_response'],
'response_correct'] = True
metadata
Explanation: Exactly like in the previous example, create new columns stimulus_side
and response_correct.
End of explanation
epochs_tmin, epochs_tmax = -0.6, 0.4
baseline = (-0.4, -0.2)
reject = {'eeg': 250e-6}
epochs = mne.Epochs(raw=raw, tmin=epochs_tmin, tmax=epochs_tmax,
baseline=baseline, reject=reject,
events=events, event_id=event_id, metadata=metadata,
preload=True)
Explanation: Now it's already time to epoch the data! When deciding upon the epochs
duration for this specific analysis, we need to ensure we see quite a bit of
signal from before and after the motor response. We also must be aware of
the fact that motor-/muscle-related signals will most likely be present
before the response button trigger pulse appears in our data, so the time
period close to the response event should not be used for baseline
correction. But at the same time, we don't want to use a baseline
period that extends too far away from the button event. The following values
seem to work quite well.
End of explanation
epochs.metadata.loc[epochs.metadata['last_stimulus'].isna(), :]
Explanation: Let's do a final sanity check: we want to make sure that in every row, we
actually have a stimulus. We use epochs.metadata (and not metadata)
because when creating the epochs, we passed the reject parameter, and
MNE-Python always ensures that epochs.metadata stays in sync with the
available epochs.
End of explanation
epochs = epochs['last_stimulus.notna()']
Explanation: Bummer! It seems the very first two responses were recorded before the
first stimulus appeared: the values in the stimulus column are None.
There is a very simple way to select only those epochs that do have a
stimulus (i.e., are not None):
End of explanation
resp_erp_correct = epochs['response_correct'].average()
resp_erp_incorrect = epochs['not response_correct'].average()
mne.viz.plot_compare_evokeds({'Correct Response': resp_erp_correct,
'Incorrect Response': resp_erp_incorrect},
picks='FCz', show_sensors=True,
title='ERPs at FCz, time-locked to response')
# topoplot of average field from time 0.0-0.1 s
resp_erp_incorrect.plot_topomap(times=0.05, average=0.05, size=3,
title='Avg. topography 0โ100 ms after '
'incorrect responses')
Explanation: Time to calculate the ERPs for correct and incorrect responses.
For visualization, we'll only look at sensor FCz, which is known to show
the ERN nicely in the given paradigm. We'll also create a topoplot to get an
impression of the average scalp potentials measured in the first 100 ms after
an incorrect response.
End of explanation
# difference wave: incorrect minus correct responses
resp_erp_diff = mne.combine_evoked([resp_erp_incorrect, resp_erp_correct],
weights=[1, -1])
fig, ax = plt.subplots()
resp_erp_diff.plot(picks='FCz', axes=ax, selectable=False, show=False)
# make ERP trace bolder
ax.lines[0].set_linewidth(1.5)
# add lines through origin
ax.axhline(0, ls='dotted', lw=0.75, color='gray')
ax.axvline(0, ls=(0, (10, 10)), lw=0.75, color='gray',
label='response trigger')
# mark trough
trough_time_idx = resp_erp_diff.copy().pick('FCz').data.argmin()
trough_time = resp_erp_diff.times[trough_time_idx]
ax.axvline(trough_time, ls=(0, (10, 10)), lw=0.75, color='red',
label='max. negativity')
# legend, axis labels, title
ax.legend(loc='lower left')
ax.set_xlabel('Time (s)', fontweight='bold')
ax.set_ylabel('Amplitude (ยตV)', fontweight='bold')
ax.set_title('Channel: FCz')
fig.suptitle('ERN (Difference Wave)', fontweight='bold')
fig
Explanation: We can see a strong negative deflection immediately after incorrect
responses, compared to correct responses. The topoplot, too, leaves no doubt:
what we're looking at is, in fact, the ERN.
Some researchers suggest to construct the difference wave between ERPs for
correct and incorrect responses, as it more clearly reveals signal
differences, while ideally also improving the signal-to-noise ratio (under
the assumption that the noise level in "correct" and "incorrect" trials is
similar). Let's do just that and put it into a publication-ready
visualization.
End of explanation |
13,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step8: Use interact to explore your plot_lorenz function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def lorentz_derivs(yvec, t, sigma, rho, beta):
Compute the derivatives for the Lorentz system at yvec(t).
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y - x)
dy = (x*(rho - z)) - y
dz = (x*y) - beta*z
return np.array([dx, dy, dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
def solve_lorentz(ic, max_time, sigma, rho, beta):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
#ic = np.array([x, y, z])
t = np.linspace(0, max_time, int(250*max_time))
soln = odeint(lorentz_derivs, # function to compute the derivatives
ic, # array of initial conditions
t, # array of times
args=(sigma,rho,beta), # extra args
atol=1e-9, rtol=1e-8) # absolute and relative error tolerances
return t,soln
assert True # leave this to grade solve_lorenz
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
np.random.seed(1)
ic = np.random.uniform(-15,15,3)
t, soln = solve_lorentz(ic,max_time,sigma,rho,beta)
plt.plot(t, soln[:,0], label='something')
plt.plot(t, soln[:,1], label='something else')
plt.xlabel('t')
plt.ylabel('count')
plt.legend();
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
def plot_lorentz(N, max_time, sigma, rho, beta):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
np.random.seed(1)
    colors = plt.cm.hot(np.linspace(0, 1, N))
    for n in range(N):
        # draw a fresh random initial condition for each trajectory
        ic = np.random.uniform(-15, 15, 3)
        t, soln = solve_lorentz(ic, max_time, sigma, rho, beta)
        plt.plot(soln[:, 0], soln[:, 2], color=colors[n])
    plt.xlabel('x(t)')
    plt.ylabel('z(t)')
plt.show()
interact(plot_lorentz,max_time=[1,10],N=[1,50],sigma=[0.0,50.0],rho=[0.0,50.0],beta=fixed(8/3))
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation |
13,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright (c) 2015, 2016 Sebastian Raschka
https
Step1: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see
Step2: <hr>
Note
If you have problems with creating the movie_data.csv file in the previous chapter, you can find a download a zip archive at
https
Step3: Decompressing
After downloading the dataset, decompress the files.
A) If you are working with Linux or MacOS X, open a new terminal windowm cd into the download directory and execute
tar -zxf aclImdb_v1.tar.gz
or
tar -xvzf aclImdb_v1.tar.gz
for the verbose mode
B) If you are working with Windows, download an archiver such as 7Zip to extract the files from the download archive.
C) The code below decompresses directly via Python.
Step4: Reading the dataset
The decompressed file is in csv format, we can read it via panda as usual.
PyPrind (Python Progress Indicator)
* useful for visualizing progress for processing large datasets
* pip install pyprind
Compatibility Note
Step5: Optional
Step6: Read back the data-frame from file, local or remote.
Step7: Shuffling the DataFrame
Step8: Introducing the bag-of-words model
Movie reviews vary in lengths
* cannot use them directly as inputs for models that expect fixed dimension inputs
We need to convert the dataset into numerical form
* e.g. categorical variables (nominal or ordinal) into numerical variables
Bag-of-words
Step9: Print the contents of the vocabulary to get a better understanding of the underlying concepts
Step10: The vocabulary is stored in a Python dictionary
* key
Step11: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary.
For example, the 1st feature at index position 0 resembles the count of the word and, which only occurs in the last document, and the word is at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences.
Those values in the feature vectors are also called the raw term frequencies
Step12: As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word.
However, after transforming the same feature vector into tf-idfs, we see that the word is is
now associated with a relatively small tf-idf (0.45) in document 3 since it is
also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information.
Scikit-learn tf-idf
However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the TfidfTransformer calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier.
The equations for the idf and tf-idf that were implemented in scikit-learn are
Step13: If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vectors
Step14: Cleaning text data
The text may contain stuff irrelevant for sentiment analysis.
* html tags, punctuation, non-letter characters, etc.
Step15: Use regular expression for cleaning text data
References
Step16: Example results
Step17: Emotions are moved to the end; ordering doesn't matter for 1-gram analysis.
Cleanup the data
Step18: Processing documents into tokens
Split an entity into constituting components, e.g. words for documents.
Stemming
Step19: Remove stop-words
Stop-words are extremely common in all texts
* e.g. is, and, has, etc.
Remove them completely can help document analysis.
Step20: Training a logistic regression model for document classification
Let's try to apply logistic regression to classify the movie reviews.
Use cleaned-up documents (no html tags or punctuations except for emoticons), but leave tokenization as hyper-parameters.
Split into training and test datasets
Step21: Grid-search hyper-parameters
Two grid sets
Step22: The CV accuracy and test accuracy would be a bit lower if we use a subset of all data, but are still reasonable.
Step23: Start comment
Step24: By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (cv3_idx) to the cross_val_score scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Next, let us use the GridSearchCV object and feed it the same 5 cross-validation sets (via the pre-generated cv3_idx indices)
Step25: As we can see, the scores for the 5 folds are exactly the same as the ones from cross_val_score earlier.
Now, the best_score_ attribute of the GridSearchCV object, which becomes available after fitting, returns the average accuracy score of the best model
Step26: As we can see, the result above is consistent with the average score computed the cross_val_score.
Step27: End comment.
<hr>
<hr>
Naive Bayes
Popular for text classification, e.g. spam filtering.
* easy to implement
* fast to compute
* good performance with small datasets
See http
Step28: Python generators
http
Step29: Out-of-core Vectorizer
CountVectorizer holds complete vocabulary in memory
TfidfVectorizer keeps all training data in memory
HashVectorizer comes for rescue
* hash words into histogram bins
* can have collision, but with low probability
* collision reduces histogram resolution, but still suffices for classification and can reduce number of features and thus over-fitting
Hash
A function that maps items into cells in a hash table.
* easy/fast to compute
* can have collision, i.e. different items map into the same hash entry
* try to minimize and/or handle collision
<a href="https
Step30: Start out-of-core learning
Training | Python Code:
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,sklearn,nltk
Explanation: Copyright (c) 2015, 2016 Sebastian Raschka
https://github.com/1iyiwei/pyml
MIT License
Python Machine Learning - Code Examples
Chapter 8 - Applying Machine Learning To Sentiment Analysis
Let's apply what we have learned so far for a real case study.
Many people express opinions on the internet and social media sites.
Such opinions are a rich source of information for many applications:
* business
* politics
* science
Apply natural language processing (NLP), in particular sentiment analysis, over movie reviews
Challenges
Written opinions/reviews have varying lengths
* cannot be treated as fixed-dimension inputs
Not all raw text is suitable for direct machine learning
* it needs cleaning up first
How to pick and train a machine learning model
Handle large datasets
* potentially out-of-core
Topics
Data-preprocessing
* cleaning and preparing text data from movie reviews
* building (fixed-dimension) feature vectors from (variable-dimension) text documents
Training a machine learning model to classify positive and negative movie reviews
Working with large text datasets using out-of-core learning
<img src="./images/01_09.png" width=100%>
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
Overview
Obtaining the IMDb movie review dataset
Introducing the bag-of-words model
Transforming words into feature vectors
Assessing word relevancy via term frequency-inverse document frequency
Cleaning text data
Processing documents into tokens
Training a logistic regression model for document classification
Working with bigger data - online algorithms and out-of-core learning
Summary
Obtaining the IMDb movie review dataset
The IMDB movie review set can be downloaded from http://ai.stanford.edu/~amaas/data/sentiment/.
* also available under ../datasets/movie/ as part of the github repo
50,000 movie reviews, manually labeled as being positive or negative for classification.
End of explanation
import urllib.request
import os
# the file we eventually need to access
csv_filename = 'movie_data.csv'
# a global variable to select data source: local or remote
data_source = 'local'
if data_source == 'local':
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = '../datasets/movie/'
zip_filename = 'movie_data.csv.zip'
else: # remote
url = 'http://ai.stanford.edu/~amaas/data/sentiment/'
basepath = '.'
zip_filename = 'aclImdb_v1.tar.gz'
remote_file = os.path.join(url, zip_filename)
local_file = os.path.join(basepath, zip_filename)
csv_file = os.path.join(basepath, csv_filename)
if not os.path.isfile(csv_file) and not os.path.isfile(local_file):
urllib.request.urlretrieve(remote_file, local_file)
Explanation: <hr>
Note
If you have problems with creating the movie_data.csv file in the previous chapter, you can download a zip archive at
https://github.com/1iyiwei/pyml/tree/master/code/datasets/movie
<hr>
End of explanation
# The code below decompresses directly via Python.
import os
import zipfile
import tarfile
# change the `basepath` to the directory of the
# unzipped movie dataset
csv_file = os.path.join(basepath, csv_filename)
zip_file = os.path.join(basepath, zip_filename)
if not os.path.isfile(csv_file):
if tarfile.is_tarfile(zip_file):
tartar = tarfile.open(zip_file, "r")
#with tarfile.TarFile(zip_file, "r") as tartar:
tartar.extractall(basepath)
tartar.close()
else:
with zipfile.ZipFile(zip_file, "r") as zipper:
zipper.extractall(basepath)
zipper.close()
Explanation: Decompressing
After downloading the dataset, decompress the files.
A) If you are working with Linux or MacOS X, open a new terminal window, cd into the download directory and execute
tar -zxf aclImdb_v1.tar.gz
or
tar -xvzf aclImdb_v1.tar.gz
for the verbose mode
B) If you are working with Windows, download an archiver such as 7Zip to extract the files from the download archive.
C) The code below decompresses directly via Python.
End of explanation
import pyprind
import pandas as pd
import os
db_path = 'aclImdb';
if not os.path.isfile(csv_file):
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(db_path, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
Explanation: Reading the dataset
The decompressed file is in csv format, we can read it via panda as usual.
PyPrind (Python Progress Indicator)
* useful for visualizing progress for processing large datasets
* pip install pyprind
Compatibility Note:
I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to 'utf-8', which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute
>>> import sys
>>> sys.getdefaultencoding()
If the returned result is not 'utf-8', you probably need to change your Python's encoding to 'utf-8', for example by typing export PYTHONIOENCODING=utf8 in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch ipython notebook.
Alternatively, you can replace the lines
with open(os.path.join(path, file), 'r') as infile:
...
pd.read_csv('./movie_data.csv')
...
df.to_csv('./movie_data.csv', index=False)
by
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
...
pd.read_csv('./movie_data.csv', encoding='utf-8')
...
df.to_csv('./movie_data.csv', index=False, encoding='utf-8')
in the following cells to achieve the desired effect.
End of explanation
if not os.path.isfile(csv_file):
df.to_csv(os.path.join(basepath, csv_filename), index=False, encoding='utf-8')
Explanation: Optional: Saving the assembled data as CSV file:
End of explanation
import pandas as pd
df = pd.read_csv(os.path.join(basepath, csv_filename), encoding='utf-8')
Explanation: Read back the data-frame from file, local or remote.
End of explanation
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
# first few entries
df.head(3)
# a complete review
print(df.values[0])
Explanation: Shuffling the DataFrame:
End of explanation
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
# fixed-dimension features we can use for machine learning
bag = count.fit_transform(docs)
Explanation: Introducing the bag-of-words model
Movie reviews vary in length
* cannot use them directly as inputs for models that expect fixed dimension inputs
We need to convert the dataset into numerical form
* e.g. categorical variables (nominal or ordinal) into numerical variables
Bag-of-words: represent text as numerical feature vectors
* create a vocabulary of unique tokens, e.g. words
* compute a histogram counting the number of occurrences of each word
The feature vector would be sparse since most of the entries are $0$
Transforming documents into feature vectors
By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:
1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
End of explanation
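Since most entries are zero, scikit-learn keeps the result as a sparse matrix rather than a dense array; a quick way to see this for the toy bag above:
print(type(bag))
print('non-zero entries: %d of %d' % (bag.nnz, bag.shape[0] * bag.shape[1]))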
# the dictionary trained from the document data
print(count.vocabulary_)
Explanation: Print the contents of the vocabulary to get a better understanding of the underlying concepts:
End of explanation
# convert from sparse dictionary to dense array
# fixed dimension feature
print(bag.toarray())
Explanation: The vocabulary is stored in a Python dictionary
* key: words
* value: integer indices
End of explanation
np.set_printoptions(precision=2)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
Explanation: Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary.
For example, the 1st feature at index position 0 resembles the count of the word and, which only occurs in the last document, and the word is at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences.
Those values in the feature vectors are also called the raw term frequencies: tf(t, d), the number of times a term t occurs in a document d.
N-gram
N contiguous sequence of items
1-gram: individual words
* e.g. the, sun, is, shining
2-gram: pairs of adjacent words
* e.g. the sun, sun is, is shining
CountVectorizer can work with n-grams via its ngram_range parameter.
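For example, a 2-gram vocabulary for the toy corpus can be inspected like this (a small illustrative snippet reusing the docs array from the cell above):
bigram_count = CountVectorizer(ngram_range=(2, 2))
bigram_count.fit(docs)
print(bigram_count.vocabulary_)  # e.g. 'the sun', 'sun is', 'is shining', ...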
Assessing word relevancy via term frequency-inverse document frequency
Term-frequency (tf) alone is not enough.
* common words typically don't contain useful or discriminatory information.
* e.g., the, is, and ...
Also consider inverse document frequency (idf)
* downweight those frequently occurring words in the feature vectors.
The tf-idf can be defined as the product of the term frequency and the inverse document frequency:
$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$
Here the tf(t, d) is the term frequency introduced above,
and the inverse document frequency $idf(t, d)$ can be calculated as:
$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)}$$
* $n_d$ is the total number of documents
* $df(d, t)$ is the number of documents $d$ that contain the term $t$.
$idf$ gives higher weights to rarer words.
Note
* adding the constant 1 to the denominator to avoid division-by-zero.
* the log is used to ensure that low document frequencies are not given too much weight.
Scikit-learn implements yet another transformer, the TfidfTransformer, that takes the raw term frequencies from CountVectorizer as input and transforms them into tf-idfs:
End of explanation
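As a small illustration of the textbook formulas above, the tf-idf of the word 'is' in the 3rd document would come out slightly negative, since 'is' appears in every document; this is one reason scikit-learn uses the smoothed variant worked through in the next cells:
tf_is_textbook = 3
df_is = 3  # 'is' occurs in all three documents
idf_is_textbook = np.log(3 / (1. + df_is))  # log(3/4) < 0
print(tf_is_textbook * idf_is_textbook)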
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
Explanation: As we saw in the previous subsection, the word 'is' had the largest term frequency in the 3rd document, being the most frequently occurring word.
However, after transforming the same feature vector into tf-idfs, we see that the word is is
now associated with a relatively small tf-idf (0.45) in document 3 since it is
also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information.
Scikit-learn tf-idf
However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the TfidfTransformer calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier.
The equations for the idf and tf-idf that were implemented in scikit-learn are:
$$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$
The tf-idf equation that was implemented in scikit-learn is as follows:
$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$
While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the TfidfTransformer normalizes the tf-idfs directly.
By default (norm='l2'), scikit-learn's TfidfTransformer applies the L2-normalization, which
returns a vector of length 1 by dividing an un-normalized feature vector v by its L2-norm:
$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big(\sum_{i=1}^{n} v_{i}^{2}\big)^{\frac{1}{2}}}$$
Example
To make sure that we understand how TfidfTransformer works, let us walk
through an example and calculate the tf-idf of the word is in the 3rd document.
The word is has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term is occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:
$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$
Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:
$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
End of explanation
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True) # notice norm is None not l2
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1] # for the last document
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
Explanation: If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vectors: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29].
However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously.
The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows:
$$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2 , 1.69^2, 1.29^2]}}$$
$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$
$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$
As we can see, the results match the results returned by scikit-learn's TfidfTransformer (below).
End of explanation
df.loc[0, 'review'][-50:]
Explanation: Cleaning text data
The text may contain stuff irrelevant for sentiment analysis.
* html tags, punctuation, non-letter characters, etc.
End of explanation
import re
def preprocessor(text):
# [] for set of characters, ^ inside [] means invert, i.e. not > below
# * means 0 or more occurances of the pattern
text = re.sub('<[^>]*>', '', text) # remove html tags between pairs of < and >
# () for group, subpart of the whole pattern we look for
# findall will return tuples each containing groups
# (?:) means not returing the group result for findall
# | means or, \ for escape sequence
# first group eye : or ; or =
# second group nose - 0 or 1 time via ?
# third group mouth ) or ( or D or P
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
# matching examples:
# :-)
# =D
# convert to lower case as upper/lower case doesn't matter for sentiment
# replace all non-word characters by space
# \w: letters, digits, _
# \W: the complement set
text = re.sub('[\W]+', ' ', text.lower())
# add back emoticons, though in different orders
# and without nose "-", e.g. :) and :-) are considered the same
text = text + ' '.join(emoticons).replace('-', '')
return text
Explanation: Use regular expression for cleaning text data
References:
* https://developers.google.com/edu/python/regular-expressions
* https://docs.python.org/3.4/library/re.html
Remove all punctuations except for emoticons which convey sentiments.
End of explanation
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
Explanation: Example results
End of explanation
df['review'] = df['review'].apply(preprocessor)
Explanation: Emoticons are moved to the end; their ordering doesn't matter for 1-gram analysis.
Cleanup the data
End of explanation
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
# split along white spaces
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
Explanation: Processing documents into tokens
Split an entity into its constituent components, e.g. a document into words.
Stemming: transform a word into root form.
* e.g. running $\rightarrow$ run
* see http://www.nltk.org/book/ for more details and options for stemming.
End of explanation
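The Porter stemmer is just one option; NLTK's Snowball stemmer, for instance, can be dropped in the same way (shown here purely as an illustrative alternative, it is not used in the rest of this notebook):
from nltk.stem.snowball import SnowballStemmer
snowball = SnowballStemmer('english')
print([snowball.stem(word) for word in 'runners like running and thus they run'.split()])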
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
Explanation: Remove stop-words
Stop-words are extremely common in all texts
* e.g. is, and, has, etc.
Removing them completely can help document analysis.
End of explanation
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
# Use a smaller subset if it took too long to run the full datasets above
train_subset_size = 2500
test_subset_size = 2500
#print(X_train.shape)
if train_subset_size > 0:
X_train = X_train[:train_subset_size]
y_train = y_train[:train_subset_size]
if test_subset_size > 0:
X_test = X_test[:test_subset_size]
y_test = y_test[:test_subset_size]
#print(X_train.shape)
Explanation: Training a logistic regression model for document classification
Let's try to apply logistic regression to classify the movie reviews.
Use cleaned-up documents (no html tags or punctuations except for emoticons), but leave tokenization as hyper-parameters.
Split into training and test datasets
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
Explanation: Grid-search hyper-parameters
Two grid sets: with and without idf
Different regularization strengths via $C$.
Use pipeline as before.
End of explanation
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
Explanation: The CV accuracy and test accuracy would be a bit lower if we use a subset of all data, but are still reasonable.
End of explanation
from sklearn.linear_model import LogisticRegression
import numpy as np
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import cross_val_score
else:
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
if Version(sklearn_version) < '0.18':
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
else:
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
Explanation: Start comment:
Please note that gs_lr_tfidf.best_score_ is the average k-fold cross-validation score. I.e., if we have a GridSearchCV object with 5-fold cross-validation (like the one above), the best_score_ attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
End of explanation
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
Explanation: By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (cv5_idx) to the cross_val_score scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Next, let us use the GridSearchCV object and feed it the same 5 cross-validation sets (via the pre-generated cv5_idx indices):
End of explanation
gs.best_score_
Explanation: As we can see, the scores for the 5 folds are exactly the same as the ones from cross_val_score earlier.
Now, the best_score_ attribute of the GridSearchCV object, which becomes available after fitting, returns the average accuracy score of the best model:
End of explanation
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
Explanation: As we can see, the result above is consistent with the average score computed by cross_val_score.
End of explanation
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path=csv_file))
Explanation: End comment.
<hr>
<hr>
Naive Bayes
Popular for text classification, e.g. spam filtering.
* easy to implement
* fast to compute
* good performance with small datasets
See http://sebastianraschka.com/Articles/2014_naive_bayes_1.html for more details.
Working with bigger data - online algorithms and out-of-core learning
The grid-search in the previous section is quite computationally expensive.
But real world datasets can be much larger!
Out-of-core learning can help us deal with large datasets without super-computers.
SGDClassifier: stochastic gradient descent classifier
End of explanation
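A naive Bayes classifier itself is not trained in this notebook, but a rough sketch of what it could look like on bag-of-words features is shown below; the tiny corpus and labels are made up purely for illustration:
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
toy_docs = ['the sun is shining', 'the weather is sweet', 'the movie was terrible']
toy_y = [1, 1, 0]  # hypothetical sentiment labels, illustration only
toy_X = CountVectorizer().fit_transform(toy_docs)
nb = MultinomialNB()
nb.fit(toy_X, toy_y)
print(nb.predict(toy_X))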
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
Explanation: Python generators
http://stackoverflow.com/questions/231767/what-does-the-yield-keyword-do-in-python
End of explanation
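For reference, a minimal generator: calling the function returns a generator object, and each next() call resumes execution at the yield statement. This is what lets stream_docs above hand back one review at a time instead of reading the whole file into memory:
def count_up_to(n):
    i = 0
    while i < n:
        yield i  # execution pauses here until the next value is requested
        i += 1
gen = count_up_to(3)
print(next(gen), next(gen), next(gen))  # 0 1 2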
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
                         n_features=2**21, # large enough to minimize hash collisions
preprocessor=None,
tokenizer=tokenizer)
# logistic regression for loss
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path=csv_file)
Explanation: Out-of-core Vectorizer
CountVectorizer holds complete vocabulary in memory
TfidfVectorizer keeps all training data in memory
HashingVectorizer comes to the rescue
* hash words into histogram bins
* can have collision, but with low probability
* collision reduces histogram resolution, but still suffices for classification and can reduce number of features and thus over-fitting
Hash
A function that maps items into cells in a hash table.
* easy/fast to compute
* can have collision, i.e. different items map into the same hash entry
* try to minimize and/or handle collision
<a href="https://en.wikipedia.org/wiki/Hash_function">
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/Hash_table_4_1_1_0_0_1_0_LL.svg/300px-Hash_table_4_1_1_0_0_1_0_LL.svg.png">
</a>
End of explanation
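Because the hashing trick needs no fitted vocabulary, a HashingVectorizer can transform unseen text directly into a fixed-size vector. A tiny sketch (n_features is kept unrealistically small here only to make the output readable; entries can be negative because scikit-learn uses a signed hash):
tiny_hash = HashingVectorizer(n_features=8, norm=None)
print(tiny_hash.transform(['the weather is sweet']).toarray())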
# full size
num_batches = 45
batch_size = 1000
test_size = 5000
# subset if the fullset took too long to run
batch_size = 100
test_size = 500
import pyprind
pbar = pyprind.ProgBar(num_batches)
classes = np.array([0, 1])
for _ in range(num_batches):
X_train, y_train = get_minibatch(doc_stream, size=batch_size)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=test_size)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
Explanation: Start out-of-core learning
Training: 45,000 samples
Test: 5,000 samples
End of explanation |
13,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1"><a href="#Task-1.-Compiling-Ebola-Data"><span class="toc-item-num">Task 1. </span>Compiling Ebola Data</a></div>
<div class="lev1"><a href="#Task-2.-RNA-Sequences"><span class="toc-item-num">Task 2. </span>RNA Sequences</a></div>
<div class="lev1"><a href="#Task-3.-Class-War-in-Titanic"><span class="toc-item-num">Task 3. </span>Class War in Titanic</a></div></p>
Step1: Task 1. Compiling Ebola Data
The DATA_FOLDER/ebola folder contains summarized reports of Ebola cases from three countries (Guinea, Liberia and Sierra Leone) during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.
Use pandas to import these data files into a single Dataframe.
Using this DataFrame, calculate for each country, the daily average per month of new cases and deaths.
Make sure you handle all the different expressions for new cases and deaths that are used in the reports.
First, we define some helpful functions that will help us during the parsing of the data.
- get_files
Step2: sum_row
Step3: Now, we define for each country a function, which, for a given file, returns a dictionnary with the country, date, upper and lower bounds for the new cases, and upper and lower bounds for the new deaths.
As we don't know if the new cases / deaths for the 'probable' and 'suspected' cases is reliable, we decided to create an upper bound with the sum of the 'confirmed', 'probable' and 'suspected' new cases / deaths, and a lower bound with only the 'confirmed' new cases / deaths.
The structure of these functions are the same for each country, only the name of the descrption of the data changes.
Step4: As the files for the Sierra Leone does not contain data for the new deaths, we first extract the total deaths for each day, and we will process them later to get the new deaths.
Step5: We now transform the data for the Sierra Leone
Step6: We can now insert the data in a dataframe. For Liberia, December's data is in a completely different format so we dropped it
Step7: Finally, to have some final general idea for the data, we average the bounds.
Step8: Task 2. RNA Sequences
In the DATA_FOLDER/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10<sup>th</sup> file that describes the content of each.
Use pandas to import the first 9 spreadsheets into a single DataFrame.
Then, add the metadata information from the 10<sup>th</sup> spreadsheet as columns in the combined DataFrame.
Make sure that the final DataFrame has a unique index and all the NaN values have been replaced by the tag unknown.
We load the first spreadsheet from the file's Sheet 1. Then we add a new column that is the same for all the data in this import, which corresponds to the barcode of the code.
Then we rename the columns for more clarity.
Step9: Now we repeat this operation for every other spreadsheet except the metadata. At each iteration we simply concatenate the data at the end of the previous data, this accumulating all the files' data into a single dataframe. We don't care about any index right now since we will use a random one later.
Step10: Finally, we do a merge with the metadata. We join on the BARCODE column. This column will be the index of the metadata when we import it in this case. Finally we set the index for the three columns BARCODE, GROUP and SAMPLE which are all the columns of the metada and are unique.
The only NaN value we found was the NA value on the metadata, which may indicate that there is no sample for the first group. We replaced it anyway by unknown.
Step11: Task 3. Class War in Titanic
Use pandas to import the data file Data/titanic.xls. It contains data on all the passengers that travelled on the Titanic.
Step12: For each of the following questions state clearly your assumptions and discuss your findings
Step13: Next we can list the data types of each field.
Step14: When it comes to the object fields, we can be a bit more precise. name, sec, ticket, cabin, embarked, boat and home.dex are all strings.
Next, we call the describe method to list some statistics on the data. We thus obtain the range of all of the numeric fields of our data.
Step15: Moreover, we can also note some ranges of other fields. For example, sex has only two possible values female and male. embarked can only be S, C and Q.
For a better visual result, we decided to replace the travel classes, ports to more readable values. As we make them categorical types, the performance stays the same.
Step16: Then we make categorical data as actually categorical.
Step17: 2. We plot the histogram of the travel class.
Step18: Next we plot the histogram of the three embark ports.
Step19: Next we plot the histogram of the sex.
Step20: Next, we cut the ages data into decades and plot the histogram of the devades.
Step21: 3. We plot the cabin floor data as a pie chart.
Step22: 4. Here, we plot the proportion of people that survived in the first class.
Step23: Next, we plot the proportion of people that survived in the second class.
Step24: Finally, we plot the proportion of people that survived in the third class.
Step25: As we can see, the lower the class, the higher the probability of death.
5. Here we add new columns that will help us to calculate proportions of survived people in the last part.
Step26: Here we set these new columns to appropriate values. We essentialy separate the survived columns for easier summing later on. Finnaly we slice the data to take only the columns of interest.
Step27: We group the data by the sec and class of the passangers and we sum it. Then we have the sum of alive and dead people groupped as we wish and we can easily calculate the proportion of them that survived, which we plot as a histogram.
Step28: We can see that there is a huge difference of survival between the classes and sexes
Step29: Next, we set the correct category to people below or above the median age. The people that have the median age are grouped with the people below it. Next we set this column as a categorical column.
Step30: Next, we take the columns that are of interest to us and group by age category, sec and travel class. Then we sum over these groups, obtaining the people that lived and those that died which which we can compute the proportion and display it as a dataframe. | Python Code:
DATA_FOLDER = 'Data' # Use the data folder provided in Tutorial 02 - Intro to Pandas.
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
from dateutil.parser import parse
from os import listdir
from os.path import isfile, join
sns.set_context('notebook')
Explanation: Table of Contents
<p><div class="lev1"><a href="#Task-1.-Compiling-Ebola-Data"><span class="toc-item-num">Task 1. </span>Compiling Ebola Data</a></div>
<div class="lev1"><a href="#Task-2.-RNA-Sequences"><span class="toc-item-num">Task 2. </span>RNA Sequences</a></div>
<div class="lev1"><a href="#Task-3.-Class-War-in-Titanic"><span class="toc-item-num">Task 3. </span>Class War in Titanic</a></div></p>
End of explanation
def get_files(country):
path = DATA_FOLDER + "/ebola/" + country + "_data/"
return [f for f in listdir(path) if isfile(join(path, f))]
Explanation: Task 1. Compiling Ebola Data
The DATA_FOLDER/ebola folder contains summarized reports of Ebola cases from three countries (Guinea, Liberia and Sierra Leone) during the recent outbreak of the disease in West Africa. For each country, there are daily reports that contain various information about the outbreak in several cities in each country.
Use pandas to import these data files into a single Dataframe.
Using this DataFrame, calculate for each country, the daily average per month of new cases and deaths.
Make sure you handle all the different expressions for new cases and deaths that are used in the reports.
First, we define some helpful functions that will help us during the parsing of the data.
- get_files: returns all the .csv files for a given country
End of explanation
def sum_row(row, total_col):
return float(row[total_col].values[0])
def sum_rows(rows, total_col):
tot = 0
for row in rows:
tot += sum_row(row, total_col)
return tot
Explanation: sum_row: for a given row, returns the total value for the new cases / deaths. We first defined this function as the sum of all new cases / deaths in all provinces, but we discovered some strange data for some provinces, so we decided to only take into account the 'total' column
sum_rows: sum all the rows given in argument
End of explanation
def get_row_guinea(file):
country = 'guinea'
date = file[:10]
raw = pd.read_csv(DATA_FOLDER + "/ebola/" + country + "_data/" + file)
total_col = "Totals"
new_cases_lower = sum_row(raw[raw.Description == "New cases of confirmed"], total_col)
new_cases_upper = sum_row(raw[raw.Description == "Total new cases registered so far"], total_col)
new_deaths_lower = sum_row(raw[(raw.Description == "New deaths registered today (confirmed)") | (raw.Description == "New deaths registered")], total_col)
new_deaths_upper = sum_row(raw[(raw.Description == "New deaths registered today") | (raw.Description == "New deaths registered")], total_col)
return {'Country' : country, 'Date' : parse(date), 'NewCasesLower' : new_cases_lower, 'NewCasesUpper' : new_cases_upper, 'NewDeathsLower' : new_deaths_lower, 'NewDeathsUpper' : new_deaths_upper}
def get_row_liberia(file):
country = 'liberia'
date = file[:10]
raw = pd.read_csv(DATA_FOLDER + "/ebola/" + country + "_data/" + file).fillna(0)
total_col = "National"
new_cases_lower = sum_row(raw[raw.Variable == "New case/s (confirmed)"], total_col)
list_cases_upper = (["New Case/s (Suspected)",
"New Case/s (Probable)",
"New case/s (confirmed)"])
new_cases_upper = sum_rows([raw[raw.Variable == row] for row in list_cases_upper], total_col)
new_deaths_lower = sum_row(raw[raw.Variable == "Newly reported deaths"], total_col)
new_deaths_upper = new_deaths_lower
return {'Country' : country, 'Date' : parse(date), 'NewCasesLower' : new_cases_lower, 'NewCasesUpper' : new_cases_upper, 'NewDeathsLower' : new_deaths_lower, 'NewDeathsUpper' : new_deaths_upper}
Explanation: Now, we define for each country a function which, for a given file, returns a dictionary with the country, date, upper and lower bounds for the new cases, and upper and lower bounds for the new deaths.
As we don't know whether the new cases / deaths for the 'probable' and 'suspected' cases are reliable, we decided to create an upper bound with the sum of the 'confirmed', 'probable' and 'suspected' new cases / deaths, and a lower bound with only the 'confirmed' new cases / deaths.
The structure of these functions is the same for each country; only the name of the description of the data changes.
End of explanation
def get_row_sl(file):
country = 'sl'
date = file[:10]
raw = pd.read_csv(DATA_FOLDER + "/ebola/" + country + "_data/" + file).fillna(0)
total_col = "National"
new_cases_lower = sum_row(raw[raw.variable == "new_confirmed"], total_col)
list_cases_upper = (["new_suspected",
"new_probable",
"new_confirmed"])
new_cases_upper = sum_rows([raw[raw.variable == row] for row in list_cases_upper], total_col)
list_death_upper = (["death_suspected",
"death_probable",
"death_confirmed"])
total_death_upper = sum_rows([raw[raw.variable == row] for row in list_death_upper], total_col)
total_death_lower = sum_row(raw[raw.variable == "death_confirmed"], total_col)
return {'Country' : country, 'Date' : parse(date), 'NewCasesLower' : new_cases_lower, 'NewCasesUpper' : new_cases_upper, 'TotalDeathLower' : total_death_lower, 'TotalDeathUpper' : total_death_upper}
rows_guinea = [get_row_guinea(file) for file in get_files("guinea")]
rows_liberia = [get_row_liberia(file) for file in get_files("liberia")]
Explanation: As the files for Sierra Leone do not contain data for the new deaths, we first extract the total deaths for each day, and we will process them later to get the new deaths.
End of explanation
rows_sl_total_deaths = [get_row_sl(file) for file in get_files("sl")]
dic_sl_total_deaths = {}
for row in rows_sl_total_deaths:
dic_sl_total_deaths[row['Date']] = row
rows_sl = []
for date, entry in dic_sl_total_deaths.items():
date_before = date - datetime.timedelta(days=1)
if date_before in dic_sl_total_deaths:
if entry['TotalDeathUpper'] != 0 and dic_sl_total_deaths[date_before]['TotalDeathUpper'] != 0 and entry['TotalDeathLower'] != 0 and dic_sl_total_deaths[date_before]['TotalDeathLower'] != 0:
copy = dict(entry)
del copy['TotalDeathUpper']
del copy['TotalDeathLower']
copy['NewDeathsUpper'] = entry['TotalDeathUpper'] - dic_sl_total_deaths[date_before]['TotalDeathUpper']
copy['NewDeathsLower'] = entry['TotalDeathLower'] - dic_sl_total_deaths[date_before]['TotalDeathLower']
rows_sl.append(copy)
Explanation: We now transform the data for Sierra Leone:
- we first create a new dictionary whose keys are dates and whose values are the previously extracted values from the .csv files.
- then for each value in this dictionary, we try to get the value of the day before, and perform the difference to get the new deaths of this day.
End of explanation
raw_dataframe = pd.DataFrame(columns=['Country', 'Date', 'NewCasesLower', 'NewCasesUpper', 'NewDeathsLower', 'NewDeathsUpper'])
for row in rows_sl, rows_guinea:
raw_dataframe = raw_dataframe.append(row, ignore_index = True)
for row in rows_liberia:
if row['Date'].month != 12: #December data is erroneous
raw_dataframe = raw_dataframe.append(row, ignore_index = True)
raw_dataframe
dataframe = raw_dataframe.set_index(['Country', 'Date'])
dataframe_no_day = raw_dataframe
dataframe_no_day['Year'] = raw_dataframe['Date'].apply(lambda x: x.year)
dataframe_no_day['Month'] = raw_dataframe['Date'].apply(lambda x: x.month)
final_df = dataframe_no_day[['Country', 'Year', 'Month', 'NewCasesLower', 'NewCasesUpper', 'NewDeathsLower', 'NewDeathsUpper']].groupby(['Country', 'Year', 'Month']).mean()
final_df
Explanation: We can now insert the data in a dataframe. For Liberia, December's data is in a completely different format so we dropped it: for instance, for some days, the new cases are the new cases for the day and for some other they are the total cases for this country.
End of explanation
s1 = final_df[['NewCasesLower', 'NewCasesUpper']].mean(axis=1)
s2 = final_df[['NewDeathsLower', 'NewDeathsUpper']].mean(axis=1)
final = pd.concat([s1, s2], axis=1)
final.columns = ['NewCasesAverage', 'NewDeathsAverage']
final
Explanation: Finally, to get a general idea of the data, we average the bounds.
End of explanation
mid = pd.read_excel(DATA_FOLDER + '/microbiome/MID1.xls', sheetname='Sheet 1', header=None)
mid.fillna('unknown', inplace=True)
mid['BARCODE'] = 'MID1'
mid.columns = ['Taxon', 'Count', 'BARCODE']
Explanation: Task 2. RNA Sequences
In the DATA_FOLDER/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10<sup>th</sup> file that describes the content of each.
Use pandas to import the first 9 spreadsheets into a single DataFrame.
Then, add the metadata information from the 10<sup>th</sup> spreadsheet as columns in the combined DataFrame.
Make sure that the final DataFrame has a unique index and all the NaN values have been replaced by the tag unknown.
We load the first spreadsheet from the file's Sheet 1. Then we add a new column that is the same for all the data in this import, which corresponds to the barcode of the code.
Then we rename the columns for more clarity.
End of explanation
for i in range(2, 10):
midi = pd.read_excel(DATA_FOLDER + '/microbiome/MID' + str(i) + '.xls', sheetname='Sheet 1', header=None)
midi.fillna('unknown', inplace=True)
midi['BARCODE'] = 'MID' + str(i)
midi.columns = ['Taxon', 'Count', 'BARCODE']
mid = pd.concat([mid, midi])
Explanation: Now we repeat this operation for every other spreadsheet except the metadata. At each iteration we simply concatenate the data at the end of the previous data, this accumulating all the files' data into a single dataframe. We don't care about any index right now since we will use a random one later.
End of explanation
metadata = pd.read_excel(DATA_FOLDER + '/microbiome/metadata.xls', sheetname='Sheet1', index_col=0)
metadata.fillna('unknown', inplace=True)
merged = pd.merge(mid, metadata, right_index=True, left_on='BARCODE')
merged = merged.set_index(keys=['BARCODE', 'Taxon'])
merged
Explanation: Finally, we do a merge with the metadata, joining on the BARCODE column (which is the index of the metadata when we import it). We then set the index to the BARCODE and Taxon columns, which together form a unique index.
The only NaN value we found was the NA value in the metadata, which may indicate that there is no sample for the first group. We replaced it with unknown anyway.
End of explanation
from IPython.core.display import HTML
HTML(filename=DATA_FOLDER+'/titanic.html')
Explanation: Task 3. Class War in Titanic
Use pandas to import the data file Data/titanic.xls. It contains data on all the passengers that travelled on the Titanic.
End of explanation
titanic = pd.read_excel(DATA_FOLDER + '/titanic.xls', sheetname='titanic')
titanic
Explanation: For each of the following questions state clearly your assumptions and discuss your findings:
1. Describe the type and the value range of each attribute. Indicate and transform the attributes that can be Categorical.
2. Plot histograms for the travel class, embarkation port, sex and age attributes. For the latter one, use discrete decade intervals.
3. Calculate the proportion of passengers by cabin floor. Present your results in a pie chart.
4. For each travel class, calculate the proportion of the passengers that survived. Present your results in pie charts.
5. Calculate the proportion of the passengers that survived by travel class and sex. Present your results in a single histogram.
6. Create 2 equally populated age categories and calculate survival proportions by age category, travel class and sex. Present your results in a DataFrame with unique index.
1. We start by importing the data from the file.
End of explanation
titanic.dtypes
Explanation: Next we can list the data types of each field.
End of explanation
titanic.describe()
Explanation: When it comes to the object fields, we can be a bit more precise: name, sex, ticket, cabin, embarked, boat and home.dest are all strings.
Next, we call the describe method to list some statistics on the data. We thus obtain the range of all of the numeric fields of our data.
End of explanation
class_dic = {1 : 'First Class', 2 : 'Second Class', 3 : 'Third Class', np.nan : np.nan}
survived_dic = {0 : 'Deceased' , 1 : 'Survived', np.nan : np.nan}
emarked_dic = {'C' : 'Cherbourg', 'Q' : 'Queenstown', 'S' : 'Southampton', np.nan : np.nan}
titanic['pclass'] = titanic['pclass'].apply(lambda x: class_dic[x])
titanic['survived'] = titanic['survived'].apply(lambda x: survived_dic[x])
titanic['embarked'] = titanic['embarked'].apply(lambda x: emarked_dic[x])
Explanation: Moreover, we can also note some ranges of other fields. For example, sex has only two possible values female and male. embarked can only be S, C and Q.
For a better visual result, we decided to replace the travel classes and embarkation ports with more readable values. As we make them categorical types, the performance stays the same.
End of explanation
titanic['pclass'] = titanic.pclass.astype('category')
titanic['survived'] = titanic.survived.astype('category')
titanic['sex'] = titanic.sex.astype('category')
titanic['embarked'] = titanic.embarked.astype('category')
titanic['cabin'] = titanic.cabin.astype('category')
titanic['boat'] = titanic.boat.astype('category')
Explanation: Then we mark the categorical fields as actual categorical types.
End of explanation
titanic.pclass.value_counts(sort=False).plot(kind='bar')
Explanation: 2. We plot the histogram of the travel class.
End of explanation
titanic.embarked.value_counts().plot(kind='bar')
Explanation: Next we plot the histogram of the three embark ports.
End of explanation
titanic.sex.value_counts().plot(kind='bar')
Explanation: Next we plot the histogram of the sex.
End of explanation
pd.cut(titanic.age, range(0,90,10)).value_counts(sort=False).plot(kind='bar')
Explanation: Next, we cut the age data into decades and plot the histogram of the decades.
End of explanation
titanic.cabin.dropna().apply(lambda x : x[0]).value_counts(sort=False).plot(kind='pie')
Explanation: 3. We plot the cabin floor data as a pie chart.
End of explanation
titanic[titanic.pclass == "First Class"].survived.value_counts(sort=False).plot(kind='pie')
Explanation: 4. Here, we plot the proportion of people that survived in the first class.
End of explanation
titanic[titanic.pclass == "Second Class"].survived.value_counts(sort=False).plot(kind='pie')
Explanation: Next, we plot the proportion of people that survived in the second class.
End of explanation
titanic[titanic.pclass == "Third Class"].survived.value_counts(sort=False).plot(kind='pie')
Explanation: Finally, we plot the proportion of people that survived in the third class.
End of explanation
titanic.insert(0, 'alive', 0)
titanic.insert(0, 'dead', 0)
titanic.insert(0, 'ratio', 0)
Explanation: As we can see, the lower the class, the higher the probability of death.
5. Here we add new columns that will help us to calculate proportions of survived people in the last part.
End of explanation
titanic.loc[titanic['survived'] == "Survived", 'alive'] = 1
titanic.loc[titanic['survived'] == "Deceased", 'dead'] = 1
df = titanic[['pclass', 'sex', 'alive', 'dead', 'ratio']]
Explanation: Here we set these new columns to appropriate values. We essentially split the survived column into separate alive and dead columns for easier summing later on. Finally we slice the data to take only the columns of interest.
End of explanation
aggregated = df.groupby(['sex', 'pclass']).sum()
(aggregated['alive'] / (aggregated['alive'] + aggregated['dead'])).plot(kind='bar')
Explanation: We group the data by the sex and class of the passengers and we sum it. Then we have the counts of alive and dead people grouped as we wish, and we can easily calculate the proportion of them that survived, which we plot as a histogram.
End of explanation
titanic.dropna(axis=0, subset=['age'], inplace=True)
titanic.insert(0, 'age_category', 0)
median = titanic['age'].median()
Explanation: We can see that there is a huge difference in survival between the classes and sexes: for instance, third-class males have about 7 times less chance of survival than first-class females.
6. Next we insert a new column that will be the age category of each person. Since we want to split the people into two equally populated groups based on age, we compute the median age of the passengers. We also drop the passengers with an unknown age value, to avoid bad results for the median computation.
End of explanation
titanic.loc[titanic['age'] > median, 'age_category'] = "Age > " + str(median)
titanic.loc[titanic['age'] <= median, 'age_category'] = "Age <= " + str(median)
titanic['age_category'] = titanic.age_category.astype('category')
Explanation: Next, we set the correct category to people below or above the median age. The people that have the median age are grouped with the people below it. Next we set this column as a categorical column.
End of explanation
sub = titanic[['pclass', 'sex', 'age_category', 'alive', 'dead', 'ratio']]
subagg = sub.groupby(['age_category', 'sex', 'pclass']).sum()
subagg['ratio'] = (subagg['alive'] / (subagg['alive'] + subagg['dead']))
only_ratio = subagg[['ratio']]
only_ratio
Explanation: Next, we take the columns that are of interest to us and group by age category, sex and travel class. Then we sum over these groups, obtaining the counts of people that lived and those that died, with which we can compute the proportion and display it as a dataframe.
End of explanation |
13,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimization Exercise 1
Imports
Step1: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential"
Step2: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$
Step3: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beatiful and effective. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
from scipy.optimize import minimize, rosen, rosen_der
Explanation: Optimization Exercise 1
Imports
End of explanation
def hat(x,a,b):
return -a*(x**2) + b*(x**4)
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
Explanation: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function:
End of explanation
a = 5.0
b = 1.0
x = np.arange(-3, 3, 0.1)
y = hat(x,a,b)
plt.plot(x,y)
assert True # leave this to grade the plot
Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
End of explanation
n = minimize(hat, x, (a,b), method='BFGS')
n.x
assert True # leave this for grading the plot
Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective.
End of explanation |
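One hedged way to complete these steps, shown only as a sketch that reuses the hat, a and b definitions above (the starting guesses are an assumption), is to run the optimizer from a point near each well and overlay the results on the curve:
# Sketch: locate each local minimum from a nearby scalar starting guess,
# then plot the potential in blue with the minima marked as red circles.
guesses = [-2.0, 2.0]
minima_x = [minimize(lambda z: hat(z[0], a, b), g).x[0] for g in guesses]
print(minima_x)
xs = np.linspace(-3, 3, 200)
plt.plot(xs, hat(xs, a, b), 'b-', label='$V(x)$')
plt.plot(minima_x, [hat(m, a, b) for m in minima_x], 'ro', label='minima')
plt.xlabel('$x$')
plt.ylabel('$V(x)$')
plt.legend()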
13,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demo code for network-to-network information transfer for supplementary figure 1
Takuya Ito
04/19/2017
Step1: 0.0 Basic parameters
Step2: 1.0 Run information transfer mapping procedure
1.1 First load in resting-state FC matrices for subjects
Step4: 1.2 Perform network-to-network information transfer mapping procedure using python module
1.2.1 First construct a wrapper to pass through a multiprocessing scheme
Step5: 1.2.2 Run using multiprocessing (parallel processing) to speed-up computation
Step6: 2.0 Compute group statistics
Keep track of networks to matrix indices
Step7: 2.1 Perform multiple comparisons (using false discovery rate)
Step8: 2.2 Plot results | Python Code:
import sys
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import statsmodels.sandbox.stats.multicomp as mc
import multiprocessing as mp
%matplotlib inline
import os
os.environ['OMP_NUM_THREADS'] = str(1)
import warnings
warnings.filterwarnings('ignore')
import networkinformationtransfer as n2n
from matplotlib.colors import Normalize
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
Explanation: Demo code for network-to-network information transfer for supplementary figure 1
Takuya Ito
04/19/2017
End of explanation
# Set basic parameters
datadir = './data/'
runLength = 4648
subjNums = ['032', '033', '037', '038', '039', '045',
'013', '014', '016', '017', '018', '021',
'023', '024', '025', '026', '027', '031',
'035', '046', '042', '028', '048', '053',
'040', '049', '057', '062', '050', '030', '047', '034']
# Load in network array
networkdef = np.loadtxt(datadir + 'network_array.csv', delimiter=',')
# Load in network keys (each network associated with a number in network array)
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud1':8, 'aud2':9, 'dan':11}
# Force aud2 key to be the same as aud1 (merging two auditory networks)
aud2_ind = np.where(networkdef==networkmappings['aud2'])[0]
networkdef[aud2_ind] = networkmappings['aud1']
# Redefine new network mappings with no aud1/aud2 distinction
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'dan':11}
Explanation: 0.0 Basic parameters
End of explanation
fcmat = {}
for subj in subjNums:
fcmat[subj] = np.loadtxt('data/FC_Estimates/' + subj + '_multregconn_restfc.csv', delimiter=',')
Explanation: 1.0 Run information transfer mapping procedure
1.1 First load in resting-state FC matrices for subjects
End of explanation
def informationTransferMappingWrapper((subj,fcmat)):
"""A wrapper so we can use multiprocessing to run subjects in parallel"""
out = n2n.networkToNetworkInformationTransferMapping(subj,fcmat,null=False)
return out
Explanation: 1.2 Perform network-to-network information transfer mapping procedure using python module
1.2.1 First construct a wrapper to pass through a multiprocessing scheme
End of explanation
inputs = []
for subj in subjNums:
inputs.append((subj,fcmat[subj]))
pool = mp.Pool(processes=32)
results = pool.map_async(informationTransferMappingWrapper,inputs).get()
pool.close()
pool.join()
# Collect results
ruledims = ['logic','sensory','motor']
ite_matrix = {}
for ruledim in ruledims:
ite_matrix[ruledim] = np.zeros((len(networkmappings),len(networkmappings),len(subjNums)))
scount = 0
for result in results:
for ruledim in ruledims:
ite_matrix[ruledim][:,:,scount] = result[ruledim]
scount += 1
Explanation: 1.2.2 Run using multiprocessing (parallel processing) to speed-up computation
End of explanation
# Create dictionary that reflects network ordering for matrix rows and columns
netkeys = {0:'vis',1:'smn',2:'con',3:'dmn',4:'fpn', 5:'aud', 6:'dan'}
num_networks=len(netkeys)
baseline = 0.0
avg_rho = {}
tstats = {}
pvals = {}
for ruledim in ruledims:
avg_rho[ruledim] = np.zeros((num_networks,num_networks))
tstats[ruledim] = np.zeros((num_networks,num_networks))
pvals[ruledim] = np.zeros((num_networks,num_networks))
for net1 in netkeys:
for net2 in netkeys:
# Skip if net1 and net2
if net1==net2:
avg_rho[ruledim][net1,net2] = np.nan
tstats[ruledim][net1,net2] = np.nan
pvals[ruledim][net1,net2] = np.nan
continue
# Store results
avg_rho[ruledim][net1,net2] = np.mean(ite_matrix[ruledim][net1,net2,:])
t, p = stats.ttest_1samp(ite_matrix[ruledim][net1,net2,:],0)
# One-sided t-test
tstats[ruledim][net1,net2] = t
if t>0:
p=p/2.0
else:
p = 1-p/2.0
pvals[ruledim][net1,net2] = p
Explanation: 2.0 Compute group statistics
Keep track of networks to matrix indices
End of explanation
# Compute group stats
baseline = 0.0
triu_indices = np.triu_indices(len(networkmappings),k=1)
tril_indices = np.tril_indices(len(networkmappings),k=-1)
qmat = {}
for ruledim in ruledims:
qmat[ruledim] = np.zeros((num_networks,num_networks))
tmpq = []
tmpq.extend(pvals[ruledim][triu_indices])
tmpq.extend(pvals[ruledim][tril_indices])
tmpq = mc.fdrcorrection0(tmpq)[1]
qmat[ruledim][triu_indices] = tmpq[0:len(triu_indices[0])]
qmat[ruledim][tril_indices] = tmpq[len(triu_indices[0]):]
Explanation: 2.1 Perform multiple comparisons (using false discovery rate)
End of explanation
for ruledim in ruledims:
plt.figure(figsize=(12,10))
# First visualize unthresholded results
plt.subplot(121)
plt.title('NetworkToNetwork Information Transfer Mapping\n' + ruledim + ' domain', fontsize=14, y=1.04)
mat = avg_rho[ruledim]
np.fill_diagonal(mat,0)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat, origin='lower',norm=norm, vmin=0, cmap='seismic', interpolation='none')
plt.xticks(netkeys.keys(),netkeys.values())
plt.yticks(netkeys.keys(),netkeys.values())
plt.ylabel('Source Network',fontsize=16)
plt.xlabel('Target Network',fontsize=16)
plt.colorbar(fraction=.046)
# Next visualize thresholded results (after multiple comparisons)
plt.subplot(122)
plt.title('NetworkToNetwork Information Transfer Mapping\n' + ruledim + ' domain' , fontsize=14, y=1.04)
mat = avg_rho[ruledim]
thresh = qmat[ruledim] < 0.05
# Threshold using q < 0.05
mat = np.multiply(mat,thresh)
np.fill_diagonal(mat,0)
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat, origin='lower',norm=norm, vmin=0, cmap='seismic', interpolation='none')
plt.xticks(netkeys.keys(),netkeys.values())
plt.yticks(netkeys.keys(),netkeys.values())
plt.ylabel('Source Network',fontsize=16)
plt.xlabel('Target Network',fontsize=16)
plt.colorbar(fraction=.046)
plt.tight_layout()
Explanation: 2.2 Plot results
End of explanation |
13,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing the ground state energies obtained by density matrix renormalization group, exact diagonalization, and an SDP hierarchy
We would like to compare the ground state energy of the following spinless fermionic system [1]
Step1: For now, we are only interested in relatively small systems, we will try lattice sizes between $2\times 2$ and $5\times 5$. With this, we set the parameters for DMRG and ED
Step2: We will need a helper function to extract the ground state energy from the solutions
Step3: We invoke the solvers and extract the ground state energies from the solutions. First we use exact diagonalization, which, unfortunately does not scale beyond a lattice size of $4\times 4$.
Step4: DMRG scales to all the lattice sizes we want
Step5: Calculating the ground state energy with SDP
The ground state energy problem can be rephrased as a polynomial optimiziation problem of noncommuting variables. We use Ncpol2sdpa to translate this optimization problem to a sparse SDP relaxation [4]. The relaxation is solved with SDPA, a high-performance SDP solver that deals with sparse problems efficiently [5]. First we need to import a few more functions
Step6: We set the additional parameters for this formulation, including the order of the relaxation
Step7: Then we iterate over the lattice range, defining a new Hamiltonian and new constraints in each step
Step8: Comparison
The level-one relaxation matches the ground state energy given by DMRG and ED. | Python Code:
import pyalps
Explanation: Comparing the ground state energies obtained by density matrix renormalization group, exact diagonalization, and an SDP hierarchy
We would like to compare the ground state energy of the following spinless fermionic system [1]:
$H_{\mathrm{free}}=\sum_{<rs>}\left[c_{r}^{\dagger} c_{s}+c_{s}^{\dagger} c_{r}-\gamma(c_{r}^{\dagger} c_{s}^{\dagger}+c_{s}c_{r} )\right]-2\lambda\sum_{r}c_{r}^{\dagger}c_{r},$
where $<rs>$ goes through nearest neighbour pairs in a two-dimensional lattice. The fermionic operators are subject to the following constraints:
${c_{r}, c_{s}^{\dagger}}=\delta_{rs}I_{r}$
${c_r^\dagger, c_s^\dagger}=0,$
${c_{r}, c_{s}}=0.$
Our primary goal is to benchmark the SDP hierarchy of Reference [2]. The baseline methods are density matrix renormalization group (DMRG) and exact diagonalization (ED), both of which are included in Algorithms and Libraries for Physics Simulations (ALPS, [3]). The range of predefined Hamiltonians is limited, so we simplify the equation by setting $\gamma=0$.
Prerequisites
To run this notebook, ALPS, Sympy, Scipy, and SDPA must be installed. A recent version of Ncpol2sdpa is also necessary.
Calculating the ground state energy with DMRG and ED
DMRG and ED are included in ALPS. To start the calculations, we need to import the Python interface:
End of explanation
lattice_range = [2, 3, 4, 5]
parms = [{
'LATTICE' : "open square lattice", # Set up the lattice
'MODEL' : "spinless fermions", # Select the model
'L' : L, # Lattice dimension
't' : -1 , # This and the following
'mu' : 2, # are parameters to the
'U' : 0 , # Hamiltonian.
'V' : 0,
'Nmax' : 2 , # These parameters are
'SWEEPS' : 20, # specific to the DMRG
'MAXSTATES' : 300, # solver.
'NUMBER_EIGENVALUES' : 1,
'MEASURE_ENERGY' : 1
} for L in lattice_range ]
Explanation: For now, we are only interested in relatively small systems; we will try lattice sizes between $2\times 2$ and $5\times 5$. With this, we set the parameters for DMRG and ED
End of explanation
def extract_ground_state_energies(data):
E0 = []
for Lsets in data:
allE = []
for q in pyalps.flatten(Lsets):
allE.append(q.y[0])
E0.append(allE[0])
return sorted(E0, reverse=True)
Explanation: We will need a helper function to extract the ground state energy from the solutions:
End of explanation
prefix_sparse = 'comparison_sparse'
input_file_sparse = pyalps.writeInputFiles(prefix_sparse, parms[:-1])
res = pyalps.runApplication('sparsediag', input_file_sparse)
sparsediag_data = pyalps.loadEigenstateMeasurements(
pyalps.getResultFiles(prefix=prefix_sparse))
sparsediag_ground_state_energy = extract_ground_state_energies(sparsediag_data)
sparsediag_ground_state_energy.append(0)
Explanation: We invoke the solvers and extract the ground state energies from the solutions. First we use exact diagonalization, which, unfortunately does not scale beyond a lattice size of $4\times 4$.
End of explanation
prefix_dmrg = 'comparison_dmrg'
input_file_dmrg = pyalps.writeInputFiles(prefix_dmrg, parms)
res = pyalps.runApplication('dmrg',input_file_dmrg)
dmrg_data = pyalps.loadEigenstateMeasurements(
pyalps.getResultFiles(prefix=prefix_dmrg))
dmrg_ground_state_energy = extract_ground_state_energies(dmrg_data)
Explanation: DMRG scales to all the lattice sizes we want:
End of explanation
from sympy.physics.quantum.dagger import Dagger
from ncpol2sdpa import SdpRelaxation, generate_operators, \
fermionic_constraints, get_neighbors
Explanation: Calculating the ground state energy with SDP
The ground state energy problem can be rephrased as a polynomial optimiziation problem of noncommuting variables. We use Ncpol2sdpa to translate this optimization problem to a sparse SDP relaxation [4]. The relaxation is solved with SDPA, a high-performance SDP solver that deals with sparse problems efficiently [5]. First we need to import a few more functions:
End of explanation
level = 1
gam, lam = 0, 1
Explanation: We set the additional parameters for this formulation, including the order of the relaxation:
End of explanation
sdp_ground_state_energy = []
for lattice_dimension in lattice_range:
n_vars = lattice_dimension * lattice_dimension
C = generate_operators('C%s' % (lattice_dimension), n_vars)
hamiltonian = 0
for r in range(n_vars):
hamiltonian -= 2*lam*Dagger(C[r])*C[r]
for s in get_neighbors(r, lattice_dimension):
hamiltonian += Dagger(C[r])*C[s] + Dagger(C[s])*C[r]
hamiltonian -= gam*(Dagger(C[r])*Dagger(C[s]) + C[s]*C[r])
substitutions = fermionic_constraints(C)
sdpRelaxation = SdpRelaxation(C)
sdpRelaxation.get_relaxation(level, objective=hamiltonian, substitutions=substitutions)
sdpRelaxation.solve()
sdp_ground_state_energy.append(sdpRelaxation.primal)
Explanation: Then we iterate over the lattice range, defining a new Hamiltonian and new constraints in each step:
End of explanation
data = [dmrg_ground_state_energy,\
sparsediag_ground_state_energy,\
sdp_ground_state_energy]
labels = ["DMRG", "ED", "SDP"]
print ("{:>4} {:>9} {:>10} {:>10} {:>10}").format("", *lattice_range)
for label, row in zip(labels, data):
print ("{:>4} {:>7.6f} {:>7.6f} {:>7.6f} {:>7.6f}").format(label, *row)
Explanation: Comparison
The level-one relaxation matches the ground state energy given by DMRG and ED.
End of explanation |
13,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib Exercise 3
Imports
Step2: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is
Step3: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction
Step4: Next make a visualization using one of the pcolor functions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Matplotlib Exercise 3
Imports
End of explanation
def well2d(x, y, nx, ny, L=1.0):
"""Compute the 2d quantum well wave function."""
i=np.sin(nx*np.pi*x/L)
o=np.sin(ny*np.pi*y/L)
return((2/L)*i*o)
psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1)
assert len(psi)==10
assert psi.shape==(10,)
Explanation: Contour plots of 2d wavefunctions
The wavefunction of a 2d quantum well is:
$$ \psi_{n_x,n_y}(x,y) = \frac{2}{L}
\sin{\left( \frac{n_x \pi x}{L} \right)}
\sin{\left( \frac{n_y \pi y}{L} \right)} $$
This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well.
Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays.
End of explanation
x = np.linspace(0,1,100)
y=np.linspace(0,1,100)
X, Y = np.meshgrid(x, y)
P = well2d(X, Y, 3, 2, L=1.0)
plt.contourf(X, Y, P)
plt.colorbar()
plt.set_cmap('cool')
plt.tight_layout()
plt.title('Wave Function for Different x and y combinations')
plt.xlabel('x')
plt.ylabel('y')
assert True # use this cell for grading the contour plot
Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction:
Use $n_x=3$, $n_y=2$ and $L=1.0$.
Use the limits $[0,1]$ for the x and y axis.
Customize your plot to make it effective and beautiful.
Use a non-default colormap.
Add a colorbar to your visualization.
First make a plot using one of the contour functions:
End of explanation
plt.pcolormesh(X, Y, P)
plt.colorbar()
plt.set_cmap('seismic')
plt.tight_layout()
plt.title('Wave Function for Different x and y combinations')
plt.xlabel('x')
plt.ylabel('y')
assert True # use this cell for grading the pcolor plot
Explanation: Next make a visualization using one of the pcolor functions:
End of explanation |
13,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
seaborn.swarmplot
Violinplots summarize numeric data over a set of categories. They are essentially a box plot with a kernel density estimate (KDE) overlaid along the range of the box and reflected to make it look nice. They provide more information than a boxplot because they also include information about how the data is distributed within the inner quartiles.
dataset
Step1: For this plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
Step2: Basic plot
Step3: The outliers here are making things a bit squished, so I'll remove them since I am just interested in demonstrating the visualization tool.
Step4: Change the order of categories
Step5: Change the order that the colors are chosen
Change orientation to horizontal
Step6: Desaturate
Step7: Adjust width of violins
Step8: Change the size of outlier markers
Step9: Adjust the bandwidth of the KDE filtering parameter. Smaller values will use a thinner kernel and thus will contain higher feature resolution but potentially noise. Here are examples of low and high settings to demonstrate the difference.
Step10: Finalize | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
plt.rcParams['figure.figsize'] = (20.0, 10.0)
plt.rcParams['font.family'] = "serif"
df = pd.read_csv('../../../datasets/movie_metadata.csv')
df.head()
Explanation: seaborn.swarmplot
Violinplots summarize numeric data over a set of categories. They are essentially a box plot with a kernel density estimate (KDE) overlaid along the range of the box and reflected to make it look nice. They provide more information than a boxplot because they also include information about how the data is distributed within the inner quartiles.
dataset: IMDB 5000 Movie Dataset
End of explanation
# split each movie's genre list, then form a set from the unwrapped list of all genres
categories = set([s for genre_list in df.genres.unique() for s in genre_list.split("|")])
# one-hot encode each movie's classification
for cat in categories:
df[cat] = df.genres.transform(lambda s: int(cat in s))
# drop other columns
df = df[['director_name','genres','duration'] + list(categories)]
df.head()
# convert from wide to long format and remove null classificaitons
df = pd.melt(df,
id_vars=['duration'],
value_vars = list(categories),
var_name = 'Category',
value_name = 'Count')
df = df.loc[df.Count>0]
top_categories = df.groupby('Category').aggregate(sum).sort_values('Count', ascending=False).index
howmany=10
df = df.loc[df.Category.isin(top_categories[:howmany])]
df.rename(columns={"duration":"Duration"},inplace=True)
df.head()
Explanation: For this plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
End of explanation
p = sns.swarmplot(data=df,
x = 'Category',
y = 'Duration')
Explanation: Basic plot
End of explanation
df = df.loc[df.Duration < 250]
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration')
Explanation: The outliers here are making things a bit squished, so I'll remove them since I am just interested in demonstrating the visualization tool.
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()))
Explanation: Change the order of categories
End of explanation
p = sns.violinplot(data=df,
y = 'Category',
x = 'Duration',
order = sorted(df.Category.unique()),
orient="h")
Explanation: Change the order that the colors are chosen
Change orientation to horizontal
End of explanation
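The color-order item above has no cell of its own; a minimal sketch (assuming the df built earlier, with the husl palette as an arbitrary choice) passes an explicit, ordered list of colors so that a different color is assigned to each category:
# Sketch: an explicit color list is assigned to the categories in order,
# so reversing the list changes which category receives which color.
my_colors = sns.color_palette('husl', n_colors=df.Category.nunique())
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
palette = my_colors[::-1])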
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
saturation=.25)
Explanation: Desaturate
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
width=.25)
Explanation: Adjust width of violins
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
fliersize=20)
Explanation: Change the size of outlier markers
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
bw=.05)
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
bw=5)
Explanation: Adjust the bandwidth of the KDE filtering parameter. Smaller values will use a thinner kernel and thus will contain higher feature resolution but potentially noise. Here are examples of low and high settings to demonstrate the difference.
End of explanation
sns.set(rc={"axes.facecolor":"#e6e6e6",
"axes.grid":False,
'axes.labelsize':30,
'figure.figsize':(20.0, 10.0),
'xtick.labelsize':25,
'ytick.labelsize':20})
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
palette = 'spectral',
order = sorted(df.Category.unique()),
notch=True)
plt.xticks(rotation=45)
l = plt.xlabel('')
plt.ylabel('Duration (min)')
plt.text(4.85,200, "Violin Plot", fontsize = 95, color="black", fontstyle='italic')
p.get_figure().savefig('../../figures/swarmplot.png')
Explanation: Finalize
End of explanation |
13,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2D Data Plots and Analysis
Unit 7, Lecture 4
Numerical Methods and Statistics
Prof. Andrew White, February 27th 2018
Step1: Working with 2D data
Now we'll consider 2D numeric data. Recall that we're taking two measurements simultaneously, so there should be an equal number of data points for each dimension. Furthermore, they should be paired. For example, measuring people's weight and height is valid. Measuring one group of people's height and then a different group of people's weight is not valid.
Our example for this lecture will be one of the most famous datasets of all time
Step2: Sample Covariance
Let's begin by computing sample covariance. The syntax for covariance is similar to the std syntax for standard deviation. We'll compute the sample covariance between petal width and sepal width.
Step3: This is called a covariance matrix
Step4: To read this larger matrix, recall the column descriptions
Step5: What happened? It turns out, our data are not sorted according to sepal length, so the lines go from value to value. There is no reason that our data should be ordered by sepal length, so we need to use dot markers to get rid of the lines.
Step6: Now the other plot
Step7: That is surprising! The "low" sample covariance plot looks like it has as much correlation as the "high" sample covariance plot. That's because sample covariance measures both the underlying variance of the two dimensions and their correlation. The reason this is a low sample covariance is that the y-values change less than in the first plot.
Sample Correlation
Since the covariance includes both the correlation between the variables and the variance of the two variables, the sample correlation tries to remove the variance so we can view only the correlation.
$$r_{xy} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$$
Similar to the covariance, there is something called the correlation matrix or the normalized covariance matrix.
Step8: Note that we don't have to pass in ddof because it cancels in the correlation coefficient expression. Now we also see that the two plots from above have similar correlations as we saw visually.
Caveats of Correlation
Let's try creating some synthetic data to observe properties of correlation. I'm using the rvs function to sample data from distributions using scipy.stats. | Python Code:
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
from math import sqrt, pi
import scipy
import scipy.stats
plt.style.use('seaborn-whitegrid')
Explanation: 2D Data Plots and Analysis
Unit 7, Lecture 4
Numerical Methods and Statistics
Prof. Andrew White, February 27th 2018
End of explanation
import pydataset
data = pydataset.data('iris').values
#remove species column
data = data[:,:4].astype(float)
Explanation: Working with 2D data
Now we'll consider 2D numeric data. Recall that we're taking two measurements simultaneously, so there should be an equal number of data points for each dimension. Furthermore, they should be paired. For example, measuring people's weight and height is valid. Measuring one group of people's height and then a different group of people's weight is not valid.
Our example for this lecture will be one of the most famous datasets of all time: the Iris dataset. It's a commonly used dataset in education and describes measurements in centimeters of 150 Iris flowers. The measured data are the columns and each row is an iris flower. They are sepal length, sepal width, petal length, petal width, and species. We'll ignore species for our example.
Flower Anatomy
<img src=https://upload.wikimedia.org/wikipedia/commons/7/78/Petal-sepal.jpg style='width: 350px;'>
End of explanation
np.cov(data[:,1], data[:,3], ddof=1)
Explanation: Sample Covariance
Let's begin by computing sample covariance. The syntax for covariance is similar to the std syntax for standard deviation. We'll compute the sample covariance between petal width and sepal width.
End of explanation
#add rowvar = False to indicate we want cov
#over our columns and not rows
np.cov(data, rowvar=False, ddof=1)
Explanation: This is called a covariance matrix:
$$\left[\begin{array}{lr}
\sigma_{xx} & \sigma_{xy}\\
\sigma_{yx} & \sigma_{yy}\\
\end{array}\right]$$
The diagonals are the sample variances and the off-diagonal elements are the sample covariances. The matrix is symmetric, since $\sigma_{xy} = \sigma_{yx}$. The value we observed for the sample covariance is negative, meaning the measurements are negatively correlated: as one increases, the other decreases. The ddof was set to 1, meaning that the divisor for the sample covariance is $N - 1$. Remember that $N$ is the number of pairs of $x$ and $y$ values.
The covariance matrix can be any size. So we can explore all possible covariances simultaneously.
End of explanation
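As a quick check of the $N-1$ divisor described above, a small sketch (reusing the data array loaded earlier) computes the sample covariance by hand:
# Sketch: sample covariance of sepal width and petal width by hand,
# compared with the corresponding np.cov entry.
sw, pw = data[:, 1], data[:, 3]
manual_cov = np.sum((sw - sw.mean()) * (pw - pw.mean())) / (len(sw) - 1)
print(manual_cov, np.cov(sw, pw, ddof=1)[0, 1])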
plt.plot(data[:,0], data[:,2])
plt.show()
Explanation: To read this larger matrix, recall the column descriptions: sepal length (0), sepal width (1), petal length (2), petal width (3). Then use the row and column index to identify which sample covariance is being computed. The row and column indices are interchangeable because the matrix is symmetric. For example, the sample covariance of sepal length with sepal width is $-0.042\,\mathrm{cm}^2$.
Scatter Plot
To get a better sense of this data, we can use a scatter plot. Let's see a high positive sample covariance and a lower positive sample covariance. Sepal length and petal length have a high positive sample covariance, while sepal length and petal width have a lower positive sample covariance.
End of explanation
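Before the scatter plots, a small sketch of pulling one entry out of the full matrix by its row and column indices (again reusing data):
# Sketch: index the full covariance matrix by the column numbers above
cov_matrix = np.cov(data, rowvar=False, ddof=1)
print(cov_matrix[0, 2])  # sepal length with petal length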
plt.title('Sample Covariance: 1.27 cm')
plt.plot(data[:,0], data[:,2], 'o')
plt.xlabel('Sepal Length [cm]')
plt.ylabel('Petal Length [cm]')
plt.show()
Explanation: What happened? It turns out, our data are not sorted according to sepal length, so the lines go from value to value. There is no reason that our data should be ordered by sepal length, so we need to use dot markers to get rid of the lines.
End of explanation
plt.title('Sample Covariance: 0.52 cm')
plt.plot(data[:,0], data[:,3], 'o')
plt.xlabel('Sepal Length [cm]')
plt.ylabel('Petal Width [cm]')
plt.show()
Explanation: Now the other plot
End of explanation
np.corrcoef(data, rowvar=False)
Explanation: That is surprising! The "low" sample covariance plot looks like it has as much correlation as the "high" sample covariance plot. That's because sample covariance measures both the underlying variance of the two dimensions and their correlation. The reason this is a low sample covariance is that the y-values change less than in the first plot.
Sample Correlation
Since the covariance includes both the correlation between the variables and the variance of the two variables, the sample correlation tries to remove the variance so we can view only the correlation.
$$r_{xy} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}$$
Similar to the covariance, there is something called the correlation matrix or the normalized covariance matrix.
End of explanation
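A short sketch (again reusing data) confirms the formula above numerically:
# Sketch: the correlation is the covariance scaled by both sample
# standard deviations, so the choice of ddof cancels out.
sl, pl = data[:, 0], data[:, 2]
r_manual = np.cov(sl, pl, ddof=1)[0, 1] / (np.std(sl, ddof=1) * np.std(pl, ddof=1))
print(r_manual, np.corrcoef(sl, pl)[0, 1])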
x = scipy.stats.norm.rvs(size=15, scale=4)
y = scipy.stats.norm.rvs(size=15, scale=4)
cor = np.corrcoef(x,y)[0,1]
plt.title('r = {}'.format(cor))
plt.plot(x, y, 'o')
plt.xlabel('x')
plt.ylabel('$y$')
plt.show()
x = scipy.stats.norm.rvs(size=100, scale=4)
y = x ** 2
cor = np.corrcoef(x,y)[0,1]
plt.title('r = {}'.format(cor))
plt.plot(x, y, 'o')
plt.xlabel('x')
plt.ylabel('$x^2$')
plt.show()
Explanation: Note that we don't have to pass in ddof because it cancels in the correlation coefficient expression. Now we also see that the two plots from above have similar correlations as we saw visually.
Caveats of Correlation
Let's try creating some synthetic data to observe properties of correlation. I'm using the rvs function to sample data from distributions using scipy.stats.
End of explanation |
13,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Model without Spots
Step3: Adding Spots
Let's add a spot to the primary component in our binary, which we have already misaligned by 30 degrees in pitch.
The 'colat' parameter defines the colatitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the other star at t0. See the spots tutorial for more details.
We'll place this spot at the South Pole, which should be pointing towards the observer because we pitched the north pole away from the observer.
Step4: We'll also add a mesh dataset so that we can see the positioning of the spot with respect to the misaligned component.
Step5: Location of Spot
Step6: Comparing Light Curves
Note that the pitch means the polar spot is always facing towards the observer slightly, and so is always visible (unless eclipsed). | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.set_value(qualifier='pitch', component='primary', value=30)
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=phoebe.linspace(0,1,101))
b.run_compute(irrad_method='none', model='no_spot')
Explanation: Model without Spots
End of explanation
b.add_feature('spot', component='primary', feature='spot01', relteff=0.9, radius=15, colat=180, long=0)
Explanation: Adding Spots
Let's add a spot to the primary component in our binary, which we have already misaligned by 30 degrees in pitch.
The 'colat' parameter defines the colatitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot - with longitude = 0 being defined as pointing towards the other star at t0. See the spots tutorial for more details.
We'll place this spot at the South Pole, which should be pointing towards the observer because we pitched the north pole away from the observer.
End of explanation
b.add_dataset('mesh', times=[0.75], columns=['teffs'])
b.run_compute(irrad_method='none', model='with_spot')
Explanation: We'll also add a mesh dataset so that we can see the positioning of the spot with respect to the misaligned component.
End of explanation
afig, mplfig = b.plot(kind='mesh', fc='teffs', fcmap='plasma', ec='none', show=True)
Explanation: Location of Spot
End of explanation
afig, mplfig = b.plot(kind='lc', show=True, legend=True)
Explanation: Comparing Light Curves
Note that the pitch means the polar spot is always facing towards the observer slightly, and so is always visible (unless eclipsed).
End of explanation |
13,040 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-2', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
13,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Notebook arguments
sigma (float)
Step2: Fitting models
Models used to fit the data.
1. Simple Exponential
In this model, we define the model function as an exponential transient
Step3: 2. Integrated Exponential
A more realistic model needs to take into account that each data point
is the result of an integration over a time window $w$
Step4: Generative model
These are the models used to generate the simulated (noisy) data.
1. Simple Exponential + Noise
In this simple model, we simulate random data $Y$ as an exponential decay plus
additive Gaussian noise
Step5: An ideal transient (no noise, no integration)
Step6: A simulated transient (including noise + integration)
Step7: Plot the computed curves
Step8: Fit data
Fit the "Integrated Exponential" model
Step9: Fit the "Simple Exponential" model
Step10: Print and plot fit results
Step11: Monte-Carlo Simulation
Here, with the model parameters fixed, we generate and fit several noisy datasets. Then, by plotting the distribution of the fitted parameters, we assess the stability and accuracy of the fit.
Parameters
The number of simulation cycles is defined by num_sim_cycles. Current value is
Step12: The fixed kinetic curve parameters are
Step13: While tau is varied, taking the following values
Step14: <div class="alert alert-info">
**NOTE**
Step15: Run Monte-Carlo simulation
Run the Monte-Carlo fit for a set of different time-constants (taus)
and save results in two DataFrames, one for each model.
Step16: <div class="alert alert-danger">
**WARNING**
Step17: Results2 - Integrated Exponential | Python Code:
sigma = 0.016
time_window = 30
time_step = 5
time_start = -900
time_stop = 900
decimation = 20
t0_vary = True
true_params = dict(
tau = 60, # time constant
init_value = 0.3, # initial value (for t < t0)
final_value = 0.8, # final value (for t -> +inf)
t0 = 0) # time origin
num_sim_cycles = 1000
taus = (30, 60)
# Cell inserted during automated execution.
time_start = -900
num_sim_cycles = 1000
t0_vary = True
time_window = 180
taus = (30, 60, 120, 240)
decimation = 20
time_stop = 900
time_step = 10
true_params = {'init_value': 0.3, 't0': 0, 'tau': 60, 'final_value': 0.8}
sigma = 0.053
Explanation: Executed: Tue Oct 11 12:08:23 2016
Duration: 967 seconds.
End of explanation
%matplotlib inline
import numpy as np
import lmfit
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import models # custom module
Explanation: Notebook arguments
sigma (float): standard deviation of additive Gaussian noise to be simulated
time_window (float): seconds, integration window duration
time_step (float): seconds, time step for the moving integration window
time_start (float): seconds, start of time axis (kinetics starts at t = t0).
time_stop (float): seconds, stop of time axis (kinetics starts at t = t0).
t0_vary (bool): whether models should vary the curve origin (t0) during the fit
true_params (dict): parameters used to generate simulated kinetic curves
num_sim_cycles (int): number of times fit is repeated (Monte-Carlo)
taus (tuple): list of values for the time-constant tau simulated during repeated fits (Monte-Carlo).
Simulated Kinetic Curve Fit
<p class=lead>This notebook fits simulated exponential transients with additive Gaussian noise in order to study time-constant fitting accuracy.
In particular we compare a simple exponential model with a more realistic model
with integration window, checking the effect on the fit results.
<p>
You can either run this notebook directly, or run it through the [master notebook](Simulated Kinetic Curve Fit - Run-All.ipynb) for batch processing.
## Imports
End of explanation
labels = ('tau', 'init_value', 'final_value')
model = models.factory_model_exp(t0_vary=True)
Explanation: Fitting models
Models used to fit the data.
1. Simple Exponential
In this model, we define the model function as an exponential transient:
$$ y = f(t) = A \cdot e^{-t/\tau} + K$$
The python function implementing it is:
models.exp_func().
Next cell defines and initializes the fitting model (lmfit.model.Model) including the parameters' constraints:
End of explanation
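The custom models module is not shown in this notebook; as an illustration only, a minimal sketch of what models.exp_func() could look like (assuming the parameter names used elsewhere in this notebook: tau, init_value, final_value, t0) is:
def exp_func_sketch(t, tau=60, init_value=0.3, final_value=0.8, t0=0):
    # Exponential transient: init_value for t < t0, relaxing towards final_value with time constant tau
    t = np.asarray(t, dtype=float)
    y = final_value + (init_value - final_value) * np.exp(-(t - t0) / tau)
    return np.where(t < t0, init_value, y)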
modelw = models.factory_model_expwin(t_window=time_window, decimation=decimation, t0_vary=t0_vary)
Explanation: 2. Integrated Exponential
A more realistic model needs to take into account that each data point
is the result of an integration over a time window $w$:
$$f(t) = A \cdot e^{-t/\tau} + K$$
$$y(t) = \int_{t}^{t+w} f(t')\;dt'$$
In other words, when we process a measurement in time chunks, we are integrating
a non-stationary signal $f(t)$ over a time window $w$. This integration causes
a smoothing of $f(t)$, regardless of whether time is binned or
swept through with a moving window (overlapping chunks).
Numerically, $t$ is discretized with step equal to (time_step / decimation).
The python function implementing this model function is:
models.expwindec_func().
And, finally, we define and initialize the fitting model parameters' constraints:
End of explanation
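For illustration, the window-averaged model can be approximated numerically as sketched below; this is only a sketch of the idea, not the actual models.expwindec_func() implementation:
def expwindec_sketch(t, tau=60, init_value=0.3, final_value=0.8, t0=0, t_window=30., decimation=20):
    # Average the exponential transient over [t, t + t_window] for each t, using `decimation`
    # sub-samples across the window (the real implementation discretizes with step time_step/decimation)
    dt = t_window / decimation
    out = np.empty(len(t), dtype=float)
    for i, tk in enumerate(t):
        theta = tk + dt * np.arange(decimation)
        f = final_value + (init_value - final_value) * np.exp(-(theta - t0) / tau)
        f[theta < t0] = init_value
        out[i] = f.mean()
    return out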
t = np.arange(time_start, time_stop-time_window, time_step).astype(float)
t.size
Explanation: Generative model
These are the models used to generate the simulated (noisy) data.
1. Simple Exponential + Noise
In this simple model, we simulate random data $Y$ as an exponential decay plus
additive Gaussian noise:
$$ Y(t_k) = f(t_k) + N_k $$
$$ N_k \sim {\rm Normal}(\mu=0,\ \sigma) $$
$$ \Delta t = t_k - t_{k-1} = \texttt{time_step}$$
2. Integrated Exponential + Noise
For the "integrating window" model, we first define a finer time axis $\theta_i$
which oversamples $t_k$ by a factor $n$. Then we define the function $Y_f$
adding Gaussian noise $\sqrt{n}\,N_i$, with $n$ times larger variance:
$$ Y_f(\theta_i) = f(\theta_i) + \sqrt{n}\,N_i $$
$$ \Delta \theta = \theta_i - \theta_{i-1} = \texttt{time_step} \;/\; n$$
Finally, by averaging each time window, we compute the data on the coarse time axis $t_k$:
$$ Y_w(t_k) = \frac{1}{m}\sum_{i} Y_f(\theta_i)$$
Here, for each $t_k$, we compute the mean of $m$ consecutive $Y_f$ values. The number $m$
is chosen so that $m\, \Delta \theta$ is equal to the time window.
Noise amplitude
The amplitude of the additive noise ($\sigma$) is estimated from the experimental kinetic curves.
In particular we take the variance from the POST period (i.e. the steady state period after the transient).
The POST period has been chosen because it exhibits higher variance than the PRE period (i.e. the steady state period
before the transient). These values have been calculated in 8-spot bubble-bubble kinetics - Summary.
In both models we define the noise amplitude as sigma (see first cell):
sigma = 0.016
Time axis
We also define the parameters for the time axis $t$:
time_start = -900 # seconds
time_stop = 900 # seconds
time_step = 5 # seconds
Kinetic curve parameters
The simulated kinetic curve has the following parameters:
true_params = dict(
tau = 60, # time constant
init_value = 0.3, # initial value (for t < t0)
final_value = 0.8, # final value (for t -> +inf)
t0 = 0) # time origin
<div class="alert alert-info">
**NOTE**: All previous parameters are defined in the first notebook cell.
</div>
Single kinetic curve fit
Here we simulate one kinetic curve and fit it with the two models (simple exponential and integrated exponential).
Draw simulated data
Time axis for simulated data:
End of explanation
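As an illustration of the recipe above (not the library code), the noisy windowed data could be generated like this, given the model f evaluated on the fine axis theta:
def windowed_noisy_sketch(f_fine, n, m, sigma):
    # f_fine: f(theta) on the fine axis, n: oversampling factor, m: fine samples per window
    y_f = f_fine + np.sqrt(n) * sigma * np.random.randn(f_fine.size)
    starts = np.arange(0, f_fine.size - m + 1, n)  # one window start per coarse time t_k
    return np.array([y_f[s:s + m].mean() for s in starts])
In practice models.expwindec_func(t, t_window=time_window, sigma=sigma, **true_params) does this for us, as used below.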
y = models.expwindec_func(t, t_window=time_window, **true_params)
y.shape
Explanation: An ideal transient (no noise, no integration):
End of explanation
time_window, time_step
yr = models.expwindec_func(t, t_window=time_window, sigma=sigma, **true_params)
yr.shape
Explanation: A simulated transient (including noise + integration):
End of explanation
plt.plot(t, y, '-', label='model')
plt.plot(t, yr, 'o', label='model + noise')
Explanation: Plot the computed curves:
End of explanation
#%%timeit
resw = modelw.fit(yr, t=t, tau=10, init_value=0.1, final_value=0.9, verbose=False)
Explanation: Fit data
Fit the "Integrated Exponential" model:
End of explanation
#%%timeit
res = model.fit(yr, t=t + 0.5*time_window, tau=10, init_value=0.1, final_value=0.9, verbose=False)
Explanation: Fit the "Simple Exponential" model:
End of explanation
fig = plt.figure(figsize=(14, 8))
res.plot(fig=fig)
ci = lmfit.conf_interval(res, res)
lmfit.report_fit(res)
print(lmfit.ci_report(ci, with_offset=False))
#plt.xlim(-300, 300)
fig = plt.figure(figsize=(14, 8))
resw.plot(fig=fig)
ci = lmfit.conf_interval(resw, resw)
lmfit.report_fit(resw)
print(lmfit.ci_report(ci, with_offset=False))
#plt.xlim(-300, 300)
Explanation: Print and plot fit results:
End of explanation
num_sim_cycles
Explanation: Monte-Carlo Simulation
Here, with the model parameters fixed, we generate and fit several noisy datasets. Then, by plotting the distribution of the fitted parameters, we assess the stability and accuracy of the fit.
Parameters
The number of simulation cycles is defined by num_sim_cycles. Current value is:
End of explanation
{k: v for k, v in true_params.items() if k != "tau"}
Explanation: The fixed kinetic curve parameters are:
End of explanation
taus
t0_vary
Explanation: While tau is varied, taking the following values:
End of explanation
def draw_samples_and_fit(true_params):
# Create the data
t = np.arange(time_start, time_stop-time_window, time_step).astype(float)
yr = models.expwindec_func(t, t_window=time_window, sigma=sigma, decimation=100, **true_params)
# Fit the model
tc = t + 0.5*time_window
kws = dict(fit_kws=dict(nan_policy='omit'), verbose=False)
res = model.fit(yr, t=tc, tau=90, method='nelder', **kws)
res = model.fit(yr, t=tc, **kws)
resw = modelw.fit(yr, t=t, tau=400, decimation=decimation, method='nelder', **kws)
resw = modelw.fit(yr, t=t, decimation=decimation, **kws)
return res, resw
def monte_carlo_sim(true_params, N):
df1 = pd.DataFrame(index=range(N), columns=labels)
df2 = df1.copy()
for i in range(N):
res1, res2 = draw_samples_and_fit(true_params)
for var in labels:
df1.loc[i, var] = res1.values[var]
df2.loc[i, var] = res2.values[var]
return df1, df2
Explanation: <div class="alert alert-info">
**NOTE**: All previous parameters are defined in the first notebook cell.
</div>
Functions
Here we define two functions:
draw_samples_and_fit() draws a set of data and fits it with both models
monte_carlo_sim() run the Monte-Carlo simulation: calls draw_samples_and_fit() many times.
NOTE: Global variables are used by previous functions.
End of explanation
mc_results1, mc_results2 = [], []
%%timeit -n1 -r1 # <-- prints execution time
for tau in taus:
true_params['tau'] = tau
df1, df2 = monte_carlo_sim(true_params, num_sim_cycles)
mc_results1.append(df1)
mc_results2.append(df2)
Explanation: Run Monte-Carlo simulation
Run the Monte-Carlo fit for a set of different time-constants (taus)
and save results in two DataFrames, one for each model.
End of explanation
for tau, df in zip(taus, mc_results1):
true_params['tau'] = tau
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
for i, var in enumerate(labels):
std = df[var].std()
df[var].hist(bins=30, ax=ax[i])
ax[i].set_title("%s = %.1f (%.3f)" % (var, true_params[var], std), fontsize=18)
ax[i].axvline(true_params[var], color='r', ls='--')
#print('True parameters: %s' % true_params)
Explanation: <div class="alert alert-danger">
**WARNING**: The previous cell can take a long time to execute. Execution time scales with **`num_sim_cycles * len(taus)`**.
</div>
Results1 - Simple Exponential
End of explanation
for tau, df in zip(taus, mc_results2):
true_params['tau'] = tau
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
for i, var in enumerate(labels):
std = df[var].std()
df[var].hist(bins=30, ax=ax[i])
ax[i].set_title("%s = %.1f (%.3f)" % (var, true_params[var], std), fontsize=18)
ax[i].axvline(true_params[var], color='r', ls='--')
#print('True parameters: %s' % true_params)
Explanation: Results2 - Integrated Exponential
End of explanation |
13,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Beginners' tutorial
Step2: Table of Contents
Bayesian probability revision
The model
Bayesian optimisation
Bayesian sampling/MCMC
Conclusion
Bayesian probability revision <a class="anchor" id="probability"></a>
This notebook uses the following naming conventions
Step5: Now that we have a model and an initial value $y0$, we want to estimate, for any given parameter $k$, $y$ for all values of $t$.
Ordinary Differential Equations can be solved with numerical methods such as Runge-Kutta. The simplest of these is known as Forward Euler.
The popular Python package scipy also has a built-in method, odeint, that integrates ODEs.
Step6: Bayesian optimisation <a class="anchor" id="optimisation"></a>
In optimisation, you measure the output of a system with a known input, compare it to the measured output of the 'real' system, and so estimate the parameters most likely to give that output.
We will use the example model shown above
Step8: We're trying to find which value of $k$ makes our model output the closest to the real system output, assuming that the input dose for each is the same. To do this we define a function, scalar_to_minimise, whose output reduces in size, the closer model output is to measured system output.
Step10: By modelling many values of $k$ and finding the one that gives the lowest value from scalar_to_minimise, we can make a good estimate of $k$ in the real-life system. As you can see, this is inherently imperfect, but can be quite accurate.
Step11: Using PINTS
The work above estimates $k$ pretty well simply based on minimising the output of scalar_to_minimise. So how, and more importantly why, do we use PINTS?
PINTS formalises the process of creating an optimiser, making it more re-usable in different settings. Rather than rewriting code every time new measures are needed, PINTS allows us to change to different error measures, optimisation methods, and more, relatively simply. Not only that, but PINTS is both faster to write and faster to run than the section above!
Step12: There is an important conceptual change in the code above. Previously, we have dealt with a physical model of the real-life system of interest (the function onecomp). In the example described, the physical model takes $y(t_n, param)$ as input and gives $\frac{d}{dt}y(t_n, param)$ as output. With PINTS, the physical model is provided by the user as a simulate function.
PINTS provides a statistical model, which takes measured data $y_{noisy}(t)$ and provides parameter(s) as output. In the example described, the statistical model is a pints.SingleOutputProblem instance, problem, taking measured data, a time array, and the user-defined class PintsOneComp as input.
In order to allow PINTS to use the physical model, the user needs to provide a class inheriting from pints.ForwardModel.
Bayesian sampling <a class="anchor" id="sampling"></a>
Optimisation gives us the most likely parameters, but sampling can give a probability distribution of different parameter values. The method we'll use here is Metropolis Hastings. In our example model, our only model parameter ($\theta$) is $k$ (the decay constant in equation 3 at the start of this tutorial).
Therefore our problem is this
Step15: The functions below provide the prior and likelihood for the numerator of Bayes' rule.
In this case, we use a uniform prior $P(\theta) ~ U(0,10)$, i.e. for any input between 0 & 10, it outputs 0.1, else 0.
The likelihood $P(y|\theta)$ (probability that theta could give the output y_data) in this case gives $\frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(y_{data}-y_{\theta})^2}{2\sigma^2}}$.
Step17: The function propose_new_theta does steps 2 & 3 in the instructions above. It takes the model, the current $\theta$ value, and measured data.
It returns an accepted value of $\theta$.
This is proposed_theta if accepted, else it is theta.
Step18: Let's see this work. First, let's see what the likelihood is when we fix the measured data and take a variety of different proposed $k$ parameters.
Step19: Using PINTS
Now let's try the same thing as we have just done above, but using PINTS!
Once again, it's much easier to type and quicker to run than our manual method above. | Python Code:
import numpy as np
import math
import matplotlib.pyplot as plt
import scipy
from scipy import optimize, integrate
import pints
Explanation: Beginners' tutorial: Bayesian inference & optimisation with Pints
Prerequisites for running the code in this notebook are Python 3 with the modules below. All of the modules used, except Pints, are available from PyPI with the Python shell command pip install <module name here>.
End of explanation
# Defining variables for use later
k = 1.5 # from equation 3
y0 = 1
times = np.linspace(0,10,50)
# A one-compartment PK model is basically an ODE for an exponential decay curve
def onecomp(y, t, k):
    """A one-compartment PK model, aka simple exponential decay.

    Parameters
    ----------
    y: float
        y at time t
    t: float
        time
    k: float
        model parameter
    """
    dydt = -k * y
    return dydt
Explanation: Table of Contents
Bayesian probability revision
The model
Bayesian optimisation
Bayesian sampling/MCMC
Conclusion
Bayesian probability revision <a class="anchor" id="probability"></a>
This notebook uses the following naming conventions:-
$y$: Measured output from the system (or $y_{model}$, the model output)
$\theta$, theta: Model parameters (in our example, $\theta=k$)
$k$: Decay constant (the single parameter in our example model)
$\sigma$, sigma: Variance of the measurement error
We want to find $P(\theta|y)$ (the posterior probability), which is the probability distribution on the left hand side of Bayes' rule (equation 1, below).
\begin{equation}
P(\theta|y) = \frac{P(y|\theta) P(\theta)}{P(y)}
\tag{1}
\end{equation}
For our example, a Gaussian probability distribution (i.e. normal distribution) is commonly used to compare to measured data, $P(y|\theta) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(y-\theta)^2}{2\sigma^2}}$.
This seems reasonable because random noise often behaves in a roughly Gaussian way. We decide on a uniform prior between 0 and 10, to show that before analysis we believe that $k$ could equally be any value between 0 and 10.
Therefore we can calculate the numerator of Bayes' rule as follows.
\begin{equation}
\text{Numerator} = P(y|\theta) P(\theta) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(y-\theta)^2}{2\sigma^2}} \cdot U(0,10)
\tag{2}
\end{equation}
In Bayes' rule, the denominator $P(y)$ does not vary with parameters. This means that, wherever we compare $P(\theta|y)$ between two sets of parameters, the denominator $P(y)$ cancels (assuming that both use the same $y$). Due to this, the denominator can safely be ignored when calculating relative posteriors.
Bayesian probability is a complex subject, so by necessity this can only be a brief review. Below are some resources to learn more:-
* Bayesian Data Analysis, Third Edition; Gelman et al.; 2014; CRC Press
* A Student's Guide to Bayesian Statistics; Lambert; 2018; SAGE Publishing
* Bayesian Short Course; Lambert, with accompanying video lectures
The model <a class="anchor" id="model"></a>
PINTS, and Bayesian inference in general, can be applied to many real-life systems: anything for which you can propose a model and measure the input and output. For example, optimisation has been applied to fields as disparate as robotics control, ranking algorithms, and particle physics.
For this example case we will use pharmacokinetics, which is the estimation of drug distribution in the body. The simplest model for this treats the entire body as one big sponge, and so is simple exponential decay.
For pharmacokinetics, the system of interest is the distribution of a drug in the body. The input is some dose, and the system's output is concentration of the drug, measured in a particular compartment at several points in time.
In this example we use exponential decay, which only has one parameter in the model, labelled $k$ here. The output, $y$, is defined by the ODE:-
\begin{equation}
\frac{d y}{d t} = -k \cdot y
\tag{3}
\end{equation}
Because this model treats the entire body as one large compartment, it is sometimes called a "one compartment model". We define this model in Python below.
End of explanation
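As a side note, this ODE also has a closed-form solution, $y(t) = y_0 e^{-kt}$ (for the time axis starting at $t=0$ used here), which can serve as an extra check on the numerical solvers below:
def onecomp_exact(t, k, y0=1):
    # Closed-form solution of dy/dt = -k*y with y(0) = y0
    return y0 * np.exp(-k * np.asarray(t))
exact_values = onecomp_exact(times, k, y0)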
# You can solve ODEs with the Forward Euler method
def ForwardEuler(func, k, y0, times):
    """Numerically solve an ODE with the forward Euler technique.

    Parameters
    ----------
    func: function
        function giving the derivative
    k: float
        parameter the function requires as input
    y0: float
        y at t=0
    times: list
        array of times at which to calculate y
    """
    y = y0
    y_solution = [y0]
    h = times[2] - times[1]
    for n in times[:-1]:
        dy = func(y, n, k)
        y = y + h*dy
        y_solution.append(y)
    return y_solution
# You can also solve ODEs with scipy.integrate.odeint
def simulate(func, parameters, y0, times):
    """Numerically solve an ODE with scipy.integrate.odeint.

    Parameters
    ----------
    func: function
        function giving the derivative
    parameters: list
        parameters the function requires as input
    y0: float
        y at t=0
    times: list
        array of times at which to calculate y
    """
    l = scipy.integrate.odeint(func, y0, times, (parameters,)) # returns a list of lists
    flatlist = [item for sublist in l for item in sublist] # to single list of 'y's
    return flatlist
# This is what the system output looks like. We don't actually know these values
actual_values_euler = ForwardEuler(onecomp, k, y0, times)
actual_values = simulate(onecomp, k, y0, times)
# Plot the model results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Concentration, y')
plt.plot(times, actual_values, '--', label='scipy')
plt.plot(times, actual_values_euler, label='forward Euler')
plt.legend()
plt.show()
Explanation: Now that we have a model and an initial value $y0$, we want to estimate, for any given parameter $k$, $y$ for all values of $t$.
Ordinary Differential Equations can be solved with numerical methods such as Runge-Kutta. The simplest of these is known as Forward Euler.
The popular Python package scipy also has a built-in method, odeint, that integrates ODEs.
End of explanation
# Make noisy data that we're inferring from. noisy_data is known to us.
noise = np.random.normal(0, 0.03, len(actual_values))
noisy_data = actual_values + noise
plt.figure()
plt.plot(times, noisy_data, '.', label='Measured values (we know these)')
plt.plot(times, actual_values, label='Actual values (we don\'t know these)')
plt.xlabel('Time')
plt.ylabel('Concentration')
plt.legend()
plt.show()
Explanation: Bayesian optimisation <a class="anchor" id="optimisation"></a>
In optimisation, you measure the output of a system with a known input, compare it to the measured output of the 'real' system, and so estimate the parameters most likely to give that output.
We will use the example model shown above: so the problem becomes this:-
Problem to solve: which value of k gives the model output that best matches the measured data?
In this case, we first require measurements from the real-life system to compare our estimates to. Any measurements will have some measurement error, which we're modelling as gaussian noise below.
End of explanation
# So what do we want to minimise?
def sumofsquares(y_model, y_data):
    """Gives the sum of the square of all errors between model and experimental data.

    In:
        y_model: list of output values from model
        y_data: list of experimental (i.e. noisy) values
    Out: the sum of square error
    """
    sq_error = []
    for t in range(len(y_model)):
        sq_error.append((y_data[t] - y_model[t])**2)
    return sum(sq_error)
Explanation: We're trying to find which value of $k$ makes our model output the closest to the real system output, assuming that the input dose for each is the same. To do this we define a function, scalar_to_minimise, whose output reduces in size, the closer model output is to measured system output.
End of explanation
# Optimise it with scipy
def scalar_to_minimise(parameters):
    """For a one compartment model & sum of squares, this is what's minimised."""
    y_model = simulate(onecomp, parameters, 1, times)
    y_data = noisy_data
    return sumofsquares(y_model, y_data) / len(y_model)
start_params = 11
result = scipy.optimize.minimize_scalar(scalar_to_minimise)
print('Calculated k: \t'+str(result.x))
print('Real k: \t'+str(k))
# What does that look like?
recon_model = simulate(onecomp, result.x, 1, times)
plt.figure()
plt.plot(times, noisy_data, '.', label='Measured values')
plt.plot(times, actual_values, label='Actual values')
plt.plot(times, recon_model, '--', label='Inferred values')
plt.xlabel('Time')
plt.ylabel('Concentration')
plt.legend()
plt.show()
Explanation: By modelling many values of $k$ and finding the one that gives the lowest value from scalar_to_minimise, we can make a good estimate of $k$ in the real-life system. As you can see, this is inherently imperfect, but can be quite accurate.
End of explanation
# Need to make the model into a Pints one
class PintsOneComp(pints.ForwardModel):
def n_parameters(self):
return 1
def simulate(self, parameter, times):
return simulate(onecomp, parameter, 1, times)
problem = pints.SingleOutputProblem(PintsOneComp(), times, noisy_data) # Create a model instance with measured data
error_measure = pints.SumOfSquaresError(problem) # Define the error measure to be used (as sumofsquares function)
optimisation = pints.OptimisationController(error_measure, [1], method=pints.XNES) # Define a statistical problem
optimisation.set_log_to_screen(False) # Suppress log output
parameters, error = optimisation.run() # Run the statistical model
# How does it look?
print('Custom calculation result: \t'+str(result.x))
print('Pints calculation result: \t'+str(parameters[0]))
plt.figure()
plt.plot(times, noisy_data, '.', label='Measured values')
plt.plot(times, recon_model, '--', label='Custom inferred values')
plt.plot(times, PintsOneComp().simulate(parameters, times), '--', lw=2, label='Pints inferred values')
plt.legend()
plt.show()
Explanation: Using PINTS
The work above estimates $k$ pretty well simply based on minimising the output of scalar_to_minimise. So how, and more importantly why, do we use PINTS?
PINTS formalises the process of creating an optimiser, making it more re-usable in different settings. Rather than rewriting code every time new measures are needed, PINTS allows us to change to different error measures, optimisation methods, and more, relatively simply. Not only that, but PINTS is both faster to write and faster to run than the section above!
End of explanation
# This model stores all the model-related variables, for simplicity.
class OdeModel():
def __init__(self, thetas, covariates, prior, likelihood, modeltype):
self.thetas = thetas
self.covariates = covariates
self.modeltype = modeltype
self.prior = prior
self.likelihood = likelihood
Explanation: There is an important conceptual change in the code above. Previously, we have dealt with a physical model of the real-life system of interest (the function onecomp). In the example described, the physical model takes $y(t_n, param)$ as input and gives $\frac{d}{dt}y(t_n, param)$ as output. With PINTS, the physical model is provided by the user as a simulate function.
PINTS provides a statistical model, which takes measured data $y_{noisy}(t)$ and provides parameter(s) as output. In the example described, the statistical model is a pints.SingleOutputProblem instance, problem, taking measured data, a time array, and the user-defined class PintsOneComp as input.
In order to allow PINTS to use the physical model, the user needs to provide a class inheriting from pints.ForwardModel.
Bayesian sampling <a class="anchor" id="sampling"></a>
Optimisation gives us the most likely parameters, but sampling can give a probability distribution of different parameter values. The method we'll use here is Metropolis Hastings. In our example model, our only model parameter ($\theta$) is $k$ (the decay constant in equation 3 at the start of this tutorial).
Therefore our problem is this:-
Problem to solve: what is the distribution of likely k values, given our data and assumptions?
The Metropolis-Hastings algorithm creates a series of potential values of $\theta$ by making random proposals for new $\theta$s and calculating the numerator of Bayes' rule (equation 2) for each one.
Steps for Metropolis Hastings
1. Start with an arbitrary $\theta$ (a $k$ and a $\sigma$). Calculate $(\text{prior} \cdot \text{likelihood})$.
2. Move to a different $\theta$ (selected from a normal distribution about the existing value). Calculate $(\text{prior} \cdot \text{likelihood})$.
3. If new $(\text{prior} \cdot \text{likelihood})$ is higher than old, keep it and add the new $\theta$ to a list. If not, keep it if rand(0->1) > (old/new). If neither of these work, move back to the old one and add that instead.
4. Repeat steps 2-3 for N steps.
5. Count your list of $\theta$s into bins and draw a histogram.
The distribution of your histogram should approximate the posterior probability distribution. This may not be apparent at first glance; for more information please see the resources above.
End of explanation
def uniform_prior(theta):
    """Returns 0.1 if every parameter in the input dict is between 0 & 10, else 0."""
    prior = []
    for key, param in theta.items():
        if param > 0 and param < 10:
            prior.append(0.1)
        else:
            prior.append(0)
    return min(prior)
def likelihood_k(theta, y_data):
    """Returns the likelihood P(y_data|theta) for the Gaussian noise model."""
    k = theta['k']
    sigma = 0.03
    pdf = []
    y_model = simulate(onecomp, k, 1, times)
    # NB: the exact Gaussian normalisation is 1/sqrt(2*pi*sigma**2); this constant differs
    # from it, but constant factors cancel in the Metropolis-Hastings acceptance ratio.
    other_bit = 1/(2*math.pi*sigma**2)
    for t in range(len(y_data)): # this loop gives a normally distributed pdf
        square_error = (y_data[t] - y_model[t])**2
        exponential = math.exp(-square_error/(2*sigma**2))
        pdf.append(exponential*other_bit)
    return np.prod(pdf)
Explanation: The functions below provide the prior and likelihood for the numerator of Bayes' rule.
In this case, we use a uniform prior $P(\theta) ~ U(0,10)$, i.e. for any input between 0 & 10, it outputs 0.1, else 0.
The likelihood $P(y|\theta)$ (probability that theta could give the output y_data) in this case gives $\frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(y_{data}-y_{\theta})^2}{2\sigma^2}}$.
End of explanation
def propose_new_theta(model, y_data, theta):
    """Randomly proposes a new theta and decides whether to accept it or not.

    In:
        model: instance of OdeModel class
        y_data: list with experimental data
        theta: dict of parameters
    Out: new parameters, either the same (if proposed not accepted) or different
    """
    numerator = model.prior(theta) * model.likelihood(theta, y_data)
    # randomly get a proposed theta & calculate its numerator
    proposed_theta = {}
    for key, value in theta.items():
        proposed_k = np.random.normal(value, model.covariates[key])
        proposed_theta[key] = proposed_k
    proposed_numerator = model.prior(proposed_theta) * model.likelihood(proposed_theta, y_data)
    # if the new numerator should be accepted (metropolis hastings criteria), replace theta
    if proposed_numerator == 0:
        pass
    elif proposed_numerator > numerator:
        theta = proposed_theta
        numerator = proposed_numerator
    elif np.random.rand() < proposed_numerator/numerator:
        theta = proposed_theta
        numerator = proposed_numerator
    return theta
# This just runs propose_new_theta repeatedly
def metropolis_singlethread(model, y_data, threadnum, max_iters):
iters = 0
while iters < max_iters:
theta = propose_new_theta(model, y_data, model.thetas[threadnum][-1])
model.thetas[threadnum].append(theta)
iters = iters + 1
def metropolishastings(model, y_data, blocksize, number_of_blocks):
n = 0
while n < number_of_blocks:
for threadnum, thetas_onelot in enumerate(model.thetas):
metropolis_singlethread(model, y_data, threadnum, blocksize)
n = n+1
Explanation: The function propose_new_theta does steps 2 & 3 in the instructions above. It takes the model, the current $\theta$ value, and measured data.
It returns an accepted value of $\theta$.
This is proposed_theta if accepted, else it is theta.
End of explanation
ks = np.linspace(0,10,100)
likelihoods = []
for n in ks:
likelihoods.append(likelihood_k({'k':n}, noisy_data))
plt.figure()
plt.plot(ks, likelihoods)
plt.xlabel('input parameter, k')
plt.ylabel('likelihood')
plt.axvline(1.5, color='k', label='True value of k')
plt.show()
# Run the metropolis hastings algorithm
thetas_k = [[{'k':5}], [{'k':3}], [{'k':1}]] # Three initial guesses for k
covariates_k = {'k':0.05} # Step size (SD of normal distribution for choosing next proposed theta)
model = OdeModel(thetas_k, covariates_k, uniform_prior, likelihood_k, onecomp)
metropolishastings(model, noisy_data, 10, 100)
# This is how k looks (from all start-points) as the algorithm progresses
plt.figure()
for n in range(len(model.thetas)):
ks_list= [theta['k'] for theta in model.thetas[n]]
plt.plot(ks_list[:500]) # only first 500
plt.xlabel('iteration #')
plt.ylabel('k')
plt.show()
# Here are the occurrences of all k estimates throughout the algorithm
all_ks = []
for n in range(len(model.thetas)):
ks_list = [theta['k'] for theta in model.thetas[n]]
all_ks.append(ks_list)
plt.figure()
plt.hist(all_ks, bins=100, stacked=True)
plt.xlabel('k')
plt.ylabel('occurrence')
plt.show()
Explanation: Let's see this work. First, let's see what the likelihood is when we fix the measured data and take a variety of different proposed $k$ parameters.
End of explanation
import pints
log_likelihood = pints.GaussianKnownSigmaLogLikelihood(problem, sigma=0.05) # Define & wrap a physical model
startpoints = [[1],[3],[5]] # Start 3 Markov chains from arbitrary points
mcmc = pints.MCMCController(log_likelihood, 3, startpoints, method=pints.HaarioBardenetACMC) # Define a statistical problem
mcmc.set_max_iterations(2000) # Set number of iterations to attempt
mcmc.set_log_to_screen(False) # Suppress log output
samples = mcmc.run() # Run the statistical model
# Use a diagnostic plot to check if the chains have converged
import pints.plot
pints.plot.trace(samples)
plt.show()
# Plot several predictions that are all likely sources of the experimental data
pints.plot.series(np.vstack(samples[:,1000:]), problem)
plt.show()
Explanation: Using PINTS
Now let's try the same thing as we have just done above, but using PINTS!
Once again, it's much easier to type and quicker to run than our manual method above.
End of explanation |
13,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simon #metoo step2
Step1: Turn on logging.
Step2: Document pre-processing
We start with units of text represented as text files in a folder.
Step3: Remove stopwords and tokenize.
Step4: Remove words that appear only once.
Step5: Vectorize the corpus (bow)
First create a mapping between tokens and their integer IDs.
Filter out words from the dictionary that occur in less than x documents, or more than y% of the documents.
Step6: Then, create a bag-of-words corpus of numerical document vectors.
Step7: Check dataset size.
Step8: Evaluations can be run to decide optimal number of topics. This takes time, and we won't do this now.
Train the LDA model
The code below shows how to create the model and get a document-topic matrix. We will not run it here.
Step9: An example topic
Step10: To get the same data as in my original #metoo study, we access that exact topicmodel. | Python Code:
import pandas as pd
pd.set_option('display.max_colwidth', -1)
from string import punctuation
from collections import defaultdict
from gensim import corpora, models, matutils
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import stopwords
import re
import glob
Explanation: Simon #metoo step2
End of explanation
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Turn on logging.
End of explanation
files = glob.glob('tmfiles/*')
docs = [open(f).read() for f in files]
len(docs)
Explanation: Document pre-processing
We start with units of text represented as text files in a folder.
End of explanation
stoplist = [word for word in stopwords.words('english')]
texts = [[word for word in doc.lower().split() if word not in stoplist] for doc in docs]
Explanation: Remove stopwords and tokenize.
End of explanation
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1]for text in texts]
Explanation: Remove words that appear only once.
End of explanation
dictionary = corpora.Dictionary(texts)
dictionary.filter_extremes(no_below=3, no_above=0.5)
Explanation: Vectorize the corpus (bow)
First create a mapping between tokens and their integer IDs.
Filter out words from the dictionary that occur in less than x documents, or more than y% of the documents.
End of explanation
corpus = [dictionary.doc2bow(text) for text in texts]
Explanation: Then, create a bag-of-words corpus of numerical document vectors.
End of explanation
print('Number of unique tokens: %d' % len(dictionary))
print('Number of documents: %d' % len(corpus))
Explanation: Check dataset size.
End of explanation
#num_topics = 500
#passes = 15 # loops through the entire corpus
#iterations = 50 # runs through each document
#eval_every = 2 # evaluate model perplexity
#lda_model = models.LdaModel(corpus=corpus,
#id2word=dictionary,
#num_topics=num_topics,
#eval_every=eval_every,
#iterations=iterations,
#passes=passes)
#lda_corpus = lda_model[corpus]
# VIEW TOPICS
#topics = lda_model.show_topics(num_topics=32, num_words=20)
Explanation: Evaluations can be run to decide optimal number of topics. This takes time, and we won't do this now.
Train the LDA model
The code below shows how to create the model and get a document-topic matrix. We will not run it here.
End of explanation
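One common evaluation is topic coherence (held-out perplexity is another option); a sketch, assuming an lda_model trained as in the commented-out cell above, and left commented so it is not run here:
#from gensim.models import CoherenceModel
#cm = CoherenceModel(model=lda_model, texts=texts, dictionary=dictionary, coherence='c_v')
#print('Coherence (c_v): %.3f' % cm.get_coherence())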
## Convert the corpus into a sparse matrix,
## in scipy.sparse.csc_matrix format, with documents as columns
#matrix = matutils.corpus2csc(lda_corpus)
#matrix
#lda_df = pd.DataFrame(matrix.toarray()) # convert to pandas
#lda_df = pd.DataFrame.transpose(lda_df) # flip rows / columns
##df rows are docs, cols are topics
#lda_df.to_csv("topicmodel.csv")
Explanation: An example topic:
End of explanation
df = pd.read_csv("topicmodel.csv")  # pd.DataFrame.from_csv is deprecated; index_col=None is the read_csv default
df.head()
df = df.sort_index()
df.rename(columns={'Unnamed: 0': 'day'}, inplace=True)
df = df.set_index('day')
# Converting the index as date
df.index = pd.to_datetime(df.index)
df.head()
Explanation: To get the same data as in my original #metoo study, we access that exact topicmodel.
End of explanation |
13,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Calculate sand proportion
We'd like to compute a running-window sand log, given some striplog.
These are some sand beds
Step2: Make a striplog
Step3: Make a sand flag log
We'll make a log version of the striplog
Step4: Convolve with running window
Convolution with a boxcar filter computes the mean in a window.
Step5: Write out as CSV
Here's the proportion log we made
Step6: Save it with NumPy (or you could build up a Pandas DataFrame)...
Step7: Check the file looks okay with a quick command line check (! sends commands to the shell).
Step8: Plot everything together
Step9: Make a histogram of thicknesses | Python Code:
text = """top,base,comp number
24.22,24.17,20
24.02,23.38,19
22.97,22.91,18
22.67,22.62,17
21.23,21.17,16
19.85,19.8,15
17.9,17.5,14
17.17,15.5,13
15.18,14.96,12
14.65,13.93,11
13.4,13.05,10
11.94,11.87,9
10.17,10.11,8
7.54,7.49,7
6,5.95,6
5.3,5.25,5
4.91,3.04,4
2.92,2.6,3
2.22,2.17,2
1.9,1.75,1"""
Explanation: Calculate sand proportion
We'd like to compute a running-window sand log, given some striplog.
These are some sand beds:
End of explanation
from striplog import Striplog, Component
s = Striplog.from_csv(text=text)
s.plot(aspect=5)
s[0]
Explanation: Make a striplog
End of explanation
start, stop, step = 0, 25, 0.01
L = s.to_log(start=start, stop=stop, step=step)
import matplotlib.pyplot as plt
plt.figure(figsize=(15, 2))
plt.plot(L)
Explanation: Make a sand flag log
We'll make a log version of the striplog:
End of explanation
import numpy as np
window_length = 2.5 # metres.
N = int(window_length / step)
boxcar = 100 * np.ones(N) / N
z = np.linspace(start, stop, L.size)
prop = np.convolve(L, boxcar, mode='same')
plt.plot(z, prop)
plt.grid(c='k', alpha=0.2)
plt.ylim(-5, 105)
Explanation: Convolve with running window
Convolution with a boxcar filter computes the mean in a window.
End of explanation
z_prop = np.stack([z, prop], axis=1)
z_prop.shape
Explanation: Write out as CSV
Here's the proportion log we made:
End of explanation
np.savetxt('prop.csv', z_prop, delimiter=',', header='elev,perc', comments='', fmt='%1.3f')
Explanation: Save it with NumPy (or you could build up a Pandas DataFrame)...
End of explanation
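For example, the pandas route mentioned above might look like the following (left commented so it doesn't overwrite the file written with NumPy above; column names assumed to match):
#import pandas as pd
#pd.DataFrame({'elev': z, 'perc': prop}).to_csv('prop.csv', index=False, float_format='%1.3f')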
!head prop.csv
Explanation: Check the file looks okay with a quick command line check (! sends commands to the shell).
End of explanation
fig, ax = plt.subplots(figsize=(5, 10), ncols=3, sharey=True)
# Plot the striplog.
s.plot(ax=ax[0])
ax[0].set_title('Striplog')
# Fake a striplog by plotting the log... it looks nice!
ax[1].fill_betweenx(z, 0.5, 0, color='grey')
ax[1].fill_betweenx(z, L, 0, color='gold', lw=0)
ax[1].set_title('Faked with log')
# Plot the sand proportion log.
ax[2].plot(prop, z, 'r', lw=1)
ax[2].set_title(f'% sand, {window_length} m')
Explanation: Plot everything together
End of explanation
thicks = [iv.thickness for iv in s]
_ = plt.hist(thicks, bins=51)
Explanation: Make a histogram of thicknesses
End of explanation |
13,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Check missing data or NaN
Data exploration
Analysis on
1. Age
2. Pages visited
3. New user
Columns
Step1: <a id = 'section1'></a>
Check for missing data, or NaN
Step2: <a id='section2'></a>
Data exploration
Step3: People can span an age of up to 123 years!!,
let's put a top limit of 100 years
Step5: Many of the features seem to be correlated with conversion, let's analyze each one in detail.
<a id='age'></a>
Age
Step6: <a id = 'pages_visited'></a>
Pages visited
Step7: Total pages visited is correlated with conversion independent of age.
<a id='new_user'></a>
New user | Python Code:
import pandas as pd
import numpy as np
columns=['country','age','new_user','source','total_pages_visited','converted']
df = pd.read_csv('conversion_data.csv')
df.columns=columns
df.head(2)
Explanation: Check missing data or NaN
Data exploration
Analysis on
1. Age
2. Pages visited
3. New user
Columns:
country : user country based on the IP address
age : user age. Self-reported at sign-in step
new_user : whether the user created the account during this session or had already an account and simply came back to the site
source : marketing channel source
Ads: came to the site by clicking on an advertisement
Seo: came to the site by clicking on search results
Direct: came to the site by directly typing the URL on the browser
total_pages_visited: number of total pages visited during the session. This is a proxy for time spent on site and engagement during the session.
converted: this is our label. 1 means they converted within the session, 0 means they left without buying anything. The company goal is to increase conversion rate: # conversions / total sessions.
End of explanation
# 1. Throw out duplicated data:
#print(len(df))
#df = df[df.duplicated() == False]
#print(len(df))
# Probably don't drop duplicated for this kind of data
# 2. Check for NaNs:
print(df.isnull().values.any())
#No null values
Explanation: <a id = 'section1'></a>
Check for missing data, or NaN
End of explanation
df['converted'] = df['converted'].astype('category')
df['new_user'] = df['new_user'].astype('category')
df.describe()
Explanation: <a id='section2'></a>
Data exploration
End of explanation
print(df[df['age']>100])
df = df.drop(df[df['age']>100].index)
% matplotlib inline
import seaborn as sns
import matplotlib as plt
g = sns.pairplot(df[["age", "new_user", "total_pages_visited", "converted"]], hue="converted", diag_kind="hist")
for ax in g.axes.flat:
plt.artist.setp(ax.get_xticklabels(), rotation=45)
Explanation: Some reported ages go up to 123 years! Let's put a top limit of 100 years.
End of explanation
age_categories={1:'17 to 24',2:'24 to 29',3:'30 to 35',4:'36 to 100'}
def get_age_category(x):
    """Gets an age category label for age x."""
    global age_categories
    if (x < 24):
        category = 1
    elif (x < 29):
        category = 2
    elif (x < 35):
        category = 3
    else:
        category = 4
    return age_categories[category]
df['age_category'] = df['age'].apply(get_age_category)
df['age_category'] = df['age_category'].astype('category')
df.describe()
#df_age = df.groupby('age_category').agg({'converted':'count'})
#df_age
df_age = df.groupby('age_category').agg({'converted':'value_counts'}).rename(
columns={'converted':'converted_counts'})
#df_age = df_age.sort_values('converted_counts',ascending=True)
df_age
df_age.index
df_age_converted = df_age.query('(converted==1)').sort_values('converted_counts',ascending=True)
df_age_converted.index = df_age_converted.index.droplevel(1)
df_age_converted
plt.pyplot.figure(figsize=(15, 4))
df_age_converted.plot(kind='barh')
# Initialize the matplotlib figure
#f, ax = plt.pyplot.subplots(figsize=(15, 6))
#Set context, increase font size
sns.set_context("poster", font_scale=1.5)
#Create a figure
plt.pyplot.figure(figsize=(15, 4))
#Define the axis object
ax = sns.barplot(x='converted_counts', y=df_age_converted.index, data=df_age_converted, palette="Blues_d")
#set paramters
ax.set(xlabel='Total converted', ylabel='Age category', title= "Conversions by age")
#show the plot
sns.plt.show()
Explanation: Many of the features seem to be correlated with conversion, let's analyze each one in detail.
<a id='age'></a>
Age
End of explanation
sns.set(style="whitegrid")
#sns.barplot( x="converted", y="age_category", data=df,
# label="Total", color="b")
g = sns.FacetGrid(df, col="age_category")
g.map(sns.barplot, "total_pages_visited","converted",palette="Blues_d")
Explanation: <a id = 'pages_visited'></a>
Pages visited
End of explanation
df_new_user = df.query( '( new_user==1) ')
df_other = df.query( '( new_user==0) ')
df_new_user = df_new_user.groupby('age_category').agg({'converted':'value_counts'})
df_other = df_other.groupby('age_category').agg({'converted':'value_counts'})
print(df_new_user)
print(df_other)
# Get ratio of converted people by age category:
def get_ratio(df_input,age_category):
non_converted = df_input.loc[age_category,0].values[0]
converted = df_input.loc[age_category,1].values[0]
ratio = converted/(non_converted + converted)
return ratio
converted_ratio = []
for age_category in age_categories.values():
ratio_new_user = get_ratio(df_new_user,age_category)
ratio_other = get_ratio(df_other,age_category)
converted_ratio.append([age_category,'new user',ratio_new_user])
converted_ratio.append([age_category,'other user',ratio_other])
df_ratio = pd.DataFrame(data=converted_ratio,columns=['age category','user kind','ratio'])
df_ratio['age category'] = df_ratio['age category'].astype('category')
df_ratio['user kind'] = df_ratio['user kind'].astype('category')
df_ratio.head()
# Initialize the matplotlib figure
#f, ax = plt.pyplot.subplots(figsize=(15, 6))
#Set context, increase font size
sns.set_context("poster", font_scale=1.5)
#Create a figure
plt.pyplot.figure(figsize=(15, 4))
#Define the axis object
ax = sns.barplot(x=df_ratio.ratio, y=df_ratio['age category'], hue='user kind', data=df_ratio, palette="Blues_d")
#set parameters
ax.set(xlabel='Ratio of converted users', ylabel='Age category', title= "Conversions by age")
#show the plot
sns.plt.show()
Explanation: Total pages visited is correlated with conversion independent of age.
<a id='new_user'></a>
New user
End of explanation |
13,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
Here we test different classification algorithms for the generated graph measures dataset. Section 3.6 of the report describes the algorithms and results.
Step1: Visualisation with t-SNE and PCA
Some data visualisation using t-SNE and PCA. t-SNE is explained in Section 3.6.1 of the report.
Step4: Implementation of the SMOTE Algorithm
See
Step5: SMOTE
For each threshold, generate new samples for underrepresented classes (MCI and CS) using SMOTE, append them to the initial dataset and store them in the SMOTE_data dictionary.
Step6: Dummy Classifier
Use a dummy classifier as a simple baseline for comparison with other models. The strategy of the dummy classifier is to choose the most frequent class.
Step7: Look at different classifier performances such as
Step8: Logistic Regression with SMOTE
Step9: SVMs
Step10: Multi-class AdaBoosted Decision Trees
Step11: Multi-class AdaBoosted Decision Trees with SMOTE
Step12: Random Forest Classifier
Step13: Random Forest Classifier with SMOTE data | Python Code:
import numpy as np
import random
random.seed(20)
import matplotlib
# Set backend to pgf
matplotlib.use('pgf')
import matplotlib.pyplot as plt
# Some nice default configuration for plots
plt.rcParams['figure.figsize'] = 10, 7.5
plt.rcParams['axes.grid'] = True
#plt.gray()
%matplotlib inline
from scipy.io import loadmat
from pylab import *
from sklearn.pipeline import Pipeline
from sklearn import preprocessing
from sklearn import cross_validation
from sklearn.cross_validation import cross_val_score
from sklearn.cross_validation import LeaveOneOut
from sklearn import svm
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from sklearn import metrics
# set random Seed
randomSeed = 20;
# TODO: Change accordingly
CONNECTIVITY_MEASURE = 'dWPLI'
DATASETS_FOLDER = '/home/dragos/DTC/MSc/SummerProject/processed_data/features/'
DATASETS_FOLDER = DATASETS_FOLDER + CONNECTIVITY_MEASURE + '/full_graph/datasets/'
nameOfDataFileMat = 'datasetFullGraphMeasures.mat'
nameOfDataFileCSV = 'datasetFullGraphMeasures.csv'
# the .mat file is the one loaded with loadmat() in the cells below
nameOfDataFile = nameOfDataFileMat
# threshold vector
thresholdVec = [0.05, 0.1, 0.15, 0.2, 0.3]
Explanation: Classification
Here we test different classification algorithms for the generated graph measures dataset. Section 3.6 of the report describes the algorithms and results.
End of explanation
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
for threshold in thresholdVec:
data_file_path = DATASETS_FOLDER + str(threshold) + '/' + nameOfDataFile
# load dataset
data_dict = loadmat(data_file_path)
data = data_dict['dataset']
theThreshold = data_dict['threshold']
n_samples = data.shape[0]
features = data[:, :-1]
targets = data[:, -1]
scaler = preprocessing.StandardScaler().fit(features)
X_SCL = scaler.transform(features)
model = TSNE(n_components=2, random_state=0)
#model = TSNE(learning_rate=100)
X_TSNE = model.fit_transform(X_SCL)
X_PCA = PCA().fit_transform(X_SCL)
figure(figsize=(10, 5))
subplot(121)
scatter(X_TSNE[:, 0], X_TSNE[:, 1], c=targets)
subplot(122)
scatter(X_PCA[:, 0], X_PCA[:, 1], c=targets)
Explanation: Visualisation with t-SNE and PCA
Some data visualisation using t-SNE and PCA. t-SNE is explained in Section 3.6.1 of the report.
End of explanation
#!/usr/bin/env python
# -*- coding: utf-8 -*-
'''
The MIT License (MIT)
Copyright (c) 2012-2013 Karsten Jeschkies <[email protected]>
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
'''
'''
Created on 24.11.2012
@author: karsten jeschkies <[email protected]>
This is an implementation of the SMOTE Algorithm.
See: "SMOTE: synthetic minority over-sampling technique" by
Chawla, N.V et al.
'''
import logging
import numpy as np
from random import randrange, choice
from sklearn.neighbors import NearestNeighbors
logger = logging.getLogger("main")
def SMOTE(T, N, k, h = 1.0):
Returns (N/100) * n_minority_samples synthetic minority samples.
Parameters
----------
T : array-like, shape = [n_minority_samples, n_features]
Holds the minority samples
N : percetange of new synthetic samples:
n_synthetic_samples = N/100 * n_minority_samples. Can be < 100.
k : int. Number of nearest neighbours.
Returns
-------
S : Synthetic samples. array,
shape = [(N/100) * n_minority_samples, n_features].
n_minority_samples, n_features = T.shape
if N < 100:
#create synthetic samples only for a subset of T.
#TODO: select random minortiy samples
N = 100
pass
if (N % 100) != 0:
raise ValueError("N must be < 100 or multiple of 100")
N = N/100
n_synthetic_samples = N * n_minority_samples
S = np.zeros(shape=(n_synthetic_samples, n_features))
#Learn nearest neighbours
neigh = NearestNeighbors(n_neighbors = k)
neigh.fit(T)
#Calculate synthetic samples
for i in xrange(n_minority_samples):
nn = neigh.kneighbors(T[i], return_distance=False)
for n in xrange(N):
nn_index = choice(nn[0])
#NOTE: nn includes T[i], we don't want to select it
while nn_index == i:
nn_index = choice(nn[0])
dif = T[nn_index] - T[i]
gap = np.random.uniform(low = 0.0, high = h)
S[n + i * N, :] = T[i,:] + gap * dif[:]
return S
def borderlineSMOTE(X, y, minority_target, N, k):
Returns synthetic minority samples.
Parameters
----------
X : array-like, shape = [n__samples, n_features]
Holds the minority and majority samples
y : array-like, shape = [n__samples]
Holds the class targets for samples
minority_target : value for minority class
N : percetange of new synthetic samples:
n_synthetic_samples = N/100 * n_minority_samples. Can be < 100.
k : int. Number of nearest neighbours.
h : high in random.uniform to scale dif of snythetic sample
Returns
-------
safe : Safe minorities
synthetic : Synthetic sample of minorities in danger zone
danger : Minorities of danger zone
n_samples, _ = X.shape
#Learn nearest neighbours on complete training set
neigh = NearestNeighbors(n_neighbors = k)
neigh.fit(X)
safe_minority_indices = list()
danger_minority_indices = list()
for i in xrange(n_samples):
if y[i] != minority_target: continue
nn = neigh.kneighbors(X[i], return_distance=False)
majority_neighbours = 0
for n in nn[0]:
if y[n] != minority_target:
majority_neighbours += 1
if majority_neighbours == len(nn):
continue
elif majority_neighbours < (len(nn)/2):
logger.debug("Add sample to safe minorities.")
safe_minority_indices.append(i)
else:
#DANGER zone
danger_minority_indices.append(i)
#SMOTE danger minority samples
synthetic_samples = SMOTE(X[danger_minority_indices], N, k, h = 0.5)
return (X[safe_minority_indices],
synthetic_samples,
X[danger_minority_indices])
Explanation: Implementation of the SMOTE Algorithm
See: SMOTE: synthetic minority over-sampling technique by Chawla, N.V et al.
Link to code below.
End of explanation
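As a quick, illustrative sanity check of the SMOTE() helper defined above (relying on the legacy Python 2 / older scikit-learn behaviour this code targets), oversampling a tiny 2-D minority set by 200% with k=2 should return twice as many synthetic points as inputs:
# Minimal usage sketch for the SMOTE() function above (illustrative only).
toy_minority = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
toy_synthetic = SMOTE(toy_minority, N=200, k=2)
print(toy_synthetic.shape)  # expected: (8, 2), i.e. 200% of the 4 minority samples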
SMOTE_data = dict()
for threshold in thresholdVec:
data_file_path = DATASETS_FOLDER + str(threshold) + '/' + nameOfDataFile
# load dataset
data_dict = loadmat(data_file_path)
data = data_dict['dataset']
theThreshold = data_dict['threshold']
features = data[:, :-1]
targets = data[:, -1]
### SMOTE - generate synthetic samples
k = 5
classIdxsCS = np.where(targets == 1)
classFeaturesCS = features[classIdxsCS]
CSsynth = SMOTE(classFeaturesCS, 38.46, k, h = 1.0) # get 10 synt samples
CSsynth = CSsynth[:10,:] # slice for the first 10
classIdxsMCI = np.where(targets == 2)
classFeaturesMCI = features[classIdxsMCI]
MCIsynth = SMOTE(classFeaturesMCI, 100, k, h = 1.0) # get 18 synt samples
## concatenate original samples with synthetic ones
features = np.concatenate((features, CSsynth, MCIsynth), axis=0)
CStargetssyn = np.full( (CSsynth.shape[0], ), 1)
MCItargetssyn = np.full( (MCIsynth.shape[0], ), 2)
targets = np.concatenate((targets, CStargetssyn, MCItargetssyn), axis=0)
SMOTE_data[threshold] = (features, targets)
print(SMOTE_data.keys())
Explanation: SMOTE
For each threshold, generate new samples for underrepresented classes (MCI and CS) using SMOTE, append them to the initial dataset and store them in the SMOTE_data dictionary.
End of explanation
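A quick sanity-check sketch of the class balance after augmentation (class labels 1=CS, 2=MCI, 3=AD, as used in the reports below) confirms that the minority classes were topped up:
# Sanity check: class counts in the augmented dataset for the first threshold.
_, targets_aug = SMOTE_data[thresholdVec[0]]
print(np.unique(targets_aug, return_counts=True))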
from sklearn.dummy import DummyClassifier
model = Pipeline([
('scaler', preprocessing.StandardScaler() ),
('clf', DummyClassifier(strategy='most_frequent', random_state=randomSeed))
])
for threshold in thresholdVec:
data_file_path = DATASETS_FOLDER + str(threshold) + '/' + nameOfDataFile
# load dataset
data_dict = loadmat(data_file_path)
data = data_dict['dataset']
theThreshold = data_dict['threshold']
n_samples = data.shape[0]
features = data[:, :-1]
targets = data[:, -1]
loo = LeaveOneOut(n_samples)
p = []
t = []
for train,test in loo:
model.fit(features[train], targets[train])
p.append(model.predict(features[test]))
t.append(targets[test])
p=vstack(p)
target_Classes = ['CS', 'MCI', 'AD']
print(metrics.confusion_matrix(t,p))
print(metrics.classification_report(t,p, labels=[1,2,3],target_names=target_Classes))
Explanation: Dummy Classifier
Use a dummy classifier as a simple baseline for comparison with other models. The strategy of the dummy classifier is to choose the most frequent class.
End of explanation
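The score of this baseline can also be read directly off the class frequencies; as a small sketch (reusing the targets array left over from the last threshold in the loop above), the most-frequent-class accuracy is simply the majority class proportion:
# Majority-class baseline accuracy, computed directly from the label counts.
labels, counts = np.unique(targets, return_counts=True)
print(dict(zip(labels, counts)))
print('Majority-class baseline accuracy: %.3f' % (counts.max() / float(counts.sum())))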
from sklearn.linear_model import LogisticRegression
model = Pipeline([
('scaler', preprocessing.StandardScaler() ),
('clf', LogisticRegression(C=1, random_state=randomSeed))
])
for threshold in thresholdVec:
data_file_path = DATASETS_FOLDER + str(threshold) + '/' + nameOfDataFile
# load dataset
data_dict = loadmat(data_file_path)
data = data_dict['dataset']
theThreshold = data_dict['threshold']
n_samples = data.shape[0]
features = data[:, :-1]
targets = data[:, -1]
loo = LeaveOneOut(n_samples)
p = []
t = []
for train,test in loo:
model.fit(features[train], targets[train])
p.append(model.predict(features[test]))
t.append(targets[test])
p=vstack(p)
target_Classes = ['CS', 'MCI', 'AD']
print(metrics.confusion_matrix(t,p))
print(metrics.classification_report(t,p, labels=[1,2,3],target_names=target_Classes))
Explanation: Look at different classifier performances such as:
Logistic Regression
End of explanation
from sklearn.linear_model import LogisticRegression
model = Pipeline([
('scaler', preprocessing.StandardScaler() ),
('clf', LogisticRegression(C=1, random_state=randomSeed))
])
for threshold in thresholdVec:
# load dataset from SMOTE_data dictionary
features = SMOTE_data[threshold][0]
targets = SMOTE_data[threshold][1]
n_samples = features.shape[0]
loo = LeaveOneOut(n_samples)
p = []
t = []
for train,test in loo:
model.fit(features[train], targets[train])
p.append(model.predict(features[test]))
t.append(targets[test])
p=vstack(p)
print(metrics.confusion_matrix(t,p))
print(metrics.classification_report(t,p, labels=[1,2,3],target_names=target_Classes))
Explanation: Logistic Regression with SMOTE
End of explanation
from sklearn.grid_search import GridSearchCV
for threshold in thresholdVec:
#threshold = 0.05
data_file_path = DATASETS_FOLDER + str(threshold) + '/' + nameOfDataFile
# load dataset
data_dict = loadmat(data_file_path)
data = data_dict['dataset']
theThreshold = data_dict['threshold']
n_samples = data.shape[0]
features = data[:, :-1]
targets = data[:, -1]
#cv = cross_validation.ShuffleSplit(n_samples, n_iter=n_samples,
# test_size=1, random_state=randomSeed)
param_grid = [
{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': np.logspace(-4, 0, 5), 'kernel': ['rbf']},
]
#pprint(param_grid)
loo = LeaveOneOut(n_samples)
gs_svc = GridSearchCV(svm.SVC(C=1), param_grid, cv=loo)
gs_svc.fit(features, targets)
print(gs_svc.best_score_)
#clf = svm.SVC(kernel='rbf')
#clf = svm.SVC(kernel='rbf', C=100, gamma=0.001)
#clf = svm.SVC(kernel='rbf', C=10, gamma=0.005)
#print(clf.get_params())
#clf.fit(features[:-1], targets[:-1])
#scores = cross_validation.cross_val_score(gs_svc, features, targets, cv=loo)
#print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
#print("\n")
Explanation: SVMs
End of explanation
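Beyond the best cross-validated score, it can help to inspect which hyper-parameters the grid search selected; a small addition (reusing gs_svc from the last threshold fitted above):
# Inspect the SVM hyper-parameters chosen by the last fitted grid search.
print(gs_svc.best_params_)
print(gs_svc.best_estimator_)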
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import MinMaxScaler
model = Pipeline([
('scaler', preprocessing.StandardScaler() ),
('clf', AdaBoostClassifier(
DecisionTreeClassifier(), # max_depth=2 in example
n_estimators=300, # 600 in example
learning_rate=1,
random_state=randomSeed))
])
for threshold in thresholdVec:
data_file_path = DATASETS_FOLDER + str(threshold) + '/' + nameOfDataFile
# load dataset
data_dict = loadmat(data_file_path)
data = data_dict['dataset']
theThreshold = data_dict['threshold']
n_samples = data.shape[0]
features = data[:, :-1]
targets = data[:, -1]
bdt_real = AdaBoostClassifier(
DecisionTreeClassifier(), # max_depth=2 in example
n_estimators=300, # 600 in example
learning_rate=1)
bdt_discrete = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=2),
n_estimators=300,
learning_rate=1.5,
algorithm="SAMME")
loo = LeaveOneOut(n_samples)
p = []
t = []
for train,test in loo:
model.fit(features[train], targets[train])
p.append(model.predict(features[test]))
t.append(targets[test])
p=vstack(p)
target_Classes = ['CS', 'MCI', 'AD']
print(metrics.confusion_matrix(t,p))
print(metrics.classification_report(t,p, labels=[1,2,3],target_names=target_Classes))
Explanation: Multi-class AdaBoosted Decision Trees
End of explanation
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import MinMaxScaler
model = Pipeline([
('scaler', preprocessing.StandardScaler() ),
('clf', AdaBoostClassifier(
DecisionTreeClassifier(), # max_depth=2 in example
n_estimators=300, # 600 in example
learning_rate=1,
random_state=randomSeed))
])
for threshold in thresholdVec:
data_file_path = DATASETS_FOLDER + str(threshold) + '/' + nameOfDataFile
# load dataset from SMOTE_data dictionary
features = SMOTE_data[threshold][0]
targets = SMOTE_data[threshold][1]
n_samples = features.shape[0]
loo = LeaveOneOut(n_samples)
#bdt_discrete = AdaBoostClassifier(
# DecisionTreeClassifier(max_depth=2),
# n_estimators=300,
# learning_rate=1,
# algorithm="SAMME")
p = []
t = []
for train,test in loo:
model.fit(features[train], targets[train])
p.append(model.predict(features[test]))
t.append(targets[test])
p=vstack(p)
target_Classes = ['CS', 'MCI', 'AD']
print(metrics.confusion_matrix(t,p))
print(metrics.classification_report(t,p, labels=[1,2,3],target_names=target_Classes))
Explanation: Multi-class AdaBoosted Decision Trees with SMOTE
End of explanation
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
model = Pipeline([
('scaler', preprocessing.StandardScaler() ),
('clf', RandomForestClassifier(
n_estimators=200,
random_state=randomSeed)
)
])
for threshold in thresholdVec:
# load dataset for the current threshold
data_file_path = DATASETS_FOLDER + str(threshold) + '/' + nameOfDataFile
data_dict = loadmat(data_file_path)
data = data_dict['dataset']
theThreshold = data_dict['threshold']
n_samples = data.shape[0]
features = data[:, :-1]
targets = data[:, -1]
n_samples = features.shape[0]
loo = LeaveOneOut(n_samples)
p = []
t = []
for train,test in loo:
model.fit(features[train], targets[train])
p.append(model.predict(features[test]))
t.append(targets[test])
p=vstack(p)
target_Classes = ['CS', 'MCI', 'AD']
print(metrics.confusion_matrix(t,p))
print(metrics.classification_report(t,p, labels=[1,2,3], target_names=target_Classes))
Explanation: Random Forest Classifier
End of explanation
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
model = Pipeline([
('scaler', preprocessing.StandardScaler() ),
('clf', RandomForestClassifier(
n_estimators=200,
random_state=randomSeed)
)
])
for threshold in thresholdVec:
# load dataset from SMOTE_data dictionary
features = SMOTE_data[threshold][0]
targets = SMOTE_data[threshold][1]
n_samples = features.shape[0]
loo = LeaveOneOut(n_samples)
p = []
t = []
for train,test in loo:
model.fit(features[train], targets[train])
p.append(model.predict(features[test]))
t.append(targets[test])
p=vstack(p)
target_Classes = ['CS', 'MCI', 'AD']
print(metrics.confusion_matrix(t,p))
print(metrics.classification_report(t,p, labels=[1,2,3], target_names=target_Classes))
Explanation: Random Forest Classifier with SMOTE data
End of explanation |
13,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explauto, an open-source Python library to study autonomous exploration in developmental robotics
Explauto is an open-source Python library providing a unified API to design and compare various exploration strategies driving various sensorimotor learning algorithms in various simulated or robotic systems. Explauto aims at being collaborative and pedagogic, providing a platform where developmental roboticists can publish and compare their algorithmic contributions related to autonomous exploration and learning, as well as a platform for teaching and scientific diffusion. It is available on github.
The library is organized in three main packages, each one containing a collection of interchangeable modules
Step1: According to your installation, you will see at least two available environments
Step2: For example, the 'mid_dimensional' configuration corresponds to
Step3: One can use this method with every registered environments. For example the available configurations for the pendulum are
Step4: Let's instantiate a mid-dimensional simple arm
Step5: Each particular environment has to implement its own compute_sensori_effect method, which takes as argument a motor command vector $m$ (here the position of the joints, 7-dimensional). It returns the corresponding sensory effect vector $s$ (here the coordinate of the hand, $2$-dimensional).
Step6: Environments can implement specific methods for, e.g., drawing
Step7: The base of the arm is fixed at (0, 0) (circle). The first angle position m[0] corresponds to the angle between a horizontal line and the segment attached to the base, anticlock-wise. Each following angle position is measured with respect to their respective previous segment.
The Environment base class provides several useful methods in order to, e.g., sample random motor commands
Step8: Let's for example plot 10 random arm configurations
Step9: Dynamical environments are also available, though their integration with the rest of the library is not yet completly clear (to discuss later). E.g., a circular pendulum
Step10: The compute_sensori_effect method is also defined (using a motor primitive)
Step11: But let's continue this tutorial using a mid-dimensional simple arm
Step12: Learning sensorimotor models
In Explauto, a sensorimotor model implements both the iterative learning process from sensorimotor experience, i.e. from the iterative collection of $(m, s)$ pairs by interaction with the environment, and the use of the resulting internal model to perform forward and inverse predictions (or any kind of general prediction between sensorimotor subspaces).
Learning sensorimotor mappings involves machine learning algorithms, for which Explauto provides a unified interface through the SensorimotorModel abstract class.
Using the simple arm environment above, it allows to iteratively learn a sensorimotor model which will be able to
Step13: Here we will use the 'nearest neighbor' model. This sensorimotor model simply stores sensorimotor experience, ie. $(m, s)$ pairs where $m$ is a motor command (here arm joint positions) and $s$ the corresponding sensory effect (here end-effector positions). When asked for a forward prediction for a given motor command $m$, it returns the associated sensory effect $s$ of the nearest neighbor of $m$ in the stored sensorimotor experience. When asked for an inverse prediction to reach a sensory goal $s$, it returns the associated motor command $m$ of the nearest neighbor of $s$ in the stored sensorimotor experience, possibly pertubated with a bit gaussian noise.
Step14: We will use the 'exact' configuration, which perform forward and inverse prediction as explained above, without any noise added (ie., it just looks for the nearest neighbor).
Now we can instantiate the sensorimotor model by using
Step15: Note that in addition to the names of the model and its configuration, one also has to pass environment.conf. This a Configuration object which is instantiated during the environment creation and provides information about the motor and sensorimotor ranges used by the environment. It is useful for the sensorimotor model to be properly configured. When using the 'default' configuration for example, the added noise when performing inverse prediction depends on the motor ranges. Passing environment.conf thus allows to define sensorimotor model configurations independently of particular environment settings.
Now let's train the model from the execution of random motor commands (i.e. random motor babbling)
Step16: Note that sensorimotor model training in Explauto is an iterative process. They incorporate new sensorimotor experience on the fly instead of using batch training. This is a requirement for autonomous exploration where the internal model has to be refined online.
Once the sensorimodel has been trained, one can perform forward and inverse prediction with it. Let's predict the sensori effect of a new random motor command (which is not in the training set we just used) using the forward_prediction method
Step17: and compare the predicted effect with the real effect observed from executing $m$ through the environment
Step18: We observe that the predicted end-effector position is quite close to the observed position when executing the motor command. Using the 'NN' model, it simply corresponds to the sensory effect of the nearest neighbor of $m$ in the stored sensorimotor experience.
Sensorimotor models can also be used for inverse prediction using the inverse_prediction method, allowing the inference of an appropriate motor comand $m$ in order to reach a given sensory goal $s_g$
Step19: We can check if the inferred motor command is actually appropriate to reach the goal $s_g$
Step20: We observe that the inferred motor command results in an end-effector position which is quite close to the goal. Using the 'exact' configuration of the 'nearest_neighbor' model, it is simply the motor command which resulted in the sensory effect which is the closest to $s_g$ in the stored experience.
Here is a slightly more complex example where the arm attempts to follow a vertical straight line with the end-effector
Step21: Using another sensorimotor model in Explauto simply consists of changing the model name and configuration above. For example, you can try to execute the exact same code, just replacing the model instanciation by
Step22: Motor and goal babbling using interest models
In Explauto, the role of interest models is to provide sensorimotor predictions (forward or inverse) to be performed by the sensorimotor model. An interest model implements the active exploration process, where sensorimotor experiments are chosen to improve the forward or inverse predictions of the sensorimotor model. It explores in a given interest space resulting in motor babbling strategies when it corresponds to the motor space and in goal babbling strategies when it corresponds to the sensory space.
An interest model has to implement a sampling procedure in the interest space. Explauto provides several sampling procedures
Step23: and the available configurations of a given model by
Step24: Using an environment, a sensorimotor and an interest model, one can run a motor babbling strategy by
Step25: Then running the following simulation loop and (optionally) plotting the reached sensory effects
Step26: (The plots are quite hugly here, we will present Explauto visualization tools in the following.)
Random goal babbling corresponds to
Step27: We observe that goal babbling allow a more uniform covering of the sensory space.
And finally, here is the code for curiosity-driven goal babbling (maximization of the learning progress)
Step28: The reached point obtained above do not well cover the sensory space. This is due to the fact that we did not re-initialize the sensorimotor model (therefore this latter was already trained) to avoid some bootstrapping issues. The next section shows how to encapsulate a sensorimotor and an interest models into an agent to, among other things, take care of those bootstrapping issues.
Encapsulating a sensorimotor and an interest models into an agent
Encapsulating a sensorimotor and an interest models into an agent allows to generalize and simplify the simulation loop whatever the exploration strategy involved, ie whatever the type of babbling, the sensorimotor and the interest models. In Explauto, an agent is intantiated using a configuration (generally from an environment), a sensorimotor and an interest models
Step29: An agent is provided with two methods. One for producing a motor command
Step30: The produce() method calls the sample() method of the interest model, which returns either a motor command or a sensory goal according to the interest space (i.e. the type of babbling). Then it uses the sensorimotor model to complete the obtained value into a full sensorimotor vector (using forward prediction in case of motor babbling and inverse prediction in case of goal babbling). Finally it returns the motor part of this full sensorimotor vector. Agents also take care of model bootstrapping issues.
The second main agent method is perceive(), which informs the agent with the sensorimotor consequence of its action in order to update both the sensorimotor and the interest models
Step31: Hence the entire simulation loop can now be rewritten as
Step32: This loop is valid whatever the exploration strategy involved. The corresponding formal framework is defined in
Step33: and run it using the exact same loop
Step34: Of course lack a way to visualize the result of our simulations here, this is why we introduce Explauto's Experiment in the next section.
Encapsulating an environment and an agent into an experiment
Encapsulating an environment and an agent into an experiment allows to evaluate agent learning and offers plotting facilities. Once an environment and an agent have been constructed, one can set an experiment using
Step35: An experiment offers the management of the simulation loop with evaluation, logging and plotting capabilities. Instead of seperately constructing the environment and the agent (containing the sensorimotor and the interest models), one can simply use
Step36: This is the compact way to construct the environment (here a mid-dimensional 'simple_arm'), the sensorimotor model (here, 'NN') and the interest model (here curiosity-driven goal babbling) and encapsulate them into an experiment.
An experiment allows to insert an evaluation phase at given time steps
Step37: Now let's run the experiment
Step38: This executes the same simulation loop as above, inserting an evaluation phase at each specified time step and logging the flow of interest model choices, sensorimotor model inferences and sensorimotor observations. This allows to, e.g., visualize the chosen goals and reached hand positions during the experiment using the scatter_plot method
Step39: or to vizualize the learning curve
Step40: Parallel comparison of exploration strategies
Various exploration strategies can be launched in parallel and compared by using an experiment pool
Step41: running it
Step42: comparing learning curves
Step43: or vizualize the iterative choice of goals and the reached effects | Python Code:
from __future__ import print_function
from explauto.environment import environments
environments.keys()
Explanation: Explauto, an open-source Python library to study autonomous exploration in developmental robotics
Explauto is an open-source Python library providing a unified API to design and compare various exploration strategies driving various sensorimotor learning algorithms in various simulated or robotic systems. Explauto aims at being collaborative and pedagogic, providing a platform where developmental roboticists can publish and compare their algorithmic contributions related to autonomous exploration and learning, as well as a platform for teaching and scientific diffusion. It is available on github.
The library is organized in three main packages, each one containing a collection of interchangeable modules:
* The environment package provides a unified interface to real and simulated robots.
* The sensorimotor_model package provides a unified interface to online machine learning algorithm.
* The interest_model package provides a unified interface for the active choice of sensorimotor experiments.
The library is easily extendable by forking the github repository and proposing new modules for each package (tutorial to come; do not hesitate to contact us if you want to get involved).
This tutorial shows how to use the modules contained in these three packages, how to integrate them in simulation loops and how to analyse the results.
Setting environments
In Explauto, an environment implements the physical properties of the interaction between the robot body and the environment in which it evolves. Explauto comes with several sensorimotor systems available from the environment package:
End of explanation
from explauto.environment import available_configurations
available_configurations('simple_arm').keys()
Explanation: According to your installation, you will see at least two available environments:
* a multi-joint arm acting on a plan ('simple_arm')
* an under-actuated torque-controlled circular pendulum ('pendulum').
These environments are simulated. Explauto also provides an interface to real robots based on Dynamixel actuators by providing bindings to the Pypot library (this tutorial shows how to use it on a Poppy robot).
We will use the simple arm for this tutorial. It consists in the simulation of a $n$ degrees-of-freedom (DoF) arm with movements limited to a 2D plan. Each available environment comes with a set of predefined configurations. A default configuration will always be defined. For the simple arm they are:
End of explanation
available_configurations('simple_arm')['mid_dimensional']
Explanation: For example, the 'mid_dimensional' configuration corresponds to:
End of explanation
available_configurations('pendulum').keys()
Explanation: One can use this method with every registered environments. For example the available configurations for the pendulum are:
End of explanation
from explauto import Environment
environment = Environment.from_configuration('simple_arm', 'mid_dimensional')
Explanation: Let's instantiate a mid-dimensional simple arm:
End of explanation
from numpy import pi
m = [-pi/6., pi/3., pi/4., pi/5., 0., pi/3., pi/6.]
environment.compute_sensori_effect(m)
Explanation: Each particular environment has to implement its own compute_sensori_effect method, which takes as argument a motor command vector $m$ (here the position of the joints, 7-dimensional). It returns the corresponding sensory effect vector $s$ (here the coordinate of the hand, $2$-dimensional).
End of explanation
# Create the axes for plotting::
%pylab inline
ax = axes()
# plot the arm:
environment.plot_arm(ax, m)
Explanation: Environments can implement specific methods for, e.g., drawing:
End of explanation
motor_configurations = environment.random_motors(n=10)
Explanation: The base of the arm is fixed at (0, 0) (circle). The first angle position m[0] corresponds to the angle between a horizontal line and the segment attached to the base, anticlock-wise. Each following angle position is measured with respect to their respective previous segment.
The Environment base class provides several useful methods in order to, e.g., sample random motor commands:
End of explanation
# Create the axes for plotting::
%pylab inline
ax = axes()
# Plotting 10 random motor configurations:
for m in motor_configurations:
environment.plot_arm(ax, m)
Explanation: Let's for example plot 10 random arm configurations:
End of explanation
environment = Environment.from_configuration('pendulum', 'default')
%pylab
ax = axes()
# Sequence of torques at each time step:
U = [0.25] * 15 + [-0.25] * 15 + [0.25] * 19
# reset to lower position:
environment.reset()
# apply torque and plot:
for u in U:
ax.cla()
environment.apply_torque(u)
environment.plot_current_state(ax)
draw()
Explanation: Dynamical environments are also available, though their integration with the rest of the library is not yet completly clear (to discuss later). E.g., a circular pendulum:
End of explanation
environment.compute_sensori_effect(environment.random_motors())
Explanation: The compute_sensori_effect method is also defined (using a motor primitive):
End of explanation
environment = Environment.from_configuration('simple_arm', 'mid_dimensional')
Explanation: But let's continue this tutorial using a mid-dimensional simple arm:
End of explanation
from explauto.sensorimotor_model import sensorimotor_models
sensorimotor_models.keys()
Explanation: Learning sensorimotor models
In Explauto, a sensorimotor model implements both the iterative learning process from sensorimotor experience, i.e. from the iterative collection of $(m, s)$ pairs by interaction with the environment, and the use of the resulting internal model to perform forward and inverse predictions (or any kind of general prediction between sensorimotor subspaces).
Learning sensorimotor mappings involves machine learning algorithms, for which Explauto provides a unified interface through the SensorimotorModel abstract class.
Using the simple arm environment above, it allows to iteratively learn a sensorimotor model which will be able to:
* infer the position of the end-effector from a given motor command, what is called forward prediction,
* infer the motor command allowing to reach a particular end-effector position, what is called inverse prediction.
* update online from sensorimotor experience
Several sensorimotor models are provided: simple nearest-neighbor look-up, non-parametric models combining classical regressions and optimization algorithms, online local mixtures of Gaussians (beta).
Similarly to environments, available sensorimotor models in Explauto can be accessed using:
End of explanation
from explauto.sensorimotor_model import available_configurations
available_configurations('nearest_neighbor')
Explanation: Here we will use the 'nearest neighbor' model. This sensorimotor model simply stores sensorimotor experience, i.e. $(m, s)$ pairs where $m$ is a motor command (here arm joint positions) and $s$ the corresponding sensory effect (here end-effector positions). When asked for a forward prediction for a given motor command $m$, it returns the associated sensory effect $s$ of the nearest neighbor of $m$ in the stored sensorimotor experience. When asked for an inverse prediction to reach a sensory goal $s$, it returns the associated motor command $m$ of the nearest neighbor of $s$ in the stored sensorimotor experience, possibly perturbed with a bit of Gaussian noise.
End of explanation
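For intuition only, here is a rough standalone sketch (not Explauto's implementation) of what this 'exact' nearest-neighbor lookup amounts to, using scikit-learn's NearestNeighbors over stored (m, s) pairs collected from the environment defined above:
# Conceptual sketch of an exact nearest-neighbor forward/inverse lookup.
from sklearn.neighbors import NearestNeighbors
import numpy as np

M = environment.random_motors(n=500)                              # stored motor commands
S = np.array([environment.compute_sensori_effect(m) for m in M])  # stored sensory effects

nn_m = NearestNeighbors(n_neighbors=1).fit(M)
nn_s = NearestNeighbors(n_neighbors=1).fit(S)

def forward_nn(m):
    # forward prediction: effect of the stored command closest to m
    return S[nn_m.kneighbors([m], return_distance=False)[0][0]]

def inverse_nn(s_goal):
    # inverse prediction: command whose stored effect is closest to s_goal
    return M[nn_s.kneighbors([s_goal], return_distance=False)[0][0]]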
from explauto import SensorimotorModel
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'exact')
Explanation: We will use the 'exact' configuration, which perform forward and inverse prediction as explained above, without any noise added (ie., it just looks for the nearest neighbor).
Now we can instantiate the sensorimotor model by using:
End of explanation
for m in environment.random_motors(n=1000):
# compute the sensori effect s of the motor command m through the environment:
s = environment.compute_sensori_effect(m)
# update the model according to this experience:
sm_model.update(m, s)
Explanation: Note that in addition to the names of the model and its configuration, one also has to pass environment.conf. This a Configuration object which is instantiated during the environment creation and provides information about the motor and sensorimotor ranges used by the environment. It is useful for the sensorimotor model to be properly configured. When using the 'default' configuration for example, the added noise when performing inverse prediction depends on the motor ranges. Passing environment.conf thus allows to define sensorimotor model configurations independently of particular environment settings.
Now let's train the model from the execution of random motor commands (i.e. random motor babbling):
End of explanation
# random motor command:
m = environment.random_motors(n=1)[0]
# predicted sensory effect:
s_pred = sm_model.forward_prediction(m)
print('random motor command: ', m)
print('predicted effect: ', s_pred)
Explanation: Note that sensorimotor model training in Explauto is an iterative process. They incorporate new sensorimotor experience on the fly instead of using batch training. This is a requirement for autonomous exploration where the internal model has to be refined online.
Once the sensorimodel has been trained, one can perform forward and inverse prediction with it. Let's predict the sensori effect of a new random motor command (which is not in the training set we just used) using the forward_prediction method:
End of explanation
%pylab inline
ax = axes()
environment.plot_arm(ax, m)
ax.plot(*s_pred, marker='o', color='red')
Explanation: and compare the predicted effect with the real effect observed from executing $m$ through the environment:
End of explanation
s_g = [0.7, 0.5]
m = sm_model.inverse_prediction(s_g)
print('Inferred motor command to reach the position ', s_g, ': ', m)
Explanation: We observe that the predicted end-effector position is quite close to the observed position when executing the motor command. Using the 'NN' model, it simply corresponds to the sensory effect of the nearest neighbor of $m$ in the stored sensorimotor experience.
Sensorimotor models can also be used for inverse prediction using the inverse_prediction method, allowing the inference of an appropriate motor comand $m$ in order to reach a given sensory goal $s_g$:
End of explanation
ax = axes()
environment.plot_arm(ax, m)
ax.plot(*s_g, marker='o', color='red')
Explanation: We can check if the inferred motor command is actually appropriate to reach the goal $s_g$:
End of explanation
ax = axes()
# Define the line and plot it:
x = 0.8
y_a = 0.5
y_b = -0.5
ax.plot([x, x], [y_a, y_b], color='red')
# for 10 points equidistantly spaced on the line, perform inverse prediction and plot:
for y in linspace(-0.5, 0.5, 10):
m = sm_model.inverse_prediction([x, y])
environment.plot_arm(ax, m)
Explanation: We observe that the inferred motor command results in an end-effector position which is quite close to the goal. Using the 'exact' configuration of the 'nearest_neighbor' model, it is simply the motor command which resulted in the sensory effect which is the closest to $s_g$ in the stored experience.
Here is a slightly more complex example where the arm attempts to follow a vertical straight line with the end-effector:
End of explanation
sm_model = SensorimotorModel.from_configuration(environment.conf, 'LWLR-BFGS', 'default')
Explanation: Using another sensorimotor model in Explauto simply consists of changing the model name and configuration above. For example, you can try to execute the exact same code, just replacing the model instantiation by:
End of explanation
from explauto.interest_model import interest_models, available_configurations
interest_models.keys()
Explanation: Motor and goal babbling using interest models
In Explauto, the role of interest models is to provide sensorimotor predictions (forward or inverse) to be performed by the sensorimotor model. An interest model implements the active exploration process, where sensorimotor experiments are chosen to improve the forward or inverse predictions of the sensorimotor model. It explores in a given interest space resulting in motor babbling strategies when it corresponds to the motor space and in goal babbling strategies when it corresponds to the sensory space.
An interest model has to implement a sampling procedure in the interest space. Explauto provides several sampling procedures:
* random sampling
* learning progress maximization in forward or inverse predictions.
* In development:
* social interaction (e.g. using a mouse pointer to interactively provide sensory goals)
* optimization toward a specific goal
Similarly to environments and sensorimotor models, available interest models in Explauto can be accessed using:
End of explanation
available_configurations('discretized_progress')
Explanation: and the available configurations of a given model by:
End of explanation
from explauto import InterestModel
im_model = InterestModel.from_configuration(environment.conf, environment.conf.m_dims, 'random')
Explanation: Using an environment, a sensorimotor and an interest model, one can run a motor babbling strategy by:
* first instantiate a random motor interest model:
End of explanation
# re-instantiate the sensorimotor model (to forget what was previously learnt in the previous section
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
# run the simulation loop
for _ in range(100):
# sample a random motor command using the interest model:
m = im_model.sample()
# execute this command and observe the corresponding sensory effect:
s = environment.compute_sensori_effect(m)
# update the sensorimotor model:
sm_model.update(m, s)
# plot the observed sensory effect:
plot(s[0], s[1], 'ok')
Explanation: Then running the following simulation loop and (optionally) plotting the reached sensory effects:
End of explanation
# Instantiate a random goal interest model:
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'random')
for _ in range(100):
# sample a random sensory goal using the interest model:
s_g = im_model.sample()
# infer a motor command to reach that goal using the sensorimotor model:
m = sm_model.inverse_prediction(s_g)
# execute this command and observe the corresponding sensory effect:
s = environment.compute_sensori_effect(m)
# update the sensorimotor model:
sm_model.update(m, s)
# plot the observed sensory effect:
plot(s[0], s[1], 'ok')
Explanation: (The plots are quite ugly here; we will present Explauto's visualization tools in the following.)
Random goal babbling corresponds to:
End of explanation
# Instantiate an active goal interest model:
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'discretized_progress')
for _ in range(100):
# sample a sensory goal maximizing learning progress using the interest model:
s_g = im_model.sample()
# infer a motor command to reach that goal using the sensorimotor model:
m = sm_model.inverse_prediction(s_g)
# execute this command and observe the corresponding sensory effect:
s = environment.compute_sensori_effect(m)
# update the sensorimotor model:
sm_model.update(m, s)
# update the interest model:
im_model.update(hstack((m, s)), hstack((m, s_g)))
# plot the observed sensory effect:
plot(s[0], s[1], 'ok')
Explanation: We observe that goal babbling allows a more uniform covering of the sensory space.
And finally, here is the code for curiosity-driven goal babbling (maximization of the learning progress):
End of explanation
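For intuition (a sketch, not the library's 'discretized_progress' implementation), learning progress is typically estimated as the improvement of prediction error over a sliding window of recent goals:
# Illustrative learning-progress signal: older-window error minus recent-window error.
import numpy as np

def learning_progress(errors, win_size=10):
    if len(errors) < 2 * win_size:
        return 0.0
    older = np.mean(errors[-2 * win_size:-win_size])
    recent = np.mean(errors[-win_size:])
    return older - recent  # positive when predictions are improving

print(learning_progress(list(np.linspace(1.0, 0.2, 40))))  # decreasing errors -> positive progress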
from explauto import Agent
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
im_model = InterestModel.from_configuration(environment.conf, environment.conf.m_dims, 'random')
agent = Agent(environment.conf, sm_model, im_model)
Explanation: The reached point obtained above do not well cover the sensory space. This is due to the fact that we did not re-initialize the sensorimotor model (therefore this latter was already trained) to avoid some bootstrapping issues. The next section shows how to encapsulate a sensorimotor and an interest models into an agent to, among other things, take care of those bootstrapping issues.
Encapsulating a sensorimotor and an interest models into an agent
Encapsulating a sensorimotor model and an interest model into an agent allows us to generalize and simplify the simulation loop whatever the exploration strategy involved, i.e. whatever the type of babbling, the sensorimotor and the interest models. In Explauto, an agent is instantiated using a configuration (generally from an environment), a sensorimotor and an interest model:
End of explanation
m = agent.produce()
print(m)
Explanation: An agent is provided with two methods. One for producing a motor command:
End of explanation
s = environment.update(m)
agent.perceive(s)
Explanation: The produce() method calls the sample() method of the interest model, which returns either a motor command or a sensory goal according to the interest space (i.e. the type of babbling). Then it uses the sensorimotor model to complete the obtained value into a full sensorimotor vector (using forward prediction in case of motor babbling and inverse prediction in case of goal babbling). Finally it returns the motor part of this full sensorimotor vector. Agents also take care of model bootstrapping issues.
The second main agent method is perceive(), which informs the agent with the sensorimotor consequence of its action in order to update both the sensorimotor and the interest models:
End of explanation
for _ in range(100):
m = agent.produce()
s = environment.update(m)
agent.perceive(s)
Explanation: Hence the entire simulation loop can now be rewritten as:
End of explanation
sm_model = SensorimotorModel.from_configuration(environment.conf, 'nearest_neighbor', 'default')
im_model = InterestModel.from_configuration(environment.conf, environment.conf.s_dims, 'discretized_progress')
agent = Agent(environment.conf, sm_model, im_model)
Explanation: This loop is valid whatever the exploration strategy involved. The corresponding formal framework is defined in:
C. Moulin-Frier and P.-Y. Oudeyer, Exploration strategies in developmental robotics: A unified probabilistic framework, ICDL/Epirob, Osaka, Japan, 2013, pp. 1โ6.
Let's for example create a curiosity-driven goal babbler:
End of explanation
for _ in range(100):
m = agent.produce()
s = environment.update(m)
agent.perceive(s)
Explanation: and run it using the exact same loop:
End of explanation
from explauto import Experiment
expe = Experiment(environment, agent)
Explanation: Of course, we still lack a way to visualize the results of our simulations here, which is why we introduce Explauto's Experiment in the next section.
Encapsulating an environment and an agent into an experiment
Encapsulating an environment and an agent into an experiment allows to evaluate agent learning and offers plotting facilities. Once an environment and an agent have been constructed, one can set an experiment using:
End of explanation
from explauto.experiment import make_settings
random_goal_babbling = make_settings(environment='simple_arm', environment_config = 'mid_dimensional',
babbling_mode='goal',
interest_model='random',
sensorimotor_model='nearest_neighbor')
expe = Experiment.from_settings(random_goal_babbling)
Explanation: An experiment offers the management of the simulation loop with evaluation, logging and plotting capabilities. Instead of seperately constructing the environment and the agent (containing the sensorimotor and the interest models), one can simply use:
End of explanation
expe.evaluate_at([100, 200, 400, 1000], random_goal_babbling.default_testcases)
Explanation: This is the compact way to construct the environment (here a mid-dimensional 'simple_arm'), the sensorimotor model (here, 'NN') and the interest model (here curiosity-driven goal babbling) and encapsulate them into an experiment.
An experiment allows to insert an evaluation phase at given time steps:
End of explanation
expe.run()
Explanation: Now let's run the experiment:
End of explanation
%pylab inline
ax = axes()
title(('Random goal babbling'))
expe.log.scatter_plot(ax, (('sensori', [0, 1]),))
expe.log.scatter_plot(ax, (('choice', [0, 1]),), marker='.', color='red')
#expe.log.scatter_plot(ax, (('testcases', [0, 1]),), marker='o', color='green')
legend(['reached hand positions', 'chosen goals'])
Explanation: This executes the same simulation loop as above, inserting an evaluation phase at each specified time step and logging the flow of interest model choices, sensorimotor model inferences and sensorimotor observations. This allows to, e.g., visualize the chosen goals and reached hand positions during the experiment using the scatter_plot method:
End of explanation
ax = axes()
expe.log.plot_learning_curve(ax)
Explanation: or to visualize the learning curve:
End of explanation
from explauto import ExperimentPool
xps = ExperimentPool.from_settings_product(environments=[('simple_arm', 'high_dim_high_s_range')],
babblings=['goal'],
interest_models=[('random', 'default'), ('discretized_progress', 'default')],
sensorimotor_models=[('nearest_neighbor', 'default')],
evaluate_at=[200, 500, 900, 1400],
same_testcases=True)
Explanation: Parallel comparison of exploration strategies
Various exploration strategies can be launched in parallel and compared by using an experiment pool:
End of explanation
xps.run()
Explanation: running it:
End of explanation
ax = axes()
for log in xps.logs:
log.plot_learning_curve(ax)
legend([s.interest_model for s in xps.settings])
Explanation: comparing learning curves:
End of explanation
%pylab
clf()
last_t = 0
for t in linspace(100, xps.logs[0].eval_at[-1], 40):
t = int(t)
for i, (config, log) in enumerate(zip(xps.settings, xps.logs)):
ax = subplot(1, 2, i+1)
log.scatter_plot(ax, (('sensori', [0, 1]),), range(0, t), marker='.', markersize=0.3, color = 'white')
log.density_plot(ax, (('choice', [0, 1]),), range(last_t, t))
title(config.interest_model + ' ' + config.babbling_mode)
draw()
last_t = t
Explanation: or visualize the iterative choice of goals and the reached effects:
End of explanation |
13,048 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a numpy array which contains time series data. I want to bin that array into equal partitions of a given length (it is fine to drop the last partition if it is not the same size) and then calculate the mean of each of those bins. Due to some reason, I want the binning starts from the end of the array. | Problem:
import numpy as np
data = np.array([4, 2, 5, 6, 7, 5, 4, 3, 5, 7])
bin_size = 3
new_data = data[::-1]
bin_data_mean = new_data[:(data.size // bin_size) * bin_size].reshape(-1, bin_size).mean(axis=1) |
13,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XGBoost Article
The data here is taken from the Data Hackathon3.x - http
Step1: Load Data
Step2: Define a function for modeling and cross-validation
This function will do the following
Step3: Step 1- Find the number of estimators for a high learning rate
Step4: Tune subsample and colsample_bytree
Step5: tune subsample
Step6: Got the same value as assument and no change requried.
Try regularization | Python Code:
import os
import pandas as pd
import numpy as np
import xgboost as xgb
from xgboost.sklearn import XGBClassifier
from sklearn import cross_validation, metrics
from sklearn.grid_search import GridSearchCV
from sklearn.model_selection import train_test_split
import matplotlib.pylab as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 4
Explanation: XGBoost Article
The data here is taken from the Data Hackathon3.x - http://datahack.analyticsvidhya.com/contest/data-hackathon-3x
Import Libraries:
End of explanation
path = "./data/allstate"
inputFilePath = os.path.join(path, "train.csv.zip")
df = pd.read_csv(inputFilePath, compression="zip", header=0)
msk = np.random.rand(len(df)) < 0.8
train = df[msk]
test = df[~msk]
train.shape, test.shape
target='loss'
IDcol = 'id'
train[target].value_counts()
Explanation: Load Data:
The data has gone through following pre-processing:
1. City variable dropped because of too many categories
2. DOB converted to Age | DOB dropped
3. EMI_Loan_Submitted_Missing created which is 1 if EMI_Loan_Submitted was missing else 0 | EMI_Loan_Submitted dropped
4. EmployerName dropped because of too many categories
5. Existing_EMI imputed with 0 (median) - 111 values were missing
6. Interest_Rate_Missing created which is 1 if Interest_Rate was missing else 0 | Interest_Rate dropped
7. Lead_Creation_Date dropped because made little intuitive impact on outcome
8. Loan_Amount_Applied, Loan_Tenure_Applied imputed with missing
9. Loan_Amount_Submitted_Missing created which is 1 if Loan_Amount_Submitted was missing else 0 | Loan_Amount_Submitted dropped
10. Loan_Tenure_Submitted_Missing created which is 1 if Loan_Tenure_Submitted was missing else 0 | Loan_Tenure_Submitted dropped
11. LoggedIn, Salary_Account removed
12. Processing_Fee_Missing created which is 1 if Processing_Fee was missing else 0 | Processing_Fee dropped
13. Source - top 2 kept as is and all others combined into different category
14. Numerical and One-Hot-Coding performed
End of explanation
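None of that preprocessing is reproduced in this notebook; purely as an illustration (the column names come from the step list above and are not in the dataframe loaded here), steps 3 and 14 might look like the following sketch:
# Illustrative only: missing-value indicator + drop (step 3) and one-hot encoding (step 14).
def add_missing_indicator(frame, col):
    frame[col + '_Missing'] = frame[col].isnull().astype(int)
    return frame.drop(col, axis=1)

# frame = add_missing_indicator(frame, 'EMI_Loan_Submitted')
# frame = pd.get_dummies(frame)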
test_results = pd.read_csv('test_results.csv')
def modelfit(alg, dtrain, dtest, predictors,useTrainCV=True, cv_folds=5, early_stopping_rounds=50):
if useTrainCV:
xgb_param = alg.get_xgb_params()
xgtrain = xgb.DMatrix(dtrain[predictors].values, label=dtrain[target].values)
xgtest = xgb.DMatrix(dtest[predictors].values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds,
metrics='auc', early_stopping_rounds=early_stopping_rounds, show_progress=False)
alg.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
alg.fit(dtrain[predictors], dtrain[target],eval_metric='auc')
#Predict training set:
dtrain_predictions = alg.predict(dtrain[predictors])
dtrain_predprob = alg.predict_proba(dtrain[predictors])[:,1]
#Print model report:
print "\nModel Report"
print "Accuracy : %.4g" % metrics.accuracy_score(dtrain[target].values, dtrain_predictions)
print "AUC Score (Train): %f" % metrics.roc_auc_score(dtrain[target], dtrain_predprob)
# Predict on testing data:
dtest['predprob'] = alg.predict_proba(dtest[predictors])[:,1]
results = test_results.merge(dtest[['ID','predprob']], on='ID')
print 'AUC Score (Test): %f' % metrics.roc_auc_score(results[target], results['predprob'])
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar', title='Feature Importances')
plt.ylabel('Feature Importance Score')
Explanation: Define a function for modeling and cross-validation
This function will do the following:
1. fit the model
2. determine training accuracy
3. determine training AUC
4. determine testing AUC
5. update n_estimators with cv function of xgboost package
6. plot Feature Importance
End of explanation
predictors = [x for x in train.columns if x not in [target, IDcol]]
xgb1 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=5,
min_child_weight=1,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb1, train, test, predictors)
#Grid search on max_depth and min_child_weight
#Choose all predictors except target & IDcols
param_test1 = {
'max_depth':range(3,10,2),
'min_child_weight':range(1,6,2)
}
gsearch1 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=5,
min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1, seed=27),
param_grid = param_test1, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch1.fit(train[predictors],train[target])
gsearch1.grid_scores_, gsearch1.best_params_, gsearch1.best_score_
#Grid search on max_depth and min_child_weight (narrower ranges)
#Choose all predictors except target & IDcols
param_test2 = {
'max_depth':[4,5,6],
'min_child_weight':[4,5,6]
}
gsearch2 = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=5,
min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test2, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch2.fit(train[predictors],train[target])
gsearch2.grid_scores_, gsearch2.best_params_, gsearch2.best_score_
#Grid search on min_child_weight (larger values)
#Choose all predictors except target & IDcols
param_test2b = {
'min_child_weight':[6,8,10,12]
}
gsearch2b = GridSearchCV(estimator = XGBClassifier( learning_rate=0.1, n_estimators=140, max_depth=4,
min_child_weight=2, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test2b, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch2b.fit(train[predictors],train[target])
gsearch2b.grid_scores_, gsearch2b.best_params_, gsearch2b.best_score_
#Grid search on gamma
#Choose all predictors except target & IDcols
param_test3 = {
'gamma':[i/10.0 for i in range(0,5)]
}
gsearch3 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=140, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test3, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch3.fit(train[predictors],train[target])
gsearch3.grid_scores_, gsearch3.best_params_, gsearch3.best_score_
predictors = [x for x in train.columns if x not in [target, IDcol]]
xgb2 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=4,
min_child_weight=6,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb2, train, test, predictors)
Explanation: Step 1- Find the number of estimators for a high learning rate
End of explanation
#Grid search on subsample and colsample_bytree
#Choose all predictors except target & IDcols
param_test4 = {
'subsample':[i/10.0 for i in range(6,10)],
'colsample_bytree':[i/10.0 for i in range(6,10)]
}
gsearch4 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test4, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch4.fit(train[predictors],train[target])
gsearch4.grid_scores_, gsearch4.best_params_, gsearch4.best_score_
Explanation: Tune subsample and colsample_bytree
End of explanation
#Grid search on subsample and colsample_bytree (finer grid)
#Choose all predictors except target & IDcols
param_test5 = {
'subsample':[i/100.0 for i in range(75,90,5)],
'colsample_bytree':[i/100.0 for i in range(75,90,5)]
}
gsearch5 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test5, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch5.fit(train[predictors],train[target])
gsearch5.grid_scores_, gsearch5.best_params_, gsearch5.best_score_
Explanation: Tune subsample and colsample_bytree with a finer grid:
End of explanation
#Grid search on reg_alpha
#Choose all predictors except target & IDcols
param_test6 = {
'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
gsearch6 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test6, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch6.fit(train[predictors],train[target])
gsearch6.grid_scores_, gsearch6.best_params_, gsearch6.best_score_
#Grid search on reg_alpha (narrower range)
#Choose all predictors except target & IDcols
param_test7 = {
'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]
}
gsearch7 = GridSearchCV(estimator = XGBClassifier( learning_rate =0.1, n_estimators=177, max_depth=4,
min_child_weight=6, gamma=0.1, subsample=0.8, colsample_bytree=0.8,
objective= 'binary:logistic', nthread=4, scale_pos_weight=1,seed=27),
param_grid = param_test7, scoring='roc_auc',n_jobs=4,iid=False, cv=5)
gsearch7.fit(train[predictors],train[target])
gsearch7.grid_scores_, gsearch7.best_params_, gsearch7.best_score_
xgb3 = XGBClassifier(
learning_rate =0.1,
n_estimators=1000,
max_depth=4,
min_child_weight=6,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
reg_alpha=0.005,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb3, train, test, predictors)
xgb4 = XGBClassifier(
learning_rate =0.01,
n_estimators=5000,
max_depth=4,
min_child_weight=6,
gamma=0,
subsample=0.8,
colsample_bytree=0.8,
reg_alpha=0.005,
objective= 'binary:logistic',
nthread=4,
scale_pos_weight=1,
seed=27)
modelfit(xgb4, train, test, predictors)
Explanation: Got the same value as assumed and no change required.
Try regularization:
End of explanation |
13,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Regression
Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
Step1: Linear regression is ubiquitous in research. In this example we'll fit a line
$$ y=mx+b $$
to data where the error bars have been underestimated and need to be inflated by a factor $f$. This example is taken from the emcee documentation.
Step2: We will assume the errors are Normal and impose uniform priors on $(m, b, \ln f)$.
Step3: Let's sample from this distribution using multiple bounding ellipsoids and random walk
Step4: Let's see how we did. | Python Code:
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
# seed the random number generator
rstate= np.random.default_rng(56101)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
Explanation: Linear Regression
Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
End of explanation
# truth
m_true = -0.9594
b_true = 4.294
f_true = 0.534
# generate mock data
N = 50
x = np.sort(10 * rstate.uniform(size=N))
yerr = 0.1 + 0.5 * rstate.uniform(size=N)
y_true = m_true * x + b_true
y = y_true + np.abs(f_true * y_true) * rstate.normal(size=N)
y += yerr * rstate.normal(size=N)
# plot results
plt.figure(figsize=(10, 5))
plt.errorbar(x, y, yerr=yerr, fmt='ko', ecolor='red')
plt.plot(x, y_true, color='blue', lw=3)
plt.xlabel(r'$X$')
plt.ylabel(r'$Y$')
plt.tight_layout()
Explanation: Linear regression is ubiquitous in research. In this example we'll fit a line
$$ y=mx+b $$
to data where the error bars have been underestimated and need to be inflated by a factor $f$. This example is taken from the emcee documentation.
End of explanation
# log-likelihood
def loglike(theta):
m, b, lnf = theta
model = m * x + b
inv_sigma2 = 1.0 / (yerr**2 + model**2 * np.exp(2 * lnf))
return -0.5 * (np.sum((y-model)**2 * inv_sigma2 - np.log(inv_sigma2)))
# prior transform
def prior_transform(utheta):
um, ub, ulf = utheta
m = 5.5 * um - 5.
b = 10. * ub
lnf = 11. * ulf - 10.
return m, b, lnf
Explanation: We will assume the errors are Normal and impose uniform priors on $(m, b, \ln f)$.
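For reference, the log-likelihood coded above is the usual Gaussian one with the error bars inflated by the model-dependent $f$ term, written up to an additive constant (dropping the constant does not affect the sampling):
$$\ln\mathcal{L}(m, b, \ln f) = -\frac{1}{2}\sum_{n}\left[\frac{(y_n - m x_n - b)^2}{s_n^2} + \ln s_n^2\right], \qquad s_n^2 = \sigma_n^2 + f^2\,(m x_n + b)^2$$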
End of explanation
dsampler = dynesty.DynamicNestedSampler(loglike, prior_transform, ndim=3,
bound='multi', sample='rwalk', rstate=rstate)
dsampler.run_nested()
dres = dsampler.results
Explanation: Let's sample from this distribution using multiple bounding ellipsoids and random walk
End of explanation
from dynesty import plotting as dyplot
truths = [m_true, b_true, np.log(f_true)]
labels = [r'$m$', r'$b$', r'$\ln f$']
fig, axes = dyplot.traceplot(dsampler.results, truths=truths, labels=labels,
fig=plt.subplots(3, 2, figsize=(16, 12)))
fig.tight_layout()
fig, axes = dyplot.cornerplot(dres, truths=truths, show_titles=True,
title_kwargs={'y': 1.04}, labels=labels,
fig=plt.subplots(3, 3, figsize=(15, 15)))
Explanation: Let's see how we did.
End of explanation |
13,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook introduces the notion of computing the general linear model using linear algebra. First we load the necessarily libraries.
Step2: A simple example
We start with a simple example of an independent samples t-test. First, let's make a function that will generate some data. We will assume that there are two conditions with specified means and standard deviations
Step3: Make some data and plot the distributions for the two conditions
Step4: Now we want to perform a t-test to ask whether the means of the two conditions are different. Let's try to compute it on our own, using linear algebra. Remember that the formula for the GLM is
Step5: There are some useful functions to generate specific types of matrices
Step6: Now let's look at some basic arithmetic operations
Step7: Matrix multiplication
Matrix multiplication is performed on numpy arrays using the .dot() operator.
Step8: Exercise
Step9: Exercise
Step10: Matrix inversion
We also need to know how to compute the inverse of a matrix, which we do using numpy.linalg.inv().
Exercise
Step12: Now that we know how to perform the necessary matrix operations, let's do our t-test on the data generated above. We first have to fix a problem
Step13: Now let's estimate the model parameters using our function.
Step14: Let's compute the same test using a couple of other canned procedures. First, we use the t-test procedure within the scipy package.
Step15: We can also compute it via the general linear model, using the ordinary least squares (OLS) method from statsmodels.
Step16: Exercise
Step19: Multiple regression
Let's now look at how we can fit a more complex model using the GLM. Let's make some data based on two regressors plus noise. We will make one of the regressors smooth across observations, for reasons that will become clearer later.
Step20: Let's run the same analysis using a canned function from the statsmodels package to compare the results. Note that statsmodels automatically adds an intercept, so we don't pass that column from the design matrix.
Step21: Beyond ordinary least squares
In the foregoing, we used ordinary least squares estimation, which is the best linear unbiased estimator in the case of uncorrelated and homoscedastic (equal variance) errors (according to the Gauss-Markov theorem). However, there are common situations where these assumptions fail, in which case we need to use more sophisticated models. The case that is most relevant to fMRI is when there are correlated errors, which we will explore below.
First, let's simulate performance using OLS when the assumptions are upheld - the Type I error rate should be about 0.05.
Step22: Now let's introduce some correlated noise, using the function created above which smooths the noise across observations using a first-order autoregressive (AR(1)) model. We do this for a range of levels of autocorrelation; because we have set the true beta values to zero, and the resulting proportion of significant results tells us the Type 1 error. We also assess the variance of the estimates.
Step23: Exercise
Step24: The AR1 covariance has this form
Step26: Now let's build a version of our estimator that uses GLS rather than OLS. We do this using an interative approach. We first run OLS to estimate the model and obtain the residuals, and then we estimate the autocorrelation structure from the residuals. Then we estimate the model using GLS with the autocorrelation structure estimated above. The GLS estimator is
Step27: What do you see in this comparison?
Now let's simulate datasets under the null and estimate the model, across different levels of autocorrelation, as we did above. Because the estimation is a bit more complex this will take a couple of minutes. | Python Code:
import numpy,pandas
import matplotlib.pyplot as plt
import seaborn as sns
import scipy.stats
import statsmodels.api as sm
import statsmodels
from statsmodels.formula.api import ols,glsar
from statsmodels.tsa.arima_process import arma_generate_sample
from scipy.linalg import toeplitz
from IPython.display import display, HTML
%matplotlib inline
Explanation: This notebook introduces the notion of computing the general linear model using linear algebra. First we load the necessary libraries.
End of explanation
def make_ttest_data(n_obs=[50,50],mean_obs=[10,10.1],sd_obs=[2,2]):
function to generate independent-sample data with two conditions
n_obs=[50,50] # number of observations in each condition
n_obs_total=numpy.sum(n_obs)
mean_obs=[10,11]
sd_obs=[1,1]
condition=numpy.zeros(n_obs_total)
condition[:n_obs[0]]=0
condition[n_obs[0]:n_obs_total]=1
data=numpy.zeros(n_obs_total)
data[:n_obs[0]]=mean_obs[0]
data[n_obs[0]:n_obs_total]=mean_obs[1]
# doublecheck our work
assert numpy.sum(data==mean_obs[0])==n_obs[0]
assert numpy.sum(data==mean_obs[1])==n_obs[1]
noise=numpy.zeros(n_obs_total)
noise[:n_obs[0]]=numpy.random.randn(n_obs[0])*sd_obs[0]
noise[n_obs[0]:n_obs_total]=numpy.random.randn(n_obs[1])*sd_obs[1]
df=pandas.DataFrame({'data':data+noise,'condition':condition})
return df
Explanation: A simple example
We start with a simple example of an independent samples t-test. First, let's make a function that will generate some data. We will assume that there are two conditions with specified means and standard deviations
End of explanation
data=make_ttest_data()
Y=data.data.values
X=data.condition.values
f = plt.figure()
sns.distplot(Y[X==0], hist=False, label="condition 1")
sns.distplot(Y[X==1], hist=False, label="condition 2")
Explanation: Make some data and plot the distributions for the two conditions
End of explanation
# to make an array, we give a list to numpy.array
y = numpy.array([1,3,2])
print(y)
print('y shape:',y.shape)
# we can add a dimension with the None operator
z=y[:,None]
print(z)
print('z shape:',z.shape)
# one option to create a matrix is to give a vector and reshape to a matrix
print('A')
A = numpy.array([1,1,2,3,5,8,13,21,34]).reshape(3,3)
print(A)
# another alternative is to pass a list of lists
print('B')
B = numpy.array([[1,1,2],[3,5,8],[13,21,34]])
print(B)
# to transpose a matrix, use the .T operator
print('B.T')
print(B.T)
Explanation: Now we want to perform a t-test to ask whether the means of the two conditions are different. Let's try to compute it on our own, using linear algebra. Remember that the formula for the GLM is:
$Y = X * B + e$
where Y is an N X 1 matrix containing the data that we generated, and X is an N X c "design matrix" that describes the conditions (in this case, a single vector indicating condition 1 or 2). Using the normal equations, we can estimate B using:
$\hat{B} = (X'X)^{-1}X'Y$
Before we dive into these computations, we need to go over how to do linear algebra in Python. The following borrows liberally from https://www.ibm.com/developerworks/community/blogs/jfp/entry/Elementary_Matrix_Operations_In_Python?lang=en
Making arrays/matrices in Python
End of explanation
# create a matrix full of zeros
# note that the shape is passed as a tuple if you want multiple dimensions
a=numpy.zeros((2,4))
print('a')
print(a)
#create a matrix full of ones
b=numpy.ones((2,4))
print('b')
print(b)
# create a matrix full of any other number:
c=b*12
print('c')
print(c)
# create a range of numbers:
d=numpy.arange(10)
print('d')
print(d)
e=numpy.arange(3,5,0.33)
print('e')
print(e)
Explanation: There are some useful functions to generate specific types of matrices
End of explanation
print('a+5')
print(a+5)
print('c/2')
print(c/2)
print('a+b+c')
print(a+b+c)
print('a*b*c')
print(a*b*c)
print('b/c')
print(b/c)
Explanation: Now let's look at some basic arithmetic operations
End of explanation
x=numpy.array([[1,2],[3,4]])
y=numpy.array([[1,0],[0,2]])
print('x')
print(x)
print('y')
print(y)
print('scalar product of x and y: x*y')
print(x*y)
print('matrix product of x and y: x.dot(y)')
print(x.dot(y))
print('or use numpy.matrix')
print(numpy.matrix(x)*numpy.matrix(y))
Explanation: Matrix multiplication
Matrix multiplication is performed on numpy arrays using the .dot() operator.
End of explanation
def variance(Y):
# insert code here to estimate variance using matrix multiplication
return var
# use allclose rather than == to deal with numerical errors
assert numpy.allclose(numpy.var(Y),variance(Y))
Explanation: Exercise: We know that the variance of a matrix X is computed as $mean((X-mean(X))*(X-mean(X))')$. Fill in the appropriate code in the function below so that it returns a value that equals the value obtained from the numpy.var() command.
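One possible reference solution is sketched below (the original notebook deliberately leaves the body blank); numpy.var defaults to ddof=0, so we divide the dot product of the mean-centred data with itself by len(Y).
def variance(Y):
    Yc = Y - numpy.mean(Y)      # remove the mean
    var = Yc.dot(Yc) / len(Yc)  # mean squared deviation via a dot product
    return var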
End of explanation
def corrcoef(x,y):
assert len(x)==len(y)
# add code here to compute correlation
return r
print('My result:',corrcoef(X,Y))
print('Numpy result:',numpy.corrcoef(X,Y))
assert numpy.allclose(numpy.corrcoef(X,Y)[0,1],corrcoef(X,Y))
Explanation: Exercise: Write a function to compute the correlation coefficient using matrix algebra. The equation to compute the correlation using matrix algebra is:
$r = \frac{X\cdot Y}{\sqrt{(X\cdot X)*(Y\cdot Y)}}$
assuming that X and Y have zero mean, so you need to remove the mean before computing this.
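Again for reference only, one possible solution using dot products on the mean-centred vectors:
def corrcoef(x, y):
    assert len(x) == len(y)
    xc = x - numpy.mean(x)
    yc = y - numpy.mean(y)
    r = xc.dot(yc) / numpy.sqrt(xc.dot(xc) * yc.dot(yc))
    return r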
End of explanation
# Exercise code here
Explanation: Matrix inversion
We also need to know how to compute the inverse of a matrix, which we do using numpy.linalg.inv().
Exercise: In the cell below, create a matrix containing the following numbers:
[1,0]
[0,2]
and print out the original matrix along with the inverted matrix.
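One way to complete the exercise cell above, for reference:
mat = numpy.array([[1, 0], [0, 2]])
print(mat)                    # original matrix
print(numpy.linalg.inv(mat))  # its inverse: [[1., 0.], [0., 0.5]]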
End of explanation
def ols_estimate(X,Y,add_intercept=True,verbose=False,
ddof=1,use_two_sided=True):
function to estimate parameters for a general linear model
# first we need to set up the matrices in the proper shape
# Y should be N X 1
# X should be X X c
if verbose:
print('original Y shape:',Y.shape)
Y=Y.reshape((len(Y),1))
if verbose:
print('new Y shape:',Y.shape)
if verbose:
print('original X shape:',X.shape)
if len(X.shape)==1:
X=X.reshape((len(X),1))
Xnames=['X%d'%int(i+1) for i in range(X.shape[1])]
if verbose:
print('new X shape:',X.shape)
# add an intercept to the model if specified
if add_intercept:
X=sm.add_constant(X)
# sm.add_constant prepends the intercept column when one is not already present,
# so only add the label if a column was actually added
if X.shape[1] > len(Xnames):
Xnames = ['Intercept'] + Xnames
# make sure that the design matrix is full rank
assert numpy.linalg.matrix_rank(X)==X.shape[1]
# estimate the parameters using the normal equations
b_hat=numpy.linalg.inv(X.T.dot(X)).dot(X.T.dot(Y))
if verbose:
print('b_hat=',b_hat)
# compute residuals and their variance
resid=Y-X.dot(b_hat)
sigma2=resid.T.dot(resid)/(X.shape[0] - X.shape[1]) # variance of the residuals
# now compute the t statistic and p values for for each variable in X
t=numpy.zeros(X.shape[1])
p=numpy.zeros(X.shape[1])
for i in range(X.shape[1]):
c=numpy.zeros(X.shape[1])
c[i]=1
t[i]=c.dot(b_hat)/numpy.sqrt(c.dot(numpy.linalg.inv(X.T.dot(X))).dot(c.T)*sigma2)
if t[i]<0:
p[i]=scipy.stats.distributions.t.cdf(t[i],len(Y)-1)
else:
p[i]=1-scipy.stats.distributions.t.cdf(t[i],len(Y)-1)
if use_two_sided:
p[i]=p[i]*2
if verbose:
print('t=',t)
df=pandas.DataFrame({'bhat':b_hat.ravel(),'t':t.ravel(),'p':p.ravel()},index=Xnames)
return df
Explanation: Now that we know how to perform the necessary matrix operations, let's do our t-test on the data generated above. We first have to fix a problem: we need both X and Y to be matrices for our computation to work, but right now they are 1-dimensional vectors rather than two-dimensional matrices. We can fix this using numpy - let's go ahead and create a function to compute the ordinary least squares estimates, that includes code to reformat the data into matrices. We also include an option to add an intercept (i.e. a column of ones) to the model if it doesn't already exist.
End of explanation
e=ols_estimate(X,Y)
display(e)
Explanation: Now let's estimate the model parameters using our function.
End of explanation
t,p=scipy.stats.ttest_ind(Y[X==1],Y[X==0])
print('t/p computed by scipy:',t,p)
assert numpy.allclose(t,e.t.values[1])
Explanation: Let's compute the same test using a couple of other canned procedures. First, we use the t-test procedure within the scipy package.
End of explanation
X=sm.add_constant(X)
ols_result=sm.OLS(Y,X).fit()
print(ols_result.summary())
# make sure our result is close to the one from statsmodels
for i in range(len(e.t.values)):
assert numpy.allclose(e.t.values[i],ols_result.tvalues[i])
Explanation: We can also compute it via the general linear model, using the ordinary least squares (OLS) method from statsmodels.
End of explanation
residual=Y - X.dot(e.bhat.values)
## insert code here
Explanation: Exercise: Confirm that the dot product between the residuals from OLS and the X variable is zero.
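For reference, the check can be completed with a single dot product; the entries are only zero up to floating-point precision.
print(residual.dot(X))                     # one value per column of X, all approximately 0
print(numpy.allclose(residual.dot(X), 0))  # True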
End of explanation
def mkar1noise(tslen,coef,noisesd):
function to return AR(1) autocorrelated noise
varcorrect = numpy.sqrt(1-coef**2)
noise=numpy.random.randn(tslen)*noisesd
for i in range(1,tslen):
noise[i]=noise[i]*varcorrect+noise[i-1]*coef
return noise
def make_regression_data(nobs=100,regsmooth=[1],
regsmoothcoef=0.8,
beta=[0.1,0.5,10],noisesd=1.,
noisecorr=0):
function to generate regression data
with option to add autocorrelated noise
beta reflects two conditions plus intercept
regs=numpy.random.randn(nobs,len(beta)-1)
regvarcorrect = numpy.sqrt(1-regsmoothcoef**2)
for r in regsmooth:
for i in range(1,nobs):
regs[i,r]=regs[i,r]*regvarcorrect+regs[i-1,r]*regsmoothcoef
regs=numpy.hstack((regs,numpy.ones((regs.shape[0],1))))
data=regs.dot(numpy.array(beta))
if noisecorr==0:
noise=numpy.random.randn(len(data))*noisesd
else:
noise=mkar1noise(len(data),noisecorr,noisesd)
return data+noise,regs
Y,X=make_regression_data()
#X=X-numpy.mean(X,0)
plt.imshow(X,interpolation='nearest')
plt.axis('auto')
plt.ylim([0,100])
plt.figure()
plt.scatter(X[:,0],Y)
plt.xlabel('first X regressor - X[:,0]')
plt.ylabel('Y')
plt.figure()
plt.scatter(X[:,1],Y)
plt.xlabel('first X regressor - X[:,1]')
plt.ylabel('Y')
e=ols_estimate(X,Y)
display(e)
Explanation: Multiple regression
Let's now look at how we can fit a more complex model using the GLM. Let's make some data based on two regressors plus noise. We will make one of the regressors smooth across observations, for reasons that will become clearer later.
End of explanation
ols_result=sm.OLS(Y,X).fit()
print(ols_result.summary())
for i in range(len(e.t.values)):
assert numpy.allclose(e.t.values[i],ols_result.tvalues[i])
Explanation: Let's run the same analysis using a canned function from the statsmodels package to compare the results. Note that statsmodels automatically adds an intercept, so we don't pass that column from the design matrix.
End of explanation
nruns=1000
pval=numpy.zeros((nruns,3))
bhat=numpy.zeros((nruns,3))
for i in range(nruns):
Y,X=make_regression_data(beta=[0,0,0])
e=ols_estimate(X,Y)
pval[i,:]=e.p.values
bhat[i,:]=e.bhat.values
df=pandas.DataFrame({'Type 1 error':[numpy.mean(pval[:,i]<0.05) for i in range(3)],
'Variance of bhat':[numpy.std(bhat[:,i]) for i in range(3)]},
index=['X1','X2','intercept'])
display(df)
Explanation: Beyond ordinary least squares
In the foregoing, we used ordinary least squares estimation, which is the best linear unbiased estimator in the case of uncorrelated and homoscedastic (equal variance) errors (according to the Gauss-Markov theorem). However, there are common situations where these assumptions fail, in which case we need to use more sophisticated models. The case that is most relevant to fMRI is when there are correlated errors, which we will explore below.
First, let's simulate performance using OLS when the assumptions are upheld - the Type I error rate should be about 0.05.
End of explanation
nruns=1000
ncvals=numpy.arange(0.0,0.9,0.1)
pval=numpy.zeros((nruns,3,len(ncvals)))
bhat=numpy.zeros((nruns,3,len(ncvals)))
for nc in range(len(ncvals)):
for i in range(nruns):
Y,X=make_regression_data(beta=[0,0,0],noisecorr=ncvals[nc])
e=ols_estimate(X,Y,add_intercept=False)
pval[i,:,nc]=e.p.values
bhat[i,:,nc]=e.bhat.values
pval_exc=pval<0.05
meanpval=numpy.mean(pval_exc,0)
f=plt.figure(figsize=(8,5))
plt.subplot(1,2,1)
plt.plot(ncvals,meanpval.T)
plt.plot([0,1],[0.05,0.05],'--')
plt.xlabel('autocorrelation')
plt.ylabel('Type I error (% of significant tests)')
plt.legend(['X1','X2','Intercept'])
plt.ylim([0,1])
plt.subplot(1,2,2)
bhvar=numpy.std(bhat,0)
plt.plot(ncvals,bhvar.T)
plt.xlabel('autocorrelation')
plt.ylabel('std of parameter estimates')
plt.legend(['X1','X2','Intercept'])
plt.ylim([0,1])
Explanation: Now let's introduce some correlated noise, using the function created above which smooths the noise across observations using a first-order autoregressive (AR(1)) model. We do this for a range of levels of autocorrelation; because we have set the true beta values to zero, and the resulting proportion of significant results tells us the Type 1 error. We also assess the variance of the estimates.
End of explanation
print(toeplitz(range(4)))
Explanation: Exercise: What do you see? Why do the effects of correlation in the data differ between regressors?
Generalized least squares
In cases where the data do not adhere to the assumptions of OLS, we can use generalized least squares to obtain BLUE estimates. This requires that we have a model of the autocorrelation structure. Let's use a Toeplitz matrix, allows us to create an AR(1) covariance matrix.
The Toeplitz matrix has this form (in this case for a dataset with 4 observations):
End of explanation
rho=0.3
print(rho**toeplitz(range(4)))
Explanation: The AR1 covariance has this form:
$V = \sigma^2 \begin{bmatrix} 1 & \rho & \rho^2 & \rho^3 \\ \rho & 1 & \rho & \rho^2 \\ \rho^2 & \rho & 1 & \rho \\ \rho^3 & \rho^2 & \rho & 1 \end{bmatrix}$
where $\rho$ is the first-order autocorrelation and $\sigma^2$ is the variance. Note that we still assume that the variances are homogeneous across datapoints. Thus, to generate such a matrix we simply exponentiate $\rho$ by the Toeplitz matrix (which is achieved using the $**$ operator) in Python.
End of explanation
def gls_estimate(X,Y,add_intercept=True,verbose=False,
ddof=1,use_two_sided=True):
estimate generalized least squares
using a Toeplitz matrix to generate AR(1) covariance
# first we need to set up the matrices in the proper shape
# Y should be N X 1
# X should be X X c
if verbose:
print('original Y shape:',Y.shape)
Y=Y.reshape((len(Y),1))
if verbose:
print('new Y shape:',Y.shape)
if verbose:
print('original X shape:',X.shape)
if len(X.shape)==1:
X=X.reshape((len(X),1))
Xnames=['X%d'%int(i+1) for i in range(X.shape[1])]
if verbose:
print('new X shape:',X.shape)
# add an intercept to the model if specified
if add_intercept:
X=sm.add_constant(X)
# make sure that the design matrix is full rank
assert numpy.linalg.matrix_rank(X)==X.shape[1]
# first fit OLS to get residuals for AC estimation
e=ols_estimate(X,Y)
resid=Y - X.dot(e.bhat.values[:,numpy.newaxis])
ar1_coef=statsmodels.tsa.stattools.acf(resid)[1] # get the first-order autocorrelation estimate
# compute the inverse covariance matrix
order=toeplitz(range(len(Y)))
sigma=ar1_coef**order
Vinv=numpy.linalg.inv(sigma)
# re-estimate the model using GLS
b_hat=numpy.linalg.inv(X.T.dot(Vinv).dot(X)).dot(X.T.dot(Vinv).dot(Y))
if verbose:
print('b_hat=',b_hat)
resid=Y-X.dot(b_hat)
sigma2=resid.T.dot(resid)/(X.shape[0] - X.shape[1]) # variance of the residuals
# now compute the t statistic and p values for for each variable in X
t=numpy.zeros(X.shape[1])
p=numpy.zeros(X.shape[1])
for i in range(X.shape[1]):
c=numpy.zeros(X.shape[1])
c[i]=1
t[i]=c.dot(b_hat)/numpy.sqrt(c.dot(numpy.linalg.inv(X.T.dot(Vinv).dot(X))).dot(c.T)*sigma2)
if t[i]<0:
p[i]=scipy.stats.distributions.t.cdf(t[i],len(Y)-1)
else:
p[i]=1-scipy.stats.distributions.t.cdf(t[i],len(Y)-1)
if use_two_sided:
p[i]=p[i]*2
if verbose:
print('t=',t)
df=pandas.DataFrame({'bhat':b_hat.ravel(),'t':t.ravel(),'p':p.ravel()},index=Xnames)
return df
order=toeplitz(range(len(Y)))
sigma=0.5**order
Y,X=make_regression_data(beta=[1,0.1,10],noisecorr=0.5)
e=gls_estimate(X,Y)
display(e)
gls_result=sm.GLS(Y,X,sigma=sigma).fit()
gls_result.summary()
Explanation: Now let's build a version of our estimator that uses GLS rather than OLS. We do this using an iterative approach. We first run OLS to estimate the model and obtain the residuals, and then we estimate the autocorrelation structure from the residuals. Then we estimate the model using GLS with the autocorrelation structure estimated above. The GLS estimator is:
$\hat{B} = (X'V^{-1}X)^{-1}X'V^{-1}Y$
where $V$ is the covariance matrix (which in OLS we assumed was simply $\sigma^2I$). This is akin to "whitening" the data by removing the covariance structure.
End of explanation
nruns=1000
ncvals=numpy.arange(0.0,0.9,0.1)
pval=numpy.zeros((nruns,2,len(ncvals)))
bhat=numpy.zeros((nruns,2,len(ncvals)))
for nc in range(len(ncvals)):
for i in range(nruns):
Y,X=make_regression_data(beta=[0,0,0],noisecorr=ncvals[nc])
e=gls_estimate(X,Y)
pval[i,:,nc]=e.p.values[:2]
bhat[i,:,nc]=e.bhat.values[:2]
pval_exc=pval<0.05
meanpval=numpy.mean(pval_exc,0)
f=plt.figure(figsize=(12,5))
f=plt.subplot(1,2,1)
plt.plot(ncvals,meanpval.T)
plt.plot([0,1],[0.05,0.05],'--')
plt.xlabel('autocorrelation')
plt.ylabel('% of significant tests')
plt.legend(['X1','X2','Intercept'])
plt.ylim([0,1])
bhvar=numpy.std(bhat,0)
f=plt.subplot(1,2,2)
plt.plot(ncvals,bhvar.T)
plt.xlabel('autocorrelation')
plt.ylabel('std of parameter estimates')
plt.legend(['X1','X2','Intercept'])
plt.ylim([0,1])
Explanation: What do you see in this comparison?
Now let's simulate datasets under the null and estimate the model, across different levels of autocorrelation, as we did above. Because the estimation is a bit more complex this will take a couple of minutes.
End of explanation |
13,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vidic, Fajfar and Fischinger (1994)
This procedure, proposed by Vidic, Fajfar and Fischinger (1994), aims to determine the displacements from an inelastic design spectra for systems with a given ductility factor. The inelastic displacement spectra is determined by means of applying a reduction factor, which depends on the natural period of the system, its ductility factor, the hysteretic behaviour, the damping, and the frequency content of the ground motion.
Note
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
Step2: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are
Step4: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step8: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step9: Plot vulnerability function
Step10: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
import vidic_etal_1994
from rmtk.vulnerability.common import utils
%matplotlib inline
Explanation: Vidic, Fajfar and Fischinger (1994)
This procedure, proposed by Vidic, Fajfar and Fischinger (1994), aims to determine the displacements from an inelastic design spectra for systems with a given ductility factor. The inelastic displacement spectra is determined by means of applying a reduction factor, which depends on the natural period of the system, its ductility factor, the hysteretic behaviour, the damping, and the frequency content of the ground motion.
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button (<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
End of explanation
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
utils.plot_capacity_curves(capacity_curves)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
End of explanation
gmrs_folder = "../../../../../../rmtk_data/accelerograms"
gmrs = utils.read_gmrs(gmrs_folder)
minT, maxT = 0.1, 2.0
utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder.
Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../../rmtk_data/damage_model_Sd.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift the user can provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements, otherwise a linear relationship is assumed.
End of explanation
damping_model = "mass"
damping_ratio = 0.05
hysteresis_model = 'Q'
PDM, Sds = vidic_etal_1994.calculate_fragility(capacity_curves, gmrs,
damage_model, damping_ratio,
hysteresis_model, damping_model)
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_model: This parameter defines the type of damping model to be used in the analysis. The valid options are "mass" and "stiffness".
2. damping_ratio: This parameter defines the damping ratio for the structure.
3. hysteresis_model: The valid options are 'Q' or "bilinear".
End of explanation
IMT = "Sa"
period = 2.0
regression_method = "least squares"
fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio,
IMT, damage_model, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa".
period: this parameter defines the time period of the fundamental mode of vibration of the structure.
regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
minIML, maxIML = 0.01, 2.00
utils.plot_fragility_model(fragility_model, minIML, maxIML)
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "RC"
minIML, maxIML = 0.01, 2.00
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
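Conceptually, writing $DR_{ds}$ for the damage ratio assigned to damage state $ds$ by the consequence model, the mean loss ratio at an intensity level $im$ is obtained as
$$E[LR \mid im] = \sum_{ds} P(ds \mid im)\, DR_{ds}$$
and the chosen distribution_type describes how the scatter around this mean is modelled.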
End of explanation
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Plot vulnerability function
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the fragility functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
13,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step7: Python Environment
We show here some examples of how to run Python on a Pynq platform. Python 3.6
is running exclusively on the ARM processor.
In the first example, which is based on calculating the factors and primes
of integer numbers, give us a sense of the performance available when running
on an ARM processor running Linux.
In the second set of examples, we leverage Python's numpy package and asyncio
module to demonstrate how Python can communicate
with programmable logic.
Factors and Primes Example
Code is provided in the cell below for a function to calculate factors and
primes. It contains some sample functions to calculate the factors and primes
of integers. We will use three functions from the factors_and_primes module
to demonstrate Python programming.
Step8: Next we will call the factorize() function to calculate the factors of an integer.
Step9: The primes_between() function can tell us how many prime numbers there are in an
integer range. Letโs try it for the interval 1 through 1066. We can also use one
of Pythonโs built-in methods len() to count them all.
Step10: Additionally, we can combine len() with another built-in method, sum(), to calculate
the average of the 180 prime numbers.
Step11: This result makes sense intuitively because prime numbers are known to become less
frequent for larger number intervals. These examples demonstrate how Python treats
functions as first-class objects so that functions may be passed as parameters to
other functions. This is a key property of functional programming and demonstrates
the power of Python.
In the next code snippet, we can use list comprehensions (a โPythonicโ form of the
map-filter-reduce template) to โmineโ the factors of 1066 to find those factors that
end in the digit โ3โ.
Step12: This code tells Python to first convert each prime between 1 and 1066 to a string and
then to return those numbers whose string representation ends with the number "3". It
uses the built-in str() and endswith() methods to test each prime for inclusion in the list.
And because we really want to know what fraction of the 180 primes of 1066 end in a
โ3โ, we can calculate ...
Step14: These examples demonstrate how Python is a modern, multi-paradigmatic language. More
simply, it continually integrates the best features of other leading languages, including
functional programming constructs. Consider how many lines of code you would need to
implement the list comprehension above in C and you get an appreciation of the power
of productivity-layer languages. Higher levels of programming abstraction really do
result in higher programmer productivity!
Numpy Data Movement
Code in the cells below show a very simple data movement code snippet that can be used
to share data with programmable logic. We leverage the Python numpy package to
manipulate the buffer on the ARM processors and can then send a buffer pointer to
programmable logic for sharing data.
We do not assume what programmable logic design is loaded, so here we only allocate
the needed memory space and show that it can manipulated as a numpy array and contains
a buffer pointer attribute. That pointer can then can be passed to programmable
logic hardware.
Step15: With the simple wrapper above, we can get access to memory that can be shared by both
numpy methods and programmable logic.
Step16: To double-check we show that the buffer is indeed a numpy array.
Step17: To send the buffer pointer to programmable logic, we use its physical address which
is what programmable logic would need to communicate using this shared buffer.
Step18: In this short example, we showed a simple allocation of a numpy array that is now ready
to be shared with programmable logic devices. With numpy arrays that are accessible to programmable logic, we can quickly manipulate and move data across software and hardware.
Asyncio Integration
PYNQ also leverages the Python asyncio module for communicating with programmable logic
devices through events (namely interrupts).
A Python program running on PYNQ can use the asyncio library to manage multiple IO-bound
tasks asynchronously, thereby avoiding any blocking caused by waiting for responses from
slower IO subsystems. Instead, the program can continue to execute other tasks that are
ready to run. When the previously-busy tasks are ready to resume, they will be executed
in turn, and the cycle is repeated.
Again, since we won't assume what interrupt enabled devices are loaded on programmable
logic, we will show an example here a software-only asyncio example that uses asyncio's
sleep method.
Step19: With the wake_up function defined, we then can add a new task to the event loop. | Python Code:
Factors-and-primes functions.
Find factors or primes of integers, int ranges and int lists
and sets of integers with most factors in a given integer interval
def factorize(n):
Calculate all factors of integer n.
factors = []
if isinstance(n, int) and n > 0:
if n == 1:
factors.append(n)
return factors
else:
for x in range(1, int(n**0.5)+1):
if n % x == 0:
factors.append(x)
factors.append(n//x)
return sorted(set(factors))
else:
print('factorize ONLY computes with one integer argument > 0')
def primes_between(interval_min, interval_max):
Find all primes in the interval.
primes = []
if (isinstance(interval_min, int) and interval_min > 0 and
isinstance(interval_max, int) and interval_max > interval_min):
if interval_min == 1:
primes = [1]
for i in range(interval_min, interval_max):
if len(factorize(i)) == 2:
primes.append(i)
return sorted(primes)
else:
print('primes_between ONLY computes over the specified range.')
def primes_in(integer_list):
Calculate all unique prime numbers.
primes = []
try:
for i in (integer_list):
if len(factorize(i)) == 2:
primes.append(i)
return sorted(set(primes))
except TypeError:
print('primes_in ONLY computes over lists of integers.')
def get_ints_with_most_factors(interval_min, interval_max):
Finds the integers with the most factors.
max_no_of_factors = 1
all_ints_with_most_factors = []
# Find the lowest number with most factors between i_min and i_max
if interval_check(interval_min, interval_max):
for i in range(interval_min, interval_max):
factors_of_i = factorize(i)
no_of_factors = len(factors_of_i)
if no_of_factors > max_no_of_factors:
max_no_of_factors = no_of_factors
results = (i, max_no_of_factors, factors_of_i,\
primes_in(factors_of_i))
all_ints_with_most_factors.append(results)
# Find any larger numbers with an equal number of factors
for i in range(all_ints_with_most_factors[0][0]+1, interval_max):
factors_of_i = factorize(i)
no_of_factors = len(factors_of_i)
if no_of_factors == max_no_of_factors:
results = (i, max_no_of_factors, factors_of_i, \
primes_in(factors_of_i))
all_ints_with_most_factors.append(results)
return all_ints_with_most_factors
else:
print_error_msg()
def interval_check(interval_min, interval_max):
Check type and range of integer interval.
if (isinstance(interval_min, int) and interval_min > 0 and
isinstance(interval_max, int) and interval_max > interval_min):
return True
else:
return False
def print_error_msg():
Print invalid integer interval error message.
print('ints_with_most_factors ONLY computes over integer intervals where'
' interval_min <= int_with_most_factors < interval_max and'
' interval_min >= 1')
Explanation: Python Environment
We show here some examples of how to run Python on a Pynq platform. Python 3.6
is running exclusively on the ARM processor.
The first example, which is based on calculating the factors and primes
of integer numbers, gives us a sense of the performance available when running
on an ARM processor running Linux.
In the second set of examples, we leverage Python's numpy package and asyncio
module to demonstrate how Python can communicate
with programmable logic.
Factors and Primes Example
Code is provided in the cell below for a function to calculate factors and
primes. It contains some sample functions to calculate the factors and primes
of integers. We will use three functions from the factors_and_primes module
to demonstrate Python programming.
End of explanation
factorize(1066)
Explanation: Next we will call the factorize() function to calculate the factors of an integer.
End of explanation
len(primes_between(1, 1066))
Explanation: The primes_between() function can tell us how many prime numbers there are in an
integer range. Letโs try it for the interval 1 through 1066. We can also use one
of Pythonโs built-in methods len() to count them all.
End of explanation
primes_1066 = primes_between(1, 1066)
primes_1066_average = sum(primes_1066) / len(primes_1066)
primes_1066_average
Explanation: Additionally, we can combine len() with another built-in method, sum(), to calculate
the average of the 180 prime numbers.
End of explanation
primes_1066_ends3 = [x for x in primes_between(1, 1066)
if str(x).endswith('3')]
print('{}'.format(primes_1066_ends3))
Explanation: This result makes sense intuitively because prime numbers are known to become less
frequent for larger number intervals. These examples demonstrate how Python treats
functions as first-class objects so that functions may be passed as parameters to
other functions. This is a key property of functional programming and demonstrates
the power of Python.
In the next code snippet, we can use list comprehensions (a โPythonicโ form of the
map-filter-reduce template) to โmineโ the factors of 1066 to find those factors that
end in the digit โ3โ.
End of explanation
len(primes_1066_ends3) / len(primes_1066)
Explanation: This code tells Python to first convert each prime between 1 and 1066 to a string and
then to return those numbers whose string representation ends with the number "3". It
uses the built-in str() and endswith() methods to test each prime for inclusion in the list.
And because we really want to know what fraction of the 180 primes of 1066 end in a
โ3โ, we can calculate ...
End of explanation
import numpy as np
import pynq
def get_pynq_buffer(shape, dtype):
Simple function to call PYNQ's memory allocator with numpy attributes
return pynq.allocate(shape, dtype)
Explanation: These examples demonstrate how Python is a modern, multi-paradigmatic language. More
simply, it continually integrates the best features of other leading languages, including
functional programming constructs. Consider how many lines of code you would need to
implement the list comprehension above in C and you get an appreciation of the power
of productivity-layer languages. Higher levels of programming abstraction really do
result in higher programmer productivity!
Numpy Data Movement
Code in the cells below show a very simple data movement code snippet that can be used
to share data with programmable logic. We leverage the Python numpy package to
manipulate the buffer on the ARM processors and can then send a buffer pointer to
programmable logic for sharing data.
We do not assume what programmable logic design is loaded, so here we only allocate
the needed memory space and show that it can manipulated as a numpy array and contains
a buffer pointer attribute. That pointer can then can be passed to programmable
logic hardware.
End of explanation
buffer = get_pynq_buffer(shape=(4,4), dtype=np.uint32)
buffer
Explanation: With the simple wrapper above, we can get access to memory that can be shared by both
numpy methods and programmable logic.
End of explanation
isinstance(buffer,np.ndarray)
Explanation: To double-check we show that the buffer is indeed a numpy array.
End of explanation
pl_buffer_address = hex(buffer.physical_address)
pl_buffer_address
Explanation: To send the buffer pointer to programmable logic, we use its physical address which
is what programmable logic would need to communicate using this shared buffer.
End of explanation
import asyncio
import random
import time
# Coroutine
async def wake_up(delay):
'''A function that will yield to asyncio.sleep() for a few seconds
and then resume, having preserved its state while suspended
'''
start_time = time.time()
print(f'The time is: {time.strftime("%I:%M:%S")}')
print(f"Suspending coroutine 'wake_up' at 'await` statement\n")
await asyncio.sleep(delay)
print(f"Resuming coroutine 'wake_up' from 'await` statement")
end_time = time.time()
sleep_time = end_time - start_time
print(f"'wake-up' was suspended for precisely: {sleep_time} seconds")
Explanation: In this short example, we showed a simple allocation of a numpy array that is now ready
to be shared with programmable logic devices. With numpy arrays that are accessible to programmable logic, we can quickly manipulate and move data across software and hardware.
Asyncio Integration
PYNQ also leverages the Python asyncio module for communicating with programmable logic
devices through events (namely interrupts).
A Python program running on PYNQ can use the asyncio library to manage multiple IO-bound
tasks asynchronously, thereby avoiding any blocking caused by waiting for responses from
slower IO subsystems. Instead, the program can continue to execute other tasks that are
ready to run. When the previously-busy tasks are ready to resume, they will be executed
in turn, and the cycle is repeated.
Again, since we won't assume what interrupt enabled devices are loaded on programmable
logic, we will show an example here a software-only asyncio example that uses asyncio's
sleep method.
End of explanation
delay = random.randint(1,5)
my_event_loop = asyncio.get_event_loop()
try:
print("Creating task for coroutine 'wake_up'\n")
wake_up_task = my_event_loop.create_task(wake_up(delay))
my_event_loop.run_until_complete(wake_up_task)
except RuntimeError as err:
print (f'{err}' +
' - restart the Jupyter kernel to re-run the event loop')
finally:
my_event_loop.close()
Explanation: With the wake_up function defined, we then can add a new task to the event loop.
End of explanation |
13,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 1
Step1: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built in functions we will review some important ones. In particular
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Rembember that we train on train_data!
Step8: Predicting Values
Now that we have the model parameters
Step9: Now that we can calculate a prediction given the slop and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estiamted above.
Quiz Question
Step10: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question
Step13: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Comlplete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that coses $800,000 to be.
Quiz Question
Step15: New Model
Step16: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complete is optional and is there to assist you with solving the problems, but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function computes it directly
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
Explanation: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built-in functions we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
# if we want to multiply every price by 0.5 it's as simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
Explanation: As we see we get the same answer both ways
End of explanation
def simple_linear_regression(input_feature, output):
# compute the mean of input_feature and output
input_feature_mean = input_feature.mean()
output_mean = output.mean()
# compute the product of the output and the input_feature and its mean
product = output * input_feature
product_mean = product.mean()
    # compute the squared value of the input_feature and its mean
    input_feature_squared = input_feature * input_feature
    input_feature_squared_mean = input_feature_squared.mean()
    # use the formula for the slope
    line1 = product.sum() - (output.sum() * input_feature.sum()) / input_feature.size()
    line2 = input_feature_squared.sum() - (input_feature.sum() * input_feature.sum()) / input_feature.size()
    slope = line1 / line2
# use the formula for the intercept
intercept = output_mean - (slope * input_feature_mean)
return (intercept, slope)
Explanation: Aside: The python notation x.xxe+yy means x.xx * 10^(yy). e.g 100 = 10^2 = 1*10^2 = 1e2
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
End of explanation
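For reference, the closed-form least-squares estimates that simple_linear_regression computes can be written as
$$\hat{b} = \frac{\sum_i x_i y_i - \tfrac{1}{n}\sum_i x_i \sum_i y_i}{\sum_i x_i^2 - \tfrac{1}{n}\bigl(\sum_i x_i\bigr)^2}, \qquad \hat{a} = \bar{y} - \hat{b}\,\bar{x}$$
where x is the input feature, y is the output and n is the number of observations.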
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1
End of explanation
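As an optional sanity check (not part of the assignment), the same coefficients can be recovered with numpy; converting the SArrays to plain Python lists is assumed to be sufficient here:
import numpy as np
# np.polyfit with degree 1 returns [slope, intercept]
check_slope, check_intercept = np.polyfit(list(test_feature), list(test_output), 1)
print "numpy cross-check -- intercept: " + str(check_intercept) + ", slope: " + str(check_slope)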
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = input_feature * slope + intercept
return predicted_values
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
my_house_sqft = 2650
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft, estimated_price)
Explanation: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), What is the predicted price for a house with 2650 sqft?
End of explanation
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predictions = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residuals = predictions - output
# square the residuals and add them up
RSS = (residuals * residuals).sum()
return(RSS)
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals, and residual is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
End of explanation
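In equation form, the quantity computed by get_residual_sum_of_squares is
$$RSS = \sum_{i=1}^{n} \bigl(\hat{y}_i - y_i\bigr)^2 = \sum_{i=1}^{n} \bigl((a + b\,x_i) - y_i\bigr)^2$$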
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature. Use this equation to compute the inverse predictions:
estimated_feature = (output - intercept) / slope
return estimated_feature
Explanation: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
End of explanation
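The algebra behind the function above is a one-liner: solving y = a + b*x for x gives
$$x = \frac{y - a}{b}$$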
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
bedrooms_intercept, bedrooms_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
print bedrooms_intercept, bedrooms_slope
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data!
End of explanation
# Compute RSS when using bedrooms on TEST data:
rss_prices_on_bedrooms = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], bedrooms_intercept, bedrooms_slope)
print 'The RSS when using bedrooms on TEST data : ' + str(rss_prices_on_bedrooms)
# Compute RSS when using squarefeet on TEST data:
rss_prices_on_sqft = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'The RSS when using squarefeet on TEST data : ' + str(rss_prices_on_sqft)
rss_prices_on_bedrooms > rss_prices_on_sqft
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
End of explanation |
13,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Checkpoints Design Pattern
This notebook demonstrates how to set up checkpointing in Keras.
The model tries to predict whether or not a ride includes a toll.
Creating dataset
Create dataset from BigQuery. The dataset consists of 19 millions rows and will not comfortably fit into memory.
Step1: Create model | Python Code:
import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession
def features_and_labels(features):
label = features.pop('tolls_amount') # this is what we will train for
return features, tf.cast(label > 0, dtypes.int64, name='threshold')
def read_dataset(client, row_restriction, batch_size=2048, infinite=True):
GCP_PROJECT_ID='ai-analytics-solutions' # CHANGE
COL_NAMES = ['pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude', 'tolls_amount']
COL_TYPES = [dtypes.float64] * len(COL_NAMES)
DATASET_GCP_PROJECT_ID, DATASET_ID, TABLE_ID, = 'bigquery-public-data.new_york.tlc_green_trips_2015'.split('.')
bqsession = client.read_session(
"projects/" + GCP_PROJECT_ID,
DATASET_GCP_PROJECT_ID, TABLE_ID, DATASET_ID,
COL_NAMES, COL_TYPES,
requested_streams=2,
row_restriction=row_restriction + ' AND pickup_longitude > -80 AND dropoff_longitude < -70')
dataset = bqsession.parallel_read_rows()
dataset = dataset.prefetch(1).map(features_and_labels).shuffle(batch_size*10).batch(batch_size)
if infinite:
dataset = dataset.repeat()
return dataset
client = BigQueryClient()
temp_df = read_dataset(client, "pickup_datetime BETWEEN '2015-01-01' AND '2015-03-31'", 2)
for row in temp_df:
print(row)
break
BATCH_SIZE=2048
train_df = read_dataset(client, "pickup_datetime BETWEEN '2015-01-01' AND '2015-03-31'", BATCH_SIZE)
eval_df = read_dataset(client, "pickup_datetime BETWEEN '2015-04-01' AND '2015-04-30'", BATCH_SIZE, infinite=False) # for validation, read it only once
Explanation: Checkpoints Design Pattern
This notebook demonstrates how to set up checkpointing in Keras.
The model tries to predict whether or not a ride includes a toll.
Creating dataset
Create dataset from BigQuery. The dataset consists of 19 millions rows and will not comfortably fit into memory.
End of explanation
metrics = [
tf.keras.metrics.BinaryAccuracy(name='accuracy'),
tf.keras.metrics.Precision(name='precision'),
tf.keras.metrics.Recall(name='recall'),
tf.keras.metrics.AUC(name='roc_auc'),
]
# create inputs, and pass them into appropriate types of feature columns (here, everything is numeric)
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float64')
for colname in ['pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude']
}
input_fc = [tf.feature_column.numeric_column(colname) for colname in inputs.keys()]
# transformations, pass through
transformed = inputs.copy()
input_layer = tf.keras.layers.DenseFeatures(input_fc, name='features')(transformed)
# Deep learning model
d1 = tf.keras.layers.Dense(16, activation='relu', name='d1')(input_layer)
d2 = tf.keras.layers.Dropout(0.25, name='d2')(d1)
d3 = tf.keras.layers.Dense(16, activation='relu', name='d3')(d2)
output = tf.keras.layers.Dense(1, activation='sigmoid', name='d4', bias_initializer=tf.keras.initializers.Constant())(d3)
model = tf.keras.Model(inputs, output)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=metrics)
tf.keras.utils.plot_model(model, rankdir='LR')
class_weight = {0: 0.5, 1: 25.0}
OUTDIR='trained'
import shutil
shutil.rmtree(OUTDIR, ignore_errors=True)
NUM_TRAINING_EXAMPLES = 1000 * 1000 * 5
STOP_POINT = 3.5
TOTAL_TRAINING_EXAMPLES = int(STOP_POINT * NUM_TRAINING_EXAMPLES)
NUM_CHECKPOINTS = 10
steps_per_epoch = (TOTAL_TRAINING_EXAMPLES //
(BATCH_SIZE*NUM_CHECKPOINTS))
checkpoint_path = '{}/checkpoints/taxi'.format(OUTDIR)
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=False,
verbose=1)
history = model.fit(train_df, validation_data=eval_df,
                    epochs=NUM_CHECKPOINTS,
                    steps_per_epoch=steps_per_epoch,
                    callbacks=[cp_callback],
                    class_weight=class_weight)
Explanation: Create model
End of explanation |
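Because save_weights_only=False, each checkpoint is a full saved model, so a later session can resume from the last saved state. A minimal sketch, assuming the checkpoint_path defined above:
# Reload the most recent checkpoint and evaluate (or keep training) from there.
restored_model = tf.keras.models.load_model(checkpoint_path)
restored_model.evaluate(eval_df)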
13,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sports scheduling
In sports scheduling we usually have a bunch of games which are basically tasks requiring the two competing teams and a field as resources, so lets formulate this
Step1: However, the fields might be quite different, and so it would be unfair if some team needs to play more than twice on any one. We can avoid this by assigning each game a parameter for each team that plays this game and set it to one (here we simply use integers as parameters, this could also be some string in case of more parameters). Finally, we restrict any team parameter to two on any field
Step2: Sometimes we want to fix some games to specific rounds or fields. Here we fix all games in the middle round (n_fields-1) | Python Code:
import sys;sys.path.append('../src')
from pyschedule import Scenario, solvers, plotters, alt
n_teams = 12 # Number of teams
n_fields = int(n_teams/2) # Num of fields
n_rounds = n_teams-1 # Number of rounds
# Create scenario
S = Scenario('sport_scheduling',horizon=n_rounds)
# Game tasks
Games = { (i,j) : S.Task('Game_%i_%i'%(i,j)) for i in range(n_teams)
for j in range(n_teams) if i < j }
# Team and field resources
Teams = [ S.Resource('Team_%i'%i) for i in range(n_teams) ]
Fields = [ S.Resource('Field_%i'%i) for i in range(n_fields) ]
# Resource requirements
for i,j in Games :
Games[i,j] += [Teams[i], Teams[j]]
Games[i,j] += alt( Fields )
if solvers.mip.solve(S,msg=1):
%matplotlib inline
plotters.matplotlib.plot(S,hide_resources=Teams,fig_size=(12,5))
else:
print('no solution found')
Explanation: Sports scheduling
In sports scheduling we usually have a bunch of games which are basically tasks requiring the two competing teams and a field as resources, so let's formulate this:
End of explanation
# Teams in games as task parameters
for i,j in Games :
Games[i,j].teams = [i,j]
# Each team at most two times per field
for j in range(n_fields):
for i in range(n_teams) :
S += Fields[j][lambda T,i=i: i in T.teams] <= 2
if solvers.mip.solve(S,kind='CBC',msg=1):
plotters.matplotlib.plot(S,hide_resources=Teams,fig_size=(12,5))
else:
print('no solution found')
Explanation: However, the fields might be quite different, and so it would be unfair if some team needs to play more than twice on any one. We can avoid this by assigning each game a parameter for each team that plays this game and setting it to one (here we simply use integers as parameters; this could also be some string in case of more parameters). Finally, we restrict any team parameter to two on any field:
End of explanation
for i in range(n_fields):
# Restrict to specific field
Games[2*i,2*i+1] += Fields[i]
# Start exactly in the middle round
S += Games[2*i,2*i+1] >= n_fields-1
if solvers.mip.solve(S,msg=1):
plotters.matplotlib.plot(S,hide_resources=Teams,fig_size=(12,5))
else:
print('no solution found')
Explanation: Sometimes we want to fix some games to specific rounds or fields. Here we fix all games in the middle round (n_fields-1):
End of explanation |
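To inspect the computed schedule programmatically rather than through the plot, pyschedule exposes the assignment via S.solution(); the tuple layout (task, resource, start, end) is assumed here:
# Print each game's field and round, ordered by round
for task, resource, start, end in sorted(S.solution(), key=lambda x: x[2]):
    print('%s plays on %s in round %s' % (task, resource, start))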
13,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-2', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
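For illustration only, the call pattern with placeholder values (not real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")   # placeholder name/email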
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
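For illustration only, the call pattern for this STRING property with a made-up value:
# DOC.set_value("EXAMPLE-AEROSOL-1.0")   # hypothetical model name, replace with the real one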
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
13,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyze
The hypertools analyze function allows you to perform complex analyses (normalization, dimensionality reduction and alignment) in a single line of code!
(Note that the order of operations is always the following: normalize -> reduce -> align)
Import packages
Step1: Load your data
First, we'll load one of the sample datasets. This dataset is a list of 2 numpy arrays, each containing average brain activity (fMRI) from 18 subjects listening to the same story, fit using Hierarchical Topographic Factor Analysis (HTFA) with 100 nodes. The rows are timepoints and the columns are fMRI components.
See the full dataset or the HTFA article for more info on the data and HTFA, respectively.
Step2: We can see that the elements of weights each have the dimensions (300,100). We can further visualize the elements using a heatmap.
Step3: Normalization
Here is an example where we z-score the columns within each list
Step4: We can again visualize the data (this time, normalized) using heatmaps.
Step5: Normalize and reduce
To easily normalize and reduce the dimensionality of the data, pass the normalize, reduce, and ndims arguments to the analyze function. The normalize argument, outlined above, specifies how the data should be normalized. The reduce argumemnt, specifies the desired method of reduction. The ndims argument (int) specifies the number of dimensions to reduce to.
Supported dimensionality reduction models include
Step6: We can again visualize the data using heatmaps.
Step7: Finer control
For finer control of the model parameters, reduce can be a dictionary with the keys model and params. See scikit-learn specific model docs for details on parameters supported for each model.
Step8: We can again visualize the data using heatmaps.
Step9: Normalize, reduce, and align
Finally, we can normalize, reduce and then align all in one step.
The align argument can accept the following strings
Step10: Again, we can visualize the normed, reduced, and aligned data using a heatmap. | Python Code:
import hypertools as hyp
import seaborn as sb
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Analyze
The hypertools analyze function allows you to perform complex analyses (normalization, dimensionality reduction and alignment) in a single line of code!
(Note that the order of operations is always the following: normalize -> reduce -> alignment)
Import packages
End of explanation
geo = hyp.load('weights_avg')
weights = geo.get_data()
print(weights[0].shape) # 300 TRs and 100 components
print(weights[1].shape)
Explanation: Load your data
First, we'll load one of the sample datasets. This dataset is a list of 2 numpy arrays, each containing average brain activity (fMRI) from 18 subjects listening to the same story, fit using Hierarchical Topographic Factor Analysis (HTFA) with 100 nodes. The rows are timepoints and the columns are fMRI components.
See the full dataset or the HTFA article for more info on the data and HTFA, respectively.
End of explanation
for x in weights:
sb.heatmap(x)
plt.show()
Explanation: We can see that the elements of weights each have the dimensions (300,100). We can further visualize the elements using a heatmap.
End of explanation
norm_within = hyp.analyze(weights, normalize='within')
Explanation: Normalization
Here is an example where we z-score the columns within each list:
Normalize accepts the following arguments, as strings:
+ 'across' - z-scores columns across all lists (default)
+ 'within' - z-scores columns within each list
+ 'row' - z-scores each row of data
End of explanation
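# A minimal sketch of the other normalize options described above
# (illustrative variable names; same data, different scaling):
norm_across = hyp.analyze(weights, normalize='across')  # z-score columns across all lists (default)
norm_row = hyp.analyze(weights, normalize='row')        # z-score each row of data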
for x in norm_within:
sb.heatmap(x)
plt.show()
Explanation: We can again visualize the data (this time, normalized) using heatmaps.
End of explanation
norm_reduced = hyp.analyze(weights, normalize='within', reduce='PCA', ndims=3)
Explanation: Normalize and reduce
To easily normalize and reduce the dimensionality of the data, pass the normalize, reduce, and ndims arguments to the analyze function. The normalize argument, outlined above, specifies how the data should be normalized. The reduce argument specifies the desired method of reduction. The ndims argument (int) specifies the number of dimensions to reduce to.
Supported dimensionality reduction models include: PCA, IncrementalPCA, SparsePCA, MiniBatchSparsePCA, KernelPCA, FastICA, FactorAnalysis, TruncatedSVD, DictionaryLearning, MiniBatchDictionaryLearning, TSNE, Isomap, SpectralEmbedding, LocallyLinearEmbedding, and MDS.
End of explanation
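# Sketch: any of the supported model names listed above can be passed as the
# reduce string, e.g. an ICA-based reduction (illustrative choice):
norm_ica = hyp.analyze(weights, normalize='within', reduce='FastICA', ndims=3)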
for x in norm_reduced:
sb.heatmap(x)
plt.show()
Explanation: We can again visualize the data using heatmaps.
End of explanation
reduce={'model' : 'PCA', 'params' : {'whiten' : True}} # dictionary of parameters
reduced_params = hyp.analyze(weights, normalize='within', reduce=reduce, ndims=3)
Explanation: Finer control
For finer control of the model parameters, reduce can be a dictionary with the keys model and params. See scikit-learn specific model docs for details on parameters supported for each model.
End of explanation
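# Another sketch of the dict form, assuming scikit-learn's TSNE and its
# 'perplexity' parameter (illustrative values):
reduce_tsne = {'model': 'TSNE', 'params': {'perplexity': 30}}
reduced_tsne = hyp.analyze(weights, normalize='within', reduce=reduce_tsne, ndims=3)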
for x in reduced_params:
sb.heatmap(x)
plt.show()
Explanation: We can again visualize the data using heatmaps.
End of explanation
norm_red_algn = hyp.analyze(weights, normalize='within', reduce='PCA', ndims=3, align='SRM')
Explanation: Normalize, reduce, and align
Finally, we can normalize, reduce and then align all in one step.
The align argument can accept the following strings:
+ 'hyper' - implements hyperalignment algorithm
+ 'SRM' - implements shared response model via Brainiak
End of explanation
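# Sketch of the alternative aligner mentioned above (hyperalignment):
norm_red_hyper = hyp.analyze(weights, normalize='within', reduce='PCA', ndims=3, align='hyper')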
for x in norm_red_algn:
sb.heatmap(x)
plt.show()
Explanation: Again, we can visualize the normed, reduced, and aligned data using a heatmap.
End of explanation |
13,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure S1
Step1: Load phase boundary data
Step2: Load optimization data
Step3: Put it all together and produce the final figure | Python Code:
import sys
sys.path.append('../lib/')
import numpy as np
import matplotlib.text
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
import shapely.ops
import plotting
import evolimmune
from plotting import *
import analysis
%load_ext autoreload
%autoreload 2
plt.style.use(['paper'])
eps = 1e-8
Explanation: Figure S1: Global optimization over parameters
This notebook contains the analysis of a direct global optimization over all four parameters ($p, q, c_{\rm constitutive}, p_{\rm uptake}$) of the model as a function of the pathogen statistics. It can be thought of as a supplement to Figure 1, motivating the choice of immune strategies considered for determining the phase boundaries.
Prerequisites:
To generate the data, type:
make run
make agg
This notebook also needs the phase data from Figure 2.
Import a number of packages that we will need in the following.
End of explanation
df = analysis.loadnpz('../fig2/data/phases.npz')
polygons = evolimmune.polygons_from_boundaries(df, yconv=evolimmune.to_tau)
phases = evolimmune.phases_from_polygons(polygons)
qpos = (polygons['complete']-polygons['ac'])-polygons['pm'].intersection(polygons['pi'])-(polygons['complete']-polygons['io'])
puppos = polygons['complete']-shapely.ops.cascaded_union((polygons['ac'],
polygons['pm'],
polygons['complete']-polygons['mi']))
analysis.printunique(df)
Explanation: Load phase boundary data
End of explanation
dft = analysis.loadnpz('data/opt.npz')
evolimmune.derived_quantities(dft)
analysis.printunique(dft)
Explanation: Load optimization data
End of explanation
variables = ['cconstitutive', 'q', 'p', 'pup']
fig, axesgrid = plt.subplots(nrows=2, ncols=2, figsize=(7, 5.0), sharey=True, sharex=True)
ymin, ymax = 0.09, 20.0
axes = axesgrid.flatten()
boundarykwargs = dict(ylimmax=ymax, ylimmin=ymin, lw=7.5, color='w')
for counter, var in enumerate(variables):
ax = axes[counter]
cmap = cm.viridis if var != 'cconstitutive' else cm.viridis_r
cmap.set_bad('darkmagenta', 1.)
im, cbar = plotting.heatmap(dft.pivot(index='tauenv', columns='pienv', values=var),
imshow=True, zlabel=evolimmune.varname_to_tex[var], cmap=cmap, ax=ax,
interpolation='bilinear')
cbar.outline.set_linewidth(0.0)
if var == 'cconstitutive':
analysis.plot_interior_boundary(ax, phases['p'], **boundarykwargs)
analysis.plot_interior_boundary(ax, phases['a'], **boundarykwargs)
elif var in ['q', 'p']:
analysis.plot_interior_boundary(ax, qpos, **boundarykwargs)
if var == 'p':
analysis.plot_interior_boundary(ax, phases['c'], **boundarykwargs)
elif var == 'pup':
analysis.plot_interior_boundary(ax, puppos, **boundarykwargs)
ax.set_ylabel('')
ax.set_xlabel('')
ax.set_xlim(0.0, 1.0)
ax.set_ylim(ymin, ymax)
ax.set_yscale('log')
plotting.despine(ax, spines='all')
for ax in axesgrid[:, 0]:
ax.set_ylabel(r'characteristic time $\tau_{env}$')
for ax in axesgrid[-1, :]:
ax.set_xlabel('frequency $\pi_{env}$')
fig.tight_layout(pad=0.25)
fig.savefig('SIopt.pdf')
fig.savefig('SIopt.svg')
Explanation: Put it all together and produce the final figure
End of explanation |
13,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-vol', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: EMAC-2-53-VOL
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
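# Hypothetical example only (replace with the real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")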
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
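# Illustrative only -- pick the valid choice matching the model, e.g.:
# DOC.set_value("TEOS 2010")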
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
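# Illustrative only -- enter the value actually used by the model
# (seawater specific heat is typically close to 4000 J/(kg K)), e.g.:
# DOC.set_value(3992.0)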
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
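# Illustrative only -- a boolean literal, e.g. for a fixed (non-adaptive) grid:
# DOC.set_value(False)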
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
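# Illustrative only -- the actual tracer time step in seconds goes here,
# e.g. an hourly step:
# DOC.set_value(3600)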
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean*
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean*
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean*
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean*
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean*
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
13,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
E2E ML on GCP
Step1: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step4: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step5: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step6: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
Step10: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
Step11: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step12: Import TensorFlow
Import the TensorFlow package into your Python environment.
Step13: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step14: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step15: Set pre-built containers
Set the pre-built Docker container image for prediction.
Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15. The following list shows some of the pre-built images available
Step16: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard
Step17: Examine the tuning package
Package layout
Before you start the tuning, you will look at how a Python package is assembled for a custom tuning job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom tuning job. Note: when it is referenced in the worker pool specification, the directory slash is replaced with a dot (trainer.task) and the .py file suffix is dropped.
Package Assembly
In the following cells, you will assemble the training package.
Step18: Create the task script for the Python training package
Next, you create the task.py script for driving the training package. Some notable steps include
Step19: Store tuning script on your Cloud Storage bucket
Next, you package the tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step20: Create a Docker file
To use your own custom training container, you build a Docker file and embed into the container your training scripts.
Write the Docker file contents
Your first step in containerizing your code is to create a Docker file. In your Docker file you'll include all the commands needed to run your container image. It'll install all the libraries you're using and set up the entry point for your training code.
Install a pre-defined container image from TensorFlow repository for deep learning images.
Copies in the Python training code, to be shown subsequently.
Sets the entry into the Python training script as trainer/task.py. Note, the .py is dropped in the ENTRYPOINT command, as it is implied.
Step21: Build the container locally
Next, you will provide a name for your custom container that you will use when you submit it to the Google Container Registry.
Step22: Next, build the container.
Step23: Register the custom container
When you've finished running the container locally, push it to Google Container Registry.
Step24: Construct hyperparameter tuning pipeline
Next, construct the pipeline with the following tasks
Step25: Create hyperparameter tuning specifications
Next, you construct the worker pool specification, and the study's metric and parameter specifications, as follows
Step26: Compile and execute hyperparameter tuning pipeline
Next, you compile the pipeline and then execute it. The pipeline takes the following parameters, which are passed as the dictionary parameter_values
Step27: View the data pipeline execution results
Step28: Delete a pipeline job
After a pipeline job is completed, you can delete the pipeline job with the method delete(). Prior to completion, a pipeline job can be canceled with the method cancel().
Step29: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install -U tensorflow==2.5 $USER_FLAG -q
! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG -q
! pip3 install -U tensorflow-transform==1.2 $USER_FLAG -q
! pip3 install -U tensorflow-io==0.18 $USER_FLAG -q
! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG -q
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q
! pip3 install --upgrade kfp $USER_FLAG -q
Explanation: E2E ML on GCP: MLOps stage 3 : formalization: get started with Hyperparameter Tuning pipeline components
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_hpt_pipeline_components.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage3/get_started_with_hpt_pipeline_components.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage3/get_started_with_hpt_pipeline_components.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 3 : formalization: get started with Hyperparameter Tuning pipeline components.
Dataset
The dataset used for this tutorial is the Horses or Humans dataset from TensorFlow Datasets. The trained model predicts whether an image contains a horse or a human.
Objective
In this tutorial, you learn how to use prebuilt Google Cloud Pipeline Components for Vertex AI Hyperparameter Tuning.
This tutorial uses the following Google Cloud ML services:
Google Cloud Pipeline Components
Vertex AI Dataset, Model and Endpoint resources
Vertex AI Hyperparameter Tuning
The steps performed include:
Construct a pipeline for:
Hyperparameter tune/train a custom model.
Retrieve the tuned hyperparameter values and metrics to optimize.
If the metrics exceed a specified threshold.
Get the location of the model artifacts for the best tuned model.
Upload the model artifacts to a Vertex AI Model resource.
Execute a Vertex AI pipeline.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installations
Install the required packages for executing the notebook.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
REGION = "[your-region]" # @param {type:"string"}
if REGION == "[your-region]":
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your service account from gcloud
if not IS_COLAB:
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].replace("*", "").strip()
if IS_COLAB:
shell_output = ! gcloud projects describe $PROJECT_ID
project_number = shell_output[-1].split(":")[1].strip().replace("'", "")
SERVICE_ACCOUNT = f"{project_number}[email protected]"
print("Service Account:", SERVICE_ACCOUNT)
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_URI
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_URI
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
import google.cloud.aiplatform as aip
import json
from kfp import dsl
from kfp.v2 import compiler
from kfp.v2.dsl import component
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
import tensorflow as tf
Explanation: Import TensorFlow
Import the TensorFlow package into your Python environment.
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
import os
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Note: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
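If you prefer to pin the accelerator configuration explicitly rather than reading the IS_TESTING_* environment variables, a minimal sketch (values are illustrative) is:
# Hypothetical explicit configuration, bypassing the env-var logic above:
# TRAIN_GPU, TRAIN_NGPU = aip.gapic.AcceleratorType.NVIDIA_TESLA_K80, 1
# DEPLOY_GPU, DEPLOY_NGPU = None, None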
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU)
Explanation: Set pre-built containers
Set the pre-built Docker container image for prediction.
Set the variable TF to the TensorFlow version of the container image. For example, 2-1 would be version 2.1, and 1-15 would be version 1.15. The following list shows some of the pre-built images available:
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Make folder for Python tuning script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow==2.5.0',\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Horses or Humans image classification\n\nVersion: 0.0.0\n\nSummary: Demostration tuning script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: [email protected]\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Examine the tuning package
Package layout
Before you start the tuning, you will look at how a Python package is assembled for a custom tuning job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom tuning job. Note: when it is referenced in the worker pool specification, the directory slash is replaced with a dot (trainer.task) and the .py file suffix is dropped.
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
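For reference, a sketch of how this package could be referenced from a worker pool specification through a Python package spec is shown below; the executor image URI is illustrative, and this tutorial instead builds a custom container in the following steps.
# Hypothetical python_package_spec-style worker pool entry (not used later in this tutorial):
# example_worker_pool_spec = {
#     "machine_spec": {"machine_type": TRAIN_COMPUTE},
#     "replica_count": 1,
#     "python_package_spec": {
#         "executor_image_uri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-5:latest",
#         "package_uris": [f"{BUCKET_URI}/trainer_horses_or_humans.tar.gz"],
#         "python_module": "trainer.task",
#         "args": ["--epochs=10"],
#     },
# }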
%%writefile custom/trainer/task.py
import os
os.system('pip install cloudml-hypertune') # alternatively, this can be added to the Dockerfile
import tensorflow as tf
import tensorflow_datasets as tfds
import argparse
import hypertune
def get_args():
'''Parses args. Must include all hyperparameters you want to tune.'''
parser = argparse.ArgumentParser()
parser.add_argument(
'--epochs',
required=True,
type=int,
help='number of epochs')
parser.add_argument(
'--learning_rate',
required=True,
type=float,
help='learning rate')
parser.add_argument(
'--momentum',
required=False,
type=float,
default=0.5,
help='SGD momentum value')
parser.add_argument(
'--batch_size',
required=True,
type=int,
help='the batch size')
parser.add_argument(
'--model-dir',
dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'),
type=str, help='Model dir.')
args = parser.parse_args()
return args
def preprocess_data(image, label):
'''Resizes and scales images.'''
image = tf.image.resize(image, (150,150))
return tf.cast(image, tf.float32) / 255., label
def get_data():
'''Loads Horses Or Humans dataset and preprocesses data.'''
data, info = tfds.load(name='horses_or_humans', as_supervised=True, with_info=True)
# Create train dataset
train_data = data['train'].map(preprocess_data)
train_data = train_data.shuffle(1000)
train_data = train_data.batch(64)
# Create validation dataset
validation_data = data['test'].map(preprocess_data)
validation_data = validation_data.batch(64)
return train_data, validation_data
def get_model(learning_rate, momentum):
    '''Defines and compiles the model.'''
inputs = tf.keras.Input(shape=(150, 150, 3))
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(inputs)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Flatten()(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
loss='binary_crossentropy',
optimizer=tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=momentum),
metrics=['accuracy'])
return model
def train_model(model, train_data, validation_data, epochs, batch_size):
history = model.fit(train_data, epochs=epochs, batch_size=batch_size, validation_data=validation_data)
# DEFINE METRIC
hp_metric = history.history['val_accuracy'][-1]
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=hp_metric,
global_step=epochs
)
return model
def main():
args = get_args()
train_data, validation_data = get_data()
model = get_model(args.learning_rate, args.momentum)
model = train_model(model, train_data, validation_data, args.epochs, args.batch_size)
model.save(args.model_dir)
if __name__ == "__main__":
main()
Explanation: Create the task script for the Python training package
Next, you create the task.py script for driving the training package. Some notable steps include:
Command-line arguments:
model-dir: The location to save the trained model. When using Vertex AI custom training, the location will be specified in the environment variable: AIP_MODEL_DIR,
epochs: The number of epochs to train for.
learning_rate: Hyperparameter for learning rate.
batch_size: Hyperparameter for batch size.
Data preprocessing (get_data())
Loads and preprocesses the dataset as a tf.data.Dataset generator.
Model architecture (get_model()):
Builds the corresponding model architecture.
Training (train_model()):
Trains the model
Model artifact saving
Saves the model artifacts where the Cloud Storage location is specified.
End of explanation
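Before packaging it, you could optionally smoke-test the script on your local machine; a hypothetical invocation (it assumes TensorFlow and tensorflow_datasets are installed locally, and the flag values are illustrative) is:
# Hypothetical local debug run of the trainer:
# ! python custom/trainer/task.py --epochs=1 --learning_rate=0.01 --batch_size=16 --model-dir=/tmp/horses_model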
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_URI/trainer_horses_or_humans.tar.gz
Explanation: Store tuning script on your Cloud Storage bucket
Next, you package the tuning folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
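You can optionally confirm that the package landed in the bucket before moving on:
# Optional check of the uploaded package:
# ! gsutil ls -l $BUCKET_URI/trainer_horses_or_humans.tar.gz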
%%writefile custom/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3
WORKDIR /root
WORKDIR /
# Copies the trainer code to the docker image.
COPY trainer /trainer
# Sets up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
Explanation: Create a Docker file
To use your own custom training container, you build a Docker file and embed into the container your training scripts.
Write the Docker file contents
Your first step in containerizing your code is to create a Docker file. In your Docker file you'll include all the commands needed to run your container image. It'll install all the libraries you're using and set up the entry point for your training code.
Install a pre-defined container image from TensorFlow repository for deep learning images.
Copies in the Python training code, to be shown subsequently.
Sets the entry into the Python training script as trainer/task.py. Note, the .py is dropped in the ENTRYPOINT command, as it is implied.
End of explanation
TRAIN_IMAGE = "gcr.io/" + PROJECT_ID + "/horses_or_humans:v1"
Explanation: Build the container locally
Next, you will provide a name for your custom container that you will use when you submit it to the Google Container Registry.
End of explanation
! docker build custom -t $TRAIN_IMAGE
Explanation: Next, build the container.
End of explanation
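Optionally, the freshly built image can be smoke-tested locally before pushing it; this is the containerized counterpart of the local run suggested earlier, and the flag values are illustrative:
# Hypothetical local smoke test of the training container:
# ! docker run $TRAIN_IMAGE --epochs=1 --learning_rate=0.01 --batch_size=16 --model-dir=/tmp/model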
! docker push $TRAIN_IMAGE
Explanation: Register the custom container
When you've finished running the container locally, push it to Google Container Registry.
End of explanation
PIPELINE_ROOT = "{}/pipeline_root/custom_icn_tuning".format(BUCKET_URI)
@component(packages_to_install=["google-cloud-aiplatform"])
def model_dir(base_output_directory: str, best_trial: str) -> str:
from google.cloud.aiplatform_v1.types import study
trial_proto = study.Trial.from_json(best_trial)
model_id = trial_proto.id
return f"{base_output_directory}/{model_id}/model"
@dsl.pipeline(
name="hp-tuning", description="Custom image classification hyperparameter tuning"
)
def pipeline(
display_name: str,
worker_pool_specs: list,
study_spec_metrics: list,
study_spec_parameters: list,
threshold: float,
deploy_image: str,
max_trial_count: int = 5,
parallel_trial_count: int = 1,
base_output_directory: str = PIPELINE_ROOT,
labels: dict = {},
project: str = PROJECT_ID,
region: str = REGION,
):
from google_cloud_pipeline_components.experimental import \
hyperparameter_tuning_job
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.hyperparameter_tuning_job import \
HyperparameterTuningJobRunOp
from google_cloud_pipeline_components.v1.model import ModelUploadOp
from kfp.v2.components import importer_node
tuning_op = HyperparameterTuningJobRunOp(
display_name=display_name,
project=project,
location=region,
worker_pool_specs=worker_pool_specs,
study_spec_metrics=study_spec_metrics,
study_spec_parameters=study_spec_parameters,
max_trial_count=max_trial_count,
parallel_trial_count=parallel_trial_count,
base_output_directory=base_output_directory,
)
trials_op = hyperparameter_tuning_job.GetTrialsOp(
gcp_resources=tuning_op.outputs["gcp_resources"]
)
best_trial_op = hyperparameter_tuning_job.GetBestTrialOp(
trials=trials_op.output, study_spec_metrics=study_spec_metrics
)
threshold_op = hyperparameter_tuning_job.IsMetricBeyondThresholdOp(
trial=best_trial_op.output,
study_spec_metrics=study_spec_metrics,
threshold=threshold,
)
with dsl.Condition(
threshold_op.output == "true",
name="deploy_decision",
):
_ = hyperparameter_tuning_job.GetHyperparametersOp(trial=best_trial_op.output)
model_dir_op = model_dir(base_output_directory, best_trial_op.output)
import_unmanaged_model_op = importer_node.importer(
artifact_uri=model_dir_op.output,
artifact_class=artifact_types.UnmanagedContainerModel,
metadata={
"containerSpec": {
"imageUri": DEPLOY_IMAGE,
},
},
).after(model_dir_op)
_ = ModelUploadOp(
project=project,
display_name=display_name,
unmanaged_container_model=import_unmanaged_model_op.outputs["artifact"],
).after(import_unmanaged_model_op)
Explanation: Construct hyperparameter tuning pipeline
Next, construct the pipeline with the following tasks:
Create/Execute a hyperparameter tuning job
Get all trial results.
Get the best trial results.
Determine if the best trial results exceed a threshold
Retrieve the hyperparameter values
Determine Cloud Storage location of the best model
Upload the best model as a Vertex AI Model resource.
End of explanation
from google_cloud_pipeline_components.experimental import \
hyperparameter_tuning_job
gpu = "ACCELERATOR_TYPE_UNSPECIFIED"
accelerator_count = 0
if TRAIN_GPU:
gpu = TRAIN_GPU.name
accelerator_count = 1
else:
gpu = "ACCELERATOR_TYPE_UNSPECIFIED"
    accelerator_count = 0  # must be an integer (0), not None, when no accelerator is used
CMDARGS = [
"--epochs=10",
]
# The spec of the worker pools including machine type and Docker image
worker_pool_specs = [
{
"machine_spec": {
"machine_type": TRAIN_COMPUTE,
"accelerator_type": gpu,
"accelerator_count": accelerator_count,
},
"replica_count": 1,
"container_spec": {"image_uri": TRAIN_IMAGE, "args": CMDARGS},
}
]
# List serialized from the dictionary representing metrics to optimize.
# The dictionary key is the metric_id, which is reported by your training job,
# and the dictionary value is the optimization goal of the metric.
metric_spec = hyperparameter_tuning_job.serialize_metrics({"accuracy": "maximize"})
# List serialized from the parameter dictionary. The dictionary
# represents parameters to optimize. The dictionary key is the parameter_id,
# which is passed into your training job as a command line key word argument, and the
# dictionary value is the parameter specification of the metric.
parameter_spec = hyperparameter_tuning_job.serialize_parameters(
{
"learning_rate": aip.hyperparameter_tuning.DoubleParameterSpec(
min=0.001, max=1, scale="log"
),
"batch_size": aip.hyperparameter_tuning.DiscreteParameterSpec(
values=[16, 32, 64], scale=None
),
}
)
Explanation: Create hyperparameter tuning specifications
Next, you construct the worker pool specification, and the study's metric and parameter specifications, as follows:
Worker pool specification
This specification describes the machine and container requirements, and scaling for executing the hyperparameter study. Since the training module is embedded in the docker image, you use the args field to specify any command-line arguments, which are not part of the study, to the training module. In this example, you pass the number of epochs.
Parameter specification
This specification describes the hyperparameters to tune, and the range of values to tune them over. For each trial, the sampled values for these parameters are passed as command-line arguments to the training module, as --<parameter_name>=<trial_value>.
Metric specification
This specification describes the metric(s) to be evaluated in the study and whether to minimize or maximize each metric.
End of explanation
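Since trainer/task.py also parses a --momentum flag, you could optionally add it to the study; a hypothetical extra entry for the dictionary above would be:
# Hypothetical additional hyperparameter (the range is illustrative):
# "momentum": aip.hyperparameter_tuning.DoubleParameterSpec(min=0.1, max=0.9, scale="linear"),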
compiler.Compiler().compile(
pipeline_func=pipeline, package_path="hp_tune_pipeline_job.json"
)
pipeline = aip.PipelineJob(
display_name="hp_tuning",
template_path="hp_tune_pipeline_job.json",
pipeline_root=PIPELINE_ROOT,
parameter_values={
"display_name": "hp_tuning",
"worker_pool_specs": worker_pool_specs,
"study_spec_metrics": metric_spec,
"study_spec_parameters": parameter_spec,
"threshold": 0.7,
"deploy_image": DEPLOY_IMAGE,
},
enable_caching=False,
)
pipeline.run()
! rm -rf hp_tune_pipeline_job.json custom custom.tar.gz
Explanation: Compile and execute hyperparameter tuning pipeline
Next, you compile the pipeline and then execute it. The pipeline takes the following parameters, which are passed as the dictionary parameter_values:
display_name: A human readable name for the pipeline job.
worker_pool_specs: The machine and container requirements, auto-scaling, and command-line arguments for the tuning trials.
study_spec_metrics: The metrics to optimize in the study trials.
study_spec_parameters: The parameters to tune.
threshold: The minimum value of the optimized metric required to upload the model.
deploy_image: The serving container image recorded for the uploaded model.
End of explanation
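Note that pipeline.run() blocks the notebook until the job finishes; if you prefer a non-blocking launch, the SDK also offers a submit method (sketch below), after which you can monitor progress in the Cloud Console:
# Non-blocking alternative to pipeline.run():
# pipeline.submit()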
PROJECT_NUMBER = pipeline.gca_resource.name.split("/")[1]
print(PROJECT_NUMBER)
def print_pipeline_output(job, output_task_name):
JOB_ID = job.name
print(JOB_ID)
for _ in range(len(job.gca_resource.job_detail.task_details)):
TASK_ID = job.gca_resource.job_detail.task_details[_].task_id
EXECUTE_OUTPUT = (
PIPELINE_ROOT
+ "/"
+ PROJECT_NUMBER
+ "/"
+ JOB_ID
+ "/"
+ output_task_name
+ "_"
+ str(TASK_ID)
+ "/executor_output.json"
)
GCP_RESOURCES = (
PIPELINE_ROOT
+ "/"
+ PROJECT_NUMBER
+ "/"
+ JOB_ID
+ "/"
+ output_task_name
+ "_"
+ str(TASK_ID)
+ "/gcp_resources"
)
EVAL_METRICS = (
PIPELINE_ROOT
+ "/"
+ PROJECT_NUMBER
+ "/"
+ JOB_ID
+ "/"
+ output_task_name
+ "_"
+ str(TASK_ID)
+ "/evaluation_metrics"
)
if tf.io.gfile.exists(EXECUTE_OUTPUT):
! gsutil cat $EXECUTE_OUTPUT
return EXECUTE_OUTPUT
elif tf.io.gfile.exists(GCP_RESOURCES):
! gsutil cat $GCP_RESOURCES
return GCP_RESOURCES
elif tf.io.gfile.exists(EVAL_METRICS):
! gsutil cat $EVAL_METRICS
return EVAL_METRICS
return None
print("hyperparameter-tuning-job")
artifacts = print_pipeline_output(pipeline, "hyperparameter-tuning-job")
print("\n\n")
print("gettrialsop")
artifacts = print_pipeline_output(pipeline, "gettrialsop")
print("\n\n")
print("getbesttrialop")
artifacts = print_pipeline_output(pipeline, "getbesttrialop")
print("\n\n")
output = !gsutil cat $artifacts
output = json.loads(output[0])
best_trial = json.loads(output["parameters"]["Output"]["stringValue"])
model_id = best_trial["id"]
print("BEST MODEL", model_id)
parameters = best_trial["parameters"]
batch_size = parameters[0]["value"]
print("BATCH SIZE", batch_size)
learning_rate = parameters[1]["value"]
print("LR", learning_rate)
MODEL_DIR = f"{PIPELINE_ROOT}/{model_id}/model"
print("ismetricbeyondthresholdop")
artifacts = print_pipeline_output(pipeline, "ismetricbeyondthresholdop")
print("\n\n")
print("deploy-decision")
artifacts = print_pipeline_output(pipeline, "deploy-decision")
print("\n\n")
print("model-dir")
artifacts = print_pipeline_output(pipeline, "model-dir")
print("\n\n")
print("model-upload")
artifacts = print_pipeline_output(pipeline, "model-upload")
print("\n\n")
Explanation: View the data pipeline execution results
End of explanation
pipeline.delete()
Explanation: Delete a pipeline job
After a pipeline job is completed, you can delete the pipeline job with the method delete(). Prior to completion, a pipeline job can be canceled with the method cancel().
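For example (a minimal sketch), a job that has not yet completed could be stopped with:
# Cancel a pipeline job that is still running.
pipeline.cancel()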
End of explanation
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Cloud Storage Bucket
End of explanation |
13,062 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PROMISE12 prostate segmentation demo
Preparation
Step1: 3) Make sure you have all the dependencies installed (replacing gpu with cpu for cpu-only mode)
Step2: Training a network from the command line
The simplest way to use NiftyNet is via the commandline net_segment.py script. Normally, this is done on the command line with a command like this from the NiftyNet root directory
Step3: Now you have trained (a few iterations of) a deep learning network for medical image segmentation. If you have some time on your hands, you can finish training the network (by leaving off the max_iter argument) and try it out, by running the following command
python net_segment.py inference --conf demos/PROMISE12/promise12_demo_inference_config.ini
or the following python code in the Notebook
Step4: Otherwise, you can load up some pre-trained weights for the network | Python Code:
import os,sys
niftynet_path=r'path/to/NiftyNet'
os.chdir(niftynet_path)
Explanation: PROMISE12 prostate segmentation demo
Preparation:
1) Make sure you have set up the PROMISE12 data set. If not, download it from https://promise12.grand-challenge.org/ (registration required) and run data/PROMISE12/setup.py
2) Make sure you are in NiftyNet root, setting niftynet_path correctly to the path with the niftynet folder in it
End of explanation
import pip
#pip.main(['install','-r','requirements-gpu.txt'])
pip.main(['install','-r','requirements-cpu.txt'])
pip.main(['install', 'SimpleITK>=1.0.0'])
Explanation: 3) Make sure you have all the dependencies installed (replacing gpu with cpu for cpu-only mode):
End of explanation
import os
import sys
import niftynet
sys.argv=['','train','-a','net_segment','--conf',os.path.join('demos','PROMISE12','promise12_demo_train_config.ini'),'--max_iter','10']
niftynet.main()
Explanation: Training a network from the command line
The simplest way to use NiftyNet is via the commandline net_segment.py script. Normally, this is done on the command line with a command like this from the NiftyNet root directory:
python net_segment.py train --conf demos/PROMISE12/promise12_demo_train_config.ini --max_iter 10
Notice that we use configuration file that is specific to this experiment. This file contains default settings. Also note that we can override these settings on the command line.
To execute NiftyNet from within the notebook, you can run the following python code:
End of explanation
import os
import sys
import niftynet
sys.argv=['', 'inference','-a','net_segment','--conf',os.path.join('demos','PROMISE12','promise12_demo_inference_config.ini')]
niftynet.main()
Explanation: Now you have trained (a few iterations of) a deep learning network for medical image segmentation. If you have some time on your hands, you can finish training the network (by leaving off the max_iter argument) and try it out, by running the following command
python net_segment.py inference --conf demos/PROMISE12/promise12_demo_inference_config.ini
or the following python code in the Notebook
End of explanation
import os
import sys
import niftynet
sys.argv=['', 'inference','-a','net_segment','--conf',os.path.join('demos','PROMISE12','promise12_demo_inference_config.ini'), '--model_dir', os.path.join('demos','PROMISE12','pretrained')]
niftynet.main()
Explanation: Otherwise, you can load up some pre-trained weights for the network:
python net_segment.py inference --conf demo/PROMISE12/promise12_demo_config.ini --model_dir demo/PROMISE12/pretrained
or the following python code in the Notebook
End of explanation |
13,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SSL Connection Examples
Connecting to a Redis instance via SSL.
Step1: Connecting to a Redis instance via a URL string
Step2: Connecting to a Redis instance via SSL, while specifying a self-signed SSL certificate.
Step3: Connecting to a Redis instance via SSL, and validate the OCSP status of the certificate
The redis package is design to be small, meaning extra libraries must be installed, in order to support OCSP stapling. As a result, first install redis via
Step4: Connect via SSL, validate OCSP-stapled certificates
The redis package is design to be small, meaning extra libraries must be installed, in order to support OCSP stapling. As a result, first install redis via
Step5: Naive validation of a stapled OCSP certificate | Python Code:
import redis
ssl_connection = redis.Redis(host='localhost', port=6666, ssl=True, ssl_cert_reqs="none")
ssl_connection.ping()
Explanation: SSL Connection Examples
Connecting to a Redis instance via SSL.
End of explanation
import redis
url_connection = redis.from_url("redis://localhost:6379?ssl_cert_reqs=none&decode_responses=True&health_check_interval=2")
url_connection.ping()
Explanation: Connecting to a Redis instance via a URL string
End of explanation
import os
import redis
ssl_certfile="some-certificate.pem"
ssl_keyfile="some-key.pem"
ssl_ca_certs=ssl_certfile
ssl_cert_conn = redis.Redis(
host="localhost",
port=6666,
ssl=True,
ssl_certfile=ssl_certfile,
ssl_keyfile=ssl_keyfile,
ssl_cert_reqs="required",
ssl_ca_certs=ssl_ca_certs,
)
ssl_cert_conn.ping()
Explanation: Connecting to a Redis instance via SSL, while specifying a self-signed SSL certificate.
End of explanation
import os
import redis
ssl_certfile="some-certificate.pem"
ssl_keyfile="some-key.pem"
ssl_ca_certs=ssl_certfile
ssl_cert_conn = redis.Redis(
host="localhost",
port=6666,
ssl=True,
ssl_certfile=ssl_certfile,
ssl_keyfile=ssl_keyfile,
ssl_cert_reqs="required",
ssl_validate_ocsp=True
)
ssl_cert_conn.ping()
Explanation: Connecting to a Redis instance via SSL, and validate the OCSP status of the certificate
The redis package is design to be small, meaning extra libraries must be installed, in order to support OCSP stapling. As a result, first install redis via:
pip install redis[ocsp]
This will install cryptography, requests, and PyOpenSSL, none of which are generally required to use Redis.
End of explanation
import redis
import OpenSSL
ssl_certfile="some-certificate.pem"
ssl_keyfile="some-key.pem"
ssl_ca_certs=ssl_certfile
ssl_expected_certificate = "expected-ocsp-certificate.pem"
# PyOpenSSL is used only for the purpose of validating the ocsp
# stapled response
ctx = OpenSSL.SSL.Context(OpenSSL.SSL.SSLv23_METHOD)
ctx.use_certificate_file=ssl_certfile
ctx.use_privatekey_file=ssl_keyfile
expected_certificate = open(ssl_expected_certificate, 'rb').read()
ssl_cert_conn = redis.Redis(
host="localhost",
port=6666,
ssl=True,
ssl_certfile=ssl_certfile,
ssl_keyfile=ssl_keyfile,
ssl_cert_reqs="required",
ssl_ocsp_context=ctx,
ssl_ocsp_expected_cert=expected_certificate,
)
ssl_cert_conn.ping()
Explanation: Connect via SSL, validate OCSP-stapled certificates
The redis package is designed to be small, meaning extra libraries must be installed in order to support OCSP stapling. As a result, first install redis via:
pip install redis[ocsp]
This will install cryptography, requests, and PyOpenSSL, none of which are generally required to use Redis.
Using a custom SSL context and validating against an expected certificate
End of explanation
import redis
import OpenSSL
ssl_certfile="some-certificate.pem"
ssl_keyfile="some-key.pem"
ssl_ca_certs=ssl_certfile
ssl_expected_certificate = "expected-ocsp-certificate.pem"
# PyOpenSSL is used only for the purpose of validating the ocsp
# stapled response
ctx = OpenSSL.SSL.Context(OpenSSL.SSL.SSLv23_METHOD)
ctx.use_certificate_file=ssl_certfile
ctx.use_privatekey_file=ssl_keyfile
ssl_cert_conn = redis.Redis(
host="localhost",
port=6666,
ssl=True,
ssl_certfile=ssl_certfile,
ssl_keyfile=ssl_keyfile,
ssl_cert_reqs="required",
ssl_validate_ocsp_stapled=True,
)
ssl_cert_conn.ping()
Explanation: Naive validation of a stapled OCSP certificate
End of explanation |
13,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Welcome to the feature engineering project for the House Prices - Advanced Regression Techniques competition! This competition uses nearly the same data you used in the exercises of the Feature Engineering course. We'll collect together the work you did into a complete project which you can build off of with ideas of your own.
<blockquote style="margin-right
Step1: Data Preprocessing
Before we can do any feature engineering, we need to preprocess the data to get it in a form suitable for analysis. The data we used in the course was a bit simpler than the competition data. For the Ames competition dataset, we'll need to
Step2: Clean Data
Some of the categorical features in this dataset have what are apparently typos in their categories
Step3: Comparing these to data_description.txt shows us what needs cleaning. We'll take care of a couple of issues here, but you might want to evaluate this data further.
Step4: Encode the Statistical Data Type
Pandas has Python types corresponding to the standard statistical types (numeric, categorical, etc.). Encoding each feature with its correct type helps ensure each feature is treated appropriately by whatever functions we use, and makes it easier for us to apply transformations consistently. This hidden cell defines the encode function
Step5: Handle Missing Values
Handling missing values now will make the feature engineering go more smoothly. We'll impute 0 for missing numeric values and "None" for missing categorical values. You might like to experiment with other imputation strategies. In particular, you could try creating "missing value" indicators
Step6: Load Data
And now we can call the data loader and get the processed data splits
Step7: Uncomment and run this cell if you'd like to see what they contain. Notice that df_test is
missing values for SalePrice. (NAs were willed with 0's in the imputation step.)
Step8: Establish Baseline
Finally, let's establish a baseline score to judge our feature engineering against.
Here is the function we created in Lesson 1 that will compute the cross-validated RMSLE score for a feature set. We've used XGBoost for our model, but you might want to experiment with other models.
Step9: We can reuse this scoring function anytime we want to try out a new feature set. We'll run it now on the processed data with no additional features and get a baseline score
Step10: This baseline score helps us to know whether some set of features we've assembled has actually led to any improvement or not.
Step 2 - Feature Utility Scores
In Lesson 2 we saw how to use mutual information to compute a utility score for a feature, giving you an indication of how much potential the feature has. This hidden cell defines the two utility functions we used, make_mi_scores and plot_mi_scores
Step11: Let's look at our feature scores again
Step12: You can see that we have a number of features that are highly informative and also some that don't seem to be informative at all (at least by themselves). As we talked about in Tutorial 2, the top scoring features will usually pay-off the most during feature development, so it could be a good idea to focus your efforts on those. On the other hand, training on uninformative features can lead to overfitting. So, the features with 0.0 scores we'll drop entirely
Step13: Removing them does lead to a modest performance gain
Step14: Later, we'll add the drop_uninformative function to our feature-creation pipeline.
Step 3 - Create Features
Now we'll start developing our feature set.
To make our feature engineering workflow more modular, we'll define a function that will take a prepared dataframe and pass it through a pipeline of transformations to get the final feature set. It will look something like this
Step15: A label encoding is okay for any kind of categorical feature when you're using a tree-ensemble like XGBoost, even for unordered categories. If you wanted to try a linear regression model (also popular in this competition), you would instead want to use a one-hot encoding, especially for the features with unordered categories.
Create Features with Pandas
This cell reproduces the work you did in Exercise 3, where you applied strategies for creating features in Pandas. Modify or add to these functions to try out other feature combinations.
Step16: Here are some ideas for other transforms you could explore
Step17: Principal Component Analysis
PCA was the second unsupervised model we used for feature creation. We saw how it could be used to decompose the variational structure in the data. The PCA algorithm gave us loadings which described each component of variation, and also the components which were the transformed datapoints. The loadings can suggest features to create and the components we can use as features directly.
Here are the utility functions from the PCA lesson
Step18: And here are transforms that produce the features from the Exercise 5. You might want to change these if you came up with a different answer.
Step19: These are only a couple ways you could use the principal components. You could also try clustering using one or more components. One thing to note is that PCA doesn't change the distance between points -- it's just like a rotation. So clustering with the full set of components is the same as clustering with the original features. Instead, pick some subset of components, maybe those with the most variance or the highest MI scores.
For further analysis, you might want to look at a correlation matrix for the dataset
Step20: Groups of highly correlated features often yield interesting loadings.
PCA Application - Indicate Outliers
In Exercise 5, you applied PCA to determine houses that were outliers, that is, houses having values not well represented in the rest of the data. You saw that there was a group of houses in the Edwards neighborhood having a SaleCondition of Partial whose values were especially extreme.
Some models can benefit from having these outliers indicated, which is what this next transform will do.
Step21: You could also consider applying some sort of robust scaler from scikit-learn's sklearn.preprocessing module to the outlying values, especially those in GrLivArea. Here is a tutorial illustrating some of them. Another option could be to create a feature of "outlier scores" using one of scikit-learn's outlier detectors.
Target Encoding
Needing a separate holdout set to create a target encoding is rather wasteful of data. In Tutorial 6 we used 25% of our dataset just to encode a single feature, Zipcode. The data from the other features in that 25% we didn't get to use at all.
There is, however, a way you can use target encoding without having to use held-out encoding data. It's basically the same trick used in cross-validation
Step22: Use it like
Step23: Step 4 - Hyperparameter Tuning
At this stage, you might like to do some hyperparameter tuning with XGBoost before creating your final submission.
Step24: Just tuning these by hand can give you great results. However, you might like to try using one of scikit-learn's automatic hyperparameter tuners. Or you could explore more advanced tuning libraries like Optuna or scikit-optimize.
Here is how you can use Optuna with XGBoost | Python Code:
#$HIDE_INPUT$
import os
import warnings
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from IPython.display import display
from pandas.api.types import CategoricalDtype
from category_encoders import MEstimateEncoder
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import KFold, cross_val_score
from xgboost import XGBRegressor
# Set Matplotlib defaults
plt.style.use("seaborn-whitegrid")
plt.rc("figure", autolayout=True)
plt.rc(
"axes",
labelweight="bold",
labelsize="large",
titleweight="bold",
titlesize=14,
titlepad=10,
)
# Mute warnings
warnings.filterwarnings('ignore')
Explanation: Introduction
Welcome to the feature engineering project for the House Prices - Advanced Regression Techniques competition! This competition uses nearly the same data you used in the exercises of the Feature Engineering course. We'll collect together the work you did into a complete project which you can build off of with ideas of your own.
<blockquote style="margin-right:auto; margin-left:auto; background-color: #ebf9ff; padding: 1em; margin:24px;">
<strong>Fork This Notebook!</strong><br>
Create your own editable copy of this notebook by clicking on the <strong>Copy and Edit</strong> button in the top right corner.
</blockquote>
Step 1 - Preliminaries
Imports and Configuration
We'll start by importing the packages we used in the exercises and setting some notebook defaults. Unhide this cell if you'd like to see the libraries we'll use:
End of explanation
def load_data():
# Read data
data_dir = Path("../input/house-prices-advanced-regression-techniques/")
df_train = pd.read_csv(data_dir / "train.csv", index_col="Id")
df_test = pd.read_csv(data_dir / "test.csv", index_col="Id")
# Merge the splits so we can process them together
df = pd.concat([df_train, df_test])
# Preprocessing
df = clean(df)
df = encode(df)
df = impute(df)
# Reform splits
df_train = df.loc[df_train.index, :]
df_test = df.loc[df_test.index, :]
return df_train, df_test
Explanation: Data Preprocessing
Before we can do any feature engineering, we need to preprocess the data to get it in a form suitable for analysis. The data we used in the course was a bit simpler than the competition data. For the Ames competition dataset, we'll need to:
- Load the data from CSV files
- Clean the data to fix any errors or inconsistencies
- Encode the statistical data type (numeric, categorical)
- Impute any missing values
We'll wrap all these steps up in a function, which will make easy for you to get a fresh dataframe whenever you need. After reading the CSV file, we'll apply three preprocessing steps, clean, encode, and impute, and then create the data splits: one (df_train) for training the model, and one (df_test) for making the predictions that you'll submit to the competition for scoring on the leaderboard.
End of explanation
data_dir = Path("../input/house-prices-advanced-regression-techniques/")
df = pd.read_csv(data_dir / "train.csv", index_col="Id")
df.Exterior2nd.unique()
Explanation: Clean Data
Some of the categorical features in this dataset have what are apparently typos in their categories:
End of explanation
def clean(df):
df["Exterior2nd"] = df["Exterior2nd"].replace({"Brk Cmn": "BrkComm"})
# Some values of GarageYrBlt are corrupt, so we'll replace them
# with the year the house was built
df["GarageYrBlt"] = df["GarageYrBlt"].where(df.GarageYrBlt <= 2010, df.YearBuilt)
# Names beginning with numbers are awkward to work with
df.rename(columns={
"1stFlrSF": "FirstFlrSF",
"2ndFlrSF": "SecondFlrSF",
"3SsnPorch": "Threeseasonporch",
}, inplace=True,
)
return df
Explanation: Comparing these to data_description.txt shows us what needs cleaning. We'll take care of a couple of issues here, but you might want to evaluate this data further.
End of explanation
#$HIDE_INPUT$
# The numeric features are already encoded correctly (`float` for
# continuous, `int` for discrete), but the categoricals we'll need to
# do ourselves. Note in particular, that the `MSSubClass` feature is
# read as an `int` type, but is actually a (nominative) categorical.
# The nominative (unordered) categorical features
features_nom = ["MSSubClass", "MSZoning", "Street", "Alley", "LandContour", "LotConfig", "Neighborhood", "Condition1", "Condition2", "BldgType", "HouseStyle", "RoofStyle", "RoofMatl", "Exterior1st", "Exterior2nd", "MasVnrType", "Foundation", "Heating", "CentralAir", "GarageType", "MiscFeature", "SaleType", "SaleCondition"]
# The ordinal (ordered) categorical features
# Pandas calls the categories "levels"
five_levels = ["Po", "Fa", "TA", "Gd", "Ex"]
ten_levels = list(range(10))
ordered_levels = {
"OverallQual": ten_levels,
"OverallCond": ten_levels,
"ExterQual": five_levels,
"ExterCond": five_levels,
"BsmtQual": five_levels,
"BsmtCond": five_levels,
"HeatingQC": five_levels,
"KitchenQual": five_levels,
"FireplaceQu": five_levels,
"GarageQual": five_levels,
"GarageCond": five_levels,
"PoolQC": five_levels,
"LotShape": ["Reg", "IR1", "IR2", "IR3"],
"LandSlope": ["Sev", "Mod", "Gtl"],
"BsmtExposure": ["No", "Mn", "Av", "Gd"],
"BsmtFinType1": ["Unf", "LwQ", "Rec", "BLQ", "ALQ", "GLQ"],
"BsmtFinType2": ["Unf", "LwQ", "Rec", "BLQ", "ALQ", "GLQ"],
"Functional": ["Sal", "Sev", "Maj1", "Maj2", "Mod", "Min2", "Min1", "Typ"],
"GarageFinish": ["Unf", "RFn", "Fin"],
"PavedDrive": ["N", "P", "Y"],
"Utilities": ["NoSeWa", "NoSewr", "AllPub"],
"CentralAir": ["N", "Y"],
"Electrical": ["Mix", "FuseP", "FuseF", "FuseA", "SBrkr"],
"Fence": ["MnWw", "GdWo", "MnPrv", "GdPrv"],
}
# Add a None level for missing values
ordered_levels = {key: ["None"] + value for key, value in
ordered_levels.items()}
def encode(df):
# Nominal categories
for name in features_nom:
df[name] = df[name].astype("category")
# Add a None category for missing values
if "None" not in df[name].cat.categories:
df[name].cat.add_categories("None", inplace=True)
# Ordinal categories
for name, levels in ordered_levels.items():
df[name] = df[name].astype(CategoricalDtype(levels,
ordered=True))
return df
Explanation: Encode the Statistical Data Type
Pandas has Python types corresponding to the standard statistical types (numeric, categorical, etc.). Encoding each feature with its correct type helps ensure each feature is treated appropriately by whatever functions we use, and makes it easier for us to apply transformations consistently. This hidden cell defines the encode function:
End of explanation
def impute(df):
for name in df.select_dtypes("number"):
df[name] = df[name].fillna(0)
for name in df.select_dtypes("category"):
df[name] = df[name].fillna("None")
return df
Explanation: Handle Missing Values
Handling missing values now will make the feature engineering go more smoothly. We'll impute 0 for missing numeric values and "None" for missing categorical values. You might like to experiment with other imputation strategies. In particular, you could try creating "missing value" indicators: 1 whenever a value was imputed and 0 otherwise.
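A minimal sketch of such indicator features (an assumption for illustration, not part of the pipeline used below) could look like:
def missing_indicators(df):
    # One 0/1 column per feature that has at least one missing value.
    X_new = pd.DataFrame()
    for name in df.columns[df.isna().any()]:
        X_new[name + "_missing"] = df[name].isna().astype(int)
    return X_new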
End of explanation
df_train, df_test = load_data()
Explanation: Load Data
And now we can call the data loader and get the processed data splits:
End of explanation
# Peek at the values
#display(df_train)
#display(df_test)
# Display information about dtypes and missing values
#display(df_train.info())
#display(df_test.info())
Explanation: Uncomment and run this cell if you'd like to see what they contain. Notice that df_test is
missing values for SalePrice. (NAs were filled with 0's in the imputation step.)
End of explanation
#$HIDE_INPUT$
def score_dataset(X, y, model=XGBRegressor()):
# Label encoding for categoricals
#
# Label encoding is good for XGBoost and RandomForest, but one-hot
# would be better for models like Lasso or Ridge. The `cat.codes`
# attribute holds the category levels.
for colname in X.select_dtypes(["category"]):
X[colname] = X[colname].cat.codes
# Metric for Housing competition is RMSLE (Root Mean Squared Log Error)
log_y = np.log(y)
score = cross_val_score(
model, X, log_y, cv=5, scoring="neg_mean_squared_error",
)
score = -1 * score.mean()
score = np.sqrt(score)
return score
Explanation: Establish Baseline
Finally, let's establish a baseline score to judge our feature engineering against.
Here is the function we created in Lesson 1 that will compute the cross-validated RMSLE score for a feature set. We've used XGBoost for our model, but you might want to experiment with other models.
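For example (a sketch, assuming scikit-learn's RandomForestRegressor as the alternative model and the X, y splits prepared in the next cell):
from sklearn.ensemble import RandomForestRegressor
# Any regressor with the scikit-learn fit/predict interface can be scored the same way.
score_dataset(X, y, model=RandomForestRegressor(n_estimators=300, random_state=0))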
End of explanation
X = df_train.copy()
y = X.pop("SalePrice")
baseline_score = score_dataset(X, y)
print(f"Baseline score: {baseline_score:.5f} RMSLE")
Explanation: We can reuse this scoring function anytime we want to try out a new feature set. We'll run it now on the processed data with no additional features and get a baseline score:
End of explanation
#$HIDE_INPUT$
def make_mi_scores(X, y):
X = X.copy()
for colname in X.select_dtypes(["object", "category"]):
X[colname], _ = X[colname].factorize()
# All discrete features should now have integer dtypes
discrete_features = [pd.api.types.is_integer_dtype(t) for t in X.dtypes]
mi_scores = mutual_info_regression(X, y, discrete_features=discrete_features, random_state=0)
mi_scores = pd.Series(mi_scores, name="MI Scores", index=X.columns)
mi_scores = mi_scores.sort_values(ascending=False)
return mi_scores
def plot_mi_scores(scores):
scores = scores.sort_values(ascending=True)
width = np.arange(len(scores))
ticks = list(scores.index)
plt.barh(width, scores)
plt.yticks(width, ticks)
plt.title("Mutual Information Scores")
Explanation: This baseline score helps us to know whether some set of features we've assembled has actually led to any improvement or not.
Step 2 - Feature Utility Scores
In Lesson 2 we saw how to use mutual information to compute a utility score for a feature, giving you an indication of how much potential the feature has. This hidden cell defines the two utility functions we used, make_mi_scores and plot_mi_scores:
End of explanation
X = df_train.copy()
y = X.pop("SalePrice")
mi_scores = make_mi_scores(X, y)
mi_scores
Explanation: Let's look at our feature scores again:
End of explanation
def drop_uninformative(df, mi_scores):
return df.loc[:, mi_scores > 0.0]
Explanation: You can see that we have a number of features that are highly informative and also some that don't seem to be informative at all (at least by themselves). As we talked about in Tutorial 2, the top scoring features will usually pay-off the most during feature development, so it could be a good idea to focus your efforts on those. On the other hand, training on uninformative features can lead to overfitting. So, the features with 0.0 scores we'll drop entirely:
End of explanation
X = df_train.copy()
y = X.pop("SalePrice")
X = drop_uninformative(X, mi_scores)
score_dataset(X, y)
Explanation: Removing them does lead to a modest performance gain:
End of explanation
def label_encode(df):
X = df.copy()
for colname in X.select_dtypes(["category"]):
X[colname] = X[colname].cat.codes
return X
Explanation: Later, we'll add the drop_uninformative function to our feature-creation pipeline.
Step 3 - Create Features
Now we'll start developing our feature set.
To make our feature engineering workflow more modular, we'll define a function that will take a prepared dataframe and pass it through a pipeline of transformations to get the final feature set. It will look something like this:
def create_features(df):
X = df.copy()
y = X.pop("SalePrice")
X = X.join(create_features_1(X))
X = X.join(create_features_2(X))
X = X.join(create_features_3(X))
# ...
return X
Let's go ahead and define one transformation now, a label encoding for the categorical features:
End of explanation
#$HIDE_INPUT$
def mathematical_transforms(df):
X = pd.DataFrame() # dataframe to hold new features
X["LivLotRatio"] = df.GrLivArea / df.LotArea
X["Spaciousness"] = (df.FirstFlrSF + df.SecondFlrSF) / df.TotRmsAbvGrd
# This feature ended up not helping performance
# X["TotalOutsideSF"] = \
# df.WoodDeckSF + df.OpenPorchSF + df.EnclosedPorch + \
# df.Threeseasonporch + df.ScreenPorch
return X
def interactions(df):
X = pd.get_dummies(df.BldgType, prefix="Bldg")
X = X.mul(df.GrLivArea, axis=0)
return X
def counts(df):
X = pd.DataFrame()
X["PorchTypes"] = df[[
"WoodDeckSF",
"OpenPorchSF",
"EnclosedPorch",
"Threeseasonporch",
"ScreenPorch",
]].gt(0.0).sum(axis=1)
return X
def break_down(df):
X = pd.DataFrame()
X["MSClass"] = df.MSSubClass.str.split("_", n=1, expand=True)[0]
return X
def group_transforms(df):
X = pd.DataFrame()
X["MedNhbdArea"] = df.groupby("Neighborhood")["GrLivArea"].transform("median")
return X
Explanation: A label encoding is okay for any kind of categorical feature when you're using a tree-ensemble like XGBoost, even for unordered categories. If you wanted to try a linear regression model (also popular in this competition), you would instead want to use a one-hot encoding, especially for the features with unordered categories.
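A minimal sketch of that alternative encoding (an assumption for illustration; the rest of this notebook keeps the label encoding) might be:
def one_hot_encode(df):
    X = df.copy()
    # pd.get_dummies expands each categorical column into 0/1 indicator columns.
    X = pd.get_dummies(X, columns=X.select_dtypes(["category"]).columns)
    return X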
Create Features with Pandas
This cell reproduces the work you did in Exercise 3, where you applied strategies for creating features in Pandas. Modify or add to these functions to try out other feature combinations.
End of explanation
#$HIDE_INPUT$
cluster_features = [
"LotArea",
"TotalBsmtSF",
"FirstFlrSF",
"SecondFlrSF",
"GrLivArea",
]
def cluster_labels(df, features, n_clusters=20):
X = df.copy()
X_scaled = X.loc[:, features]
X_scaled = (X_scaled - X_scaled.mean(axis=0)) / X_scaled.std(axis=0)
kmeans = KMeans(n_clusters=n_clusters, n_init=50, random_state=0)
X_new = pd.DataFrame()
X_new["Cluster"] = kmeans.fit_predict(X_scaled)
return X_new
def cluster_distance(df, features, n_clusters=20):
X = df.copy()
X_scaled = X.loc[:, features]
X_scaled = (X_scaled - X_scaled.mean(axis=0)) / X_scaled.std(axis=0)
kmeans = KMeans(n_clusters=20, n_init=50, random_state=0)
X_cd = kmeans.fit_transform(X_scaled)
# Label features and join to dataset
X_cd = pd.DataFrame(
X_cd, columns=[f"Centroid_{i}" for i in range(X_cd.shape[1])]
)
return X_cd
Explanation: Here are some ideas for other transforms you could explore:
- Interactions between the quality Qual and condition Cond features. OverallQual, for instance, was a high-scoring feature. You could try combining it with OverallCond by converting both to integer type and taking a product.
- Square roots of area features. This would convert units of square feet to just feet.
- Logarithms of numeric features. If a feature has a skewed distribution, applying a logarithm can help normalize it.
- Interactions between numeric and categorical features that describe the same thing. You could look at interactions between BsmtQual and TotalBsmtSF, for instance.
- Other group statistics in Neighborhood. We did the median of GrLivArea. Looking at mean, std, or count could be interesting. You could also try combining the group statistics with other features. Maybe the difference of GrLivArea and the median is important? (A short sketch of a couple of these ideas follows below.)
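Here is a small sketch of a couple of these ideas (hedged: these exact features are illustrations only and are not part of the final pipeline below):
def extra_transforms(df):
    X = pd.DataFrame()
    # Interaction of overall quality and condition, using the ordinal category codes.
    X["QualCond"] = df.OverallQual.cat.codes * df.OverallCond.cat.codes
    # Log transform of a skewed area feature; log1p is safe for zero values.
    X["LogLotArea"] = np.log1p(df.LotArea)
    return X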
k-Means Clustering
The first unsupervised algorithm we used to create features was k-means clustering. We saw that you could either use the cluster labels as a feature (a column with 0, 1, 2, ...) or you could use the distance of the observations to each cluster. We saw how these features can sometimes be effective at untangling complicated spatial relationships.
End of explanation
#$HIDE_INPUT$
def apply_pca(X, standardize=True):
# Standardize
if standardize:
X = (X - X.mean(axis=0)) / X.std(axis=0)
# Create principal components
pca = PCA()
X_pca = pca.fit_transform(X)
# Convert to dataframe
component_names = [f"PC{i+1}" for i in range(X_pca.shape[1])]
X_pca = pd.DataFrame(X_pca, columns=component_names)
# Create loadings
loadings = pd.DataFrame(
pca.components_.T, # transpose the matrix of loadings
columns=component_names, # so the columns are the principal components
index=X.columns, # and the rows are the original features
)
return pca, X_pca, loadings
def plot_variance(pca, width=8, dpi=100):
# Create figure
fig, axs = plt.subplots(1, 2)
n = pca.n_components_
grid = np.arange(1, n + 1)
# Explained variance
evr = pca.explained_variance_ratio_
axs[0].bar(grid, evr)
axs[0].set(
xlabel="Component", title="% Explained Variance", ylim=(0.0, 1.0)
)
# Cumulative Variance
cv = np.cumsum(evr)
axs[1].plot(np.r_[0, grid], np.r_[0, cv], "o-")
axs[1].set(
xlabel="Component", title="% Cumulative Variance", ylim=(0.0, 1.0)
)
# Set up figure
fig.set(figwidth=8, dpi=100)
return axs
Explanation: Principal Component Analysis
PCA was the second unsupervised model we used for feature creation. We saw how it could be used to decompose the variational structure in the data. The PCA algorithm gave us loadings which described each component of variation, and also the components which were the transformed datapoints. The loadings can suggest features to create and the components we can use as features directly.
Here are the utility functions from the PCA lesson:
End of explanation
#$HIDE_INPUT$
def pca_inspired(df):
X = pd.DataFrame()
X["Feature1"] = df.GrLivArea + df.TotalBsmtSF
X["Feature2"] = df.YearRemodAdd * df.TotalBsmtSF
return X
def pca_components(df, features):
X = df.loc[:, features]
_, X_pca, _ = apply_pca(X)
return X_pca
pca_features = [
"GarageArea",
"YearRemodAdd",
"TotalBsmtSF",
"GrLivArea",
]
Explanation: And here are transforms that produce the features from the Exercise 5. You might want to change these if you came up with a different answer.
End of explanation
def corrplot(df, method="pearson", annot=True, **kwargs):
sns.clustermap(
df.corr(method),
vmin=-1.0,
vmax=1.0,
cmap="icefire",
method="complete",
annot=annot,
**kwargs,
)
corrplot(df_train, annot=None)
Explanation: These are only a couple ways you could use the principal components. You could also try clustering using one or more components. One thing to note is that PCA doesn't change the distance between points -- it's just like a rotation. So clustering with the full set of components is the same as clustering with the original features. Instead, pick some subset of components, maybe those with the most variance or the highest MI scores.
For further analysis, you might want to look at a correlation matrix for the dataset:
End of explanation
def indicate_outliers(df):
X_new = pd.DataFrame()
X_new["Outlier"] = (df.Neighborhood == "Edwards") & (df.SaleCondition == "Partial")
return X_new
Explanation: Groups of highly correlated features often yield interesting loadings.
PCA Application - Indicate Outliers
In Exercise 5, you applied PCA to determine houses that were outliers, that is, houses having values not well represented in the rest of the data. You saw that there was a group of houses in the Edwards neighborhood having a SaleCondition of Partial whose values were especially extreme.
Some models can benefit from having these outliers indicated, which is what this next transform will do.
End of explanation
#$HIDE_INPUT$
class CrossFoldEncoder:
def __init__(self, encoder, **kwargs):
self.encoder_ = encoder
self.kwargs_ = kwargs # keyword arguments for the encoder
self.cv_ = KFold(n_splits=5)
# Fit an encoder on one split and transform the feature on the
# other. Iterating over the splits in all folds gives a complete
# transformation. We also now have one trained encoder on each
# fold.
def fit_transform(self, X, y, cols):
self.fitted_encoders_ = []
self.cols_ = cols
X_encoded = []
for idx_encode, idx_train in self.cv_.split(X):
fitted_encoder = self.encoder_(cols=cols, **self.kwargs_)
fitted_encoder.fit(
X.iloc[idx_encode, :], y.iloc[idx_encode],
)
X_encoded.append(fitted_encoder.transform(X.iloc[idx_train, :])[cols])
self.fitted_encoders_.append(fitted_encoder)
X_encoded = pd.concat(X_encoded)
X_encoded.columns = [name + "_encoded" for name in X_encoded.columns]
return X_encoded
# To transform the test data, average the encodings learned from
# each fold.
def transform(self, X):
from functools import reduce
X_encoded_list = []
for fitted_encoder in self.fitted_encoders_:
X_encoded = fitted_encoder.transform(X)
X_encoded_list.append(X_encoded[self.cols_])
X_encoded = reduce(
lambda x, y: x.add(y, fill_value=0), X_encoded_list
) / len(X_encoded_list)
X_encoded.columns = [name + "_encoded" for name in X_encoded.columns]
return X_encoded
Explanation: You could also consider applying some sort of robust scaler from scikit-learn's sklearn.preprocessing module to the outlying values, especially those in GrLivArea. Here is a tutorial illustrating some of them. Another option could be to create a feature of "outlier scores" using one of scikit-learn's outlier detectors.
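As one possible sketch of the "outlier scores" idea (an assumption for illustration, using scikit-learn's IsolationForest on a couple of the numeric features):
from sklearn.ensemble import IsolationForest

def outlier_scores(df, features=["GrLivArea", "LotArea"]):
    X_new = pd.DataFrame()
    iso = IsolationForest(random_state=0)
    # Lower scores mean the detector considers the house more anomalous.
    X_new["OutlierScore"] = iso.fit(df[features]).score_samples(df[features])
    return X_new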
Target Encoding
Needing a separate holdout set to create a target encoding is rather wasteful of data. In Tutorial 6 we used 25% of our dataset just to encode a single feature, Zipcode. The data from the other features in that 25% we didn't get to use at all.
There is, however, a way you can use target encoding without having to use held-out encoding data. It's basically the same trick used in cross-validation:
1. Split the data into folds, each fold having two splits of the dataset.
2. Train the encoder on one split but transform the values of the other.
3. Repeat for all the splits.
This way, training and transformation always take place on independent sets of data, just like when you use a holdout set but without any data going to waste.
In the next hidden cell is a wrapper you can use with any target encoder:
End of explanation
def create_features(df, df_test=None):
X = df.copy()
y = X.pop("SalePrice")
mi_scores = make_mi_scores(X, y)
# Combine splits if test data is given
#
# If we're creating features for test set predictions, we should
# use all the data we have available. After creating our features,
# we'll recreate the splits.
if df_test is not None:
X_test = df_test.copy()
X_test.pop("SalePrice")
X = pd.concat([X, X_test])
# Lesson 2 - Mutual Information
X = drop_uninformative(X, mi_scores)
# Lesson 3 - Transformations
X = X.join(mathematical_transforms(X))
X = X.join(interactions(X))
X = X.join(counts(X))
# X = X.join(break_down(X))
X = X.join(group_transforms(X))
# Lesson 4 - Clustering
# X = X.join(cluster_labels(X, cluster_features, n_clusters=20))
# X = X.join(cluster_distance(X, cluster_features, n_clusters=20))
# Lesson 5 - PCA
X = X.join(pca_inspired(X))
# X = X.join(pca_components(X, pca_features))
# X = X.join(indicate_outliers(X))
X = label_encode(X)
# Reform splits
if df_test is not None:
X_test = X.loc[df_test.index, :]
X.drop(df_test.index, inplace=True)
# Lesson 6 - Target Encoder
encoder = CrossFoldEncoder(MEstimateEncoder, m=1)
X = X.join(encoder.fit_transform(X, y, cols=["MSSubClass"]))
if df_test is not None:
X_test = X_test.join(encoder.transform(X_test))
if df_test is not None:
return X, X_test
else:
return X
df_train, df_test = load_data()
X_train = create_features(df_train)
y_train = df_train.loc[:, "SalePrice"]
score_dataset(X_train, y_train)
Explanation: Use it like:
encoder = CrossFoldEncoder(MEstimateEncoder, m=1)
X_encoded = encoder.fit_transform(X, y, cols=["MSSubClass"])
You can turn any of the encoders from the category_encoders library into a cross-fold encoder. The CatBoostEncoder would be worth trying. It's similar to MEstimateEncoder but uses some tricks to better prevent overfitting. Its smoothing parameter is called a instead of m.
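A sketch of what that swap could look like (assuming the same CrossFoldEncoder wrapper defined above):
from category_encoders import CatBoostEncoder
encoder = CrossFoldEncoder(CatBoostEncoder, a=1)
X_encoded = encoder.fit_transform(X, y, cols=["MSSubClass"])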
Create Final Feature Set
Now let's combine everything together. Putting the transformations into separate functions makes it easier to experiment with various combinations. The ones I left uncommented I found gave the best results. You should experiment with you own ideas though! Modify any of these transformations or come up with some of your own to add to the pipeline.
End of explanation
X_train = create_features(df_train)
y_train = df_train.loc[:, "SalePrice"]
xgb_params = dict(
max_depth=6, # maximum depth of each tree - try 2 to 10
learning_rate=0.01, # effect of each tree - try 0.0001 to 0.1
n_estimators=1000, # number of trees (that is, boosting rounds) - try 1000 to 8000
min_child_weight=1, # minimum number of houses in a leaf - try 1 to 10
colsample_bytree=0.7, # fraction of features (columns) per tree - try 0.2 to 1.0
subsample=0.7, # fraction of instances (rows) per tree - try 0.2 to 1.0
reg_alpha=0.5, # L1 regularization (like LASSO) - try 0.0 to 10.0
reg_lambda=1.0, # L2 regularization (like Ridge) - try 0.0 to 10.0
num_parallel_tree=1, # set > 1 for boosted random forests
)
xgb = XGBRegressor(**xgb_params)
score_dataset(X_train, y_train, xgb)
Explanation: Step 4 - Hyperparameter Tuning
At this stage, you might like to do some hyperparameter tuning with XGBoost before creating your final submission.
End of explanation
X_train, X_test = create_features(df_train, df_test)
y_train = df_train.loc[:, "SalePrice"]
xgb = XGBRegressor(**xgb_params)
# XGB minimizes MSE, but competition loss is RMSLE
# So, we need to log-transform y to train and exp-transform the predictions
xgb.fit(X_train, np.log(y))
predictions = np.exp(xgb.predict(X_test))
output = pd.DataFrame({'Id': X_test.index, 'SalePrice': predictions})
output.to_csv('my_submission.csv', index=False)
print("Your submission was successfully saved!")
Explanation: Just tuning these by hand can give you great results. However, you might like to try using one of scikit-learn's automatic hyperparameter tuners. Or you could explore more advanced tuning libraries like Optuna or scikit-optimize.
Here is how you can use Optuna with XGBoost:
```
import optuna
def objective(trial):
xgb_params = dict(
max_depth=trial.suggest_int("max_depth", 2, 10),
learning_rate=trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True),
n_estimators=trial.suggest_int("n_estimators", 1000, 8000),
min_child_weight=trial.suggest_int("min_child_weight", 1, 10),
colsample_bytree=trial.suggest_float("colsample_bytree", 0.2, 1.0),
subsample=trial.suggest_float("subsample", 0.2, 1.0),
reg_alpha=trial.suggest_float("reg_alpha", 1e-4, 1e2, log=True),
reg_lambda=trial.suggest_float("reg_lambda", 1e-4, 1e2, log=True),
)
xgb = XGBRegressor(**xgb_params)
return score_dataset(X_train, y_train, xgb)
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
xgb_params = study.best_params
```
Copy this into a code cell if you'd like to use it, but be aware that it will take quite a while to run. After it's done, you might enjoy using some of Optuna's visualizations.
Step 5 - Train Model and Create Submissions
Once you're satisfied with everything, it's time to create your final predictions! This cell will:
- create your feature set from the original data
- train XGBoost on the training data
- use the trained model to make predictions from the test set
- save the predictions to a CSV file
End of explanation |
13,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prรกctica 3 - Dinรกmica de manipuladores
List comprehensions
En esta prรกctica nos enfocaremos en temas avanzados de programaciรณn que se aplican directamente al lenguaje de programaciรณn Python y a algunos otros lenguajes de programaciรณn. En primer lugar veamos la instrucciรณn range
Step1: Mmmm..., realmente no hizo nada... bueno, el objetivo de esta funciรณn de la libreria standard de Python es crear un arreglo de enteros desde $0$ hasta el numero que le demos como argumento
Step2: En este caso es obvio como se estan generando los numeros, desde $0$ hasta ยฟ$4$?. Mas bien hasta que el numero de elementos sea el numero dado, o bien $n-1$; de cualquier manera, esta forma de utilizar el ciclo for es la que nos da el vinculo con el for tradicional de otros lenguajes de programaciรณn, pero en Python tenemos una manera diferente de usar este mismo for, List Comprehensions
Step3: Si recuerdas la prรกctica 1, para crear un arreglo con numeros adentro, podiamos utilizar el siguiente cรณdigo
Step4: Sin embargo, comparandolo con la simplicidad del cรณdigo anterior, este enfoque palidece, por lo que en Python es normal ver este tipo de construcciones, podemos diseccionarlo en dos partes
Step5: La segunda parte de este cรณdigo,
Python
[i for i in range(5)]
es el for, esta es la razon por la que usamos i anteriormente, da lo mismo escribir
Step6: y por supuesto no funcionarรก si
Step7: Pero regresemos por un momento al primer pedazo de cรณdigo que escribimos
Step8: Si ese... lo que esta pasando aqui es que Python realmente no quiere evaluar esta expresiรณn hasta que sea completamente necesario, a este concepto se le llama Lazy Evaluation, o evaluaciรณn floja; a pesar de la mala reputaciรณn que tiene Python por ser lento, y por ocupar demasiada memoria de ejecuciรณn, tiene caracteristicas bastante interesantes, como el hecho de no evaluar expresiones que no van a ser necesarias.
Hagamos un ejemplo para que quede mas claro
Step9: En este cรณdigo las lineas dentro del if nunca serรกn ejecutadas, por lo que si tuvieramos un lenguaje de programaciรณn compilado (como C), en primer lugar tendrรญa que reservar espacio en memoria para el arreglo que va a guardar los datos de A, sin embargo nunca se utilizaria ese espacio, en Python nunca se ejecutarรกn esas lineas y lo demostraremos
Step10: Una de las funciones especiales de Jupyter (no de Python) es que si escribo %%timeit al inicio de cualquier celda, la ejecutarรก muchas veces y me reportarรก el promedio de tiempo que se tardรณ en ejecutar esta celda, en este caso nuestro programa se tardรณ $739ns$ en promedio.
Si por el otro lado, ejecuto las lineas en donde saltarรก especificamente la creaciรณn de este arreglo tendremos
Step11: En esta ocasiรณn se tarda $12.4ns$ ya que en realidad no esta haciendo nada...
Por cierto, una cosa mรกs... las list comprehensions funcionan no solo con range, sino con cualquier arreglo, como en el siguiente caso
Step12: o incluso este
Step13: Ejercicio
Utilizando list comprehensions
Step14: Genera un arreglo con las matrices simbolicas de sympy que contengan las matrices de transformaciรณn homogรฉneas de los 5 grados de libertad descritos en el siguiente arreglo con parametros DH | Python Code:
range(5)
Explanation: Prรกctica 3 - Dinรกmica de manipuladores
List comprehensions
En esta prรกctica nos enfocaremos en temas avanzados de programaciรณn que se aplican directamente al lenguaje de programaciรณn Python y a algunos otros lenguajes de programaciรณn. En primer lugar veamos la instrucciรณn range:
End of explanation
for i in range(5):
print(i)
Explanation: Mmmm..., realmente no hizo nada... bueno, el objetivo de esta funciรณn de la libreria standard de Python es crear un arreglo de enteros desde $0$ hasta el numero que le demos como argumento:
End of explanation
[i for i in range(5)]
Explanation: En este caso es obvio como se estan generando los numeros, desde $0$ hasta ยฟ$4$?. Mas bien hasta que el numero de elementos sea el numero dado, o bien $n-1$; de cualquier manera, esta forma de utilizar el ciclo for es la que nos da el vinculo con el for tradicional de otros lenguajes de programaciรณn, pero en Python tenemos una manera diferente de usar este mismo for, List Comprehensions:
End of explanation
A = []
for i in range(5):
A.append(i)
A
Explanation: Si recuerdas la prรกctica 1, para crear un arreglo con numeros adentro, podiamos utilizar el siguiente cรณdigo:
End of explanation
[i**2 for i in range(10)]
Explanation: Sin embargo, comparandolo con la simplicidad del cรณdigo anterior, este enfoque palidece, por lo que en Python es normal ver este tipo de construcciones, podemos diseccionarlo en dos partes:
Python
[i for i in range(5)]
en la primera, i ocupa el lugar del elemento que nos dicta que es lo que vamos a guardar en el arreglo final; en este caso, es el numero a secas, por lo que no hacemos ninguna operaciรณn, pero si queremos guardar en un arreglo, por ejemplo, los cuadrados de los primeros $10$ enteros positivos, tendremos:
End of explanation
[x**2 for x in range(10)]
Explanation: La segunda parte de este cรณdigo,
Python
[i for i in range(5)]
es el for, esta es la razon por la que usamos i anteriormente, da lo mismo escribir:
End of explanation
[x**2 for i in range(10)]
Explanation: y por supuesto no funcionarรก si:
End of explanation
range(5)
Explanation: Pero regresemos por un momento al primer pedazo de cรณdigo que escribimos:
End of explanation
if False:
A = []
for i in range(5):
A.append(i)
else:
pass
Explanation: Si ese... lo que esta pasando aqui es que Python realmente no quiere evaluar esta expresiรณn hasta que sea completamente necesario, a este concepto se le llama Lazy Evaluation, o evaluaciรณn floja; a pesar de la mala reputaciรณn que tiene Python por ser lento, y por ocupar demasiada memoria de ejecuciรณn, tiene caracteristicas bastante interesantes, como el hecho de no evaluar expresiones que no van a ser necesarias.
Hagamos un ejemplo para que quede mas claro:
End of explanation
%%timeit
A = [i for i in range(5)]
Explanation: En este cรณdigo las lineas dentro del if nunca serรกn ejecutadas, por lo que si tuvieramos un lenguaje de programaciรณn compilado (como C), en primer lugar tendrรญa que reservar espacio en memoria para el arreglo que va a guardar los datos de A, sin embargo nunca se utilizaria ese espacio, en Python nunca se ejecutarรกn esas lineas y lo demostraremos:
End of explanation
%%timeit
if False:
A =[]
for i in range(5):
A.append(i)
else:
pass
Explanation: Una de las funciones especiales de Jupyter (no de Python) es que si escribo %%timeit al inicio de cualquier celda, la ejecutarรก muchas veces y me reportarรก el promedio de tiempo que se tardรณ en ejecutar esta celda, en este caso nuestro programa se tardรณ $739ns$ en promedio.
Si por el otro lado, ejecuto las lineas en donde saltarรก especificamente la creaciรณn de este arreglo tendremos:
End of explanation
[x**2 + 1 for x in [2,4,6,8,10]]
Explanation: En esta ocasiรณn se tarda $12.4ns$ ya que en realidad no esta haciendo nada...
Por cierto, una cosa mรกs... las list comprehensions funcionan no solo con range, sino con cualquier arreglo, como en el siguiente caso:
End of explanation
[caracter + " abc" for caracter in "pizza"]
Explanation: o incluso este:
End of explanation
from sympy.physics.mechanics import mechanics_printing
mechanics_printing()
from sympy import var
# ESCRIBE TU CODIGO AQUI
raise NotImplementedError
longs
from nose.tools import assert_equal
from sympy import var
assert_equal(longs, [var("l1"), var("l2"), var("l3"), var("l4"), var("l5")])
Explanation: Ejercicio
Utilizando list comprehensions:
Genera un arreglo con los simbolos de sympy que representan la longitud de 5 eslabones, recuerda que para generar el simbolo de $l_1$ tenemos que escribir:
Python
var("l1")
End of explanation
parametros_DH = [[ "0", "l1", "0", "q1"],
["l2", "0", "0", "q2"],
[ "0", "l3", "0", "q3"],
["l4", "0", "0", "q4"],
[ "0", "l5", "0", "q5"]]
from sympy import var
# ESCRIBE TU CODIGO AQUI
raise NotImplementedError
matrices
from nose.tools import assert_equal
from sympy import Matrix, var, sin, cos
l1, q1, cero = var("l1"), var("q1"), var("0")
A1 = Matrix([[cos(q1), -sin(q1)*cos(cero), sin(cero)*sin(q1), cero*cos(q1)],
[sin(q1), cos(cero)*cos(q1), -sin(cero)*cos(q1), cero*sin(q1)],
[0, sin(cero), cos(cero), l1],
[0, 0, 0, 1]])
assert_equal(matrices[0], A1)
l2, q2, cero = var("l2"), var("q2"), var("0")
A2 = Matrix([[cos(q2), -sin(q2)*cos(cero), sin(cero)*sin(q2), l2*cos(q2)],
[sin(q2), cos(cero)*cos(q2), -sin(cero)*cos(q2), l2*sin(q2)],
[0, sin(cero), cos(cero), cero],
[0, 0, 0, 1]])
assert_equal(matrices[1], A2)
l3, q3, cero = var("l3"), var("q3"), var("0")
A3 = Matrix([[cos(q3), -sin(q3)*cos(cero), sin(cero)*sin(q3), cero*cos(q3)],
[sin(q3), cos(cero)*cos(q3), -sin(cero)*cos(q3), cero*sin(q3)],
[0, sin(cero), cos(cero), l3],
[0, 0, 0, 1]])
assert_equal(matrices[2], A3)
l4, q4, cero = var("l4"), var("q4"), var("0")
A4 = Matrix([[cos(q4), -sin(q4)*cos(cero), sin(cero)*sin(q4), l4*cos(q4)],
[sin(q4), cos(cero)*cos(q4), -sin(cero)*cos(q4), l4*sin(q4)],
[0, sin(cero), cos(cero), cero],
[0, 0, 0, 1]])
assert_equal(matrices[3], A4)
l5, q5, cero = var("l5"), var("q5"), var("0")
A5 = Matrix([[cos(q5), -sin(q5)*cos(cero), sin(cero)*sin(q5), cero*cos(q5)],
[sin(q5), cos(cero)*cos(q5), -sin(cero)*cos(q5), cero*sin(q5)],
[0, sin(cero), cos(cero), l5],
[0, 0, 0, 1]])
assert_equal(matrices[4], A5)
Explanation: Genera un arreglo con las matrices simbolicas de sympy que contengan las matrices de transformaciรณn homogรฉneas de los 5 grados de libertad descritos en el siguiente arreglo con parametros DH:
End of explanation |
13,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Step1: Data
Step2: Exercise 1
Step3: Exercise 2
Step4: Exercise 3 | Python Code:
# Useful Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
Explanation: Exercises: Variance
By Christopher van Hoecke, Maxwell Margenot, and Delaney Mackenzie
Lecture Link :
https://www.quantopian.com/lectures/variance
IMPORTANT NOTE:
This lecture corresponds to the Variance lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
End of explanation
X = np.random.randint(100, size = 100)
Explanation: Data:
End of explanation
# Range of X
range_X =
## Your code goes here
print 'Range of X: %s' %(range_X)
# Mean Absolute Deviation
# First calculate the value of mu (the mean)
mu = np.mean(X)
## Your code goes here
print 'Mean absolute deviation of X:', MAD
# Variance and standard deviation
## Your code goes here
print 'Variance of X:',
print 'Standard deviation of X:',
# Semivariance and semideviation
## Your code goes here
print 'Semivariance of X:',
print 'Semideviation of X:',
# Target variance
## Your code goes here
print 'Target semivariance of X:',
print 'Target semideviation of X:',
Explanation: Exercise 1:
Using the skills aquired in the lecture series, find the following parameters of the list X above:
- Range
- Mean Absolute Deviation
- Variance and Standard Deviation
- Semivariance and Semideviation
- Target variance (with B = 60)
End of explanation
att = get_pricing('T', fields='open_price', start_date='2016-01-01', end_date='2017-01-01')
# Rolling mean
## Your code goes here
# Rolling standard deviation
## Your code goes here
Explanation: Exercise 2:
Using the skills aquired in the lecture series, find the following parameters of prices for AT&T stock over a year:
- 30 days rolling variance
- 15 days rolling Standard Deviation
End of explanation
asset1 = get_pricing('AAPL', fields='open_price', start_date='2016-01-01', end_date='2017-01-01')
asset2 = get_pricing('XLF', fields='open_price', start_date='2016-01-01', end_date='2017-01-01')
cov = np.cov(asset1, asset2)[0,1]
w1 = ## Your code goes here.
w2 = 1 - w1
v1 = np.var(asset1)
v2 = np.var(asset2)
pvariance = (w1**2)*v1+(w2**2)*v2+(2*w1*w2)*cov
print 'Portfolio variance: ', pvariance
Explanation: Exercise 3 :
The portfolio variance is calculated as
$$\text{VAR}_p = \text{VAR}_{S_1} (w_1^2) + \text{VAR}_{S_2}(w_2^2) + \text{COV}_{S_1, S_2} (2 w_1 w_2)$$
Where $w_1$ and $w_2$ are the weights of $S_1$ and $S_2$.
Find values of $w_1$ and $w_2$ to have a portfolio variance of 50.
End of explanation |
13,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI SDK
Step1: Install the latest GA version of google-cloud-storage library.
Step2: Install the latest version of tensorflow library.
Step3: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step4: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step5: Otherwise, set your project ID here.
Step6: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step7: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step8: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step9: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step13: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step14: Tutorial
Now you are ready to start creating your own AutoML image object detection model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
Step15: Quick peek at your data
This tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
Step16: Create the Dataset
Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters
Step17: Create and run training pipeline
To train an AutoML model, you perform two steps
Step18: Run the training pipeline
Next, you run the job to start the training job by invoking the method run, with the following parameters
Step19: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
Step20: Send a batch prediction request
Send a batch prediction to your deployed model.
Get test item(s)
Now do a batch prediction with your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
Step21: Copy test item(s)
For the batch prediction, copy the test items over to your Cloud Storage bucket.
Step22: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs
Step23: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters
Step24: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
Step25: Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format
Step26: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI SDK : AutoML training image object detection model for batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_image_object_detection_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_image_object_detection_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_image_object_detection_batch.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex AI SDK to create image object detection models and do batch prediction using a Google Cloud AutoML model.
Dataset
The dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and the corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese.
Objective
In this tutorial, you create an AutoML image object detection model from a Python script, and then do a batch prediction using the Vertex AI SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Make a batch prediction.
There is one key difference between using batch prediction and using online prediction:
Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.
Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex AI SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library.
End of explanation
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest version of tensorflow library.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
! gcloud config set project $PROJECT_ID
Explanation: Otherwise, set your project ID here.
End of explanation
REGION = "[your-region]" # @param {type:"string"}
if REGION == "[your-region]":
REGION = "us-central1"
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aiplatform
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_URI, location=REGION)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
IMPORT_FILE = "gs://cloud-samples-data/vision/salads.csv"
Explanation: Tutorial
Now you are ready to start creating your own AutoML image object detection model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head -10
Explanation: Quick peek at your data
This tutorial uses a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
dataset = aiplatform.ImageDataset.create(
display_name="Salads" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aiplatform.schema.dataset.ioformat.image.bounding_box,
)
print(dataset.resource_name)
Explanation: Create the Dataset
Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
End of explanation
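If the Dataset resource already exists from an earlier session, it can also be re-loaded by resource name rather than re-created; a small hedged sketch (the re-load pattern is an aside, not part of the tutorial flow):
python
# Re-use an existing ImageDataset instead of importing the CSV again.
# The resource name has the form projects/<project>/locations/<region>/datasets/<id>.
existing_dataset = aiplatform.ImageDataset(dataset_name=dataset.resource_name)
print(existing_dataset.display_name)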
job = aiplatform.AutoMLImageTrainingJob(
display_name="salads_" + TIMESTAMP,
prediction_type="object_detection",
multi_label=False,
model_type="CLOUD",
base_model=None,
)
print(job)
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: An image classification model.
object_detection: An image object detection model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
model_type: The type of model for deployment.
CLOUD: Deployment on Google Cloud
CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud.
CLOUD_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on Google Cloud.
MOBILE_TF_VERSATILE_1: Deployment on an edge device.
MOBILE_TF_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on an edge device.
MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device.
base_model: (optional) Transfer learning from existing Model resource -- supported for image classification only.
The instantiated object is the job for the training job.
End of explanation
model = job.run(
dataset=dataset,
model_display_name="salads_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
budget_milli_node_hours=20000,
disable_early_stopping=False,
)
Explanation: Run the training pipeline
Next, you run the job to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The fraction of the dataset to use for training.
test_fraction_split: The fraction of the dataset to use for test (holdout data).
validation_fraction_split: The fraction of the dataset to use for validation.
budget_milli_node_hours: (optional) Maximum training time specified in milli node hours (1,000 = one node hour).
disable_early_stopping: If True, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements.
The run method, when completed, returns the Model resource.
The execution of the training pipeline will take up to 1 hour 30 minutes.
End of explanation
# Get model resource ID
models = aiplatform.Model.list(filter="display_name=salads_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aiplatform.gapic.ModelServiceClient(
client_options=client_options
)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
End of explanation
test_items = !gsutil cat $IMPORT_FILE | head -n2
cols_1 = str(test_items[0]).split(",")
cols_2 = str(test_items[1]).split(",")
if len(cols_1) == 11:
test_item_1 = str(cols_1[1])
test_label_1 = str(cols_1[2])
test_item_2 = str(cols_2[1])
test_label_2 = str(cols_2[2])
else:
test_item_1 = str(cols_1[0])
test_label_1 = str(cols_1[1])
test_item_2 = str(cols_2[0])
test_label_2 = str(cols_2[1])
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
Explanation: Send a batch prediction request
Send a batch prediction to your deployed model.
Get test item(s)
Now do a batch prediction with your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]
! gsutil cp $test_item_1 $BUCKET_URI/$file_1
! gsutil cp $test_item_2 $BUCKET_URI/$file_2
test_item_1 = BUCKET_URI + "/" + file_1
test_item_2 = BUCKET_URI + "/" + file_2
Explanation: Copy test item(s)
For the batch prediction, copy the test items over to your Cloud Storage bucket.
End of explanation
import json
import tensorflow as tf
gcs_input_uri = BUCKET_URI + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
Explanation: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
content: The Cloud Storage path to the image.
mime_type: The content type. In our example, it is a jpeg file.
For example:
{'content': '[your-bucket]/file1.jpg', 'mime_type': 'jpeg'}
End of explanation
batch_predict_job = model.batch_predict(
job_display_name="salads_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_URI,
machine_type="n1-standard-4",
starting_replica_count=1,
max_replica_count=1,
sync=False,
)
print(batch_predict_job)
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
machine_type: The type of machine for running batch prediction on dedicated resources. Not specifying machine type will result in batch prediction job being run with automatic resources.
starting_replica_count: The number of machine replicas used at the start of the batch operation. If not set, Vertex AI decides starting number, not greater than max_replica_count. Only used if machine_type is set.
max_replica_count: The maximum number of machine replicas the batch operation may be scaled to. Only used if machine_type is set. Default is 10.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
For AutoML models, only manual scaling is supported. In manual scaling both starting_replica_count and max_replica_count have the same value.
For this batch job we are using manual scaling. Here we are setting both starting_replica_count and max_replica_count to the same value that is 1.
End of explanation
batch_predict_job.wait()
Explanation: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
import json
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
Explanation: Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
content: The prediction request.
prediction: The prediction response.
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
bboxes: The bounding box of each detected object.
End of explanation
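To go beyond printing the raw JSON, each line can be parsed into per-object detections; a hedged sketch (the exact field layout, in particular a confidences list alongside displayNames and bboxes, is an assumption based on the keys described above):
python
import json

def summarize_prediction_line(line, min_confidence=0.5):
    # Parse one JSONL result line and keep detections above a confidence threshold.
    result = json.loads(line)
    instance = result.get("instance", result)       # tolerate either nesting style
    prediction = result.get("prediction", {})
    names = prediction.get("displayNames", [])
    boxes = prediction.get("bboxes", [])
    scores = prediction.get("confidences", [])      # assumed field name
    print("Image:", instance.get("content", "unknown"))
    for name, box, score in zip(names, boxes, scores):
        if score >= min_confidence:
            print("  %s (%.2f): %s" % (name, score, box))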
delete_bucket = False
# Delete the dataset using the Vertex dataset object
dataset.delete()
# Delete the model using the Vertex model object
model.delete()
# Delete the AutoML or Pipeline training job
job.delete()
# Delete the batch prediction job using the Vertex batch prediction object
batch_predict_job.delete()
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Model
AutoML Training Job
Batch Job
Cloud Storage Bucket
End of explanation |
13,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table class="ee-notebook-buttons" align="left"><td>
<a target="_blank" href="http
Step1: Obtain a private key file for your service account
You should already have a service account registered to use Earth Engine. If you don't, follow these instructions to get one. Copy the email address of your service account into the following cell. (The service account must already be registered to use Earth Engine.) In the following cell, the gcloud command line is used to generate a key file for the service account. The key file will be created on the notebook VM.
Step2: Start an AuthorizedSession and test your credentials
Test the private key by using it to get credentials. Use the credentials to create an authorized session to make HTTP requests. Make a GET request through the session to check that the credentials work.
Step3: Serialize a computation
Before you can send a request to compute something, the computation needs to be put into the Earth Engine expression graph format. The following demonstrates how to obtain the expression graph.
Authenticate to Earth Engine
Get Earth Engine scoped credentials from the service account. Use them to initialize Earth Engine.
Step4: Define a computation
Prototype a simple computation with the client API. Note that the result of the computation is an Image.
Step5: Serialize the expression graph
This will create an object that represents the Earth Engine expression graph (specifically, an Expression). In general, you should build these with one of the client APIs.
Step6: Create the desired projection (WGS84) at the desired scale (10 meters for Sentinel-2). This is just to discover the desired scale in degrees, the units of the projection. These scales will be used to specify the affine transform in the request.
Step7: Send the request
Make a POST request to the computePixels endpoint. Note that the request contains the Expression, which is the serialized computation. It also contains a PixelGrid. The PixelGrid contains dimensions for the desired output and an AffineTransform in the units of the requested coordinate system. Here the coordinate system is geographic, so the transform is specified with scale in degrees and geographic coordinates of the upper left corner of the requested image patch.
Step8: If you are running this in a notebook, you can display the results using the IPython image display widget. | Python Code:
# INSERT YOUR PROJECT HERE
PROJECT = 'your-project'
!gcloud auth login --project {PROJECT}
Explanation: <table class="ee-notebook-buttons" align="left"><td>
<a target="_blank" href="http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_REST_API_compute_image.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_REST_API_compute_image.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td></table>
Image computations with the Earth Engine REST API
Note: The REST API contains new and advanced features that may not be suitable for all users. If you are new to Earth Engine, please get started with the JavaScript guide.
The Earth Engine REST API quickstart shows how to access blocks of pixels from an Earth Engine asset. Suppose you want to apply a computation to the pixels before obtaining the result. This guide shows how to prototype a computation with one of the client libraries, serialize the computation graph and use the REST API to obtain the computed result. Making compute requests through the REST API corresponds to a POST request to one of the compute endpoints, for example computePixels, computeFeatures, or the generic value.compute. Specifically, this example demonstrates getting a median composite of Sentinel-2 imagery in a small region.
Before you begin
Follow these instructions to:
Apply for Earth Engine
Create a Google Cloud project
Enable the Earth Engine API on the project
Create a service account
Give the service account project level permission to perform Earth Engine computations
Note: To complete this tutorial, you will need a service account that is registered for Earth Engine access. See these instructions to register a service account before proceeding.
Authenticate to Google Cloud
The first thing to do is login so that you can make authenticated requests to Google Cloud. You will set the project at the same time. Follow the instructions in the output to complete the sign in.
End of explanation
# INSERT YOUR SERVICE ACCOUNT HERE
SERVICE_ACCOUNT='[email protected]'
KEY = 'key.json'
!gcloud iam service-accounts keys create {KEY} --iam-account {SERVICE_ACCOUNT}
Explanation: Obtain a private key file for your service account
You should already have a service account registered to use Earth Engine. If you don't, follow these instructions to get one. Copy the email address of your service account into the following cell. (The service account must already be registered to use Earth Engine.) In the following cell, the gcloud command line is used to generate a key file for the service account. The key file will be created on the notebook VM.
End of explanation
from google.auth.transport.requests import AuthorizedSession
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file(KEY)
scoped_credentials = credentials.with_scopes(
['https://www.googleapis.com/auth/cloud-platform'])
session = AuthorizedSession(scoped_credentials)
url = 'https://earthengine.googleapis.com/v1beta/projects/earthengine-public/assets/LANDSAT'
response = session.get(url)
from pprint import pprint
import json
pprint(json.loads(response.content))
Explanation: Start an AuthorizedSession and test your credentials
Test the private key by using it to get credentials. Use the credentials to create an authorized session to make HTTP requests. Make a GET request through the session to check that the credentials work.
End of explanation
import ee
# Get some new credentials since the other ones are cloud scope.
ee_creds = ee.ServiceAccountCredentials(SERVICE_ACCOUNT, KEY)
ee.Initialize(ee_creds)
Explanation: Serialize a computation
Before you can send a request to compute something, the computation needs to be put into the Earth Engine expression graph format. The following demonstrates how to obtain the expression graph.
Authenticate to Earth Engine
Get Earth Engine scoped credentials from the service account. Use them to initialize Earth Engine.
End of explanation
coords = [
-121.58626826832939,
38.059141484827485,
]
region = ee.Geometry.Point(coords)
collection = ee.ImageCollection('COPERNICUS/S2')
collection = collection.filterBounds(region)
collection = collection.filterDate('2020-04-01', '2020-09-01')
image = collection.median()
Explanation: Define a computation
Prototype a simple computation with the client API. Note that the result of the computation is an Image.
End of explanation
serialized = ee.serializer.encode(image)
Explanation: Serialize the expression graph
This will create an object that represents the Earth Engine expression graph (specifically, an Expression). In general, you should build these with one of the client APIs.
End of explanation
# Make a projection to discover the scale in degrees.
proj = ee.Projection('EPSG:4326').atScale(10).getInfo()
# Get scales out of the transform.
scale_x = proj['transform'][0]
scale_y = -proj['transform'][4]
Explanation: Create the desired projection (WGS84) at the desired scale (10 meters for Sentinel-2). This is just to discover the desired scale in degrees, the units of the projection. These scales will be used to specify the affine transform in the request.
End of explanation
import json
url = 'https://earthengine.googleapis.com/v1beta/projects/{}/image:computePixels'
url = url.format(PROJECT)
response = session.post(
url=url,
data=json.dumps({
'expression': serialized,
'fileFormat': 'PNG',
'bandIds': ['B4','B3','B2'],
'grid': {
'dimensions': {
'width': 640,
'height': 640
},
'affineTransform': {
'scaleX': scale_x,
'shearX': 0,
'translateX': coords[0],
'shearY': 0,
'scaleY': scale_y,
'translateY': coords[1]
},
'crsCode': 'EPSG:4326',
},
'visualizationOptions': {'ranges': [{'min': 0, 'max': 3000}]},
})
)
image_content = response.content
Explanation: Send the request
Make a POST request to the computePixels endpoint. Note that the request contains the Expression, which is the serialized computation. It also contains a PixelGrid. The PixelGrid contains dimensions for the desired output and an AffineTransform in the units of the requested coordinate system. Here the coordinate system is geographic, so the transform is specified with scale in degrees and geographic coordinates of the upper left corner of the requested image patch.
End of explanation
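The returned bytes are an encoded PNG, so they can also be written straight to disk for later use (the filename below is arbitrary):
python
# Persist the computed composite locally; image_content holds the raw PNG bytes.
with open('composite.png', 'wb') as f:
    f.write(image_content)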
# Import the Image function from the IPython.display module.
from IPython.display import Image
Image(image_content)
Explanation: If you are running this in a notebook, you can display the results using the IPython image display widget.
End of explanation |
13,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers
Step2: Important Note
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step9: Expected Output
Step10: 3.1 - Defining classes, anchors and image shape.
Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell.
The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
Step11: 3.2 - Loading a pretrained model
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
Step12: This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
Step13: Note
Step14: You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.
3.4 - Filtering boxes
yolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. Lets now call yolo_eval, which you had previously implemented, to do this.
Step16: 3.5 - Run the graph on an image
Let the fun begin. You have created a (sess) graph that can be summarized as follows
Step17: Run the following cell on the "test.jpg" image to verify that your function is correct. | Python Code:
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
Explanation: Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242).
You will learn to:
- Use object detection on a car detection dataset
- Deal with bounding boxes
Run the following cell to load the packages and dependencies that are going to be useful for your journey!
End of explanation
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
# Step 1: Compute box scores
# Calculo da probabilidade de cada classe (Pc = confianca de ter algo no box, C = vetor de probabilidades de cada classe)
    ### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs # 19x19x5x80 (80 scores)
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
    ### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores, axis=-1) # 19x19x5x1 (1 class idx)
box_class_scores = K.max(box_scores, axis=-1) # 19x19x5x1 (1 class score)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
    ### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores >= threshold # 19x19x5x1 (1 boolean)
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask)
boxes = tf.boolean_mask(boxes, filtering_mask)
classes = tf.boolean_mask(box_classes, filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
Explanation: Important Note: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: K.function(...).
1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We would like to especially thank drive.ai for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.
</center></caption>
<img src="nb_images/driveai.png" style="width:100px;height:100;">
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250;">
<caption><center> <u> Figure 1 </u>: Definition of a box<br> </center></caption>
If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
2 - YOLO
YOLO ("you only look once") is a popular algoritm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
2.1 - Model details
First things to know:
- The input is a batch of images of shape (m, 608, 608, 3)
- The output is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
Lets look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u> Figure 2 </u>: Encoding architecture for YOLO<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u> Figure 3 </u>: Flattening the last two dimensions<br> </center></caption>
Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u> Figure 4 </u>: Find the class detected by each box<br> </center></caption>
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u> Figure 5 </u>: Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u> Figure 6 </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)
- Select only one box when several boxes overlap with each other and detect the same object.
2.2 - Filtering with a threshold on class scores
You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- box_confidence: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- boxes: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.
- box_class_probs: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
Exercise: Implement yolo_filter_boxes().
1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator:
python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
2. For each box, find:
- the index of the class with the maximum box score (Hint) (Be careful with what axis you choose; consider using axis=-1)
- the corresponding box score (Hint) (Be careful with what axis you choose; consider using axis=-1)
3. Create a mask by using a threshold. As a reminder: ([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4) returns: [False, True, False, False, True]. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. (Hint)
Reminder: to call a Keras function, you should use K.function(...).
End of explanation
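As an aside, the same filtering pattern can be sanity-checked in plain numpy, independently of the graded TensorFlow code (random inputs and a threshold of 0.6, purely for illustration):
python
import numpy as np

np.random.seed(0)
box_confidence = np.random.rand(19, 19, 5, 1)
box_class_probs = np.random.rand(19, 19, 5, 80)

box_scores = box_confidence * box_class_probs      # (19, 19, 5, 80) class scores
box_classes = np.argmax(box_scores, axis=-1)       # best class index per box
box_class_scores = np.max(box_scores, axis=-1)     # best class score per box
mask = box_class_scores >= 0.6                     # keep only confident boxes

print("boxes kept:", int(mask.sum()), "out of", mask.size)
print("kept scores shape:", box_class_scores[mask].shape)
print("kept classes shape:", box_classes[mask].shape)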
# GRADED FUNCTION: iou
def iou(box1, box2):
Implement the intersection over union (IoU) between box1 and box2
    Arguments:
    box1 -- first box, list object with coordinates (x1, y1, x2, y2)
    box2 -- second box, list object with coordinates (x1, y1, x2, y2)
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
    ### START CODE HERE ### (≈ 5 lines)
xi1 = np.max([box1[0], box2[0]])
yi1 = np.max([box1[1], box2[1]])
xi2 = np.min([box1[2], box2[2]])
yi2 = np.min([box1[3], box2[3]])
    inter_area = max(yi2 - yi1, 0) * max(xi2 - xi1, 0)  # clamp at 0 so non-overlapping boxes give zero intersection
    ### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
    ### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[3] - box1[1]) * (box1[2] - box1[0])
box2_area = (box2[3] - box2[1]) * (box2[2] - box2[0])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
    ### START CODE HERE ### (≈ 1 line)
iou = inter_area / union_area
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
Explanation: Expected Output:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
2.3 - Non-max suppression
Even after filtering by thresholding over the classes scores, you still end up a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> Figure 7 </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called "Intersection over Union", or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> Figure 8 </u>: Definition of "Intersection over Union". <br> </center></caption>
Exercise: Implement iou(). Some hints:
- In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.
- To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1).
- You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that:
- xi1 = maximum of the x1 coordinates of the two boxes
- yi1 = maximum of the y1 coordinates of the two boxes
- xi2 = minimum of the x2 coordinates of the two boxes
- yi2 = minimum of the y2 coordinates of the two boxes
- In order to compute the intersection area, you need to make sure the height and width of the intersection are positive, otherwise the intersection area should be zero. Use max(height, 0) and max(width, 0).
In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
End of explanation
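A quick plain-Python sanity check of the IoU formula and of the zero-clamp behaviour on disjoint boxes (illustrative boxes only, separate from the graded cell above):
python
def iou_check(box1, box2):
    # Corner-format boxes (x1, y1, x2, y2); intersection clamped at zero.
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return float(inter) / (area1 + area2 - inter)

print(iou_check((2, 1, 4, 3), (1, 2, 3, 4)))  # overlapping boxes -> 1/7 ~ 0.1429
print(iou_check((0, 0, 1, 1), (2, 2, 3, 3)))  # disjoint boxes -> 0.0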
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors has obviously to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is made for convenience.
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
    ### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes, iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
    ### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
Explanation: Expected Output:
<table>
<tr>
<td>
**iou = **
</td>
<td>
0.14285714285714285
</td>
</tr>
</table>
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute its overlap with all other boxes, and remove boxes that overlap it more than iou_threshold.
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
Exercise: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your iou() implementation):
- tf.image.non_max_suppression()
- K.gather()
End of explanation
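For intuition, the three steps above can be written out as a small greedy loop in plain numpy (this is only an illustration; the graded function should use tf.image.non_max_suppression as noted):
python
import numpy as np

def nms_numpy(boxes, scores, iou_threshold=0.5, max_boxes=10):
    # Greedy NMS: repeatedly keep the highest-scoring box and drop overlapping ones.
    def iou(a, b):
        xi1, yi1 = max(a[0], b[0]), max(a[1], b[1])
        xi2, yi2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    order = list(np.argsort(scores)[::-1])        # indices sorted by descending score
    keep = []
    while order and len(keep) < max_boxes:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep

boxes = np.array([[0, 0, 2, 2], [0.1, 0.1, 2, 2], [3, 3, 5, 5]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms_numpy(boxes, scores))   # -> [0, 2]; the near-duplicate box 1 is suppressed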
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
### START CODE HERE ###
# Retrieve outputs of the YOLO model (โ1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (โ1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (โ1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
Explanation: Expected Output:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
2.4 Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
Exercise: Implement yolo_eval() which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of yolo_filter_boxes
python
boxes = scale_boxes(boxes, image_shape)
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; we'll show you where they need to be called.
End of explanation
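For intuition only, here is a rough NumPy sketch of what these two provided helpers do. The real implementations are supplied by the assignment's utility modules and differ in details (for example, the exact ordering of the corner coordinates), so treat this purely as an illustration.
import numpy as np

def boxes_to_corners_sketch(box_xy, box_wh):
    # midpoint format (x, y, w, h) -> corner format (mins, maxes)
    box_mins = box_xy - box_wh / 2.0
    box_maxes = box_xy + box_wh / 2.0
    return np.concatenate([box_mins, box_maxes], axis=-1)

def scale_boxes_sketch(boxes, image_shape):
    # stretch boxes given as fractions of the image to the display image size
    height, width = image_shape
    return boxes * np.array([height, width, height, width], dtype=np.float32)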
sess = K.get_session()
Explanation: Expected Output:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
<font color='blue'>
Summary for YOLO:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
3 - Test YOLO pretrained model on images
In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by creating a session to start your graph. Run the following cell.
End of explanation
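As a quick sanity check of the shape bookkeeping in the summary above, the small snippet below reshapes a dummy (19, 19, 425) volume into the (19, 19, 5, 85) per-anchor form; the numbers are just placeholders.
import numpy as np

encoding = np.zeros((19, 19, 425))            # flattened output volume
per_anchor = encoding.reshape(19, 19, 5, 85)  # 5 anchor boxes x (5 box numbers + 80 classes)
print(per_anchor.shape)                       # (19, 19, 5, 85)
print(19 * 19 * 5)                            # 1805 candidate boxes before filtering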
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
Explanation: 3.1 - Defining classes, anchors and image shape.
Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell.
The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
End of explanation
yolo_model = load_model("model_data/yolo.h5")
Explanation: 3.2 - Loading a pretrained model
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
End of explanation
yolo_model.summary()
Explanation: This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
End of explanation
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
Explanation: Note: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
Reminder: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
3.3 - Convert output of the model to usable bounding box tensors
The output of yolo_model is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
End of explanation
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
Explanation: You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.
3.4 - Filtering boxes
yolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call yolo_eval, which you had previously implemented, to do this.
End of explanation
def predict(sess, image_file):
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the preditions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (โ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
Explanation: 3.5 - Run the graph on an image
Let the fun begin. You have created a (sess) graph that can be summarized as follows:
<font color='purple'> yolo_model.input </font> is given to yolo_model. The model is used to compute the output <font color='purple'> yolo_model.output </font>
<font color='purple'> yolo_model.output </font> is processed by yolo_head. It gives you <font color='purple'> yolo_outputs </font>
<font color='purple'> yolo_outputs </font> goes through a filtering function, yolo_eval. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
Exercise: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute scores, boxes, classes.
The code below also uses the following function:
python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
Important note: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
End of explanation
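If you are curious what preprocess_image roughly does, the sketch below captures the idea (resize to the model input size, scale pixels to [0, 1], add a batch dimension). The real helper comes from the assignment's yolo_utils module, so the exact resampling and normalization details may differ.
import numpy as np
from PIL import Image

def preprocess_sketch(path, model_image_size=(608, 608)):
    image = Image.open(path)
    resized = image.resize(model_image_size, Image.BICUBIC)
    image_data = np.array(resized, dtype="float32") / 255.0   # scale pixel values to [0, 1]
    image_data = np.expand_dims(image_data, 0)                # add batch dimension -> (1, 608, 608, 3)
    return image, image_data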
out_scores, out_boxes, out_classes = predict(sess, "family2.jpg")
Explanation: Run the following cell on an image in the "images" folder (here "family2.jpg") to verify that your function is correct.
End of explanation |
13,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous
Step1: Import section specific modules
Step2: 1.10 The Limits of Single Dish Astronomy
In the previous section ➞ of this chapter we introduced the concepts and historical background of interferometry. Earlier in the chapter we presented some of the basic astrophysical sources which emit in the radio spectrum. In this section we will try to answer why we need to use interferometry in radio astronomy. A related question we will try to answer is why we can not just use a single telescope as is done in traditional optical astronomy.
Single telescopes are used in radio astronomy, and provide complementary observational data to that of interferometric arrays. Astronomy with a single radio telescope is often called single dish astronomy as the telescope usually has a dish reflector (Figure 1.10.1). This dish is usually parabolic, but other shapes are used, as it allows for the focusing of light to a single focal point. At the focal point a receiver is placed - among other instruments this could be a camera in the optical, a bolometer in the far-infrared, or an antenna feed in the radio. Instead of a single dish telescope, a more general term would be a single element telescope which can be as simple as a dipole (Figure 1.10.2). An interferometric array (Figure 1.10.3) is used to create a synthesized telescope as it is considered a single telescope synthesized out of many elements (each element is also a telescope, which can make the terminology confusing).
Step3: Figure 1.10.1
Step4: Figure 1.10.2
Step6: Figure 1.10.3
Step7: 1.10.2 Physical limitations of single dishes
There are certain physical limitations to account for when designing single dish radio telescopes. As an example, consider that, due to its limited field of view and the rotation of the earth, an antenna will have to track a source on the sky to maintain a constant sensitivity. In principle this can be achieved by mounting the antenna on a pedestal and mechanically steering it with suitable engines. However, in order to maintain the integrity of the antenna, the control systems for these engines need to be incredibly precise. Clearly, this gets harder as the size of the instrument increases and will constitute a critical design point on the engineering side. This is true in the optical case as well, but it is easier to manage as the telescope is physically smaller.
There is an upper limit on how large we can build steerable single dish radio telescopes. This is because, just like everything else, the metals that these telescopes are made out of can only withstand finite amounts of stress and strain before deforming. Perhaps one of the greatest reminders of this fact came in 1988 with the <cite data-cite='2008ASPC..395..323C'>collapse of the 300 foot Green Bank Telescope</cite> ⤴ (see Figure 1.10.4). Clearly, large steerable telescopes run the risk of collapsing under their own weight. The 100 meter Green Bank Telescope (GBT) which replaced the 300 foot telescope is the largest steerable telescope in the world.
Larger single dish apertures can still be reached though. By leaving the reflector fixed and allowing the receiver at the focus to move along the focal plane (or along the caustic) of the instrument, we can mimic a slowly varying pointing on the sky (a so-called steerable focus telescope). Indeed, this is how the Arecibo Observatory radio telescope (see Figure 1.10.5) operates. However, steerable focus telescopes come with limitations of their own (e.g. material cost and available space). In order to overcome these physical limitations and achieve a higher angular resolution we must use interferometric arrays to form a synthesized telescope.
Step8: Figure 1.10.4a
Step9: Figure 1.10.4b
Step10: Figure 1.10.5
Step11: Figure 1.10.6a
Step12: Figure 1.10.6b
Step13: Figure 1.10.6c
Step14: Figure 1.10.6d
Step15: Figure 1.10.6e | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
1. Radio Science using Interferometric Arrays
Previous: 1.9 A brief introduction to interferometry
Next: 1.11 Modern Interferometric Arrays
Import standard modules:
End of explanation
import ipywidgets
from IPython.display import Image
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
Image(filename='figures/hart_26m_15m_2012-09-11_08511.jpg')
Explanation: 1.10 The Limits of Single Dish Astronomy
In the previous section ➞ of this chapter we introduced the concepts and historical background of interferometry. Earlier in the chapter we presented some of the basic astrophysical sources which emit in the radio spectrum. In this section we will try to answer why we need to use interferometry in radio astronomy. A related question we will try to answer is why we can not just use a single telescope as is done in traditional optical astronomy.
Single telescopes are used in radio astronomy, and provide complementary observational data to that of interferometric arrays. Astronomy with a single radio telescope is often called single dish astronomy as the telescope usually has a dish reflector (Figure 1.10.1). This dish is usually parabolic, but other shapes are used, as it allows for the focusing of light to a single focal point. At the focal point a receiver is placed - among other instruments this could be a camera in the optical, a bolometer in the far-infrared, or an antenna feed in the radio. Instead of a single dish telescope, a more general term would be a single element telescope which can be as simple as a dipole (Figure 1.10.2). An interferometric array (Figure 1.10.3) is used to create a synthesized telescope as it is considered a single telescope synthesized out of many elements (each element is also a telescope, which can make the terminology confusing).
End of explanation
Image(filename='figures/kaira_lba_element.jpg')
Explanation: Figure 1.10.1: 26 meter dish at HartRAO, South Africa used for single dish observations and as part of interferometric VLBI networks. Credit: M Gaylard / HartRAO⤴
End of explanation
Image(filename='../5_Imaging/figures/2013_kat7_20.jpg')
Explanation: Figure 1.10.2: LOFAR LBA dipole element. Credit: KAIRA/D. McKay-Bukowski⤴
End of explanation
def WhichDiameter(wavelength=1., angres=(15e-3/3600)):
Compute the diameter of an aperture as a function of angular resolution and observing wavelength
c = 299792458. # speed of light, m/s
freq = c/(wavelength)/1e6 #
D = 1.22 * wavelength/np.radians(angres) # assuming a circular aperture
print '\n'
print 'At a frequency of %.3f MHz (Lambda = %.3f m)'%(freq, wavelength)
print 'the aperture diameter is D = %f m'%D
print 'to achieve an angular resolution of %f degrees / %f arcmin / %f arcsec'%(angres, angres*60, angres*3600)
print '\n'
w = ipywidgets.interact(WhichDiameter, angres=((15e-3/3600), 10, 1e-5), wavelength=(0.5e-6, 1, 1e-7))
Explanation: Figure 1.10.3: Inner 5 dishes of KAT-7, a 7 element interferometric array located in South Africa which can be combined into a single synthesized telescope. Credit: SKA-SA⤴
Depending on the science goals of an experiment or an observatory, different types of telescopes are built. So what is the main driver for building an interferometric array to create a synthesized telescope? It all comes down to the resolution of a telescope, which is related to the wavelength of light and the physical size of the telescope.
1.10.1. Aperture Diameter and Angular Resolution
If we consider a generic dish radio telescope, and ignore blockage from feeds and structure and any practical issues, we can think of the dish as having a circular aperture. We will use the term 'primary beam' later in Chapter 7 to discuss this aperture in detail. Until then we can think of the dish aperture size as being the collecting area. The larger the aperture, the more collecting area, and thus the more sensitive (a measure of how well the telescope is able to measure a signal) the telescope. This is the same as in photography. Since we are modelling our simple telescope as a circle, the collecting area $A$, or aperture size, is proportional to the square of the dish diameter $D$:
$$A \propto D^2$$
An additional effect of a larger aperture is an increase in the angular resolution of the telescope, that is, the ability to differentiate between two sources (say stars) which are separated by some angular distance. Using the Rayleigh criterion, the angular resolution $\Delta \theta$ (in radians) of a dish of diameter $D$ is
$$\Delta \theta = 1.22 \frac{\lambda}{D},$$
where $\lambda$ is the observing wavelength. Since light in the radio regime of the spectrum has a longer wavelength compared to optical light, a radio telescope with the same collecting area diameter as an optical telescope will have a much lower angular resolution.
The sensitivity of a telescope is directly proportional to the collecting area. The angular resolution of the telescope is inversely proportional to the aperture diameter. Usually, we want both high sensitivity and fine angular resolution, since we are interested in accurately measuring the strength of the signal and positions of sources. A natural way to improve both the sensitivity and angular resolution of a single telescope is to increase the collecting area.
The following table shows the angular resolution as a function of aperture diameter $D$ and observing wavelength for a single dish telescope.
| Telescope Type | Angular Resolution <br> $\Delta \theta$ | Visible <br> $\lambda$ = 500 nm | Infrared <br> $\lambda$ = 10 $\mu$m | Radio EHF <br> $\lambda$ = 10 mm <br> 30 GHz | Radio UHF <br> $\lambda$ = 1 m <br> 300 MHz|
|:---:|:---:|:---:|:---:|:---:|:---:|
| Amateur | 0.8'' | 15 cm | 3 m | 3 km | 300 km |
| Automated Follow-up | 0.25'' | 50 cm | 10 m | 10 km | 100 km |
| Small Science | 0.12'' | 1 m | 21 m | 21 km | 2100 km |
| Large Science | 0.015'' (15 mas) | 8 m | 168 m | 168 km | 16800 km |
Table 1.10.1: Angular resolution of a telescope as a function of the aperture diameter $D$ and observing wavelength.
As we can see in Table 1.10.1, a radio telescope must be many orders of magnitude larger in diameter than an optical telescope to achieve the same angular resolution on the sky. It is very reasonable to build a 15 cm optical telescope; in fact they can be easily bought at a store. But a radio telescope, observing at 300 MHz, which has the same resolution (0.8 arcseconds) needs to have an aperture of 300 km! Now, this would not only be prohibitively expensive, but the engineering is completely infeasible. As a matter of reference, the largest single dish telescopes are on the order of a few hundred meters in diameter (see FAST in China, Arecibo in Puerto Rico). The following example shows how the diameter of a telescope varies as a function of observing wavelength and desired angular resolution.
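As a quick numerical check of one entry in Table 1.10.1, the snippet below applies the Rayleigh criterion to a 15 cm aperture observed at 500 nm; it should return roughly 0.8 arcseconds.
D = 0.15                               # aperture diameter in metres
wavelength = 500e-9                    # observing wavelength in metres
delta_theta = 1.22 * wavelength / D    # Rayleigh criterion, in radians
print(np.degrees(delta_theta) * 3600)  # ~0.84 arcsec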
End of explanation
Image(filename='figures/gbt_300foot_telescope.jpg')
Explanation: 1.10.2 Physical limitations of single dishes
There are certain physical limitations to account for when designing single dish radio telescopes. As an example, consider that, due to its limited field of view and the rotation of the earth, an antenna will have to track a source on the sky to maintain a constant sensitivity. In principle this can be achieved by mounting the antenna on a pedestal and mechanically steering it with suitable engines. However, in order to maintain the integrity of the antenna, the control systems for these engines need to be incredibly precise. Clearly, this gets harder as the size of the instrument increases and will constitute a critical design point on the engineering side. This is true in the optical case as well, but it is easier to manage as the telescope is physically smaller.
There is an upper limit on how large we can build steerable single dish radio telescopes. This is because, just like everything else, the metals that these telescopes are made out of can only withstand finite amounts of stress and strain before deforming. Perhaps one of the greatest reminders of this fact came in 1988 with the <cite data-cite='2008ASPC..395..323C'>collapse of the 300 foot Green Bank Telescope</cite> ⤴ (see Figure 1.10.4). Clearly, large steerable telescopes run the risk of collapsing under their own weight. The 100 meter Green Bank Telescope (GBT) which replaced the 300 foot telescope is the largest steerable telescope in the world.
Larger single dish apertures can still be reached though. By leaving the reflector fixed and allowing the receiver at the focus to move along the focal plane (or along the caustic) of the instrument, we can mimic a slowly varying pointing on the sky (a so-called steerable focus telescope). Indeed, this is how the Arecibo Observatory radio telescope (see Figure 1.10.5) operates. However, steerable focus telescopes come with limitations of their own (e.g. material cost and available space). In order to overcome these physical limitations and achieve a higher angular resolution we must use interferometric arrays to form a synthesized telescope.
End of explanation
Image(filename='figures/gbt_300foot_collapse.jpg')
Explanation: Figure 1.10.4a: 300 foot Green Bank Telescope located in West Virgina, USA during initial operations in 1962. Credit: NRAO⤴
End of explanation
Image(filename='figures/arecibo_observatory.jpg')
Explanation: Figure 1.10.4b: November, 1988, a day after the collapse of the 300 foot GBT telescope due to structural defects. Credit: NRAO⤴
End of explanation
Image(filename='figures/cartoon_1.png')
Explanation: Figure 1.10.5: 300 m Arecibo Telescope lying in a natural cavity in Puerto Rico. The receiver is located in the white spherical structure held up by wires, and is repositioned to "point" the telescope. Credit: courtesy of the NAIC - Arecibo Observatory, a facility of the NSF⤴
1.10.3 Creating a Synthesized Telescope using Interferometry
Here we will attempt to develop some intuition for what an interferometric array is and how it is related to a single dish telescope. Before getting into the mathematics we will construct a cartoon example. A simple single dish telescope is made up of a primary reflector dish on a mount to point in some direction in the sky and a signal receptor at the focal point of the reflector (Figure 1.10.6a). In radio astronomy this receptor is an antenna; in optical astronomy the receptor is often a camera.
Basic optics tells us how convex lenses can be used to form real images of sources that are very far away. The image of a source that is infinitely far away will form at exactly the focal point of the lens, the location of which is completely determined by the shape of the lens (under the "thin lens" approximation). Sources of astrophysical interest can be approximated as being infinitely far away as long as they are at distances much farther away than the focal point of the lens. This is immediately obvious from the equation of a thin convex lens:
$$ \frac{1}{o} + \frac{1}{i} = \frac{1}{f}, $$
where $i, ~ o$ and $f$ are the image, object and focal distances respectively. Early astronomers exploited this useful property of lenses to build the first optical telescopes. Later on concave mirrors replaced lenses because it was easier to control their physical and optical properties (e.g. curvature, surface quality etc.). Reflective paraboloids are the most efficient at focussing incoming plane waves (travelling on-axis) into a single locus (the focal point) and are therefore a good choice for the shape of a collector.
In our simple model the sky only contains a single astrophysical source, which is detected by pointing the telescope in its location in the sky.
End of explanation
Image(filename='figures/cartoon_2.png')
Explanation: Figure 1.10.6a: A simple dish telescope which reflects incoming plane waves (red dashed) along ray tracing paths (cyan) to a receptor at the focal point of the parabolic dish.
Ignoring real world effects like aperture blockage and reflector inefficiencies, plane waves are focused to a single point using a parabolic reflector. At that focus is a signal receptor. We can imagine the reflector is made up of many smaller reflectors, each with its own reflection path. A single dish, in the limit of fully sampling the observing wavelength $\lambda$, can be thought of as being made up of enough reflectors of diameter $\lambda/2$ to fill the collecting area of the dish. In our simple example, we just break the dish into 8 reflectors (Figure 1.10.6b). This is in fact what is often done with very large telescopes when it is not feasible to build a single large mirror, such as in the W. M. Keck Observatory. At this point we have not altered the telescope, we are just thinking about the reflector as being made up of multiple smaller reflectors.
<div class=advice>
<b>Note:</b> We can interpret a single dish telescope as a *continuous interferometer* by applying the Wiener-Khinchin theorem. See Chapter 2 of [<cite data-cite='2007isra.book.....T'>Interferometry and Synthesis in Radio Astronomy</cite> ⤴](http://adsabs.harvard.edu/abs/2007isra.book.....T) for an in depth discussion.
</div>
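To get a feel for the numbers in this cartoon, the back-of-the-envelope calculation below counts how many half-wavelength patches would be needed to tile a dish; the dish size and wavelength are purely illustrative.
D = 26.0                                   # dish diameter in metres (roughly HartRAO-sized)
wavelength = 0.21                          # observing wavelength in metres (21 cm line)
element = wavelength / 2.0                 # size of one sub-reflector
n_elements = (np.pi * (D / 2) ** 2) / element ** 2   # dish area / element area
print(int(n_elements))                     # ~48,000 half-wavelength patches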
End of explanation
Image(filename='figures/cartoon_3.png')
Explanation: Figure 1.10.6b: The dish reflector can be thought of as being made up of multiple smaller reflectors, each with its own light path to the focus.
Now instead of capturing all the signal at a single point, there is no reason we can not capture the signal at the smaller, individual reflector focus points. If that signal is captured, we can digitally combine the signals at the main focus point later (Figure 1.10.6c). This is the first trick of interferometry. Radio waves can be sufficiently sampled in time to digitally record the signals (this becomes more difficult at higher frequencies, and is not possible in the near-infrared and higher). The cost is that a receptor needs to be built for each sub-reflector, and additional hardware is required to combine the signals. The dish combines the light optically; we are just doing the same digitally.
End of explanation
Image(filename='figures/cartoon_4.png')
Explanation: Figure 1.10.6c: A receptor at each sub-reflector captures the light signals. To recreate the combined signal at the main receptor the signals are digitally combined.
The next leap is that there is no reason the sub-reflectors need to be set in the shape of a dish since the combination of the signal at the main focus is being done digitally. Since light travels at a constant speed, any repositioning of a sub-reflector just requires a time delay correction. So, we move each element to the ground, and construct a pointing system for each sub-reflector (Figure 1.10.6d). We now have an array of smaller single dish telescopes! By including the correct time delays on each signal the original, larger single dish telescope can be reconstructed. This digital operation is called beamforming and is closely related to interferometry.
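The sketch below illustrates the time-delay (phase) correction described above for just two elements and a single frequency; all numbers are made up for illustration.
c = 3e8                                    # speed of light, m/s
baseline = 73.0                            # separation between the two elements, metres
theta = np.radians(30)                     # direction of the incoming plane wave
freq = 150e6                               # observing frequency, Hz
tau = baseline * np.sin(theta) / c         # geometric delay between the two elements

phase = 2 * np.pi * freq * tau             # phase lag of element 2 relative to element 1
naive = np.abs(1 + np.exp(-1j * phase))                          # simply adding the two signals
steered = np.abs(1 + np.exp(-1j * phase) * np.exp(1j * phase))   # compensate the known delay first
print(naive, steered)                      # the delay-compensated sum recovers the full amplitude of 2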
End of explanation
Image(filename='figures/cartoon_5.png')
Explanation: Figure 1.10.6d: The sub-reflector elements of the original telescope are set on the ground with their own pointing systems. The original signal can be reconstructed digitally and by including the appropriate time delay for each telescope.
The beamforming operation recombines all the signals into a single signal, which can be thought of as a single-pixel camera. But we can do better using a correlator, which computes visibilities that are then used to form an image (Figure 1.10.6e); this will all be covered throughout the chapters that follow. For now it is important to know that interferometric arrays have the advantage over single dish telescopes that images can be generated instead of a single combined signal. A further advantage is that much larger 'synthesized' telescopes can be constructed this way compared to a single dish telescope. The correlator thus allows the creation of an image, rather than just a beamformed signal, at the cost of additional computing hardware.
End of explanation
Image(filename='figures/cartoon_6.png')
Explanation: Figure 1.10.6e: By using correlator hardware instead of a beamformer an image of the sky can be created.
The next trick of interferometry is that we do not necessarily need to sample the entire original dish (Figure 1.10.6f). We do lose sensitivity and, as will be discussed in later chapters, spatial frequency modes, but by using only a subset of elements and understanding interferometry we can build synthesized telescopes that are many kilometers in diameter (e.g. MeerKAT) or as large as the Earth (e.g. VLBI networks). This is why radio interferometry can be used to produce the highest resolution telescopes in the world.
End of explanation |
13,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework
Step1: I follow the scikit-image tutorial on segmentation | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
%matplotlib inline
from skimage.filters import sobel
from scipy import ndimage as ndi
from skimage.measure import regionprops
from skimage.color import label2rgb
from skimage.morphology import watershed
Explanation: Homework: scikit-image
Counting objects
In class, we saw how to count the number of objects in a microscopy image. Here, we will repeat that exercise, but make use of some of the algorithms in scikit-image to segment the image, and then to determine properties of the resulting objects.
As input, use the image skimage.data.coins. Segment the image, and then calculate the area and eccentricity of each coin.
There are various ways to do this. One would be:
Equalize the input image (see skimage.exposure)
Threshold the image (skimage.filters.otsu)
Remove objects touching the boundary (skimage.segmentation.clear_border)
Apply morphological closing (skimage.morphology.closing)
Remove small objects (skimage.measure.regionprops).
Visualize the results if you want with skimage.color.label2rgb.
Calculate the area and eccentricity of each coin, and display the
original image with this information on it (matplotlib.pyplot.text or matplotlib.pyplot.annotate)
Panorama stitching
One of the scikit-image tutorials shows how to do panorama stitching.
Take 3 or 4 overlapping photos of your own, and use the procedure described to stitch your own panorama.
Extra: Image Stacking
Reprocess one of the datasets from http://www.rawastrodata.com/. See http://www.rawastrodata.com/pages/typesofimages.html for a description of the different kind of images.
Counting objects
End of explanation
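One possible sketch of the threshold-based pipeline outlined above is shown here, as an alternative to the watershed approach used below. Function names follow scikit-image's current API and the parameter choices are untested guesses.
import numpy as np
from skimage import data
from skimage.exposure import equalize_hist
from skimage.filters import threshold_otsu
from skimage.segmentation import clear_border
from skimage.morphology import closing, square
from skimage.measure import label, regionprops

coins_img = data.coins()
equalized = equalize_hist(coins_img)
binary = equalized > threshold_otsu(equalized)   # Otsu threshold
binary = closing(binary, square(3))              # morphological closing
binary = clear_border(binary)                    # drop objects touching the image border
regions = [r for r in regionprops(label(binary)) if r.area > 100]   # ignore small specks
print(len(regions))                              # number of detected coins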
# import coin image
coins = data.coins()
# use amplitude of gradient to construct an elevation map
elevation_map = sobel(coins)
# choose markers from extreme parts of histogram of grey value
markers = np.zeros_like(coins)
markers[coins < 30] = 1
markers[coins > 160] = 2
# use watershed to obtain segmentation
segmentation = watershed(elevation_map, markers)
# fill holes in segments
segmentation = ndi.binary_fill_holes(segmentation - 1)
# label coins
labeled_coins, _ = ndi.label(segmentation)
# overlay coins with color labels
coin_label_overlay = label2rgb(labeled_coins-1, image=coins)
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111)
ax.imshow(coin_label_overlay)
ax.axis('off');
for region in regionprops(labeled_coins):
# skip small areas
if region.area > 100:
minr, minc, maxr, maxc = region.bbox
annot = "Area={0}\n Ecc={1:.2g}".format(region.area, region.eccentricity)
ax.text(minc-5, minr+5, annot, color="white")
Explanation: I follow the scikit-image tutorial on segmentation:
http://scikit-image.org/docs/stable/user_guide/tutorial_segmentation.html
End of explanation |
13,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MR Imaging
Class
Step1: Next we define two small functions to convert between coordinate systems.
Step5: Now we will define several functions that will be employed in the tutorial to illustrate concepts. The narrative tutorial begins below the function definitions. If you are curious about how to draw the plots, or to dig deeper into the code demonstrating the principles, feel free to read these. However, they are mostly for manipulating plots, so don't feel the need to concern yourself with the details.
Step6: Pulse Sequences for measuring T1 and T2 signals
T1 Signals (used for anatomical images)
Inversion-Recovery (IR)
Inversion-Recovery pulse sequences are a method for measuring T1
relaxation (spin-lattice). As the sequence name suggests, the pulse
sequence first inverts the net magnetization ($180^o$ pulse). Then, the
sequence simply pauses for a time, TI, to let the longitudinal
magnetization recover towards steady state across the tissue. Then, a
$90^o$ pulse is delivered that places the net magnetization in the
transverse plane. The transverse magnetization is measured right away,
before significant dephasing, in order to estimate the T1 properties.
To help you visualize the events, run this code a few times. The first
plot shows the 180-TI-90 sequence for a relatively slow T1 value.
Step7: Now suppose the tissue has a faster T1 relaxation
Step8: Question 1
Step11: MR Image Formation
We now ask
Step12: The function rf_signal produces the RF signal that we will measure given a particular time constant and Larmor frequency.
The RF signal is periodic. We can summarize the amplitude and
frequency of this signal by plotting the Fourier Transform amplitude
spectrum. This plot measures the amplitude of each harmonic (temporal
frequency) in the signal.
We're going to be making quite a few plots with an RF signal over time and its spectral density.
Let's write a general function so that we avoid typing the same thing over and over again.
As a general rule of thumb for progamming, any time you find yourself using "copy" and "paste", it's time
to write a function.
Step13: Next, consider the signal as the relaxation constant (t_constant)
increases. Notice that over the same period of time, the rf_signal has a
larger amplitude (there is less decay).
Step14: Question 2
Step15: Changing magnetic field strength
Now, suppose that we change the magnetic field strength. Remember that the
Larmor frequency is proportional to the magnetic field. Consequently,
the frequency of the RF signal will increase. We can compare the signals
at two different frequencies as follows
Step16: It is easiest to see the change in frequency if we compute the Fourier transform of the two signals.
Step17: This figure is important. It shows that the frequency of the response
from each beaker depends on the local magnetic field. We have already
seen that the amplitude at the response frequency depends on the time
constant. We can take advantage of these two observations by introducing
a gradient into the magnetic field.
By inserting a gradient, the two beakers experience slightly different
magnetic fields and thus the signals have two different Larmor
frequencies. The combined RF signal from the two beakers will be the
sum of the signals from the two beakers.
Step18: We can see the amplitude of the signals from each of the beakers by
plotting the Fourier Transform spectrum
Step19: Finally, if the two beakers represent substances with different time
constants, we will be able to measure this by estimating the amplitudes
of the two peaks in the spectrum. Here, we create a signal in which the
two beakers are in a gradient and the substances have slightly different
time constants.
Step20: You can see that the amplitude of peaks in the signal change, reflecting
the different time constants. We can distinguish which amplitude is
associated with which beaker because their responses are at different
frequencies.
Step21: Question 3
Step22: We choose a pulse frequency. This frequency will excite the planar
section of the tissue that is at the Larmor frequency we wish to excite.
Step23: Now, create a sinusoid RF pulse at that frequency
Step24: Here is the Fourier Transform spectrum of the RF pulse.
The frequency content of the pulse determines which plane we excite.
Step25: We can control the position that is excited by adjusting the frequency.
Step26: A second parameter we would like to control is the slice width. There
are two ways we can adjust the slice width. One way, as described in the
book, is to change the gradient. A second way, illustrated here, is to
change the timing of the RF pulse.
In this example, we create an RF pulse that is the product of the
sinusoid and a sinc function. (Type sinc? to read about this
important function). Each sinc function has its own frequency, too.
Step27: Question 4
Step28: Plot the starting positions in the transverse plane
Step29: Now define a new plotting function to save some typing.
Step30: Suppose we apply a gradient across the x-axis. In the presence of this
gradient, the signals from the two beakers on the left will differ from
the RF signals emitted by the two beakers on the right. When we measure
with only the x-gradient (Gx), we obtain the sum of the two beakers on
the left in one frequency band and the sum of the two beakers on the
right in a second frequency band.
Step31: Here is the total signal
Step32: Now, suppose that we turn off the Gx gradient and introduce a gradient in
the y-dimension (Gy). This changes the Larmor frequency of the beakers
at the top and bottom of the square. Suppose the frequency for the
bottom beakers is a little lower. Then the spins in the bottom beakers
rotate more slowly and they end up pointing in a different direction from
the spins at the top. After applying Gy for a certain amount of time,
the spins at the top and bottom will point in opposite directions.
Step33: Next, we switch off Gy and turn on Gx. As before we will measure the
combination of the spins on the left at one frequency and the combination
of the spins at the right at a different frequency. Because the spins of
the top and bottom beaker oppose one another, however, the total RF
signal we obtain now is the difference between the top and bottom.
Step34: Total signal | Python Code:
%pylab inline
import matplotlib as mpl
mpl.rcParams["figure.figsize"] = (8, 6)
mpl.rcParams["axes.grid"] = True
from IPython.display import display, clear_output
from time import sleep
Explanation: MR Imaging
Class: Psych 204a
Tutorial: MR Imaging
Author: Wandell
Date: 03.15.04
Duration: 90 minutes
Copyright: Stanford University, Brian A. Wandell
Checked:
Oct 2007: Rory Sayres
Sep 2009: Jon Winawer
Translated to Python by Michael Waskom, 10/2012
This tutorial covers two related topics. First, pulse sequence methods
for measuring T1 or T2 are illustrated. These are introduced by
explaining two types of pulse sequences (inversion recovery and spin
echo) that selectively emphasize the T1 or T2 tissue properties.
Then, the tutorial continues with a very simple example of how one can
form images of the relaxation constants (T1 or T2) at different positions
You should complete the tutorial "mrTutMR" prior to this one.
First we set up the pylab environment and import a few display-relevant functions.
End of explanation
def cart2pol(x, y):
theta = arctan2(y, x)
r = sqrt(x ** 2 + y ** 2)
return theta, r
def pol2cart(theta, r):
x = r * cos(theta)
y = r * sin(theta)
return x, y
Explanation: Next we define two small functions to convert between coordinate systems.
End of explanation
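A quick round-trip check of these coordinate helpers, added here for illustration with arbitrarily chosen values:
theta, r = cart2pol(3.0, 4.0)
print(theta, r)               # ~0.927 radians, 5.0
print(pol2cart(theta, r))     # recovers (3.0, 4.0)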
def inversion_recovery(T1, f=None, ax=None):
Graphical illustration of the Inversion Recovery idea.
if not all([f, ax]):
f, ax = subplots(1, 1)
ax.set_aspect("equal")
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)
# Original net magnetization
m = [0, 10]
p0, r0 = cart2pol(*m)
arr = Arrow(0, 0, *m)
ax.add_patch(arr)
ax.set_title("Initial magnetization")
display(f)
sleep(1.5)
# Flip the magnetization with a 180 pulse
ax.set_title("$180^o$ RF pulse")
p, r = cart2pol(*m)
p = p0 - pi
m = pol2cart(p, r)
arr.remove()
arr = Arrow(0, 0, *m)
ax.add_patch(arr)
clear_output(True)
display(f)
sleep(.5)
# Let the T1 value decay for a while at the T1 rate.
ax.set_title("Recovery")
for t in arange(0, 0.8, 0.15): # Time in seconds
arr.remove()
r = r0 - 2 * r0 *exp(-t / T1)
p = p0 if r > 0 else p
m = pol2cart(p, abs(r))
arr = Arrow(0, 0, *m)
ax.add_patch(arr)
clear_output(True)
display(f)
sleep(.2)
# Rotate the magnetization using a 90 deg pulse to measure it
ax.set_title("$90^o$ RF pulse")
for i in range(5):
p, r = cart2pol(*m)
p = p - (pi / 2) / 5
m = pol2cart(p, r)
arr.remove()
arr = Arrow(0, 0, *m)
ax.add_patch(arr)
clear_output(True)
display(f)
sleep(.2)
ax.set_title("Measure RF now")
clear_output(True)
display(f)
plt.close()
def spin_echo(TE=16, f=None, ax=None):
Graphical illustration of the Spin Echo dephasing and echo formation.
if not all([f, ax]):
f, ax = subplots(1, 1)
ax.set_aspect("equal")
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)
ax.set_xlabel("X axis")
ax.set_ylabel("Y axis")
# Original net magnetization
ma = [10, 0]
p0, r0 = cart2pol(*ma)
arr_a = Arrow(0, 0, *ma, color="blue")
ax.add_patch(arr_a)
mb = ma
arr_b = Arrow(0, 0, *mb, color="green")
ax.add_patch(arr_b)
ax.set_title("$90^o$ pulse and dephasing")
display(f)
sleep(1.2)
for i in range(TE):
arr_a.remove()
arr_b.remove()
pa, ra = cart2pol(*ma)
pa = pa - pi / 8
ma = pol2cart(pa, ra)
arr_a = Arrow(0, 0, *ma, color="blue")
ax.add_patch(arr_a)
pb, rb = cart2pol(*mb)
pb = pb - pi / 10
mb = pol2cart(pb, rb)
arr_b = Arrow(0, 0, *mb, color="green")
ax.add_patch(arr_b)
clear_output(True)
display(f)
sleep(.2)
# Apply a 180 deg pulse that rotates the spins around the x-axis
sleep(.8)
ax.set_title("Inverting ($180^o$ pulse)")
clear_output(True)
display(f)
sleep(.5)
arr_a.remove()
arr_b.remove()
pa, ra = cart2pol(*ma)
pa = -pa
ma = pol2cart(pa, ra)
arr_a = Arrow(0, 0, *ma, color="blue")
ax.add_patch(arr_a)
pb, rb = cart2pol(*mb)
pb = -pb
mb = pol2cart(pb, rb)
arr_b = Arrow(0, 0, *mb, color="green")
ax.add_patch(arr_b)
clear_output(True)
display(f)
# Now keep going
ax.set_title("Catching up")
clear_output(True)
display(f)
sleep(.5)
for i in range(TE):
arr_a.remove()
arr_b.remove()
pa, ra = cart2pol(*ma)
pa = pa - pi / 8
ma = pol2cart(pa, ra)
arr_a = Arrow(0, 0, *ma, color="blue")
ax.add_patch(arr_a)
pb, rb = cart2pol(*mb)
pb = pb - pi / 10
mb = pol2cart(pb, rb)
arr_b = Arrow(0, 0, *mb, color="green")
ax.add_patch(arr_b)
clear_output(True)
display(f)
sleep(.2)
ax.set_title("The echo arrives")
clear_output(True)
display(f)
plt.close()
def phase_encode(rate, spin_dir, n_steps=15):
Visualization of the phase-encoding.
f, axes = subplots(2, 2, figsize=(8, 8))
axes = axes.ravel()
for ax in axes:
ax.set_aspect("equal")
ax.set_xlabel("X axis")
ax.set_ylabel("Y axis")
ax.set_xlim(-10, 10)
ax.set_ylim(-10, 10)
a = empty(4, object)
for i in range(n_steps):
sleep(.2)
pa = zeros(4)
ra = zeros(4)
for j, ax in enumerate(axes):
pa[j], ra[j] = cart2pol(spin_dir[j, 0], spin_dir[j, 1])
pa[j] = pa[j] - rate[j]
spin_dir[j, 0], spin_dir[j, 1] = pol2cart(pa[j], ra[j])
if i:
a[j].remove()
a[j] = Arrow(0, 0, *spin_dir[j, :])
ax.add_patch(a[j])
clear_output(True)
display(f)
plt.close()
Explanation: Now we will define several functions that will be employed in the tutorial to illustrate concepts. The narrative tutorial begins below the function definitions. If you are curious about how to draw the plots, or to dig deeper into the code demonstrating the principles, feel free to read these. However, they are mostly for manipulating plots, so don't feel the need to concern yourself with the details.
End of explanation
inversion_recovery(T1=2.8)
Explanation: Pulse Sequences for measuring T1 and T2 signals
T1 Signals (used for anatomical images)
Inversion-Recovery (IR)
Inversion-Recovery pulse sequences are a method for measuring T1
relaxation (spin-lattice). As the sequence name suggests, the pulse
sequence first inverts the net magnetization ($180^o$ pulse). Then, the
sequence simply pauses for a time, TI, to let the longitudinal
magnetization recover towards steady state across the tissue. Then, a
$90^o$ pulse is delivered that places the net magnetization in the
transverse plane. The transverse magnetization is measured right away,
before significant dephasing, in order to estimate the T1 properties.
To help you visualize the events, run this code a few times. The first
plot shows the 180-TI-90 sequence for a relatively slow T1 value.
End of explanation
inversion_recovery(T1=0.6)
Explanation: Now suppose the tissue has a faster T1 relaxation
End of explanation
spin_echo(TE=16)
Explanation: Question 1:
If we apply a 90-degree pulse at the time when exactly half the signal has
recovered, what do you expect the transverse magnetization to be?
(Extra credit: is this the same as applying a pulse at T1? Why or why not?)
Comparing the two plots, you can see that the amplitude of the net
transverse magnetization depends on the value of T1. The final $90^o$
flip let us measure the size of the longitudinal magnetization.
Because the measurement takes place immediately after this flip,
there is not much time for spin-spin dephasing and we measure mainly
properties of T1. That is why such sequences are called 'T1-weighted'.
T2 Signals (used for BOLD images)
Spin Echo (Hahn)
In principle, to make a T2 measurement we need only to flip the net
magnetization $90^o$ and then measure the signal decay as the spins
dephase. Because the T2 reduction is very fast compared to the T1
reduction, the decay we see will be nearly all due to T2.
The spin-spin dephasing occurs so quickly that it is almost impossible to
obtain a T2 measurement soon enough after the 90 deg flip. The Hahn Spin
Echo, and its partner the Gradient Echo, make it possible to separate the
time between the pulse and the measurement.
The next visualizataion shows two spins within a single voxel. These
spins are experiencing slightly differeent local magnetic fields.
Consequently, one precesses around the origin slightly faster than the
other. After a little time the signals are well out of phase. Then, a
180 deg inverting pulse is introduced. This pulse rotates the spins
around the horizontal axis (x-axis). Seen within the plane, this causes
the two spins to reverse positions so that the leader becomes the
follower. The leader is still in a higher local field, and so after a
while it catches up. At this moment the spin phasese come together to
create an echo of the signal after the first 90 deg pulse. This echo
happens at a point in time that is well separated from the inverting
pulse.
The time until the inverse pulse determines when the echo will occur
End of explanation
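As a complement to the animation above, here is a small numeric check of the timing idea: two spins precessing at slightly different (made-up) rates drift apart in phase, the 180 degree pulse flips their phases at time TI, and the phase difference returns to zero at 2 * TI, which is when the echo forms.
rate_a, rate_b = 1.00, 1.07        # two slightly different precession rates (arbitrary units)
TI = 8.0                           # time of the 180 degree inversion pulse
tt = linspace(0, 2 * TI, 9)
phase_a = where(tt < TI, rate_a * tt, -rate_a * TI + rate_a * (tt - TI))   # phase is negated at TI
phase_b = where(tt < TI, rate_b * tt, -rate_b * TI + rate_b * (tt - TI))
print(around(phase_a - phase_b, 3))   # the difference returns to 0 at t = 2 * TI: the echo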
Mo = 1 # Net magnetization
larmor_freq = [12, 18] # Larmor frequency in MHz/10
T1 = [1.5, 3]
t = arange(0, 1, .005) * (3 * max(T1)) # Time samples in secs
def rf_signal(tau=1, net_mag=1, t_samples=None, larmor_freq=12, ph=0):
Estimate an rf signal based on the various parameters.
if t_samples is None:
t_samples = arange(0, 1, 0.005) * (4 * tau)
signal = exp_decay(tau, net_mag, t_samples) * cos(t_samples * larmor_freq + ph)
return signal
def exp_decay(tau, Mo, t):
Create an exponential decay that will be used for either
longitudinal or transverse decay modeling.
return Mo * exp(-t / tau)
Explanation: MR Image Formation
We now ask: How can we distinguish signals from different locations?
This is the key step in learning how to form an image of the MR time
constant's properties.
Consider a simple situation in which we have two beakers sitting next to
one another on a table. Both contain water. To start, suppose the
magnetic field is uniform, and suppose that we measure T1.
We are going to need these variables to develop the story
End of explanation
def rf_plot(time, signal, title):
f, ax = subplots(1, 1, figsize=(6, 6))
ax.plot(time, signal)
ax.set_xlabel('Time (s)')
ax.set_ylabel('RF signal')
ax.set_title(title)
def rf_kspace_plot(time, signal, titles):
f, (ax1, ax2) = subplots(1, 2, figsize=(14, 6))
ax1.plot(t, signal)
ax1.set_xlabel("Time (s)")
ax1.set_ylabel("RF signal")
ax1.set_title(titles[0])
ax2.psd(signal)
ax2.set_title(titles[1])
signal = rf_signal(T1[0], Mo, t, larmor_freq[1])
rf_kspace_plot(t, signal, ["Beaker Signal", "Slice Selection"])
Explanation: The function rf_signal produces the RF signal that we will measure given a particular time constant and Larmor frequency.
The RF signal is periodic. We can summarize the amplitude and
frequency of this signal by plotting the Fourier Transform amplitude
spectrum. This plot measures the amplitude of each harmonic (temporal
frequency) in the signal.
We're going to be making quite a few plots with an RF signal over time and its spectral density.
Let's write a general function so that we avoid typing the same thing over and over again.
As a general rule of thumb for progamming, any time you find yourself using "copy" and "paste", it's time
to write a function.
End of explanation
t_constant = [1, 2, 4]
f, axes = subplots(1, 3, figsize=(14, 5), sharey=True)
for i, ax in enumerate(axes):
signal = rf_signal(t_constant[i], Mo, t, larmor_freq[0])
ax.set_ylim(-1, 1)
ax.plot(t, signal)
Explanation: Next, consider the signal as the relaxation constant (t_constant)
increases. Notice that over the same period of time, the rf_signal has a
larger amplitude (there is less decay).
End of explanation
t_constant = [1, 2, 4]
f, axes = subplots(1, 3, figsize=(14, 5), sharey=True)
for i, ax in enumerate(axes):
signal = rf_signal(t_constant[i], Mo, t, larmor_freq[1])
ax.psd(signal)
Explanation: Question 2:
If you were to plot the Fourier Transform for each of the three
subplots, how do you expect they would differ?
End of explanation
larmor_freqs = [6, 124]
f, axes = subplots(1, 2, figsize=figsize(13, 7), sharey=True)
for i, freq in enumerate(larmor_freqs):
signal = rf_signal(T1[0], Mo, t, freq);
axes[i].plot(t, signal)
axes[i].set_title("Frequency: %d Hz" % freq)
Explanation: Changing magnetic field strength
Now, suppose that we change the magnetic field strength. Remember that the
Larmor frequency is proportional to the magnetic field. Consequently,
the frequency of the RF signal will increase. We can compare the signals
at two different frequencies as follows
End of explanation
f, axes = subplots(1, 2, figsize=figsize(13, 7), sharey=True)
for i, freq in enumerate(larmor_freqs):
ax = axes[i]
signal = rf_signal(T1[0], Mo, t, freq)
ax.psd(signal)
ax.set_title('Fourier Spectrum for %d Hz Signal' % freq)
Explanation: It is easiest to see the change in frequency if we compute the Fourier transform of the two signals.
End of explanation
signal = rf_signal(T1[0], Mo, t, larmor_freqs[0]) + rf_signal(T1[0], Mo, t, larmor_freqs[1])
rf_plot(t, signal, "Two Beakers in Gradient")
Explanation: This figure is important. It shows that the frequency of the response
from each beaker depends on the local magnetic field. We have already
seen that the amplitude at the response frequency depends on the time
constant. We can take advantage of these two observations by introducing
a gradient into the magnetic field.
By inserting a gradient, the two beakers experience slightly different
magnetic fields and thus the signals have two different Larmor
frequencies. The combined RF signal from the two beakers will be the
sum of the signals from the two beakers.
End of explanation
rf_kspace_plot(t, signal, ["Two beakers in gradient", "Fourier Space"])
Explanation: We can see the amplitude of the signals from each of the beakers by
plotting the Fourier Transform spectrum
End of explanation
signal = rf_signal(T1[0], Mo, t, larmor_freqs[0]) + rf_signal(T1[1], Mo, t, larmor_freqs[1])
rf_plot(t, signal, "Two beakers in gradient")
Explanation: Finally, if the two beakers represent substances with different time
constants, we will be able to measure this by estimating the amplitudes
of the two peaks in the spectrum. Here, we create a signal in which the
two beakers are in a gradient and the substances have slightly different
time constants.
End of explanation
rf_kspace_plot(t, signal, ["Two Beakers in Gradient", "Fourier Space"])
Explanation: You can see that the amplitude of peaks in the signal change, reflecting
the different time constants. We can distinguish which amplitude is
associated with which beaker because their responses are at different
frequencies.
End of explanation
n_time = 256.
t = arange(n_time) / n_time - 0.5
rf_pulse = zeros(int(n_time))
pulse_duration = round(n_time / 2)
pulse_start = n_time / 2 - pulse_duration / 2
pulse_stop = pulse_start + pulse_duration - 1
idx = slice(int(pulse_start), int(pulse_stop))
pulse_t = t[idx]
Explanation: Question 3:
When computing the RF signal in the last example, which variable(s)
represented the magnetic field gradient?
From answering Question 2, and understanding this simple case, you should
understand how the gradient associates different RF signal frequencies
with different spatial locations.
This is why determining the frequency amplitude of the signal, using the
Fourier Transform, is so important: RF signal frequency is associated
with spatial position. The amplitude at that frequency can be
interpreted as the decay rate.
This simple example provides you with the basic principles of an
important MR imaging term: k-space. Specifically, in this example the
frequency axis of the Fourier Transform is analogous to k-space.
There are important limitations to the method we have developed up this point.
Mainly, this method works if we only need to make images of an array of
beakers along a line. To make estimates of beakers spread across a table
top (2D) or filling up a box (3D) we need to do more. These methods will
be explained below.
The main difference between this example and general imaging is
dimensionality. In MR imaging the stimuli don't fall along one dimension,
so we can't simply use a one-dimensional frequency axis to assign
position. In general the position in k-space corresponds to a position
in a two-dimensional plane that represents various spatial frequencies.
(By the way, there are historical reasons why they call it k-space. I
forget these reasons. If someone can remind me, I will give them a firm
handshake and a warm smile.)
We will start to extend the ideas on imaging in this first section to
slightly more complex cases in the next section.
Slice Selection
In nearly all fMRI imaging, the data are acquired using a series of
measurements from different planar sections (slices) of the tissue. The
measurements are made by selectively exciting the spins, one planar
section at a time. If only one plane is excited, then we can be
confident that the signal we measure must arise from a voxel in that
plane.
In the next part of this tutorial, we will review how we can excite
the spins in one planar section. Following that we will review how we
can distinguish different positions within the excited plane.
The principles used to understand slice selection are the same as the
principles used to distinguish signals from beakers at two positions.
The main difference is that in the previous example we used gradients to
distinguish received signals. In slice selection we use gradients to
selectively deliver an excitation.
The idea is this. We introduce a magnetic field gradient across the
sample changing the Larmor frequency across the sample.
When we deliver an RF pulse at a particular frequency, then,
only the spins in one portion of the gradient field will be excited.
What would such a pulse look like? Let's create some examples.
Let's perform some example calculations. We initialize the rf_pulse to zero, and then we initialize some parameters.
End of explanation
pulse_freq = 25
Explanation: We choose a pulse frequency. This frequency will excite the planar
section of the tissue that is at the Larmor frequency we wish to excite.
End of explanation
rf_pulse[idx] = sin(2 * pi * pulse_freq * pulse_t)
rf_plot(t, rf_pulse, "RF pulse")
Explanation: Now, create a sinusoid RF pulse at that frequency:
End of explanation
rf_kspace_plot(t, rf_pulse, ["RF Pulse", "Slice Selection"])
Explanation: Here is the Fourier Transform spectrum of the RF pulse.
The frequency content of the pulse determines which plane we excite.
End of explanation
pulse_freq = 50
rf_pulse[idx] = sin(2 * pi * pulse_freq * pulse_t)
rf_kspace_plot(t, rf_pulse, ["RF Pulse", "Slice Selection"])
Explanation: We can control the position that is excited by adjusting the frequency.
End of explanation
sinc_freq = 20 # Try 10, 20 and 40.
pulse_freq = 50 # Try 25, 50 and 75.
rf_pulse[idx] = sin(2 * pi * pulse_freq * pulse_t)
rf_pulse[idx] = rf_pulse[idx] * sinc(sinc_freq * pulse_t)
rf_kspace_plot(t, rf_pulse, ["RF Pulse", "Slice Selection"])
Explanation: A second parameter we would like to control is the slice width. There
are two ways we can adjust the slice width. One way, as described in the
book, is to change the gradient. A second way, illustrated here, is to
change the timing of the RF pulse.
In this example, we create an RF pulse that is the product of the
sinusoid and a sinc function. (Type sinc? to read about this
important function). Each sinc function has its own frequency, too.
End of explanation
T1 = array([(1, 2), (3, 4)], float) / 2
larmor_freq = [15., 50.]
ph = [0, pi]
t = arange(0, 5, .02)
Mo = 1
rate = [0., 0., 0., 0.]
spin_dir = array([(10, 0), (10, 0), (10, 0), (10, 0)], float)
Explanation: Question 4:
Run the above code a few times, varying the pulse and sinc frequency
values. What effect does each parameter have on slice position and
slice width?
Image Formation using Frequency Encoding and Phase Encoding
The Hornak MRI book has a very
good discussion of imaging, including frequency and phase encoding.
Please read Chapters 6 and 7 for a useful discussion of the principles
further described here. Also, the Huettel et al. book is very clear and
useful in describing pulse sequences (Chapters 4 and 5).
Earlier, we reviewed how to use a gradient field to associate different
positions along a line with different RF signal frequencies. That method
is often called frequency encoding. In this section, we describe how to
measure along the second spatial dimension. This measurement is
sometimes called the phase-encoding dimension. Taken together, the
methods described in this section are sometimes called Fourier Transform
imaging.
Consider the problem of identifying signals from 4 beakers placed at the
corners of a square. These four beakers are in the planar section and the
spins were set in motion using the methods described in the previous
section.
First, we set up the basic parameters for the four beakers.
End of explanation
phase_encode(rate, spin_dir, 1)
Explanation: Plot the starting positions in the transverse plane:
End of explanation
def plot_rf_2d(fs, ps, t, T1, Mo=1):
f, axes = subplots(2, 2, figsize=(8, 8))
signal = zeros((2, 2, len(t)))
freq_text = ["F1", "F2"]
fs = reshape(fs, (2, 2)).T
ps = reshape(ps, (2, 2)).T
for i in range(2):
for j in range(2):
signal[i, j, :] = rf_signal(T1[i, j], Mo, t, fs[i, j], ps[i, j])
axes[i, j].plot(t, signal[i, j, :])
axes[i, j].set_title(freq_text[j])
return signal
Explanation: Now define a new plotting function to save some typing.
End of explanation
freqs = [larmor_freq[0]] * 2 + [larmor_freq[1]] * 2
phases = [ph[0]] * 4
signal = plot_rf_2d(freqs, phases, t, T1, Mo)
Explanation: Suppose we apply a gradient across the x-axis. In the presence of this
gradient, the signals from the two beakers on the left will differ from
the RF signals emitted by the two beakers on the right. When we measure
with only the x-gradient (Gx), we obtain the sum of the two beakers on
the left in one frequency band and the sum of the two beakers on the
right in a second frequency band.
End of explanation
s = (signal[0] + signal[1]).sum(axis=0)
rf_kspace_plot(t, s, ["Total RF", "Total RF Signal"])
Explanation: Here is the total signal:
End of explanation
rate = [pi / 8, pi / 8, pi / 16, pi / 16]
spin_dir = array([(10, 0), (10, 0), (10, 0), (10, 0)], float)
phase_encode(rate, spin_dir, 15)
Explanation: Now, suppose that we turn off the Gx gradient and introduce a gradient in
the y-dimension (Gy). This changes the Larmor frequency of the beakers
at the top and bottom of the square. Suppose the frequency for the
bottom beakers is a little lower. Then the spins in the bottom beakers
rotate more slowly and they end up pointing in a different direction from
the spins at the top. After applying Gy for a certain amount of time,
the spins at the top and bottom will point in opposite directions.
End of explanation
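The phase_encode helper is defined earlier in the original notebook; a plausible sketch of what it does (rotating each spin's transverse-plane vector by its own rate for a number of steps, with the plotting omitted) is:
import numpy as np
def phase_encode_sketch(rate, spin_dir, n_steps):
    spins = np.array(spin_dir, float)
    for i, r in enumerate(rate):
        theta = r * n_steps                        # total rotation accumulated by spin i
        c, s = np.cos(theta), np.sin(theta)
        x, y = spins[i]
        spins[i] = (c * x - s * y, s * x + c * y)  # rotate the transverse vector
    return spins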
ps = [ph[0], ph[1]] * 2
signal = plot_rf_2d(freqs, ps, t, T1, Mo)
Explanation: Next, we switch off Gy and turn on Gx. As before we will measure the
combination of the spins on the left at one frequency and the combination
of the spins at the right at a different frequency. Because the spins of
the top and bottom beaker oppose one another, however, the total RF
signal we obtain now is the difference between the top and bottom.
End of explanation
s = (signal[0] + signal[1]).sum(axis=0)
rf_kspace_plot(t, s, ["Total RF", "Total RF Signal"])
Explanation: Total signal:
End of explanation |
13,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.2.4. Comparing word use between corpora
In previous notebooks we examined changes in word use over time using several different statistical approaches. In this notebook, we will examine differences in word use between two different corpora.
Web of Science dataset
In this notebook we will use data retrieved from the ISI Web of Science database. One corpus is from the journal Plant Journal over the period 1991-2013. The other corpus is from the journal Plant Physiology over the same period, 1991-2013. Each corpus comprises several WoS field-tagged metadata files contained in a folder.
Tethne's WoS parser can load all of the data files in a single directory all at once. This may take a few minutes, since Tethne goes to a lot of trouble in indexing all of the records for easy access later on.
Step1: Conditional frequency distribution
This next step should look familiar. We will create a conditional frequency distribution for words in these two corpora. We have two conditions
Step2: Now we can use tabulate to generate a contingency table showing the number of times each word is used within each journal.
Step3: Is there a difference?
As a first step, we may wish to establish whether or not there is a difference between the two corpora. In this simplistic example, we will compare the rate at which a specific word is used in the two journals. In practice, your comparisons will probably be more sophisticated -- but this is a starting point.
So
Step4: To calculate the expected values, we first calculate the expected probabilities of each word under the null hypothesis. The probability of "photosynthesis" occurring is the total number of occurrences of "photosynthesis" (sum of the first column) divided by the total number of tokens (sum of the whole table). The probability of "photosynthesis" not occurring is calculated similarly, using the second column.
Step5: Now we calculate the expected counts from those probabilities. The expected counts can be found by multiplying the probabilities of the word occurring and not occurring by the total number of tokens in each corpus.
Step6: Now we obtain the log likelihood using the equation above
Step7: So, do the two corpora differ in terms of their use of the word "photosynthesis"? In other words, can we reject the null hypothesis (that they do not)? Per Dunning (1993), under the null hypothesis the distribution of the test statistic (log likelihood) should follow a $\chi^2$ distribution. So we can obtain the probability of the calculated log-likelihood under the null hypothesis using the PDF of $\chi^2$ with one degree of freedom.
The Scientific Python (SciPy) package has a whole bunch of useful distributions, including $\chi^2$.
Step8: Here's the PDF of $\chi^2$ with one degree of freedom.
Step9: We can calculate the probability of our observed log-likelihood from the PDF. If it is less than 0.05, then we can reject the null hypothesis.
Step10: Money.
A Bayesian approach
We have shown that these two corpora differ significantly in their usage of the term "photosynthesis". In many cases, we may want to go one step further, and actually quantify that difference. We can use a similar approach to the one that we used when comparing word use between years | Python Code:
from tethne.readers import wos
pj_corpus = wos.read('../data/Baldwin/PlantJournal/')
pp_corpus = wos.read('../data/Baldwin/PlantPhysiology/')
Explanation: 1.2.4. Comparing word use between corpora
In previous notebooks we examined changes in word use over time using several different statistical approaches. In this notebook, we will examine differences in word use between two different corpora.
Web of Science dataset
In this notebook we will use data retrieved from the ISI Web of Science database. One corpus is from the journal Plant Journal over the period 1991-2013. The other corpus is from the journal Plant Physiology over the same period, 1991-2013. Each corpus comprises several WoS field-tagged metadata files contained in a folder.
Tethne's WoS parser can load all of the data files in a single directory all at once. This may take a few minutes, since Tethne goes to a lot of trouble in indexing all of the records for easy access later on.
End of explanation
word_counts = nltk.ConditionalFreqDist([
(paper.journal, normalize_token(token))
for paper in chain(pj_corpus, pp_corpus) # chain() strings the two corpora together.
for token in nltk.word_tokenize(getattr(paper, 'abstract', ''))
if filter_token(token)
])
Explanation: Conditional frequency distribution
This next step should look familiar. We will create a conditional frequency distribution for words in these two corpora. We have two conditions: the journal is Plant Physiology and the journal is Plant Journal.
End of explanation
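Before tabulating, a quick sanity check (an illustrative aside, not part of the original notebook) confirms the two conditions and their token totals:
print(word_counts.conditions())
print(word_counts['PLANT JOURNAL'].N(), word_counts['PLANT PHYSIOLOGY'].N())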
# Don't run this without setting ``samples``!
word_counts.tabulate(samples=['photosynthesis', 'growth', 'stomatal'])
Explanation: Now we can use tabulate to generate a contingency table showing the number of times each word is used within each journal.
End of explanation
plant_jour_photosynthesis = word_counts['PLANT JOURNAL']['photosynthesis']
plant_jour_notphotosynthesis = word_counts['PLANT JOURNAL'].N() - plant_jour_photosynthesis
plant_phys_photosynthesis = word_counts['PLANT PHYSIOLOGY']['photosynthesis']
plant_phys_notphotosynthesis = word_counts['PLANT PHYSIOLOGY'].N() - plant_phys_photosynthesis
# Create a 2x2 array.
contingency_table = np.array([[plant_jour_photosynthesis, plant_jour_notphotosynthesis],
[plant_phys_photosynthesis, plant_phys_notphotosynthesis]],
dtype=int)
contingency_table
Explanation: Is there a difference?
As a first step, we may wish to establish whether or not there is a difference between the two corpora. In this simplistic example, we will compare the rate at which a specific word is used in the two journals. In practice, your comparisons will probably be more sophisticated -- but this is a starting point.
So: Is the term photosynthesis used disproportionately in Plant Physiology compared to Plant Journal?
$H_0: P(\text{photosynthesis} \mid J = \text{Plant Journal}) = P(\text{photosynthesis} \mid J = \text{Plant Physiology})$
To test this hypothesis, we will use Dunning's log-likelihood ratio, which is a popular metric in text analysis. In a nutshell, we want to assess whether or not the relative use of the term "photosynthesis" is sufficiently skewed to reject the null hypothesis.
The log likelihood ratio is calculated from a contingency table, similar to the one above. For a single word, our table will show the number of tokens that are the word "photosynthesis", and the number of tokens that are not, for each journal.
| | "photosynthesis" | not "photosynthesis" |
|---|---|---|
| Plant Journal | $O_1$ | $O_2$ |
| Plant Physiology | $O_3$ | $O_4$ |
$$
\sum_i O_i \ln \frac{O_i}{E_i}
$$
where $O_i$ is the observed value in cell $i$, and $E_i$ is the expected value in cell $i$.
First we will calculate the observed contingency table.
End of explanation
# We multiply the values in the contingency table by 1. to coerce the
# integers to floating-point numbers, so that we can divide without
# losing precision.
expected_probabilities = 1.*contingency_table.sum(axis=0)/contingency_table.sum()
expected_probabilities
Explanation: To calculate the expected values, we first calculate the expected probabilities of each word under the null hypothesis. The probability of "photosynthesis" occurring is the total number of occurrences of "photosynthesis" (sum of the first column) divided by the total number of tokens (sum of the whole table). The probability of "photosynthesis" not occurring is calculated similarly, using the second column.
End of explanation
# We multiply each 2-element array by a square matrix containing ones, and then
# transpose one of the resulting matrices so that the product gives the expected
# counts.
expected_counts = np.floor((np.ones((2, 2))*expected_probabilities)*\
(np.ones((2, 2))*contingency_table.sum(axis=1)).T).astype(int)
expected_counts
Explanation: Now we calculate the expected counts from those probabilities. The expected counts can be found by multiplying the probabilities of the word occurring and not occurring by the total number of tokens in each corpus.
End of explanation
loglikelihood = np.sum(1.*contingency_table*np.log(1.*contingency_table/expected_counts))
loglikelihood
Explanation: Now we obtain the log likelihood using the equation above:
End of explanation
distribution = stats.chi2(df=1) # df: degrees of freedom.
Explanation: So, do the two corpora differ in terms of their use of the word "photosynthesis"? In other words, can we reject the null hypothesis (that they do not)? Per Dunning (1993), under the null hypothesis the distribution of the test statistic (log likelihood) should follow a $\chi^2$ distribution. So we can obtain the probability of the calculated log-likelihood under the null hypothesis using the PDF of $\chi^2$ with one degree of freedom.
The Scientific Python (SciPy) package has a whole bunch of useful distributions, including $\chi^2$.
End of explanation
X = np.arange(1, 100, 0.1)
plt.plot(X, distribution.pdf(X), lw=2)
plt.ylabel('Probability')
plt.xlabel('Value of $\chi^2$')
plt.show()
Explanation: Here's the PDF of $\chi^2$ with one degree of freedom.
End of explanation
distribution.pdf(loglikelihood), distribution.pdf(loglikelihood) < 0.05
Explanation: We can calculate the probability of our observed log-likelihood from the PDF. If it is less than 0.05, then we can reject the null hypothesis.
End of explanation
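As an optional cross-check (not part of the original workflow), SciPy can compute a G-test directly from the contingency table; its value differs slightly from the hand-rolled statistic above because SciPy includes the conventional factor of 2, and for 2x2 tables it applies a continuity correction unless told otherwise:
from scipy.stats import chi2_contingency
g_stat, p_value, dof, expected = chi2_contingency(contingency_table,
                                                  lambda_="log-likelihood",
                                                  correction=False)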
count_data = pd.DataFrame(columns=['Journal', 'Year', 'Count'])
chunk_size = 400 # This shouldn't be too large.
i = 0
# The slice() function automagically divides each corpus up into
# sequential years. We can use chain() to combine the two iterators
# so that we only have to write this code once.
for year, papers in chain(pj_corpus.slice(), pp_corpus.slice()):
tokens = [normalize_token(token)
for paper in papers # getattr() lets us set a default.
for token in nltk.word_tokenize(getattr(paper, 'abstract', ''))
if filter_token(token)]
N = len(tokens) # Number of tokens in this year.
for x in xrange(0, N, chunk_size):
current_chunk = tokens[x:x+chunk_size]
count = nltk.FreqDist(current_chunk)['photosynthesis']
# Store the count for this chunk as an observation.
count_data.loc[i] = [paper.journal, year, count]
i += 1 # Increment the index variable.
PJ_mean = pymc.Gamma('PJ_mean', beta=1.)
PP_mean = pymc.Gamma('PP_mean', beta=1.)
PJ_counts = pymc.Poisson('PJ_counts',
mu=PJ_mean,
value=count_data[count_data.Journal == 'PLANT JOURNAL'].Count,
observed=True)
PP_counts = pymc.Poisson('PP_counts',
mu=PP_mean,
value=count_data[count_data.Journal == 'PLANT PHYSIOLOGY'].Count,
observed=True)
model = pymc.Model({
'PJ_mean': PJ_mean,
'PP_mean': PP_mean,
'PJ_counts': PJ_counts,
'PP_counts': PP_counts
})
M1 = pymc.MCMC(model)
M2 = pymc.MCMC(model)
M3 = pymc.MCMC(model)
M1.sample(iter=20000, burn=2000, thin=20)
M2.sample(iter=20000, burn=2000, thin=20)
M3.sample(iter=20000, burn=2000, thin=20)
pymc.Matplot.plot(M1)
PJ_mean_samples = M1.PJ_mean.trace()[:]
PJ_mean_samples = np.append(PJ_mean_samples, M2.PJ_mean.trace()[:])
PJ_mean_samples = np.append(PJ_mean_samples, M3.PJ_mean.trace()[:])
PP_mean_samples = M1.PP_mean.trace()[:]
PP_mean_samples = np.append(PP_mean_samples, M2.PP_mean.trace()[:])
PP_mean_samples = np.append(PP_mean_samples, M3.PP_mean.trace()[:])
# Plot the 95% credible interval as box/whiskers.
plt.boxplot([PJ_mean_samples, PP_mean_samples],
whis=[2.5, 97.5],
labels=['Plant Journal', 'Plant Physiology'],
showfliers=False)
plt.ylim(0, 0.3)
plt.ylabel('Rate for term "photosynthesis"')
plt.show()
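A compact numeric summary of the same comparison (an illustrative addition using the posterior samples gathered above):
print("Plant Journal 95% interval:", np.percentile(PJ_mean_samples, [2.5, 97.5]))
print("Plant Physiology 95% interval:", np.percentile(PP_mean_samples, [2.5, 97.5]))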
Explanation: Money.
A Bayesian approach
We have shown that these two corpora differ significantly in their usage of the term "photosynthesis". In many cases, we may want to go one step further, and actually quantify that difference. We can use a similar approach to the one that we used when comparing word use between years: use an MCMC simulation to infer mean rates of use (and credibility intervals) for each corpus.
Rather than starting with a null hypothesis that there is no difference between corpora, we will begin with the belief that there is an independent rate of use for each corpus. We will then infer those rates, and sample from their posterior distributions to generate credible intervals.
Once again, we will model the rate of use with the Poisson distribution. So we must generate count data for evenly-sized chunks of each corpus. We'll put all of our count observations into a single dataframe.
End of explanation |
13,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building Models in PyMC
Bayesian inference begins with specification of a probability model
relating unknown variables to data. PyMC provides three basic building
blocks for Bayesian probability models
Step1: Similarly, the rate parameters can automatically be given exponential priors
Step3: Decorator
Uniformly-distributed discrete stochastic variable $switchpoint$ in the disasters model could alternatively be created from a function that computes its log-probability as follows
Step4: Note that this is a simple Python function preceded by a Python
expression called a decorator, here called
@stochastic. Generally, decorators enhance functions with
additional properties or functionality. The Stochastic object
produced by the @stochastic decorator will evaluate its
log-probability using the function switchpoint. The value
argument, which is required, provides an initial value for the
variable. The remaining arguments will be assigned as parents of
switchpoint (i.e. they will populate the parents dictionary).
To emphasize, the Python function decorated by @stochastic should
compute the log-density or log-probability of the variable. That
is why the return value in the example above is $-\log(t_h-t_l+1)$
rather than $1/(t_h-t_l+1)$.
Direct
It's also possible to instantiate Stochastic directly
Step5: Notice that the log-probability and random variate functions are
specified externally and passed to Stochastic as arguments. This
is a rather awkward way to instantiate a stochastic variable;
consequently, such implementations should be rare.
Data Stochastics
Data are represented by Stochastic objects whose observed attribute
is set to True. If a stochastic variable's observed flag is True,
its value cannot be changed, and it won't be sampled by the fitting
method.
In each interface, an optional keyword argument observed can be set to
True. In the decorator interface, the
@observed decorator is used instead of @stochastic
Step6: In the other interfaces, the observed=True argument is added to the
instantiation of the Stochastic, or its subclass
Step7: The Deterministic class
The Deterministic class represents variables whose values are
completely determined by the values of their parents. For example, in
our disasters model, $rate$ is a deterministic variable.
Step8: so rate's value can be computed exactly from the values of its parents
early_mean, late_mean and switchpoint.
A Deterministic variable's most important attribute is value, which
gives the current value of the variable given the values of its parents.
Like Stochastic's logp attribute, this attribute is computed
on-demand and cached for efficiency.
A Deterministic variable has the following additional attributes
Step9: All the objects thus created have trace=False and plot=False by default.
Decorator
We have seen in the disasters example how the decorator interface is used to create a deterministic variable. Notice that rather than returning the log-probability, as is the
case for Stochastic objects, the function returns the value of the deterministic object, given its parents. Also notice that, unlike for Stochastic objects, there is no value argument
passed, since the value is calculated deterministically by the
function itself.
Direct
Deterministic objects can also be instantiated directly
Step10: Containers
In some situations it would be inconvenient to assign a unique label to
each parent of some variable. Consider $y$ in the following model
Step11: PyMC automatically wraps array $x$ in an appropriate Container class.
The expression 'x_%i' % i labels each Normal object in the container
with the appropriate index $i$. For example, if i=1, the name of the
corresponding element becomes x_1.
Containers, like variables, have an attribute called value. This
attribute returns a copy of the (possibly nested) iterable that was
passed into the container function, but with each variable inside
replaced with its corresponding value.
The Potential class
For some applications, we want to be able to modify the joint density by
incorporating terms that don't correspond to probabilities of variables
conditional on parents, for example
Step12: The function supplied should return the potential's current
log-probability or log-density as a Numpy float. The
potential decorator can take verbose and cache_depth arguments
like the stochastic decorator.
Direct
The same potential could be created directly as follows
Step13: Example
Step14: Fitting Models
PyMC provides three objects that fit models
Step15: This call will cause $M$ to fit the model using Powell's method, which does not require derivatives. The variables in DisasterModel have now been set to their maximum a posteriori values
Step16: We can also calculate model selection statistics, AIC and BIC
Step17: MAP has two useful methods
Step18: The approximate joint posterior mean and covariance of the variables are
available via the attributes mu and C, which hold the approximate posterior mean and variance/covariance, respectively
Step19: As with MAP, the variables have been set to their maximum a
posteriori values (which are also in the mu attribute) and the AIC
and BIC of the model are available.
We can also generate samples from the posterior
Step20: In addition to the methods and attributes of MAP, NormApprox
provides the following methods
Step21: Step methods
Step method objects handle individual stochastic variables, or sometimes groups
of them. They are responsible for making the variables they handle take single
MCMC steps conditional on the rest of the model. Each subclass of
StepMethod implements a method called step(), which is called by
MCMC. Step methods with adaptive tuning parameters can optionally implement
a method called tune(), which causes them to assess performance (based on
the acceptance rates of proposed values for the variable) so far and adjust.
The major subclasses of StepMethod are Metropolis and
AdaptiveMetropolis. PyMC provides several flavors of the
basic Metropolis steps.
Metropolis
Metropolis and subclasses implement Metropolis-Hastings steps. To tell an
MCMC object
Step22: Metropolis itself handles float-valued variables, and subclasses
DiscreteMetropolis and BinaryMetropolis handle integer- and
boolean-valued variables, respectively.
Metropolis' __init__ method takes the following arguments
Step23: AdaptiveMetropolis's init method takes the following arguments | Python Code:
import pymc as pm
import numpy as np
from pymc.examples import disaster_model
switchpoint = pm.DiscreteUniform('switchpoint', lower=0, upper=110)
Explanation: Building Models in PyMC
Bayesian inference begins with specification of a probability model
relating unknown variables to data. PyMC provides three basic building
blocks for Bayesian probability models: Stochastic, Deterministic
and Potential.
A Stochastic object represents a variable whose value is not
completely determined by its parents, and a Deterministic object
represents a variable that is entirely determined by its parents. In
object-oriented programming parlance, Stochastic and Deterministic
are subclasses of the Variable class, which only serves as a template
for other classes and is never actually implemented in models.
The third basic class, Potential, represents 'factor potentials', which are not variables but simply
log-likelihood terms and/or constraints that are multiplied into joint
distributions to modify them. Potential and Variable are subclasses
of Node.
The Stochastic class
A stochastic variable has the following primary attributes:
value
: The variable's current value.
logp
: The log-probability of the variable's current value given the values
of its parents.
A stochastic variable can optionally be endowed with a method called
random, which draws a value for the variable given the values of its
parents.
Creation of stochastic variables
There are three main ways to create stochastic variables, called the
automatic, decorator, and direct interfaces.
Automatic
Stochastic variables with standard distributions provided by PyMC can be created in a
single line using special subclasses of Stochastic. For example, the uniformly-distributed discrete variable $switchpoint$ in the coal mining disasters model is created using the automatic interface as follows:
End of explanation
early_mean = pm.Exponential('early_mean', beta=1., value=1)
late_mean = pm.Exponential('late_mean', beta=1., value=1)
Explanation: Similarly, the rate parameters can automatically be given exponential priors:
End of explanation
@pm.stochastic
def switchpoint(value=1900, t_l=1851, t_h=1962):
The switchpoint for the rate of disaster occurrence.
if value > t_h or value < t_l:
# Invalid values
return -np.inf
else:
# Uniform log-likelihood
return -np.log(t_h - t_l + 1)
Explanation: Decorator
Uniformly-distributed discrete stochastic variable $switchpoint$ in the disasters model could alternatively be created from a function that computes its log-probability as follows:
End of explanation
def switchpoint_logp(value, t_l, t_h):
if value > t_h or value < t_l:
return -np.inf
else:
return -np.log(t_h - t_l + 1)
def switchpoint_rand(t_l, t_h):
return np.round( (t_l - t_h) * np.random.random() ) + t_l
switchpoint = pm.Stochastic( logp = switchpoint_logp,
doc = 'The switchpoint for the rate of disaster occurrence.',
name = 'switchpoint',
parents = {'t_l': 1851, 't_h': 1962},
random = switchpoint_rand,
trace = True,
value = 1900,
dtype=int,
rseed = 1.,
observed = False,
cache_depth = 2,
plot=True,
verbose = 0)
Explanation: Note that this is a simple Python function preceded by a Python
expression called a decorator, here called
@stochastic. Generally, decorators enhance functions with
additional properties or functionality. The Stochastic object
produced by the @stochastic decorator will evaluate its
log-probability using the function switchpoint. The value
argument, which is required, provides an initial value for the
variable. The remaining arguments will be assigned as parents of
switchpoint (i.e. they will populate the parents dictionary).
To emphasize, the Python function decorated by @stochastic should
compute the log-density or log-probability of the variable. That
is why the return value in the example above is $-\log(t_h-t_l+1)$
rather than $1/(t_h-t_l+1)$.
Direct
It's also possible to instantiate Stochastic directly:
End of explanation
from scipy.stats.distributions import poisson
@pm.observed
def likelihood(value=[1, 2, 1, 5], parameter=3):
return poisson.logpmf(value, parameter).sum()
Explanation: Notice that the log-probability and random variate functions are
specified externally and passed to Stochastic as arguments. This
is a rather awkward way to instantiate a stochastic variable;
consequently, such implementations should be rare.
Data Stochastics
Data are represented by Stochastic objects whose observed attribute
is set to True. If a stochastic variable's observed flag is True,
its value cannot be changed, and it won't be sampled by the fitting
method.
In each interface, an optional keyword argument observed can be set to
True. In the decorator interface, the
@observed decorator is used instead of @stochastic:
End of explanation
disasters = pm.Poisson('disasters', mu=2,
value=disaster_model.disasters_array,
observed=True)
Explanation: In the other interfaces, the observed=True argument is added to the
instantiation of the Stochastic, or its subclass:
End of explanation
@pm.deterministic
def rate(s=switchpoint, e=early_mean, l=late_mean):
''' Concatenate Poisson means '''
out = np.empty(len(disaster_model.disasters_array))
out[:s] = e
out[s:] = l
return out
Explanation: The Deterministic class
The Deterministic class represents variables whose values are
completely determined by the values of their parents. For example, in
our disasters model, $rate$ is a deterministic variable.
End of explanation
x = pm.MvNormal('x', np.ones(3), np.eye(3))
y = pm.MvNormal('y', np.ones(3), np.eye(3))
x+y
print(x[0])
print(x[0]+y[2])
Explanation: so rate's value can be computed exactly from the values of its parents
early_mean, late_mean and switchpoint.
A Deterministic variable's most important attribute is value, which
gives the current value of the variable given the values of its parents.
Like Stochastic's logp attribute, this attribute is computed
on-demand and cached for efficiency.
A Deterministic variable has the following additional attributes:
parents
: A dictionary containing the variable's parents. The keys of the dictionary correspond to the names assigned to the variable's parents by the variable, and the values correspond to the actual parents.
children
: A set containing the variable's children, which must be nodes.
Deterministic variables have no methods.
Creation of deterministic variables
Deterministic variables are less complicated than stochastic variables,
and have similar automatic, decorator, and direct
interfaces:
Automatic
A handful of common functions have been wrapped in Deterministic
objects. These are brief enough to list:
LinearCombination
: Has two parents $x$ and $y$, both of which must be iterable (i.e. vector-valued). This function returns:
\[\sum_i x_i^T y_i\]
Index
: Has two parents $x$ and index. $x$ must be iterable, index must be valued as an integer.
\[x[\text{index}]\]
Index is useful for implementing dynamic models, in which the parent-child connections change.
Lambda
: Converts an anonymous function (in Python, called lambda functions) to a Deterministic instance on a single line.
CompletedDirichlet
: PyMC represents Dirichlet variables of length $k$ by the first $k-1$ elements; since they must sum to 1, the $k^{th}$ element is determined by the others. CompletedDirichlet appends the $k^{th}$ element to the value of its parent $D$.
Logit, InvLogit, StukelLogit, StukelInvLogit
: Common link functions for generalized linear models, and their inverses.
It's a good idea to use these classes when feasible in order to give hints to step methods.
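For example, the Lambda wrapper turns an anonymous function into a deterministic in one line (an illustrative sketch; the variable name is arbitrary):
half_early_mean = pm.Lambda('half_early_mean', lambda e=early_mean: e / 2)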
Certain elementary operations on variables create deterministic variables. For example:
End of explanation
def rate_eval(switchpoint=switchpoint, early_mean=early_mean, late_mean=late_mean):
value = np.zeros(111)
value[:switchpoint] = early_mean
value[switchpoint:] = late_mean
return value
rate = pm.Deterministic(eval = rate_eval,
name = 'rate',
parents = {'switchpoint': switchpoint,
'early_mean': early_mean,
'late_mean': late_mean},
doc = 'The rate of disaster occurrence.',
trace = True,
verbose = 0,
dtype=float,
plot=False,
cache_depth = 2)
Explanation: All the objects thus created have trace=False and plot=False by default.
Decorator
We have seen in the disasters example how the decorator interface is used to create a deterministic variable. Notice that rather than returning the log-probability, as is the
case for Stochastic objects, the function returns the value of the deterministic object, given its parents. Also notice that, unlike for Stochastic objects, there is no value argument
passed, since the value is calculated deterministically by the
function itself.
Direct
Deterministic objects can also be instantiated directly:
End of explanation
N = 10
x_0 = pm.Normal('x_0', mu=0, tau=1)
x = np.empty(N, dtype=object)
x[0] = x_0
for i in range(1, N):
x[i] = pm.Normal('x_%i' % i, mu=x[i-1], tau=1)
@pm.observed
def y(value=1, mu=x, tau=100):
return pm.normal_like(value, (mu**2).sum(), tau)
Explanation: Containers
In some situations it would be inconvenient to assign a unique label to
each parent of some variable. Consider $y$ in the following model:
$$\begin{align}
x_0 &\sim \text{N}(0,\tau_x)\\
x_{i+1}|x_i &\sim \text{N}(x_i, \tau_x)\\
&i=0,\ldots, N-2\\
y|x &\sim \text{N}\left(\sum_{i=0}^{N-1}x_i^2,\tau_y\right)
\end{align}$$
Here, $y$ depends on every element of the Markov chain $x$, but we
wouldn't want to manually enter $N$ parent labels x_0,
x_1, etc.
This situation can be handled naturally in PyMC:
End of explanation
@pm.potential
def rate_constraint(l1=early_mean, l2=late_mean):
if np.abs(l2 - l1) > 1:
return -np.inf
return 0
Explanation: PyMC automatically wraps array $x$ in an appropriate Container class.
The expression 'x_%i' % i labels each Normal object in the container
with the appropriate index $i$. For example, if i=1, the name of the
corresponding element becomes x_1.
Containers, like variables, have an attribute called value. This
attribute returns a copy of the (possibly nested) iterable that was
passed into the container function, but with each variable inside
replaced with its corresponding value.
The Potential class
For some applications, we want to be able to modify the joint density by
incorporating terms that don't correspond to probabilities of variables
conditional on parents, for example:
$$\begin{eqnarray}
p(x_0, x_1, \ldots, x_{N-1}) \propto \prod_{i=0}^{N-2} \psi_i(x_i, x_{i+1}).
\end{eqnarray}$$
In other cases we may want to add probability terms to existing models.
For example, suppose we want to constrain the difference between the early and late means in the disaster model to be less than 1, so that the joint density becomes:
$$p(y,\tau,\lambda_1,\lambda_2) \propto p(y|\tau,\lambda_1,\lambda_2) p(\tau) p(\lambda_1) p(\lambda_2) I(|\lambda_2-\lambda_1| \lt 1)$$
Arbitrary factors are implemented by objects of class Potential. Bayesian
hierarchical notation doesn't accommodate these potentials.
Potentials have one important attribute, logp, the log of their
current probability or probability density value given the values of
their parents. The only other additional attribute of interest is
parents, a dictionary containing the potential's parents. Potentials
have no methods. They have no trace attribute, because they are not
variables. They cannot serve as parents of variables (for the same
reason), so they have no children attribute.
Creation of Potentials
There are two ways to create potentials:
Decorator
A potential can be created via a decorator in a way very similar to
Deterministic's decorator interface:
End of explanation
def rate_constraint_logp(l1=early_mean, l2=late_mean):
if np.abs(l2 - l1) > 1:
return -np.inf
return 0
rate_constraint = pm.Potential(logp = rate_constraint_logp,
name = 'rate_constraint',
parents = {'l1': early_mean, 'l2': late_mean},
doc = 'Constraint on rate differences',
verbose = 0,
cache_depth = 2)
Explanation: The function supplied should return the potential's current
log-probability or log-density as a Numpy float. The
potential decorator can take verbose and cache_depth arguments
like the stochastic decorator.
Direct
The same potential could be created directly as follows:
End of explanation
# Log dose in each group
log_dose = [-.86, -.3, -.05, .73]
# Sample size in each group
n = 5
# Outcomes
deaths = [0, 1, 3, 5]
## Write your answer here
Explanation: Example: Bioassay model
Recall from a previous lecture the bioassay example, where the number of deaths in a toxicity experiment was modeled as a binomial response, with the probability of death being a linear function of dose:
$$\begin{aligned}
y_i &\sim \text{Bin}(n_i, p_i) \\
\text{logit}(p_i) &= a + b x_i
\end{aligned}$$
Implement this model in PyMC (we will show you how to fit the model later!)
End of explanation
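One possible answer, sketched with the pymc and numpy imports from above (the priors and names here are illustrative; the built-in gelman_bioassay example used next follows the same structure, with variables alpha and beta):
alpha = pm.Normal('alpha', mu=0, tau=0.01)
beta = pm.Normal('beta', mu=0, tau=0.01)

@pm.deterministic
def theta(a=alpha, b=beta):
    # Inverse-logit link applied to the linear predictor at each log dose
    return 1.0 / (1.0 + np.exp(-(a + b * np.array(log_dose))))

deaths_obs = pm.Binomial('deaths_obs', n=n, p=theta, value=deaths, observed=True)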
from pymc.examples import gelman_bioassay
M = pm.MAP(gelman_bioassay)
M.fit(method='fmin_powell')
Explanation: Fitting Models
PyMC provides three objects that fit models:
MCMC, which coordinates Markov chain Monte Carlo algorithms. The actual work of updating stochastic variables conditional on the rest of the model is done by StepMethod objects.
MAP, which computes maximum a posteriori estimates.
NormApprox, the joint distribution of all stochastic variables in a model is approximated as normal using local information at the maximum a posteriori estimate.
All three objects are subclasses of Model, which is PyMC's base class
for fitting methods. MCMC and NormApprox, both of which can produce
samples from the posterior, are subclasses of Sampler, which is PyMC's
base class for Monte Carlo fitting methods. Sampler provides a generic
sampling loop method and database support for storing large sets of
joint samples. These base classes implement some basic methods that are
inherited by the three implemented fitting methods, so they are
documented at the end of this section.
Maximum a posteriori estimates
The MAP class sets all stochastic variables to their maximum a
posteriori values using functions in SciPy's optimize package; hence,
SciPy must be installed to use it. MAP can only handle variables whose
dtype is float, so it will not work, for example, on the disaster model example.
We can fit the bioassay example using MAP:
End of explanation
M.alpha.value
M.beta.value
Explanation: This call will cause $M$ to fit the model using Powell's method, which does not require derivatives. The variables in DisasterModel have now been set to their maximum a posteriori values:
End of explanation
M.AIC
M.BIC
Explanation: We can also calculate model selection statistics, AIC and BIC:
End of explanation
N = pm.NormApprox(gelman_bioassay)
N.fit()
Explanation: MAP has two useful methods:
fit(method ='fmin', iterlim=1000, tol=.0001)
: The optimization method may be fmin, fmin_l_bfgs_b, fmin_ncg,
fmin_cg, or fmin_powell. See the documentation of SciPy's
optimize package for the details of these methods. The tol and
iterlim parameters are passed to the optimization function under
the appropriate names.
revert_to_max()
: If the values of the constituent stochastic variables change after
fitting, this function will reset them to their maximum a
posteriori values.
The useful attributes of MAP are:
logp
: The joint log-probability of the model.
logp_at_max
: The maximum joint log-probability of the model.
AIC
: Akaike's information criterion for this model.
BIC
: The Bayesian information criterion for this model.
One use of the MAP class is finding reasonable initial states for MCMC
chains. Note that multiple Model subclasses can handle the same
collection of nodes.
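A sketch of that pattern (illustrative; because both objects wrap the same nodes, the chain starts from the MAP estimates):
M_start = pm.MAP(gelman_bioassay)
M_start.fit()                        # sets the stochastics to their MAP values
M_chain = pm.MCMC(gelman_bioassay)   # sampling now begins from those values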
Normal approximations
The NormApprox class extends the MAP class by approximating the
posterior covariance of the model using the Fisher information matrix,
or the Hessian of the joint log probability at the maximum.
End of explanation
N.mu[N.alpha]
N.C[N.alpha, N.beta]
Explanation: The approximate joint posterior mean and covariance of the variables are
available via the attributes mu and C, which hold the approximate posterior mean and variance/covariance, respectively:
End of explanation
N.sample(100)
N.trace('alpha')[:10]
Explanation: As with MAP, the variables have been set to their maximum a
posteriori values (which are also in the mu attribute) and the AIC
and BIC of the model are available.
We can also generate samples from the posterior:
End of explanation
M = pm.MCMC(gelman_bioassay, db='sqlite')
Explanation: In addition to the methods and attributes of MAP, NormApprox
provides the following methods:
sample(iter)
: Samples from the approximate posterior distribution are drawn and stored.
isample(iter)
: An 'interactive' version of sample(): sampling can be paused, returning control to the user.
draw
: Sets all variables to random values drawn from the approximate posterior.
MCMC
The MCMC class implements PyMC's core business: producing Markov chain Monte Carlo samples for
a model's variables. Its primary job is to create and coordinate a collection of 'step
methods', each of which is responsible for updating one or more
variables.
MCMC provides the following useful methods:
sample(iter, burn, thin, tune_interval, tune_throughout, save_interval, ...)
: Runs the MCMC algorithm and produces the traces. The iter argument
controls the total number of MCMC iterations. No tallying will be
done during the first burn iterations; these samples will be
forgotten. After this burn-in period, tallying will be done each
thin iterations. Tuning will be done each tune_interval
iterations. If tune_throughout=False, no more tuning will be done
after the burnin period. The model state will be saved every
save_interval iterations, if given.
isample(iter, burn, thin, tune_interval, tune_throughout, save_interval, ...)
: An interactive version of sample. The sampling loop may be paused
at any time, returning control to the user.
use_step_method(method, *args, **kwargs):
: Creates an instance of step method class method to handle some
stochastic variables. The extra arguments are passed to the init
method of method. Assigning a step method to a variable manually
will prevent the MCMC instance from automatically assigning one.
However, you may handle a variable with multiple step methods.
stats():
: Generate summary statistics for all nodes in the model.
The sampler's MCMC algorithms can be accessed via the step_method_dict
attribute. M.step_method_dict[x] returns a list of the step methods
M will use to handle the stochastic variable x.
After sampling, the information tallied by M can be queried via
M.db.trace_names. In addition to the values of variables, tuning
information for adaptive step methods is generally tallied. These
โtracesโ can be plotted to verify that tuning has in fact terminated. After sampling ends you can retrieve the trace as
M.trace[โvar_nameโ].
We can instantiate a MCMC sampler for the bioassay example as follows:
End of explanation
M.use_step_method(pm.Metropolis, M.alpha, proposal_sd=1., proposal_distribution='Normal')
Explanation: Step methods
Step method objects handle individual stochastic variables, or sometimes groups
of them. They are responsible for making the variables they handle take single
MCMC steps conditional on the rest of the model. Each subclass of
StepMethod implements a method called step(), which is called by
MCMC. Step methods with adaptive tuning parameters can optionally implement
a method called tune(), which causes them to assess performance (based on
the acceptance rates of proposed values for the variable) so far and adjust.
The major subclasses of StepMethod are Metropolis and
AdaptiveMetropolis. PyMC provides several flavors of the
basic Metropolis steps.
Metropolis
Metropolis and subclasses implement Metropolis-Hastings steps. To tell an
MCMC object $M$ to handle a variable $x$ with a Metropolis step
method, you might do the following:
End of explanation
from pymc.examples import disaster_model_linear
M = pm.MCMC(disaster_model_linear)
M.use_step_method(pm.AdaptiveMetropolis, M.params_of_mean)
Explanation: Metropolis itself handles float-valued variables, and subclasses
DiscreteMetropolis and BinaryMetropolis handle integer- and
boolean-valued variables, respectively.
Metropolis' __init__ method takes the following arguments:
stochastic
: The variable to handle.
proposal_sd
: A float or array of floats. This sets the proposal standard deviation if the proposal distribution is normal.
scale
: A float, defaulting to 1. If the scale argument is provided but not proposal_sd, proposal_sd is computed as follows:
python
if all(self.stochastic.value != 0.):
self.proposal_sd = (ones(shape(self.stochastic.value)) *
abs(self.stochastic.value) * scale)
else:
self.proposal_sd = ones(shape(self.stochastic.value)) * scale
proposal_distribution
: A string indicating which distribution should be used for proposals.
Current options are 'Normal' and 'Prior'. If
proposal_distribution=None, the proposal distribution is chosen
automatically. It is set to 'Prior' if the variable has no
children and has a random method, and to 'Normal' otherwise.
Although the proposal_sd attribute is fixed at creation, Metropolis
step methods adjust their initial proposal standard deviations using an
attribute called adaptive_scale_factor. During tuning, the
acceptance ratio of the step method is examined, and this scale factor
is updated accordingly. If the proposal distribution is normal,
proposals will have standard deviation
self.proposal_sd * self.adaptive_scale_factor.
By default, tuning will continue throughout the sampling loop, even
after the burnin period is over. This can be changed via the
tune_throughout argument to MCMC.sample. If an adaptive step
method's tally flag is set (the default for Metropolis), a trace of
its tuning parameters will be kept. If you allow tuning to continue
throughout the sampling loop, it is important to verify that the
'Diminishing Tuning' condition of Roberts and Rosenthal (2007) is satisfied: the
amount of tuning should decrease to zero, or tuning should become very
infrequent.
If a Metropolis step method handles an array-valued variable, it
proposes all elements independently but simultaneously. That is, it
decides whether to accept or reject all elements together but it does
not attempt to take the posterior correlation between elements into
account. The AdaptiveMetropolis class (see below), on the other hand,
does make correlated proposals.
AdaptiveMetropolis
The AdaptiveMetropolis (AM) step method works like a regular
Metropolis step method, with the exception that its variables are
block-updated using a multivariate jump distribution whose covariance is
tuned during sampling. Although the chain is non-Markovian, it has
correct ergodic properties (Haario et al., 2001).
AdaptiveMetropolis works on vector-valued, continuous stochastics:
End of explanation
M = pm.MCMC(gelman_bioassay)
M.sample(10000, burn=5000)
%matplotlib inline
pm.Matplot.plot(M.LD50)
Explanation: AdaptiveMetropolis's init method takes the following arguments:
stochastics
: The stochastic variables to handle. These will be updated jointly.
cov (optional)
: An initial covariance matrix. Defaults to the identity matrix,
adjusted according to the scales argument.
delay (optional)
: The number of iterations to delay before computing the empirical
covariance matrix.
scales (optional):
: The initial covariance matrix will be diagonal, and its diagonal
elements will be set to scales times the stochastics' values,
squared.
interval (optional):
: The number of iterations between updates of the covariance matrix.
Defaults to 1000.
greedy (optional):
: If True, only accepted jumps will be counted toward the delay
before the covariance is first computed. Defaults to True.
shrink_if_necessary (optional):
: Whether the proposal covariance should be shrunk if the acceptance
rate becomes extremely small.
In this algorithm, jumps are proposed from a multivariate normal
distribution with covariance matrix $\Sigma$. The algorithm first
iterates until delay samples have been drawn (if greedy is true,
until delay jumps have been accepted). At this point, $\Sigma$ is
given the value of the empirical covariance of the trace so far and
sampling resumes. The covariance is then updated each interval
iterations throughout the entire sampling run. It is this constant
adaptation of the proposal distribution that makes the chain
non-Markovian.
DiscreteMetropolis
This class is just like Metropolis, but specialized to handle
Stochastic instances with dtype int. The jump proposal distribution
can either be 'Normal', 'Prior' or 'Poisson' (the default). In the
normal case, the proposed value is drawn from a normal distribution
centered at the current value and then rounded to the nearest integer.
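For example, the integer-valued switchpoint of the disaster model could be assigned this step method explicitly (a sketch reusing the disaster_model module imported earlier):
M_d = pm.MCMC(disaster_model)
M_d.use_step_method(pm.DiscreteMetropolis, M_d.switchpoint,
                    proposal_distribution='Poisson')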
BinaryMetropolis
This class is specialized to handle Stochastic instances with dtype
bool.
For array-valued variables, BinaryMetropolis can be set to propose
from the prior by passing in dist="Prior". Otherwise, the argument
p_jump of the init method specifies how probable a change is. Like
Metropolis' attribute proposal_sd, p_jump is tuned throughout the
sampling loop via adaptive_scale_factor.
Automatic assignment of step methods
Every step method subclass (including user-defined ones) that does not
require any __init__ arguments other than the stochastic variable to
be handled adds itself to a list called StepMethodRegistry in the PyMC
namespace. If a stochastic variable in an MCMC object has not been
explicitly assigned a step method, each class in StepMethodRegistry is
allowed to examine the variable.
To do so, each step method implements a class method called
competence(stochastic), whose only argument is a single stochastic
variable. These methods return values from 0 to 3; 0 meaning the step
method cannot safely handle the variable and 3 meaning it will most
likely perform well for variables like this. The MCMC object assigns
the step method that returns the highest competence value to each of its
stochastic variables.
Running MCMC Samplers
We can carry out Markov chain Monte Carlo sampling by calling the sample method (or in the terminal, isample) with the appropriate arguments.
End of explanation |
13,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
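For context, the answer cells in these auto-generated ES-DOC notebooks record each property through a DOC helper object; an assumed sketch of that pattern (the value shown is purely illustrative) is:
# Assumed ES-DOC answer-cell pattern (illustrative value):
DOC.set_value("Overview of the land surface model goes here.")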
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-2', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CAS
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
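As a sketch of how the two metadata cells above are typically filled in, the snippet below uses purely hypothetical placeholder names and e-mail addresses; substitute the people actually responsible for this document.
# Illustrative example only -- hypothetical placeholders, not real authors
DOC.set_author("Jane Doe", "jane.doe@example.org")
DOC.set_contributor("John Smith", "john.smith@example.org")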
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
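For a required free-text STRING property such as the model overview above, a single DOC.set_value call with one descriptive string is expected. The text in this sketch is an invented placeholder, not a description of the actual model.
# Illustrative example only -- replace with a genuine overview of the land surface model
DOC.set_value("Land surface scheme with multi-layer soil hydrology, interactive vegetation and river routing.")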
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)

End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
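For an ENUM property with cardinality 0.N such as the flux exchanges above, one way to record several choices is to repeat the DOC.set_value call, one per selection, with each string matching one of the valid choices listed in the cell (this calling convention is assumed here; see the notebook help page for the authoritative usage). The selections below are hypothetical and only illustrate the pattern.
# Illustrative example only -- one DOC.set_value call per applicable choice (assumed convention)
DOC.set_value("water")
DOC.set_value("energy")
DOC.set_value("carbon")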
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
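The two timestepping cells above take non-string values: the BOOLEAN property 3.1 expects True or False, and the INTEGER property 3.2 a plain number of seconds. The values below are hypothetical, and in practice each DOC.set_value call stays in its own cell, following the matching DOC.set_id call.
# Illustrative example only -- hypothetical values
DOC.set_value(False)   # 3.1 timestep_dependent_on_atmosphere (BOOLEAN)
DOC.set_value(1800)    # 3.2 overall time step in seconds (INTEGER)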
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
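An optional STRING property with cardinality 0.N, such as the code languages above, can either be left unset or populated with repeated DOC.set_value calls, one per entry (the repeated-call convention is assumed here). The languages below are hypothetical placeholders.
# Illustrative example only -- one call per code language, or omit entirely
DOC.set_value("Fortran 90")
DOC.set_value("Python")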
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
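For a required single-valued ENUM (cardinality 1.1) such as the hydrology method above, exactly one DOC.set_value call is made and the string should match one of the valid choice strings listed in the cell. The choice shown here is hypothetical and does not describe the actual model configuration.
# Illustrative example only -- the value should match a listed valid choice
DOC.set_value("Explicit diffusion")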
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, specify the functions that the snow albedo depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
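As a purely hypothetical illustration, an ENUM property such as this one is answered by passing one of the listed choice strings to DOC.set_value:
DOC.set_value("present day")  # hypothetical choice; use the option that matches your model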
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
13,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Absolute Motion
During the first week we discussed reference frames and coordinate systems to represent the motion of particles. This was the "absolute motion" that was always measured relative to a fixed reference frame.
Let's define the fixed reference frame to be A
The particle moves in 3D. The point the particle is located is denoted P.
Example 1
Step1: Example 2
Step2: Relative Motion
During the first week we discussed reference frames and coordinate systems to represent the motion of particles. This was the "absolute motion" that was always measured relative to a fixed reference frame.
This week we are going to discuss relative motion. In this case observations are made in a moving reference frame.
Example case | Python Code:
from rel_motion import *
# Fixed Frame A
A = np.eye(3) # identity matrix E1=(1,0,0), E2=(0,1,0), E3=(0,0,1)
rO = np.array((0,0,0))
rP = np.array( (5, 0, 0))
plotAbsMotion(A, rO, rP)
Explanation: Absolute Motion
During the first week we discussed reference frames and coordinate systems to represent the motion of particles. This was the "absolute motion" that was always measured relative to a fixed reference frame.
Let's define the fixed reference frame to be A
The particle moves in 3D. The point the particle is located is denoted P.
Example 1: We observe a statue located at position P.
End of explanation
from rel_motion import *
# define path of trajectory for particle P observed in fixed frame A
def projectile(v0,t):
x = v0*t
y = v0*t
z = v0*t - 0.5*g*t**2
return [x,y,z]
T = np.linspace(0, 1, 100)
v0 = 5
x_t1 = np.array([projectile(v0,t) for t in T])
x_t2 = np.array([projectile(v0/2., t) for t in T])
#origin as function of T
o_t = np.array([ [0,0,0] for t in T])
ntc = [('rP1', (o_t,x_t1), blue),
('rP2', (o_t,x_t2), green)]
# run animation
%matplotlib
fig, args = runAnimation(ntc)
anim = animation.FuncAnimation(fig, **args)
plt.show()
Explanation: Example 2: a projectile is launched into the air from the ground with an initial velocity v0 in the x, y, and z directions. Assuming only gravity is acting on the projectile, what is its equation of motion? What about when the velocity is 0.5v0?
$^Ar_P(t) = v_0t\hat{E}_1 + v_0t\hat{E}_2 + (v_0t - \frac{1}{2}gt^2) \hat{E}_3$
End of explanation
from rel_motion import *
%matplotlib
import time
# Fixed Frame A
A = np.eye(3) # identity matrix E1=(1,0,0), E2=(0,1,0), E3=(0,0,1)
rO = np.array((0,0,0))
# Moving Frame B
B = np.eye(3) # identity matrix
rQ = np.array([5., 5., 5.])
# moving point P observed in B
rPQ = np.array((0.,0.,0.))
# point P observed in A?
rP = rQ + rPQ
# Position of rQ
vQ= np.array([1., 0., 0.])
vPQ = np.array([3.,0.0,0.])
dt = 4
plotAll(A, B, rO, rQ, rPQ, rP)
ax =plt.gca()
fig = plt.gcf()
for i in range(10):
rQ += vQ*dt
rPQ += vPQ*dt
rP = rQ + rPQ
plotAll(A, B, rO, rQ, rPQ, rP, ax, labels=False)
plotAll(A, B, rO, rQ, rPQ, rP, ax)
plt.show()
from rel_motion import *
%matplotlib
import time
# Fixed Frame A
A = np.eye(3) # identity matrix E1=(1,0,0), E2=(0,1,0), E3=(0,0,1)
rO = np.array((0,0,0))
# Moving Frame B
B = np.eye(3) # identity matrix
rQ = np.array([5., 5., 5.])
# moving point P observed in B
rPQ = np.array((0.,0.,0.))
# point P observed in A?
rP = rQ + rPQ
# Position of rQ
vQ= np.array([1., 0., 0.])
vPQ = np.array([3.,0.0,0.])
dt = 4
plotAll(A, B, rO, rQ, rPQ, rP)
ax =plt.gca()
fig = plt.gcf()
for i in range(10):
rQ += vQ*dt
rPQ += vPQ*dt
rP = rQ + rPQ
plotAll(A, B, rO, rQ, rPQ, rP, ax, labels=False)
plotAll(A, B, rO, rQ, rPQ, rP, ax)
plt.cla()
plt.show()
from rel_motion import *
# define path of trajectory for particle P observed in fixed frame A
def applyVel(T, p0, vel):
p_t = np.array([p0 + vel(t)*t for t in T])
return p_t
def vO(t):
return np.array([0., 0., 0.])
def vQ(t):
return np.array([2., 0., 0.])
def vPQ(t):
return np.array([3., 0., 0.])
def compute_rP(rQ, rPQ, vQ, vPQ, t):
return np.array((rQ + rPQ).tolist())
T = np.linspace(0, 1, 100)
#initial values
rO0 = np.array([0., 0., 0.])
rQ0 = np.array([5., 5., 5.])
rPQ0 = np.array((0.,0.,0.))
rP0 = compute_rP(rQ0, rPQ0, vQ, vPQ, 0)
rO_t = applyVel(T, rO0, vO)
rQ_t = applyVel(T, rQ0, vQ)
rPQ_t = applyVel(T, rPQ0, vPQ)
rP_t = np.array([compute_rP(q_i, pq_i, vQ, vPQ, t_i) for (q_i, pq_i, t_i) in zip( rQ_t, rPQ_t, T) ])
names = ['rO', 'rQ', 'rPQ', 'rP']
colors = [red, black, green, blue]
trajs = [
(rO_t, rO_t),
(rO_t, rQ_t),
(rQ_t, rPQ_t),
(rO_t, rP_t)
]
ntc = zip(names, trajs, colors)
# run animation
%matplotlib
fig, args = runAnimation(ntc)
anim = animation.FuncAnimation(fig, **args)
plt.show()
Explanation: Relative Motion
During the first week we discussed reference frames and coordinate systems to represent the motion of particles. This was the "absolute motion" that was always measured relative to a fixed reference frame.
This week we are going to discuss relative motion. In this case observations are made in a moving reference frame.
Example case: I am in a car driving in a straight line at 60 mph on the highway. A car passes me going 20 mph faster than I am. How fast is that car going? (A short numeric check of this example appears after this explanation.)
For this discussion, we will assume there is a fixed reference frame A and a moving reference frame B.
The fixed frame A's coordinate system is represented by the basis {E1,E2,E3}
The moving frame B's coordinate system is represented by the basis {e1,e2,e3}
O is the position of A's origin
Q is the position of B's origin
P is a moving point observed
End of explanation |
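A minimal numeric check of the car example above, assuming straight-line motion so that the relative-motion relation rP = rQ + rPQ differentiates to vP = vQ + vPQ:
import numpy as np

v_Q = np.array([60.0, 0.0, 0.0])   # my car's velocity in the fixed frame A (mph)
v_PQ = np.array([20.0, 0.0, 0.0])  # other car's velocity observed from my car (frame B)
v_P = v_Q + v_PQ                   # other car's velocity in the fixed frame
print(v_P[0])                      # 80.0 mph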
13,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Preparing text to use with TensorFlow models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Write some sentences
Feel free to write your own sentences here.
Step3: Create the Tokenizer and define an out of vocabulary token
When creating the Tokenizer, you can specify the max number of words in the dictionary. You can also specify a token to represent words that are out of the vocabulary (OOV), in other words, that are not in the dictionary. This OOV token will be used when you create sequences for sentences that contain words that are not in the word index.
Step4: Tokenize the words
Step5: Turn sentences into sequences
Each word now has a unique number in the word index. However, words in a sentence are in a specific order. You can't just randomly mix up words and have the outcome be a sentence.
For example, although "chocolate isn't good for dogs" is a perfectly fine sentence, "dogs isn't for chocolate good" does not make sense as a sentence.
So the next step to representing text in a way that can be meaningfully used by machine learning programs is to create numerical sequences that represent the sentences in the text.
Each sentence will be converted into a sequence where each word is replaced by its number in the word index.
Step6: Make the sequences all the same length
Later, when you feed the sequences into a neural network to train a model, the sequences all need to be uniform in size. Currently the sequences have varied lengths, so the next step is to make them all be the same size, either by padding them with zeros and/or truncating them.
Use tf.keras.preprocessing.sequence.pad_sequences to add zeros to the sequences to make them all be the same length. By default, the padding goes at the start of the sequences, but you can specify to pad at the end.
You can optionally specify the maximum length to pad the sequences to. Sequences that are longer than the specified max length will be truncated. By default, sequences are truncated from the beginning of the sequence, but you can specify to truncate from the end.
If you don't provide the max length, then the sequences are padded to match the length of the longest sentence.
For all the options when padding and truncating sequences, see https
Step7: What happens if some of the sentences contain words that are not in the word index?
Here's where the "out of vocabulary" token is used. Try generating sequences for some sentences that have words that are not in the word index. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
# Import Tokenizer and pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
Explanation: Preparing text to use with TensorFlow models
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c02_nlp_padding.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c02_nlp_padding.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
The high level steps to prepare text to be used in a machine learning model are:
Tokenize the words to get numerical values for them
Create numerical sequences of the sentences
Adjust the sequences to all be the same length.
In this colab, you learn how to use padding to make the sequences all be the same length.
Import the classes you need
End of explanation
sentences = [
'My favorite food is ice cream',
'do you like ice cream too?',
'My dog likes ice cream!',
"your favorite flavor of icecream is chocolate",
"chocolate isn't good for dogs",
"your dog, your cat, and your parrot prefer broccoli"
]
print(sentences)
Explanation: Write some sentences
Feel free to write your own sentences here.
End of explanation
tokenizer = Tokenizer(num_words = 100, oov_token="<OOV>")
Explanation: Create the Tokenizer and define an out of vocabulary token
When creating the Tokenizer, you can specify the max number of words in the dictionary. You can also specify a token to represent words that are out of the vocabulary (OOV), in other words, that are not in the dictionary. This OOV token will be used when you create sequences for sentences that contain words that are not in the word index.
End of explanation
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
print(word_index)
Explanation: Tokenize the words
End of explanation
sequences = tokenizer.texts_to_sequences(sentences)
print (sequences)
Explanation: Turn sentences into sequences
Each word now has a unique number in the word index. However, words in a sentence are in a specific order. You can't just randomly mix up words and have the outcome be a sentence.
For example, although "chocolate isn't good for dogs" is a perfectly fine sentence, "dogs isn't for chocolate good" does not make sense as a sentence.
So the next step to representing text in a way that can be meaningfully used by machine learning programs is to create numerical sequences that represent the sentences in the text.
Each sentence will be converted into a sequence where each word is replaced by its number in the word index.
End of explanation
padded = pad_sequences(sequences)
print("\nWord Index = " , word_index)
print("\nSequences = " , sequences)
print("\nPadded Sequences:")
print(padded)
# Specify a max length for the padded sequences
padded = pad_sequences(sequences, maxlen=15)
print(padded)
# Put the padding at the end of the sequences
padded = pad_sequences(sequences, maxlen=15, padding="post")
print(padded)
# Limit the length of the sequences, you will see some sequences get truncated
padded = pad_sequences(sequences, maxlen=3)
print(padded)
Explanation: Make the sequences all the same length
Later, when you feed the sequences into a neural network to train a model, the sequences all need to be uniform in size. Currently the sequences have varied lengths, so the next step is to make them all be the same size, either by padding them with zeros and/or truncating them.
Use tf.keras.preprocessing.sequence.pad_sequences to add zeros to the sequences to make them all be the same length. By default, the padding goes at the start of the sequences, but you can specify to pad at the end.
You can optionally specify the maximum length to pad the sequences to. Sequences that are longer than the specified max length will be truncated. By default, sequences are truncated from the beginning of the sequence, but you can specify to truncate from the end.
If you don't provide the max length, then the sequences are padded to match the length of the longest sentence.
For all the options when padding and truncating sequences, see https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences
End of explanation
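The explanation above also mentions truncating from the end of a sequence; a small sketch of that option, reusing the same sequences variable, would be:
# Truncate from the end of each sequence instead of the beginning
padded = pad_sequences(sequences, maxlen=3, truncating="post")
print(padded)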
# Try turning sentences that contain words that
# aren't in the word index into sequences.
# Add your own sentences to the test_data
test_data = [
"my best friend's favorite ice cream flavor is strawberry",
"my dog's best friend is a manatee"
]
print (test_data)
# Remind ourselves which number corresponds to the
# out of vocabulary token in the word index
print("<OOV> has the number", word_index['<OOV>'], "in the word index.")
# Convert the test sentences to sequences
test_seq = tokenizer.texts_to_sequences(test_data)
print("\nTest Sequence = ", test_seq)
# Pad the new sequences
padded = pad_sequences(test_seq, maxlen=10)
print("\nPadded Test Sequence: ")
# Notice that "1" appears in the sequence wherever there's a word
# that's not in the word index
print(padded)
Explanation: What happens if some of the sentences contain words that are not in the word index?
Here's where the "out of vocabulary" token is used. Try generating sequences for some sentences that have words that are not in the word index.
End of explanation |
13,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Progress Reporting and Command Observers <a href="https
Step1: We need to add a command to display the progress reported by the ProcessObject
Step2: Back to watching the progress of out Gabor image source. First lets create the filter as an object
Step3: The ProcessObject interface for the Invoker or Subject
SimpleITK doesn't have a large heirachy of inheritance. It has been kept to a minimal, so there is no common Object or LightObject base class as ITK has. As most of the goals for the events have to do with observing processes, the "Subject" interface of the Observer patter or the "Invoker" part of the Command design pattern, has been added to a ProcessObject base class for filters.
The ProcessObject base class has the following methods of handling commands
Step4: Deriving from the Command class
The traditional way of using Commands in ITK involves deriving from the Command class and adding to the ProcessObject.
Step5: A reference to the Command object must be maintained, or else it will be removed from the ProcessObject.
Step6: Using a labmda function as the Command
In Python the AddCommand has been extended to accept PyCommand objects and implicitly creates a PyCommand from a callable python argument. This is really useful.
Step7: Access to ITK data during command execution
The commands are not too useful unless you can query the filter through the SimpleITK interface. A couple status variables and methods are exposed in the SimpleITK ProcessObject through the polymorphic interface of the same ITK class.
Step9: Utilizing Jupyter Notebooks and Commands
Utilization of commands and events frequently occurs with advanced integration into graphical user interfaces. Let us now export this advanced integration into Jupyter Notebooks.
Jupyter notebooks support displaying output as HTML, and execution of javascript on demand. Together this can produce animation.
Step12: Support for Bi-direction JavaScript
It's possible to get button in HTML to execute python code...
Step13: A caveat with this approach is that the IPython kernel must continue to execute while the filter is running. So we must place the filter in a thread.
Step14: While the lambda command are convenient, the lack for having an object to hold data can still be problematic. For example in the above code the uuid, is used to uniquely identify the HTML element. So if the filter is executed multiple times then the JavaScript update will be confused on what to update.
Step16: A Reusable class for IPython Progress
There currently are too many caveats without support for Abort. Let us create a reusable class which will automatically generate the UUID and just display the progress. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import SimpleITK as sitk
print(sitk.Version())
import sys
import os
import threading
from myshow import myshow
from myshow import myshow3d
size = 256 # if this is too fast increase the size
img = sitk.GaborSource(
sitk.sitkFloat32,
size=[size] * 3,
sigma=[size * 0.2] * 3,
mean=[size * 0.5] * 3,
frequency=0.1,
)
myshow3d(img, zslices=[int(size / 2)], dpi=40);
myshow(img);
Explanation: Progress Reporting and Command Observers <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F11_Progress.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
SimpleITK Filters and other classes derived from ProcessObjects have the ability for user code to be executed when certain events occur. This is known as the Command and Observer design patterns to implement user callbacks. This allows for the monitoring and aborting of processes as they are being executed.
Consider the following image source which takes a few seconds to execute. It would be nice to quickly know how long you're going to need to wait, to know if you can go get a cup of coffee.
End of explanation
help(sitk.Command)
class MyCommand(sitk.Command):
def __init__(self):
# required
super(MyCommand, self).__init__()
def Execute(self):
print("MyCommand::Execute Called")
cmd = MyCommand()
cmd.Execute()
help(sitk.PyCommand)
cmd = sitk.PyCommand()
cmd.SetCallbackPyCallable(lambda: print("PyCommand Called"))
cmd.Execute()
Explanation: We need to add a command to display the progress reported by the ProcessObject::GetProgress method during the sitkProgressEvent. This involves three components:
Events
ProcessObject's methods
Commands
We'll look at some examples after a brief explanation of these components.
Events
The available events to observe are defined in a namespace enumeration.
<table>
<tr><td>sitkAnyEvent</td><td>Occurs for all event types.</td></tr>
<tr><td>sitkAbortEvent</td><td>Occurs after the process has been aborted, but before exiting the Execute method.</td></tr>
<tr><td>sitkDeleteEvent</td><td>Occurs when the underlying itk::ProcessObject is deleted.</td></tr>
<tr><td>sitkEndEvent</td><td>Occurs at then end of normal processing.</td></tr>
<tr><td>sitkIterationEvent</td><td>Occurs with some algorithms that run for a fixed or undetermined number of iterations.</td></tr>
<tr><td>sitkProgressEvent</td><td>Occurs when the progress changes in most process objects.</td></tr>
<tr><td>sitkStartEvent</td><td>Occurs when then itk::ProcessObject is starting.</td></tr>
<tr><td>sitkUserEvent</td><td>Other events may fall into this enumeration.</td></tr>
</table>
The convention of pre-fixing enums with "sitk" is continued, although it's getting a little crowded.
C++ is more strongly typed than Python it allows for implicit conversion from an enum type to an int, but not from an int to an enum type. Care needs to be made to ensure the correct enum value is passed in Python.
ProcessObject's methods
To be able to interface with the ProcessObject during execution, the object-oriented interface must be used to access the method of the ProcessObject. While any constant member function can be called during a command call-back there are two common methods:
ProcessObject::GetProgress()
ProcessObject::Abort()
The methods are only valid during the Command while a process is being executed, or when the process is not in the Execute method.
Additionally, it should be noted that the following methods cannot be called during a command or from another thread during execution: Execute and RemoveAllCommands. In general the ProcessObject should not be modified during execution.
Commands
The command design pattern is used to allow user code to be executed when an event occurs. It is implemented in the Command class. The Command class provides an Execute method to be overridden in derived classes.
There are three ways to define a command with SimpleITK in Python.
Derive from the Command class.
Use the PyCommand class' SetCallbackPyCallable method.
Use an inline lambda function in ProcessObject::AddCommand.
End of explanation
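As a compact sketch of the three approaches, using only classes already shown in this notebook (the filter in the last line is a placeholder for any ProcessObject):
# 1. Derive from sitk.Command and override Execute()
class StartMessage(sitk.Command):
    def Execute(self):
        print("Starting...")

# 2. Wrap a Python callable in a PyCommand
py_cmd = sitk.PyCommand()
py_cmd.SetCallbackPyCallable(lambda: print("PyCommand called"))

# 3. Pass a lambda directly to AddCommand (demonstrated later in this notebook)
# some_filter.AddCommand(sitk.sitkStartEvent, lambda: print("Starting..."))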
size = 256
filter = sitk.GaborImageSource()
filter.SetOutputPixelType(sitk.sitkFloat32)
filter.SetSize([size] * 3)
filter.SetSigma([size * 0.2] * 3)
filter.SetMean([size * 0.5] * 3)
filter.SetFrequency(0.1)
img = filter.Execute()
myshow3d(img, zslices=[int(size / 2)], dpi=40);
Explanation: Back to watching the progress of our Gabor image source. First let's create the filter as an object
End of explanation
help(sitk.ProcessObject)
Explanation: The ProcessObject interface for the Invoker or Subject
SimpleITK doesn't have a large hierarchy of inheritance. It has been kept to a minimum, so there is no common Object or LightObject base class as ITK has. As most of the goals for the events have to do with observing processes, the "Subject" interface of the Observer pattern or the "Invoker" part of the Command design pattern has been added to a ProcessObject base class for filters.
The ProcessObject base class has the following methods of handling commands: AddCommand, RemoveAllCommands, and HasCommand.
These functionalities are not available in the procedural interface of SimpleITK; they are only available through the object-oriented interface, and they break the method chaining interface.
End of explanation
class MyCommand(sitk.Command):
def __init__(self, msg):
# required
super(MyCommand, self).__init__()
self.msg = msg
def __del__(self):
print(f'MyCommand begin deleted: "{self.msg}"')
def Execute(self):
print(self.msg)
cmd1 = MyCommand("Start")
cmd2 = MyCommand("End")
filter.RemoveAllCommands() # this line is here so we can easily re-execute this code block
filter.AddCommand(sitk.sitkStartEvent, cmd1)
filter.AddCommand(sitk.sitkEndEvent, cmd2)
filter.Execute()
Explanation: Deriving from the Command class
The traditional way of using Commands in ITK involves deriving from the Command class and adding to the ProcessObject.
End of explanation
filter.AddCommand(sitk.sitkStartEvent, MyCommand("stack scope"))
print("Before Execution")
filter.Execute()
Explanation: A reference to the Command object must be maintained, or else it will be removed from the ProcessObject.
End of explanation
filter.RemoveAllCommands() # this line is here so we can easily re-execute this code block
filter.AddCommand(sitk.sitkStartEvent, lambda: print("Starting...", end=""))
filter.AddCommand(sitk.sitkStartEvent, lambda: sys.stdout.flush())
filter.AddCommand(sitk.sitkEndEvent, lambda: print("Done"))
filter.Execute()
Explanation: Using a lambda function as the Command
In Python the AddCommand has been extended to accept PyCommand objects and implicitly creates a PyCommand from a callable python argument. This is really useful.
End of explanation
filter.RemoveAllCommands()
filter.AddCommand(
sitk.sitkProgressEvent,
lambda: print(f"\rProgress: {100*filter.GetProgress():03.1f}%...", end=""),
)
filter.AddCommand(sitk.sitkProgressEvent, lambda: sys.stdout.flush())
filter.AddCommand(sitk.sitkEndEvent, lambda: print("Done"))
filter.Execute()
Explanation: Access to ITK data during command execution
The commands are not too useful unless you can query the filter through the SimpleITK interface. A couple status variables and methods are exposed in the SimpleITK ProcessObject through the polymorphic interface of the same ITK class.
End of explanation
import uuid
from IPython.display import HTML, Javascript, display
divid = str(uuid.uuid4())
html_progress = f"""
<p style="margin:5px">FilterName:</p>
<div style="border: 1px solid black;padding:1px;margin:5px">
<div id="{divid}" style="background-color:blue; width:0%"> </div>
</div>
"""
def command_js_progress(processObject):
p = processObject.GetProgress()
display(Javascript("$('div#%s').width('%i%%')" % (divid, int(p * 100))))
filter.RemoveAllCommands()
filter.AddCommand(sitk.sitkStartEvent, lambda: display(HTML(html_progress)))
filter.AddCommand(sitk.sitkProgressEvent, lambda: command_js_progress(filter))
filter.Execute()
Explanation: Utilizing Jupyter Notebooks and Commands
Utilization of commands and events frequently occurs with advanced integration into graphical user interfaces. Let us now export this advanced integration into Jupyter Notebooks.
Jupyter notebooks support displaying output as HTML, and execution of javascript on demand. Together this can produce animation.
End of explanation
import uuid
from IPython.display import HTML, Javascript, display
g_Abort = False
divid = str(uuid.uuid4())
html_progress_abort = f"""
<div style="background-color:gainsboro; border:2px solid black;padding:15px">
<p style="margin:5px">FilterName:</p>
<div style="border: 1px solid black;padding:1px;margin:5px">
<div id="{divid}" style="background-color:blue; width:0%"> </div>
</div>
<button onclick="set_value()" style="margin:5px" >Abort</button>
</div>
"""
javascript_abort = """
<script type="text/Javascript">
function set_value(){
var command = "g_Abort=True"
console.log("Executing Command: " + command);
var kernel = IPython.notebook.kernel;
kernel.execute(command);
}
</script>
"""
def command_js_progress_abort(processObject):
p = processObject.GetProgress()
display(Javascript("$('div#%s').width('%i%%')" % (divid, int(p * 100))))
if g_Abort:
processObject.Abort()
def command_js_start_abort():
g_Abort = False
g_Abort = False
filter.RemoveAllCommands()
filter.AddCommand(sitk.sitkStartEvent, command_js_start_abort)
filter.AddCommand(
sitk.sitkStartEvent, lambda: display(HTML(html_progress_abort + javascript_abort))
)
filter.AddCommand(sitk.sitkProgressEvent, lambda: command_js_progress_abort(filter))
Explanation: Support for Bi-direction JavaScript
It's possible to get a button in HTML to execute Python code...
End of explanation
import threading
threading.Thread(target=lambda: filter.Execute()).start()
Explanation: A caveat with this approach is that the IPython kernel must continue to execute while the filter is running. So we must place the filter in a thread.
End of explanation
#### The following shows a failure that you will want to avoid.
threading.Thread(target=lambda: filter.Execute()).start()
Explanation: While the lambda commands are convenient, the lack of an object to hold data can still be problematic. For example, in the above code the uuid is used to uniquely identify the HTML element, so if the filter is executed multiple times then the JavaScript update will be confused about what to update.
End of explanation
import uuid
from IPython.display import HTML, Javascript, display
class HTMLProgressWatcher:
def __init__(self, po):
self.processObject = po
self.abort = False
po.AddCommand(sitk.sitkStartEvent, lambda: self.cmdStartEvent())
po.AddCommand(sitk.sitkProgressEvent, lambda: self.cmdProgressEvent())
po.AddCommand(sitk.sitkEndEvent, lambda: self.cmdEndEvent())
def cmdStartEvent(self):
global sitkIPythonProgress_UUID
self.abort = False
self.divid = str(uuid.uuid4())
try:
sitkIPythonProgress_UUID[self.divid] = self
except NameError:
sitkIPythonProgress_UUID = {self.divid: self}
html_progress_abort = f"""
<p style="margin:5px">{self.processObject.GetName()}:</p>
<div style="border: 1px solid black;padding:1px;margin:5px">
<div id="{self.divid}" style="background-color:blue; width:0%"> </div>
</div>
"""
display(HTML(html_progress_abort + javascript_abort))
def cmdProgressEvent(self):
p = self.processObject.GetProgress()
display(Javascript("$('div#%s').width('%i%%')" % (self.divid, int(p * 100))))
if self.abort:
self.processObject.Abort()
def cmdEndEvent(self):
global sitkIPythonProgress_UUID
del sitkIPythonProgress_UUID[self.divid]
del self.divid
filter.RemoveAllCommands()
watcher = HTMLProgressWatcher(filter)
filter.Execute()
?threading.Thread.start
Explanation: A Reusable class for IPython Progress
There currently are too many caveats without support for Abort. Let us create a reusable class which will automatically generate the UUID and just display the progress.
End of explanation |
13,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create some text
Step2: Apply regex | Python Code:
# Load regex package
import re
Explanation: Title: Match Times
Slug: match_times
Summary: Match Times
Date: 2016-05-01 12:00
Category: Regex
Tags: Basics
Authors: Chris Albon
Based on: StackOverflow
Preliminaries
End of explanation
# Create a variable containing a text string
text = 'Chris: 12:34am. Steve: 16:30'
Explanation: Create some text
End of explanation
# Find any text that fits the regex
re.findall(r'([0-1]\d:[0-5]\d)\s*(?:AM|PM)?', text)
Explanation: Apply regex
End of explanation |
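Since the sample text uses a lowercase "am", a case-insensitive variant of the same pattern (a small sketch) is:
# Same pattern, but ignore the case of the AM/PM suffix
re.findall(r'([0-1]\d:[0-5]\d)\s*(?:AM|PM)?', text, flags=re.IGNORECASE)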
13,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anomaly Detection
Step1: Next we transform the DATE column in an appropriate timestamp format, and the fred_dcoilbrenteu SFrame in a TimeSeries object.
Step2: We can plot the fred_dcoilbrenteu time series set as follows.
Step3: Training a Moving Z-Score Model
In this section we train a Moving Z-Score model to reveal where any anomalies exist in the fred_dcoilbrenteu time series.
Step4: The primary output of the Moving Z-score model is the scores field. This TimeSeries object contains
Step5: Of course, the first 252 rows of the scores output don't have a moving average or Z-score. This is because the moving window does not have sufficient data for those observations.
To reveal the 30, lets say, more anomalous data points we can sort the scores SFrame as follows.
Step6: Of cource, a lot more anomalous observations may exist in the fred_dcoilbrenteu time series. A good way to make a final decision on that, is to look at the approximate distribution of the anomaly scores with the SArray.sketch_summary() tool, then get a threshold for the anomaly score with the sketch summary's quantile method. Here we declare the top 1% of the data to be anomalies, characterizing that way 71 data points as "anomalous".
Step7: In the figure below, we plot the original FRED/DCOILBRENTEU time series of "Dollars per Barrel of Crude Oil Brent - Europe", its Moving Average across the years, and the data points that we found to be anomalous.
Step8: Training a Bayesian Changepoints Model
In this second part of our analysis we train a Bayesian Changepoints model to reveal where any anomalies exist in the fred_dcoilbrenteu time series.
Step9: The primary output of the Moving Z-score model is the scores field. This TimeSeries object contains
Step10: To reveal the 30, lets say, more anomalous data points we can sort the scores SFrame as follows.
Step11: One interesting thing is that if you look at the tail of scores, you will see a handful of missing values. These data points have insufficient data after them to compute lagged changepoint scores. The number of missing values in the tail of the dataset can be reduced by equally reducing the lag parameter in our learning model. However, the returned results will be less accurate. Alternatively, one can choose to update the model with new data.
Step12: Of cource, a lot more anomalous observations may exist in the fred_dcoilbrenteu time series. A good way to make a final decision on that, is to look at the approximate distribution of the changepoint scores with the SArray.sketch_summary() tool, then get a threshold for the changepoint score with the sketch summary's quantile method. Again, we declare the top 1% of the data to be anomalies, characterizing that way 75 data points as "anomalous".
Step13: In the figure below, we plot the original FRED/DCOILBRENTEU time series of "Dollars per Barrel of Crude Oil Brent - Europe", its Moving Average across the years, and the data points that we found to be anomalous with both the Moving Z-Score and the Bayesian Changepoint model. | Python Code:
import graphlab as gl
import matplotlib.pyplot as plt
fred_dcoilbrenteu = gl.SFrame.read_csv('./FRED-DCOILBRENTEU.csv')
fred_dcoilbrenteu
Explanation: Anomaly Detection: Moving Z-Score and Bayesian Changepoints Model
Introductory Remarks
Anomalies are data points that are different from other observations in some way, typically measured against a model fit to the data. In contrast with ordinary descriptive statistics, we are interested here in finding where these anomalous data points exist, not in excluding them as outliers.
We assume the anomaly detection task is unsupervised, i.e. we donโt have training data with points labeled as anomalous. Each data point passed to an anomaly detection model is given a score indicating how different the point is relative to the rest of the dataset. The calculation of this score varies between models, but a higher score always indicates a point is more anomalous. Often a threshold is chosen to make a final classification of each point as typical or anomalous; this post-processing step is left to the user.
The GraphLab Create (GLC) Anomaly Detection toolkit currently includes three models for two different data contexts:
Local Outlier Factor, for detecting outliers in multivariate data that are assumed to be independently and identically distributed,
Moving Z-score, for scoring outliers in a univariate, sequential dataset, typically a time series, and
Bayesian Changepoints for identifying changes in the mean or variance of a sequential series.
In this short note, we demonstrate how the Moving Z-Score and Bayesian Changepoints models can be used to reveal anomalies in a time series object. As an example we are going to use the "Crude Oil Prices: Brent - Europe" time series, FRED-DCOILBRENTEU, as it is currently provided by the Quandl database of finance and economic data and the Federal Reserve Bank of St. Luis. This times series covers the daily closing prices of Crude Oil Brent - Europe (Dollars per Barrel, Not Seasonally Adjusted) starting from May 1987 to May 2016. It follows a pretty volatile behavior across the years, and we hope to find out where the most anomalous spot values are. For notes and definitions, please see the corresponding US Energy Information Agency (eia), Explanatory Notes.
The GLC Moving Z-Score Model
In a first step of our analysis, we are going to use the GLC Moving Z-Score implementation. This unsupervised learning model fits a moving average to a univariate time series and identifies points that are far from the fitted curve. The MovingZScoreModel works with either TimeSeries or SFrame inputs. A uniform sampling rate is assumed and the data window must be defined in terms of number of observations.
The moving Z-score for a data point $x_{t}$ is simply the value of $x_{t}$ standardized by subtracting the moving mean just prior to time $t$ and dividing by the moving standard deviation which is calculated for the same time interval. In particular, assuming that $w$ stands for the window_size in terms of the number of observations the moving Z-score is defined as:
\begin{equation}
z(x_{t}) = \frac{x_{t}-\bar{x}_{t}}{s_{t}},
\end{equation}
where the moving average is:
\begin{equation}
\bar{x}_{t} = (1/w)\,\sum_{i=t-w}^{t-1}x_{i},
\end{equation}
and the standard deviation for the same time interval:
\begin{equation}
s_{t} = \sqrt{(1/w)\,\sum_{i=t-w}^{t-1}(x_{i}-\bar{x}_{t})^{2}}.
\end{equation}
Notes:
The moving Z-score at points within the window_size observations of the beginning of a series are not defined, because there are insufficient points to compute the moving average and moving standard deviation. This is represented by missing values.
Missing values in the input dataset are assigned missing values (โNoneโ) for their anomaly scores as well.
If there is no variation in the values preceding a given observation, the moving Z-score can be infinite or undefined. If the given observation is equal to the moving average, the anomaly score is coded as 'nan'; if the observation is not equal to the moving average, the anomaly score is 'inf'.
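The same statistic is easy to reproduce outside of GraphLab Create; a minimal pandas sketch of the moving Z-score defined above (assuming a pandas Series named values) is:
import pandas as pd

def moving_zscore(values, window):
    # Moving mean / std over the `window` observations *before* each point
    prior = values.shift(1)
    mean = prior.rolling(window).mean()
    std = prior.rolling(window).std(ddof=0)
    return (values - mean) / std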
The GLC Bayesian Changepoints Model
As a next step of our analysis we are going to use the GLC Bayesian Changepoints model and compare the results of these two methods. The Bayesian Changepoints implementation scores changepoint probability in a univariate sequential dataset, often a time series. Changepoints are abrupt changes in the mean or variance of a time series. For instance, during an economic recession, stock values might suddenly drop to a very low value. The time at which the stock value dropped is called a changepoint.
The Bayesian Changepoints model is an implementation of the Bayesian Online Changepoint Detection algorithm developed by Ryan Adams and David MacKay. This algorithm computes a probability distribution over the possible run lengths at each point in the data, where run length refers to the number of observations since the last changepoint. When the probability of a 0-length run spikes, there is most likely a change point at the current data point.
More specifically, the algorithm follows the procedure below:
Step 1: Observe new datum $x_{t}$ and evaluate the likelihood of seeing this value for each possible run length. This is a probability vector, with an element for all possible run lengths. A Gaussian distribution between each pair of changepoints is assumed.
\begin{equation}
L(r)= P(x|x_{r})
\end{equation}
Step 2: For each possible run length, $r>0$, at current time $t$, calculate the probability of growth. expected_runlength is a parameter describing the a-priori best guess of run length. The larger expected_runlength is, the stronger the evidence must be in the data to support a high changepoint probability.
\begin{equation}
P_{t}(runlength\equiv r) = P_{t-1}(runlength\equiv r-1)\ast L(r)\ast \left(1-\frac{1}{\text{expected\_runlength}}\right)
\end{equation}
Step 3: Calculate probability of change, or $r=0$.
\begin{equation}
P_{t}(runlength\equiv 0)= \sum_{r_{prev}}\left[P_{t-1}(runlength\equiv r_{prev})\ast L(0)\ast \left(\frac{1}{\text{expected\_runlength}}\right)\right]
\end{equation}
Step 4: Normalize the probability. For all run length probabilities at time $t$, divide by the sum of all run length probabilities.
\begin{equation}
P_{t}(runlength\equiv r_{i})=\frac{P_{t}(runlength\equiv r_{i})}{\sum_{r}P_{t}(runlength\equiv r)}
\end{equation}
For each incoming point, this process is repeated.
This per-point update is why the method is considered an online learning algorithm.
As described, the algorithm scores each point $x_{t}$ immediately, but if the user can afford to wait several observations, it is often more accurate to assign lagged changepoint scores. The number of observations to wait before scoring a point is set with the lag parameter.
Libraries and Necessary Data Transformation
First we fire up GraphLab Create, all the other necessary libraries for our study and load the FRED/DCOILBRENTEU data set in an SFrame.
End of explanation
import time
import dateutil
def _unix_timestamp_to_datetime(x):
import datetime
import pytz
return dateutil.parser.parse(x)
fred_dcoilbrenteu['DATE'] = fred_dcoilbrenteu['DATE'].apply(_unix_timestamp_to_datetime)
fred_dcoilbrenteu = gl.TimeSeries(fred_dcoilbrenteu, index='DATE')
fred_dcoilbrenteu
Explanation: Next we transform the DATE column in an appropriate timestamp format, and the fred_dcoilbrenteu SFrame in a TimeSeries object.
End of explanation
%matplotlib inline
def plot_time_series(timestamp, values, title, **kwargs):
plt.rcParams['figure.figsize'] = 14, 7
plt.plot_date(timestamp, values, fmt='g-', tz='utc', **kwargs)
plt.title(title)
plt.xlabel('Year')
plt.ylabel('Dollars per Barrel')
plt.rcParams.update({'font.size': 16})
plot_time_series(fred_dcoilbrenteu['DATE'], fred_dcoilbrenteu['VALUE'],\
'Crude Oil Prices: Brent - Europe [FRED/DCOILBRENTEU]')
Explanation: We can plot the fred_dcoilbrenteu time series set as follows.
End of explanation
window_size = 252 # average trading days per year
model_moving_zscore =gl.anomaly_detection.moving_zscore.create(fred_dcoilbrenteu,
window_size, feature='VALUE')
Explanation: Training a Moving Z-Score Model
In this section we train a Moving Z-Score model to reveal where any anomalies exist in the fred_dcoilbrenteu time series.
End of explanation
scores = model_moving_zscore.scores.to_sframe()
scores.print_rows(num_rows=10, max_row_width=100)
scores[252-10:252+10].print_rows(num_rows=60, max_row_width=100)
Explanation: The primary output of the Moving Z-score model is the scores field. This TimeSeries object contains:
row id/time: ID of the corresponding row in the input dataset. Here the dataset is a TimeSeries object and the model returns the DATE timestamp. If it was an SFrame, this column would be filled with the row numbers of the input data.
anomaly score: absolute value of the moving Z-score. A score of 0 indicates that the value is identical to the moving average. The higher the score, the more likely a point is to be an anomaly.
VALUE: the recorded value of Dollars per Barrel of "Crude Oil Brent - Europe".
model update time: time that the model was updated. This is particularly useful for model updating.
End of explanation
scores.sort('anomaly_score', ascending=False).print_rows(num_rows=30, max_row_width=100)
Explanation: Of course, the first 252 rows of the scores output don't have a moving average or Z-score. This is because the moving window does not have sufficient data for those observations.
To reveal the 30, lets say, more anomalous data points we can sort the scores SFrame as follows.
End of explanation
sketch = scores['anomaly_score'].sketch_summary()
threshold = sketch.quantile(0.99)
anomalies = scores[scores['anomaly_score'] > threshold]
anomalies.print_rows(num_rows=30, max_row_width=100)
Explanation: Of course, a lot more anomalous observations may exist in the fred_dcoilbrenteu time series. A good way to make a final decision on that is to look at the approximate distribution of the anomaly scores with the SArray.sketch_summary() tool, then get a threshold for the anomaly score with the sketch summary's quantile method. Here we declare the top 1% of the data to be anomalies, which characterizes 71 data points as "anomalous".
End of explanation
%matplotlib inline
plot_time_series(fred_dcoilbrenteu['DATE'], fred_dcoilbrenteu['VALUE'],\
'Crude Oil Prices: Brent - Europe [FRED/DCOILBRENTEU]', label='FRED/DCOILBRENTEU')
plt.plot_date(scores['DATE'], scores['moving_average'], fmt='b-', tz='utc', lw=2, label='Moving Average')
plt.plot(anomalies['DATE'], anomalies['VALUE'], 'rx', markersize=12, markeredgewidth=1.3, label='Anomalies')
plt.legend(loc='upper left', prop={'size': 16})
plt.show()
Explanation: In the figure below, we plot the original FRED/DCOILBRENTEU time series of "Dollars per Barrel of Crude Oil Brent - Europe", its Moving Average across the years, and the data points that we found to be anomalous.
End of explanation
model_bayesian_changepoints = gl.anomaly_detection.bayesian_changepoints.\
create(fred_dcoilbrenteu,
feature='VALUE',
# avg trading days per year
expected_runlength = 252,
# avg trading days per fiscal quarter
lag=63)
Explanation: Training a Bayesian Changepoints Model
In this second part of our analysis we train a Bayesian Changepoints model to reveal where any anomalies exist in the fred_dcoilbrenteu time series.
End of explanation
scores2 = model_bayesian_changepoints.scores.to_sframe()
scores2.print_rows(num_rows=10, max_row_width=100)
Explanation: The primary output of the Bayesian Changepoints model is the scores field. This TimeSeries object contains:
row id/time: ID of the corresponding row in the input dataset. Here the dataset is a TimeSeries object and the model returns the DATE timestamp. If it was an SFrame, this column would be filled with the row numbers of the input data.
changepoint_score: The probability that the given point is a changepoint. This value is in a range between 0 and 1.
VALUE: the recorded value of Dollars per Barrel of "Crude Oil Brent - Europe".
model update time: time that the model was updated. This is particularly useful for model updating.
End of explanation
scores2.sort('changepoint_score', ascending=False).print_rows(num_rows=30, max_row_width=100)
Explanation: To reveal the 30, lets say, more anomalous data points we can sort the scores SFrame as follows.
End of explanation
scores2.tail(80).print_rows(num_rows=80, max_row_width=100)
Explanation: One interesting thing is that if you look at the tail of scores, you will see a handful of missing values. These data points have insufficient data after them to compute lagged changepoint scores. The number of missing values in the tail of the dataset can be reduced by equally reducing the lag parameter in our learning model. However, the returned results will be less accurate. Alternatively, one can choose to update the model with new data.
End of explanation
sketch2 = scores2['changepoint_score'].sketch_summary()
threshold2 = sketch2.quantile(0.99)
changepoints = scores2[scores2['changepoint_score'] > threshold2]
changepoints.print_rows(num_rows=105, max_row_width=100)
Explanation: Of course, a lot more anomalous observations may exist in the fred_dcoilbrenteu time series. A good way to make a final decision on that is to look at the approximate distribution of the changepoint scores with the SArray.sketch_summary() tool, then get a threshold for the changepoint score with the sketch summary's quantile method. Again, we declare the top 1% of the data to be anomalies, which characterizes 75 data points as "anomalous".
End of explanation
%matplotlib inline
plt.rcParams['figure.figsize'] = 14, 24
plt.figure(1)
plt.subplot(3,1,1)
plt.plot_date(fred_dcoilbrenteu['DATE'], fred_dcoilbrenteu['VALUE'],\
fmt='g-', tz='utc', label='FRED/DCOILBRENTEU')
plt.plot_date(scores['DATE'], scores['moving_average'],\
fmt='b-', tz='utc', lw=2, label='Moving Average')
plt.xlabel('Year')
plt.ylabel('Dollars per Barrel')
plt.title('Crude Oil Prices: Brent - Europe [FRED/DCOILBRENTEU]')
plt.rcParams.update({'font.size': 16})
plt.plot(anomalies['DATE'], anomalies['VALUE'],\
'bx', markersize=12, markeredgewidth=1.3, label='Anomalies [Moving Z-Score]')
plt.legend(loc='upper left', prop={'size': 16})
plt.subplot(3,1,2)
plt.plot_date(fred_dcoilbrenteu['DATE'], fred_dcoilbrenteu['VALUE'],\
fmt='g-', tz='utc', label='FRED/DCOILBRENTEU')
plt.plot_date(scores['DATE'], scores['moving_average'],\
fmt='b-', tz='utc', lw=2, label='Moving Average')
plt.xlabel('Year')
plt.ylabel('Dollars per Barrel')
plt.title('Crude Oil Prices: Brent - Europe [FRED/DCOILBRENTEU]')
plt.rcParams.update({'font.size': 16})
plt.plot(changepoints['DATE'], changepoints['VALUE'],\
'rx', markersize=12, markeredgewidth=1.3, label='Anomalies [Bayesian Changepoints]')
plt.legend(loc='upper left', prop={'size': 16})
plt.subplot(3,1,3)
plt.plot_date(scores2['DATE'], scores2['changepoint_score'],\
fmt='r-', tz='utc', lw=2, label='Bayesian Changepoint Probability')
plt.rcParams.update({'font.size': 16})
plt.xlabel('Year')
plt.ylabel('Changepoint Probability')
plt.title('Crude Oil Prices: Brent - Europe [FRED/DCOILBRENTEU]')
plt.legend(loc='upper left', prop={'size': 16})
plt.show()
Explanation: In the figure below, we plot the original FRED/DCOILBRENTEU time series of "Dollars per Barrel of Crude Oil Brent - Europe", its Moving Average across the years, and the data points that we found to be anomalous with both the Moving Z-Score and the Bayesian Changepoint model.
End of explanation |
13,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Display sensitivity maps for EEG and MEG sensors
Sensitivity maps can be produced from forward operators that
indicate how well different sensor types will be able to detect
neural currents from different regions of the brain.
To get started with forward modeling see tut-forward.
Step1: Compute sensitivity maps
Step2: Show gain matrix a.k.a. leadfield matrix with sensitivity map | Python Code:
# Author: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
# Read the forward solutions with surface orientation
fwd = mne.read_forward_solution(fwd_fname)
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
leadfield = fwd['sol']['data']
print("Leadfield size : %d x %d" % leadfield.shape)
Explanation: Display sensitivity maps for EEG and MEG sensors
Sensitivity maps can be produced from forward operators that
indicate how well different sensor types will be able to detect
neural currents from different regions of the brain.
To get started with forward modeling see tut-forward.
End of explanation
grad_map = mne.sensitivity_map(fwd, ch_type='grad', mode='fixed')
mag_map = mne.sensitivity_map(fwd, ch_type='mag', mode='fixed')
eeg_map = mne.sensitivity_map(fwd, ch_type='eeg', mode='fixed')
Explanation: Compute sensitivity maps
End of explanation
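# Added sanity check (not in the original example): inspect the gradiometer map's
# underlying `data` array, the same values that are histogrammed further below.
print(grad_map.data.shape)
print(grad_map.data.min(), grad_map.data.max())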
picks_meg = mne.pick_types(fwd['info'], meg=True, eeg=False)
picks_eeg = mne.pick_types(fwd['info'], meg=False, eeg=True)
fig, axes = plt.subplots(2, 1, figsize=(10, 8), sharex=True)
fig.suptitle('Lead field matrix (500 dipoles only)', fontsize=14)
for ax, picks, ch_type in zip(axes, [picks_meg, picks_eeg], ['meg', 'eeg']):
im = ax.imshow(leadfield[picks, :500], origin='lower', aspect='auto',
cmap='RdBu_r')
ax.set_title(ch_type.upper())
ax.set_xlabel('sources')
ax.set_ylabel('sensors')
fig.colorbar(im, ax=ax)
fig_2, ax = plt.subplots()
ax.hist([grad_map.data.ravel(), mag_map.data.ravel(), eeg_map.data.ravel()],
bins=20, label=['Gradiometers', 'Magnetometers', 'EEG'],
color=['c', 'b', 'k'])
fig_2.legend()
ax.set(title='Normal orientation sensitivity',
xlabel='sensitivity', ylabel='count')
grad_map.plot(time_label='Gradiometer sensitivity', subjects_dir=subjects_dir,
clim=dict(lims=[0, 50, 100]))
Explanation: Show gain matrix a.k.a. leadfield matrix with sensitivity map
End of explanation |
13,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: AWS (S3, Redshift, Kinesis) + Databricks Spark = Real-time Smart Meter Analytics
Create S3 Bucket
Step2: Copy Postgres to S3 via Postgres dump to CSV and s3cmd upload
Step3: Amazon Redshift
Step4: create table electricity (
dataid integer not null,
localhour timestamp not null distkey sortkey,
use decimal(30,26),
air1 decimal(30,26),
furnace1 decimal(30,26),
car1 decimal(30,26)
);
create table weather (
localhour timestamp not null distkey sortkey,
latitude decimal(30,26),
longitude decimal(30,26),
temperature decimal(30,26),
city varchar(20)
);
create table metadata (
dataid integer distkey sortkey,
city varchar(20),
state varchar(20)
);
Step7: Databricks Spark Analysis (see Databricks) | Python Code:
# Imports used throughout this notebook (boto3/botocore for AWS, pandas for the CSV
# cleanup, psycopg2 for Redshift, seaborn for plot styling)
import boto3
import botocore
import pandas as pd
import psycopg2
import seaborn as sns

s3 = boto3.client('s3')
s3.list_buckets()
def create_s3_bucket(bucketname):
    """Quick method to create bucket with exception handling"""
s3 = boto3.resource('s3')
exists = True
bucket = s3.Bucket(bucketname)
try:
s3.meta.client.head_bucket(Bucket=bucketname)
except botocore.exceptions.ClientError as e:
error_code = int(e.response['Error']['Code'])
if error_code == 404:
exists = False
if exists:
print 'Bucket {} already exists'.format(bucketname)
else:
s3.create_bucket(Bucket=bucketname, GrantFullControl='dkelly628')
create_s3_bucket('pecanstreetresearch-2016')
Explanation: AWS (S3, Redshift, Kinesis) + Databricks Spark = Real-time Smart Meter Analytics
Create S3 Bucket
End of explanation
# Note: Used s3cmd tools because awscli tools not working in conda env
# 14m rows or ~ 1.2 GB local unzipped; 10min write to CSV and another 10min to upload to S3
# !s3cmd put ~/Users/Doug/PecanStreet/electricity-03-06-2016.csv s3://pecanstreetresearch-2016/electricity-03-06-2016.csv
# 200k rows ~ 15 MB local unzipped; 30 sec write to CSV and 15 sec upload to S3
# !s3cmd put ~/Users/Doug/PecanStreet/weather-03-06-2016.csv s3://pecanstreetresearch-2016/weather-03-06-2016.csv
Explanation: Copy Postgres to S3 via Postgres dump to CSV and s3cmd upload
End of explanation
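# Added sketch (the author used s3cmd above): the same uploads can be done from Python
# with boto3; the object keys match the S3 paths referenced by the COPY commands below.
s3.upload_file('/Users/Doug/PecanStreet/electricity-03-06-2016.csv',
               'pecanstreetresearch-2016', 'electricity/electricity-03-06-2016.csv')
s3.upload_file('/Users/Doug/PecanStreet/weather-03-06-2016.csv',
               'pecanstreetresearch-2016', 'weather/weather-03-06-2016.csv')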
# Quick geohashing before uploading to Redshift
weather_df = pd.read_csv('/Users/Doug/PecanStreet/weather_03-06-2016.csv')
weather_df.groupby(['latitude', 'longitude', 'city']).count()
# Tag each observation with a city name based on its latitude
# (30.292432 -> Austin, 40.027278 -> Boulder, per the exploratory lines here)
weather_df['city'] = ''
weather_df.loc[weather_df.latitude == 30.292432, 'city'] = 'Austin'
weather_df.loc[weather_df.latitude == 40.027278, 'city'] = 'Boulder'
weather_df.city.unique()
# weather_df['city'][weather_df.latitude==40.027278] = 'Boulder'
weather_df.to_csv('/Users/Doug/PecanStreet/weather-03-07-2016.csv', index=False)
metadata_df = pd.read_csv('/Users/Doug/PecanStreet/dataport-metadata.csv')
metadata_df = metadata_df[['dataid','city', 'state']]
metadata_df.to_csv('/Users/Doug/PecanStreet/metadata.csv', index=False)
# !s3cmd put metadata.csv s3://pecanstreetresearch-2016/metadata/metadata.csv
redshift = boto3.client('redshift')
# redshift.describe_clusters()
# psql -h pecanstreet.czxmxphrw2wv.us-east-1.redshift.amazonaws.com -U dkelly628 -d electricity -p 5439
Explanation: Amazon Redshift: NoSQL Columnar Data Warehouse
Quick data cleanup before ETL
End of explanation
# Complete
COPY electricity
FROM 's3://pecanstreetresearch-2016/electricity/electricity-03-06-2016.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1
dateformat 'auto';
# Complete
COPY weather
FROM 's3://pecanstreetresearch-2016/weather/weather-03-06-2016.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1
dateformat 'auto';
# Complete
COPY metadata
FROM 's3://pecanstreetresearch-2016/metadata/metadata.csv'
CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
CSV
IGNOREHEADER 1;
# Query for checking error log; invaluable
select query, substring(filename,22,25) as filename,line_number as line,
substring(colname,0,12) as column, type, position as pos, substring(raw_line,0,30) as line_text,
substring(raw_field_value,0,15) as field_text,
substring(err_reason,0,45) as reason
from stl_load_errors
order by query desc
limit 10;
# All table definitions are stored in pg_table_def table; different from Postgres
SELECT DISTINCT tablename
FROM pg_table_def
WHERE schemaname = 'public'
ORDER BY tablename;
# Returns household, time, city, usage by hour, and temperature for all residents in Austin, TX
SELECT e.dataid, e.localhour, m.city, SUM(e.use), w.temperature
FROM electricity AS e
JOIN weather AS w ON e.localhour = w.localhour
JOIN metadata AS m ON e.dataid = m.dataid
WHERE m.city = 'Austin'
GROUP BY e.dataid, e.localhour, m.city, w.temperature;
# Returns number of participants by city, state
SELECT m.city, m.state, COUNT(e.dataid) AS participants
FROM electricity AS e
JOIN metadata AS m ON e.dataid = m.dataid
GROUP BY m.city, m.state;
# Setup connection to Pecan Street Dataport
try:
conn = psycopg2.connect("dbname='electricity' user='dkelly628' host='pecanstreet.czxmxphrw2wv.us-east-1.redshift.amazonaws.com' port='5439' password='password'")
except:
    print "Error: Check there aren't any open connections in notebook or pgAdmin"
electricity_df = pd.read_sql("SELECT localhour, SUM(use) AS usage, SUM(air1) AS cooling, SUM(furnace1) AS heating, \
SUM(car1) AS electric_vehicle \
FROM electricity \
WHERE dataid = 7982 AND use > 0 \
AND localhour BETWEEN '2013-10-16 00:00:00'::timestamp AND \
'2016-02-26 08:00:00'::timestamp \
GROUP BY dataid, localhour \
ORDER BY localhour", conn)
electricity_df['localhour'] = electricity_df.localhour.apply(pd.to_datetime)
electricity_df.set_index('localhour', inplace=True)
electricity_df.fillna(value=0.0, inplace=True)
electricity_df[['usage','cooling']].plot(figsize=(18,9), title="Pecan Street Household 7982 Hourly Energy Consumption")
sns.despine();
Explanation: create table electricity (
dataid integer not null,
localhour timestamp not null distkey sortkey,
use decimal(30,26),
air1 decimal(30,26),
furnace1 decimal(30,26),
car1 decimal(30,26)
);
create table weather (
localhour timestamp not null distkey sortkey,
latitude decimal(30,26),
longitude decimal(30,26),
temperature decimal(30,26),
city varchar(20)
);
create table metadata (
dataid integer distkey sortkey,
city varchar(20),
state varchar(20)
);
End of explanation
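# Added sketch: the COPY statements above were run in psql, but they can also be issued
# through the psycopg2 connection opened earlier (credential placeholders kept as-is).
cur = conn.cursor()
cur.execute("""
    COPY metadata
    FROM 's3://pecanstreetresearch-2016/metadata/metadata.csv'
    CREDENTIALS 'aws_access_key_id=AWS_ACCESS_KEY_ID;aws_secret_access_key=AWS_SECRET_ACCESS_KEY'
    CSV
    IGNOREHEADER 1;
""")
conn.commit()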
kinesis = boto3.client('kinesis')
kinesis.create_stream(StreamName='PecanStreet', ShardCount=2)
kinesis.list_streams()
firehose = boto3.client('firehose')
# firehose.create_delivery_stream(DeliveryStreamName='pecanstreetfirehose', S3DestinationConfiguration={'RoleARN': '', 'BucketARN': 'pecanstreetresearch-2016'})
firehose.list_delivery_streams()
def kinesis_write(stream, data, partition_key):
    """Method that writes a single record to the Kinesis stream"""
    kinesis = boto3.client('kinesis')
    kinesis.put_record(StreamName=stream, Data=data, PartitionKey=partition_key)

def kinesis_read(stream):
    """Method to read from the Kinesis stream (left as a stub in the original)"""
    pass
Explanation: Databricks Spark Analysis (see Databricks): Batch analytics on S3, Streaming using Amazon Kinesis Stream
Create Amazon Kinesis Stream for writing streaming data to S3
End of explanation |
13,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multivariate Gaussian normal distribution
The multivariate Gaussian normal distribution, or simply the multivariate normal distribution (MVN
Step1: Case 2
If
$$
\mu = \begin{bmatrix}2 \ 3 \end{bmatrix}. \;\;\;
\Sigma = \begin{bmatrix}2 & -1 \ 2 & 4 \end{bmatrix}
$$
then
$$
| \Sigma | = 10,\;\;\;
\Sigma^{-1} = \begin{bmatrix}0.4 & 0.1 \ -0.2 & 0.2 \end{bmatrix}
$$
$$
(x-\mu)^T \Sigma^{-1} (x-\mu) =
\begin{bmatrix}x_1 - 2 & x_2 - 3 \end{bmatrix}
\begin{bmatrix}0.4 & 0.1 \ -0.2 & 0.2 \end{bmatrix}
\begin{bmatrix}x_1 - 2 \ x_2 - 3 \end{bmatrix}
=
\dfrac{1}{10}\left(4(x_1 - 2)^2 - (x_1 - 2)(x_2 - 3) + 2(x_2 - 3)^2\right)
$$
$$
\mathcal{N}(x_1, x_2) = \dfrac{1}{\sqrt{20\pi}}
\exp \left( -\dfrac{1}{20}\left(4(x_1 - 2)^2 - (x_1 - 2)(x_2 - 3) + 2(x_2 - 3)^2\right) \right)
$$
The shape of this probability density function looks like the following. | Python Code:
mu = [2, 3]
cov = [[1, 0], [0, 1]]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
plt.axis("equal")
plt.show()
Explanation: Multivariate Gaussian normal distribution
The multivariate Gaussian normal distribution, or simply the multivariate normal distribution (MVN: Multivariate Normal), is the distribution most commonly used to model multiple random variables.
The probability density function of the $D$-dimensional multivariate normal distribution is as follows.
$$ \mathcal{N}(x \mid \mu, \Sigma) = \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu)^T \Sigma^{-1} (x-\mu) \right) $$
Each symbol in this formula has the following meaning.
$x \in \mathbf{R}^D $ random variable vector
$\mu \in \mathbf{R}^D $ mean vector
$\Sigma \in \mathbf{R}^{D\times D} $ covariance matrix
$\Sigma^{-1} \in \mathbf{R}^{D\times D} $ inverse of the covariance matrix
The inverse of the covariance matrix, $\Sigma^{-1}$, is also called the precision matrix or concentration matrix.
Multivariate normal distribution support in SciPy
SciPy's stats subpackage provides the multivariate_normal class for the multivariate normal distribution. It takes the mean vector through the mean argument and the covariance matrix through the cov argument.
Examples of the multivariate normal distribution
Let us look at a few examples of the two-dimensional ($D=2$) multivariate normal distribution.
First, since it is two-dimensional, the random variable vector is
$$
x = \begin{bmatrix}x_1 \ x_2 \end{bmatrix}
$$
Case 1
If
$$
\mu = \begin{bmatrix}2 \ 3 \end{bmatrix}. \;\;\;
\Sigma = \begin{bmatrix}1 & 0 \ 0 & 1 \end{bmatrix}
$$
then
$$
|\Sigma|^{1/2} = 1. \;\;\;
\Sigma^{-1} = \begin{bmatrix}1 & 0 \ 0 & 1 \end{bmatrix}
$$
$$
(x-\mu)^T \Sigma^{-1} (x-\mu) =
\begin{bmatrix}x_1 - 2 & x_2 - 3 \end{bmatrix}
\begin{bmatrix}1 & 0 \ 0 & 1 \end{bmatrix}
\begin{bmatrix}x_1 - 2 \ x_2 - 3 \end{bmatrix}
=
(x_1 - 2)^2 + (x_2 - 3)^2
$$
$$
\mathcal{N}(x_1, x_2) = \dfrac{1}{\sqrt{2\pi}}
\exp \left( -\dfrac{1}{2} \left( (x_1 - 2)^2 + (x_2 - 3)^2 \right) \right)
$$
The shape of this probability density function looks like the following.
End of explanation
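# Added sketch (assumes the same imports as the surrounding cells): the
# multivariate_normal object can also evaluate the density at a single point and
# draw random samples.
rv = sp.stats.multivariate_normal([2, 3], [[1, 0], [0, 1]])
print(rv.pdf([2, 3]))   # density at the mean; about 1/(2*pi) ~ 0.159 for the identity covariance
print(rv.rvs(size=3))   # three random draws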
mu = [2, 3]
cov = [[2, -1],[2, 4]]
rv = sp.stats.multivariate_normal(mu, cov)
xx = np.linspace(0, 4, 120)
yy = np.linspace(1, 5, 150)
XX, YY = np.meshgrid(xx, yy)
plt.grid(False)
plt.contourf(XX, YY, rv.pdf(np.dstack([XX, YY])))
plt.axis("equal")
plt.show()
Explanation: Case 2
If
$$
\mu = \begin{bmatrix}2 \ 3 \end{bmatrix}. \;\;\;
\Sigma = \begin{bmatrix}2 & -1 \ 2 & 4 \end{bmatrix}
$$
then
$$
| \Sigma | = 10,\;\;\;
\Sigma^{-1} = \begin{bmatrix}0.4 & 0.1 \ -0.2 & 0.2 \end{bmatrix}
$$
$$
(x-\mu)^T \Sigma^{-1} (x-\mu) =
\begin{bmatrix}x_1 - 2 & x_2 - 3 \end{bmatrix}
\begin{bmatrix}0.4 & 0.1 \ -0.2 & 0.2 \end{bmatrix}
\begin{bmatrix}x_1 - 2 \ x_2 - 3 \end{bmatrix}
=
\dfrac{1}{10}\left(4(x_1 - 2)^2 - (x_1 - 2)(x_2 - 3) + 2(x_2 - 3)^2\right)
$$
$$
\mathcal{N}(x_1, x_2) = \dfrac{1}{\sqrt{20\pi}}
\exp \left( -\dfrac{1}{20}\left(4(x_1 - 2)^2 - (x_1 - 2)(x_2 - 3) + 2(x_2 - 3)^2\right) \right)
$$
The shape of this probability density function looks like the following.
End of explanation |
13,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finite-Length Capacity of the BSC and BEC Channels
This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.
This code illustrates
* Calculating the finite-length capacity of the BSC and BEC channels using the normal approximation
* Illustrating the finite-length capacity for different code lengths and different error rates
Step1: Binary Symmetric Channel (BSC)
Start with the BSC for which we have
\begin{equation}
C_\text{BSC} = 1 - h(\delta) = 1+\delta\log_2(\delta)+(1-\delta)\log_2(1-\delta)
\end{equation}
and
\begin{equation}
V_\text{BSC} = \delta(1-\delta)\left(\log_2\left(\frac{1-\delta}{\delta}\right)\right)^2
\end{equation}
Step2: The finite-length capacity for the BSC channel is given by
\begin{equation}
r = \frac{\log_2(M)}{n} \approx C - \sqrt{\frac{V}{n}}Q^{-1}(P_e) + \frac{\log_2(n)}{2n}
\end{equation}
We can solve this equation for $P_e$, which gives
\begin{equation}
P_e \approx Q\left(\frac{n(C-r) + \frac{1}{2}\log_2(n)}{\sqrt{Vn}}\right)
\end{equation}
For a given channel (i.e., a given $\delta$), we can compute the capacity $C$ and the dispersion $V$ and then use it to get an estimate of what error rate an ideal code with an idea decoder could achieve. Note that this is only an estimate and we do not know the exact value. However, we can compute upper and lower bounds, which are relatively close to the approximation (beyond the scope of this lecture).
Step3: Show finite length capacity estimates for some codes of different lengths $n$
Step4: Different representation, for a given channel (and here, we pick $\delta = 0.11$), show the rate the code should at most have to allow for decoding with an error rate $P_e$ (here we specify different $P_e$) if a certain length $n$ is available.
Step5: Binary Erasure Channel (BEC)
For the BEC, we have
\begin{equation}
C_\text{BEC} = 1 - \epsilon
\end{equation}
and
\begin{equation}
V_\text{BEC} = \epsilon(1-\epsilon)
\end{equation}
Step6: The finite-length capacity for the BEC channel is given by (note, here we do not use the correction term)
\begin{equation}
r = \frac{\log_2(M)}{n} \approx C - \sqrt{\frac{V}{n}}Q^{-1}(P_e)
\end{equation}
We can solve this equation for $P_e$, which gives
\begin{equation}
P_e \approx Q\left(\frac{\sqrt{n}(C-r)}{\sqrt{V}}\right)
\end{equation}
For a given channel (i.e., a given $\epsilon$), we can compute the capacity $C$ and the dispersion $V$ and then use it to get an estimate of what error rate an ideal code with an idea decoder could achieve. Note that this is only an estimate and we do not know the exact value. However, we can compute upper and lower bounds, which are relatively close to the approximation (beyond the scope of this lecture).
Step7: Show finite length capacity estimates for some codes of different lengths $n$
Step8: Different representation, for a given channel (and here, we pick $\epsilon = 0.5$), show the rate the code should at most have to allow for decoding with an error rate $P_e$ (here we specify different $P_e$) if a certain length $n$ is available.
Step9: Extra Material, Random Coding Union Bound for the BSC
We now additionally show the Random Coding Union (RCU) bound [2, Th. 16] for the BSC, as it is fairly easy to calculate the bound in this case. The RCU bound is not part of the lecture and shown here for completeness.
To get the RCU bound, we assume that we perform ML decoding of the random code with $\boldsymbol{x}^{[1]}$ transmitted. We assume that the channel introduces a total number of $t$ errors. Then let $E_m$ denote the event that codeword $\boldsymbol{x}^{[m]}$ is within a sphere of radius $t$ around the received word $\boldsymbol{Y}$. In this case, if any event $E_m$ with $m \geq 2$ occurs, we may make a decoding error (if more than a single codeword which can include $\boldsymbol{x}^{[1]}$ are within this sphere, we may randomly select one and pick $\boldsymbol{x}^{[1]}$ by chance. Hence, the error probability can be bounded as
\begin{align}
P(\text{decoding error} | \boldsymbol{Y}, t\text{ errors}) &\leq P\left(\bigcup_{m=2}^M E_m \bigg| \boldsymbol{Y}, t\text{ errors}\right) \
&\stackrel{(a)}{\leq} \sum_{m=2}^M P\left(E_m | \boldsymbol{Y}, t\text{ errors}\right) \
&= (M-1)\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ errors}\right) \
&\leq M\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ errors}\right) \
&\stackrel{(b)}{=} M \sum_{j=0}^t\binom{n}{j}\left(\frac{1}{2}\right)^n \
&= 2^{-n(1-r)}\sum_{j=0}^t\binom{n}{j}
\end{align}
where $(a)$ is the union bound and $(b)$ is due to the fact that the probability of choosing a certain codeword is $(\frac12)^n = 2^{-n}$ and there are a total number of $\sum_{j=0}^t\binom{n}{j}$ possible codewords around $\boldsymbol{Y}$ (each chosen with probability $2^{-n}$).
The main trick of [2] is now to observe that the union bound can be often loose and $2^{-n(1-r)}\sum_{j=0}^t\binom{n}{j}$ can become larger than 1. Hence, [2] introduced the tighter bound
\begin{equation}
P(\text{decoding error} | \boldsymbol{Y}, t\text{ errors}) \leq \min\left(1, 2^{-n(1-r)}\sum_{j=0}^t\binom{n}{j}\right)
\end{equation}
The total probability of error is then obtained by noticing that the errors in the BSC follow a binomial distribution, and we can state that
\begin{align}
P_e &= \sum_{t=0}^n\binom{n}{t}\delta^t(1-\delta)^{n-t}P(\text{decoding error} | \boldsymbol{Y}, t\text{ errors}) \
&\leq \sum_{t=0}^n\binom{n}{t}\delta^t(1-\delta)^{n-t} \min\left(1, 2^{-n(1-r)}\sum_{j=0}^t\binom{n}{j}\right)
\end{align}
The bound states that for the BSC with error probability $\delta$, there exists a code (the random code) with $M$ codewords of length $n$ (and rate $r = \frac{\log_2(M)}{n}$) that has an error probability upper bounded by the above bound under ML decoding.
[2] Y. Polyanskiy, H. V. Poor and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, May 2010
Step10: Extra Material, Random Coding Union Bound for the BEC
We now additionally show the Random Coding Union (RCU) bound [2, Th. 16] for the BEC, as it is fairly easy to calculate the bound in this case. The RCU bound is not part of the lecture and shown here for completeness.
To get the RCU bound, we assume that we perform ML decoding of the random code with $\boldsymbol{x}^{[1]}$ transmitted. We assume that the channel introduces a total number of $t$ erasures. At the non-erased positions, the bits have been received correctly. Then let $E_m$ denote the event that codeword $\boldsymbol{x}^{[m]}$ has the same code bits at the non-erased positions as $\boldsymbol{x}^{[1]}$. In this case, the decoder cannot make a decision which codeword to select (they have the same likelihood). It can resolve this tie by randomly selecting a codeword, which may produce a decoding error. Hence, the error probability can be bounded as
\begin{align}
P(\text{decoding error} | \boldsymbol{Y}, t\text{ erasures}) &\leq P\left(\bigcup_{m=2}^M E_m | \boldsymbol{Y}, t\text{ erasures}\right) \
&\stackrel{(a)}{\leq} \sum_{m=2}^M P\left(E_m | \boldsymbol{Y}, t\text{ erasures}\right) \
&= (M-1)\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ erasures}\right) \
&\leq M\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ erasures}\right) \
&\stackrel{(b)}{=} M\left(\frac{1}{2}\right)^{n-t} \
&= 2^{-n(1-r)+t}
\end{align}
where $(a)$ is the union bound and $(b)$ is due to the fact that the probability of choosing $n-t$ positions that are identical to $\boldsymbol{x}^{[1]}$ in these positions is $(\frac12)^{n-t} = 2^{t-n}$.
The main trick of [2] is now to observe that the union bound can be often loose and $2^{-n(1-r)+t}$ can become larger than 1. Hence, [2] introduced the tighter bound
\begin{equation}
P(\text{decoding error} | \boldsymbol{Y}, t\text{ erasures}) \leq \min\left(1, 2^{-n(1-r)+t}\right)
\end{equation}
The total probability of error is then obtained by noticing that the erasures in the BEC follow a binomial distribution, and we can state that
\begin{align}
P_e &= \sum_{t=0}^n\binom{n}{t}\epsilon^t(1-\epsilon)^{n-t}P(\text{decoding error} | \boldsymbol{Y}, t\text{ erasures}) \
&\leq \sum_{t=0}^n\binom{n}{t}\epsilon^t(1-\epsilon)^{n-t} \min\left(1, 2^{-n(1-r)+t}\right)
\end{align}
The bound states that for the BEC with erasure probability $\epsilon$, there exists a code (the random code) with $M$ codewords of length $n$ (and rate $r = \frac{\log_2(M)}{n}$) that has an error probability upper bounded by the above bound under ML decoding.
[2] Y. Polyanskiy, H. V. Poor and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, May 2010 | Python Code:
import numpy as np
from scipy.stats import norm
import matplotlib
import matplotlib.pyplot as plt
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
matplotlib.rc('figure', figsize=(18, 6) )
Explanation: Finite-Length Capacity of the BSC and BEC Channels
This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.
This code illustrates
* Calculating the finite-length capacity of the BSC and BEC channels using the normal approximation
* Illustrating the finite-length capacity for different code lengths and different error rates
End of explanation
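# Added note: the Q-function used in the formulas below is the Gaussian tail
# probability, available as norm.sf; its inverse Q^{-1} is norm.isf.
print(norm.sf(3.0))    # Q(3) is roughly 1.35e-3
print(norm.isf(1e-3))  # Q^{-1}(1e-3) is roughly 3.09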
# capacity of the BSC
def C_BSC(delta):
binary_entropy = -delta*np.log2(delta) - (1-delta)*np.log2(1-delta)
if delta < 1e-20:
binary_entropy = 0
return 1 - binary_entropy
# dispersion of the BSC
def V_BSC(delta):
V = np.square(np.log2((1-delta)/delta)) * delta * (1-delta)
if delta < 1e-20:
V = 0
return V
Explanation: Binary Symmetric Channel (BSC)
Start with the BSC for which we have
\begin{equation}
C_\text{BSC} = 1 - h(\delta) = 1+\delta\log_2(\delta)+(1-\delta)\log_2(1-\delta)
\end{equation}
and
\begin{equation}
V_\text{BSC} = \delta(1-\delta)\left(\log_2\left(\frac{1-\delta}{\delta}\right)\right)^2
\end{equation}
End of explanation
def get_Pe_finite_length_BSC(n, r, delta):
# compute capacity
C = C_BSC(delta)
# compute dispersion
V = V_BSC(delta)
# Q-function is "norm.sf" (survival function)
return norm.sf((n*(C-r) + 0.5*np.log2(n))/np.sqrt(n*V))
Explanation: The finite-length capacity for the BSC channel is given by
\begin{equation}
r = \frac{\log_2(M)}{n} \approx C - \sqrt{\frac{V}{n}}Q^{-1}(P_e) + \frac{\log_2(n)}{2n}
\end{equation}
We can solve this equation for $P_e$, which gives
\begin{equation}
P_e \approx Q\left(\frac{n(C-r) + \frac{1}{2}\log_2(n)}{\sqrt{Vn}}\right)
\end{equation}
For a given channel (i.e., a given $\delta$), we can compute the capacity $C$ and the dispersion $V$ and then use it to get an estimate of what error rate an ideal code with an idea decoder could achieve. Note that this is only an estimate and we do not know the exact value. However, we can compute upper and lower bounds, which are relatively close to the approximation (beyond the scope of this lecture).
End of explanation
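# Added example (illustrative values, not from the original notebook): evaluate the
# normal approximation at a single operating point using the function defined above.
print(get_Pe_finite_length_BSC(1000, 0.5, 0.08))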
delta_range = np.linspace(0.01,0.12,100)
Pe_BSC_r12_n100 = [get_Pe_finite_length_BSC(100, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n500 = [get_Pe_finite_length_BSC(500, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n1000 = [get_Pe_finite_length_BSC(1000, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n5000 = [get_Pe_finite_length_BSC(5000, 0.5, delta) for delta in delta_range]
fig = plt.figure(1,figsize=(12,7))
plt.semilogy(delta_range, Pe_BSC_r12_n100)
plt.semilogy(delta_range, Pe_BSC_r12_n500)
plt.semilogy(delta_range, Pe_BSC_r12_n1000)
plt.semilogy(delta_range, Pe_BSC_r12_n5000)
plt.axvspan(0.11, 0.12, alpha=0.5, color='gray')
plt.axvline(x=0.11, color='k')
plt.ylim((1e-8,1))
plt.xlim((0.01,0.12))
plt.xlabel('BSC Error probability $\delta$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$', 'C'], fontsize=16)
plt.text(0.11, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BSC_Pe_R12.pdf',bbox_inches='tight')
Explanation: Show finite length capacity estimates for some codes of different lengths $n$
End of explanation
#specify \delta
delta = 0.11
n_range = np.linspace(10,2000,100)
C = C_BSC(delta)
V = V_BSC(delta)
r_Pe_1em3 = [C - np.sqrt(V/n)*norm.isf(1e-3) + 0.5*np.log2(n)/n for n in n_range]
r_Pe_1em6 = [C - np.sqrt(V/n)*norm.isf(1e-6) + 0.5*np.log2(n)/n for n in n_range]
r_Pe_1em9 = [C - np.sqrt(V/n)*norm.isf(1e-9) + 0.5*np.log2(n)/n for n in n_range]
fig = plt.figure(1,figsize=(12,7))
plt.plot(n_range, r_Pe_1em3)
plt.plot(n_range, r_Pe_1em6)
plt.plot(n_range, r_Pe_1em9)
plt.axhline(y=C, color='k')
plt.ylim((0,0.55))
plt.xlim((0,2000))
plt.xlabel('Length $n$', fontsize=16)
plt.ylabel('Rate $r$ (bit/channel use)', fontsize=16)
plt.legend(['$P_e = 10^{-3}$', '$P_e = 10^{-6}$','$P_e = 10^{-9}$', '$C$'], fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BSC_r_delta_011.pdf',bbox_inches='tight')
Explanation: Different representation, for a given channel (and here, we pick $\delta = 0.11$), show the rate the code should at most have to allow for decoding with an error rate $P_e$ (here we specify different $P_e$) if a certain length $n$ is available.
End of explanation
# capacity of the BEC
def C_BEC(epsilon):
return 1 - epsilon
# dispersion of the BEC
def V_BEC(epsilon):
return epsilon*(1-epsilon)
Explanation: Binary Erasure Channel (BEC)
For the BEC, we have
\begin{equation}
C_\text{BEC} = 1 - \epsilon
\end{equation}
and
\begin{equation}
V_\text{BEC} = \epsilon(1-\epsilon)
\end{equation}
End of explanation
def get_Pe_finite_length_BEC(n, r, epsilon):
# compute capacity
C = C_BEC(epsilon)
# compute dispersion
V = V_BEC(epsilon)
# Q-function is "norm.sf" (survival function)
return norm.sf((n*(C-r))/np.sqrt(n*V))
Explanation: The finite-length capacity for the BEC channel is given by (note, here we do not use the correction term)
\begin{equation}
r = \frac{\log_2(M)}{n} \approx C - \sqrt{\frac{V}{n}}Q^{-1}(P_e)
\end{equation}
We can solve this equation for $P_e$, which gives
\begin{equation}
P_e \approx Q\left(\frac{\sqrt{n}(C-r)}{\sqrt{V}}\right)
\end{equation}
For a given channel (i.e., a given $\epsilon$), we can compute the capacity $C$ and the dispersion $V$ and then use it to get an estimate of what error rate an ideal code with an idea decoder could achieve. Note that this is only an estimate and we do not know the exact value. However, we can compute upper and lower bounds, which are relatively close to the approximation (beyond the scope of this lecture).
End of explanation
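# Added example (illustrative values, not from the original notebook): evaluate the
# BEC normal approximation at a single operating point.
print(get_Pe_finite_length_BEC(1000, 0.5, 0.45))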
epsilon_range = np.linspace(0.2,0.6,100)
Pe_BEC_r12_n100 = [get_Pe_finite_length_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n500 = [get_Pe_finite_length_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n1000 = [get_Pe_finite_length_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n5000 = [get_Pe_finite_length_BEC(5000, 0.5, epsilon) for epsilon in epsilon_range]
fig = plt.figure(1,figsize=(12,7))
plt.semilogy(epsilon_range, Pe_BEC_r12_n100)
plt.semilogy(epsilon_range, Pe_BEC_r12_n500)
plt.semilogy(epsilon_range, Pe_BEC_r12_n1000)
plt.semilogy(epsilon_range, Pe_BEC_r12_n5000)
plt.axvspan(0.5, 0.55, alpha=0.5, color='gray')
plt.axvline(x=0.5, color='k')
plt.ylim((1e-8,1))
plt.xlim((0.2,0.55))
plt.xlabel('BEC Erasure probability $\epsilon$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$', 'C'], fontsize=16)
plt.text(0.5, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BEC_Pe_R12.pdf',bbox_inches='tight')
Explanation: Show finite length capacity estimates for some codes of different lengths $n$
End of explanation
#specify \epsilon
epsilon = 0.5
n_range = np.linspace(10,2000,100)
C = C_BEC(epsilon)
V = V_BEC(epsilon)
r_Pe_1em3 = [C - np.sqrt(V/n)*norm.isf(1e-3) for n in n_range]
r_Pe_1em6 = [C - np.sqrt(V/n)*norm.isf(1e-6) for n in n_range]
r_Pe_1em9 = [C - np.sqrt(V/n)*norm.isf(1e-9) for n in n_range]
fig = plt.figure(1,figsize=(12,7))
plt.plot(n_range, r_Pe_1em3)
plt.plot(n_range, r_Pe_1em6)
plt.plot(n_range, r_Pe_1em9)
plt.axhline(y=C, color='k')
plt.ylim((0,0.55))
plt.xlim((0,2000))
plt.xlabel('Length $n$', fontsize=16)
plt.ylabel('Rate $r$ (bit/channel use)', fontsize=16)
plt.legend(['$P_e = 10^{-3}$', '$P_e = 10^{-6}$','$P_e = 10^{-9}$', '$C$'], fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BEC_r_epsilon_05.pdf',bbox_inches='tight')
Explanation: Different representation, for a given channel (and here, we pick $\epsilon = 0.5$), show the rate the code should at most have to allow for decoding with an error rate $P_e$ (here we specify different $P_e$) if a certain length $n$ is available.
End of explanation
from scipy.special import comb
def get_Pe_RCU_BSC(n, r, delta):
binomials = [comb(n,t,exact=False) for t in range(n+1)]
return np.sum([binomials[t] * (delta**t) * ((1-delta)**(n-t)) * min(1, np.sum([binomials[j] for j in range(t+1)]) * 2**(-n*(1-r))) for t in range(n+1)])
delta_range = np.linspace(0.01,0.12,100)
Pe_BSC_r12_n100 = [get_Pe_finite_length_BSC(100, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n500 = [get_Pe_finite_length_BSC(500, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n1000 = [get_Pe_finite_length_BSC(1000, 0.5, delta) for delta in delta_range]
Pe_BSC_r12_n5000 = [get_Pe_finite_length_BSC(5000, 0.5, delta) for delta in delta_range]
Pe_RCU_BSC_r12_n100 = [get_Pe_RCU_BSC(100, 0.5, delta) for delta in delta_range]
Pe_RCU_BSC_r12_n500 = [get_Pe_RCU_BSC(500, 0.5, delta) for delta in delta_range]
Pe_RCU_BSC_r12_n1000 = [get_Pe_RCU_BSC(1000, 0.5, delta) for delta in delta_range]
fig = plt.figure(1,figsize=(10,7))
plt.semilogy(delta_range, Pe_BSC_r12_n100)
plt.semilogy(delta_range, Pe_BSC_r12_n500)
plt.semilogy(delta_range, Pe_BSC_r12_n1000)
plt.semilogy(delta_range, Pe_BSC_r12_n5000)
plt.axvline(x=0.11, color='k')
plt.gca().set_prop_cycle(None)
# dashed curves represent the RCU bound
plt.semilogy(delta_range, Pe_RCU_BSC_r12_n100, '--')
plt.semilogy(delta_range, Pe_RCU_BSC_r12_n500, '--')
plt.semilogy(delta_range, Pe_RCU_BSC_r12_n1000, '--')
plt.axvspan(0.11, 0.12, alpha=0.5, color='gray')
plt.ylim((1e-8,1))
plt.xlim((0.01,0.12))
plt.xlabel('BSC Error probability $\delta$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$', 'C'], fontsize=16)
plt.text(0.11, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BSC_Pe_R12.pdf',bbox_inches='tight')
Explanation: Extra Material, Random Coding Union Bound for the BSC
We now additionally show the Random Coding Union (RCU) bound [2, Th. 16] for the BSC, as it is fairly easy to calculate the bound in this case. The RCU bound is not part of the lecture and shown here for completeness.
To get the RCU bound, we assume that we perform ML decoding of the random code with $\boldsymbol{x}^{[1]}$ transmitted. We assume that the channel introduces a total number of $t$ errors. Then let $E_m$ denote the event that codeword $\boldsymbol{x}^{[m]}$ is within a sphere of radius $t$ around the received word $\boldsymbol{Y}$. In this case, if any event $E_m$ with $m \geq 2$ occurs, we may make a decoding error (if more than a single codeword which can include $\boldsymbol{x}^{[1]}$ are within this sphere, we may randomly select one and pick $\boldsymbol{x}^{[1]}$ by chance. Hence, the error probability can be bounded as
\begin{align}
P(\text{decoding error} | \boldsymbol{Y}, t\text{ errors}) &\leq P\left(\bigcup_{m=2}^M E_m \bigg| \boldsymbol{Y}, t\text{ errors}\right) \
&\stackrel{(a)}{\leq} \sum_{m=2}^M P\left(E_m | \boldsymbol{Y}, t\text{ errors}\right) \
&= (M-1)\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ errors}\right) \
&\leq M\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ errors}\right) \
&\stackrel{(b)}{=} M \sum_{j=0}^t\binom{n}{j}\left(\frac{1}{2}\right)^n \
&= 2^{-n(1-r)}\sum_{j=0}^t\binom{n}{j}
\end{align}
where $(a)$ is the union bound and $(b)$ is due to the fact that the probability of choosing a certain codeword is $(\frac12)^n = 2^{-n}$ and there are a total number of $\sum_{j=0}^t\binom{n}{j}$ possible codewords around $\boldsymbol{Y}$ (each chosen with probability $2^{-n}$).
The main trick of [2] is now to observe that the union bound can be often loose and $2^{-n(1-r)}\sum_{j=0}^t\binom{n}{j}$ can become larger than 1. Hence, [2] introduced the tighter bound
\begin{equation}
P(\text{decoding error} | \boldsymbol{Y}, t\text{ errors}) \leq \min\left(1, 2^{-n(1-r)}\sum_{j=0}^t\binom{n}{j}\right)
\end{equation}
The total probability of error is then obtained by noticing that the errors in the BSC follow a binomial distribution, and we can state that
\begin{align}
P_e &= \sum_{t=0}^n\binom{n}{t}\delta^t(1-\delta)^{n-t}P(\text{decoding error} | \boldsymbol{Y}, t\text{ errors}) \
&\leq \sum_{t=0}^n\binom{n}{t}\delta^t(1-\delta)^{n-t} \min\left(1, 2^{-n(1-r)}\sum_{j=0}^t\binom{n}{j}\right)
\end{align}
The bound states that for the BSC with error probability $\delta$, there exists a code (the random code) with $M$ codewords of length $n$ (and rate $r = \frac{\log_2(M)}{n}$) that has an error probability upper bounded by the above bound under ML decoding.
[2] Y. Polyanskiy, H. V. Poor and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, May 2010
End of explanation
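# Added example (illustrative values): evaluate the RCU bound and the normal
# approximation at one operating point before plotting the full comparison.
print(get_Pe_RCU_BSC(500, 0.5, 0.08))
print(get_Pe_finite_length_BSC(500, 0.5, 0.08))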
def get_Pe_RCU_BEC(n, r, epsilon):
return np.sum([comb(n,t,exact=True) * (epsilon**t) * ((1-epsilon)**(n-t)) * min(1, 2**(-n*(1-r)+t)) for t in range(n+1)])
epsilon_range = np.linspace(0.2,0.6,100)
Pe_BEC_r12_n100 = [get_Pe_finite_length_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n500 = [get_Pe_finite_length_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n1000 = [get_Pe_finite_length_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]
Pe_BEC_r12_n5000 = [get_Pe_finite_length_BEC(5000, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n100 = [get_Pe_RCU_BEC(100, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n500 = [get_Pe_RCU_BEC(500, 0.5, epsilon) for epsilon in epsilon_range]
Pe_RCU_BEC_r12_n1000 = [get_Pe_RCU_BEC(1000, 0.5, epsilon) for epsilon in epsilon_range]
fig = plt.figure(1,figsize=(10,7))
plt.semilogy(epsilon_range, Pe_BEC_r12_n100)
plt.semilogy(epsilon_range, Pe_BEC_r12_n500)
plt.semilogy(epsilon_range, Pe_BEC_r12_n1000)
plt.semilogy(epsilon_range, Pe_BEC_r12_n5000)
plt.axvline(x=0.5, color='k')
plt.gca().set_prop_cycle(None)
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n100, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n500, '--')
plt.semilogy(epsilon_range, Pe_RCU_BEC_r12_n1000, '--')
plt.axvspan(0.5, 0.55, alpha=0.5, color='gray')
plt.axvline(x=0.5, color='k')
plt.ylim((1e-8,1))
plt.xlim((0.2,0.55))
plt.xlabel('BEC Erasure probability $\epsilon$', fontsize=16)
plt.ylabel('$P_e$', fontsize=16)
plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$', 'C'], fontsize=16)
plt.text(0.5, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90})
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('BEC_Pe_R12.pdf',bbox_inches='tight')
Explanation: Extra Material, Random Coding Union Bound for the BEC
We now additionally show the Random Coding Union (RCU) bound [2, Th. 16] for the BEC, as it is fairly easy to calculate the bound in this case. The RCU bound is not part of the lecture and shown here for completeness.
To get the RCU bound, we assume that we perform ML decoding of the random code with $\boldsymbol{x}^{[1]}$ transmitted. We assume that the channel introduces a total number of $t$ erasures. At the non-erased positions, the bits have been received correctly. Then let $E_m$ denote the event that codeword $\boldsymbol{x}^{[m]}$ has the same code bits at the non-erased positions as $\boldsymbol{x}^{[1]}$. In this case, the decoder cannot make a decision which codeword to select (they have the same likelihood). It can resolve this tie by randomly selecting a codeword, which may produce a decoding error. Hence, the error probability can be bounded as
\begin{align}
P(\text{decoding error} | \boldsymbol{Y}, t\text{ erasures}) &\leq P\left(\bigcup_{m=2}^M E_m | \boldsymbol{Y}, t\text{ erasures}\right) \
&\stackrel{(a)}{\leq} \sum_{m=2}^M P\left(E_m | \boldsymbol{Y}, t\text{ erasures}\right) \
&= (M-1)\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ erasures}\right) \
&\leq M\cdot P\left(E_2 | \boldsymbol{Y}, t\text{ erasures}\right) \
&\stackrel{(b)}{=} M\left(\frac{1}{2}\right)^{n-t} \
&= 2^{-n(1-r)+t}
\end{align}
where $(a)$ is the union bound and $(b)$ is due to the fact that the probability of choosing $n-t$ positions that are identical to $\boldsymbol{x}^{[1]}$ in these positions is $(\frac12)^{n-t} = 2^{t-n}$.
The main trick of [2] is now to observe that the union bound can be often loose and $2^{-n(1-r)+t}$ can become larger than 1. Hence, [2] introduced the tighter bound
\begin{equation}
P(\text{decoding error} | \boldsymbol{Y}, t\text{ erasures}) \leq \min\left(1, 2^{-n(1-r)+t}\right)
\end{equation}
The total probability of error is then obtained by noticing that the erasures in the BEC follow a binomial distribution, and we can state that
\begin{align}
P_e &= \sum_{t=0}^n\binom{n}{t}\epsilon^t(1-\epsilon)^{n-t}P(\text{decoding error} | \boldsymbol{Y}, t\text{ erasures}) \
&\leq \sum_{t=0}^n\binom{n}{t}\epsilon^t(1-\epsilon)^{n-t} \min\left(1, 2^{-n(1-r)+t}\right)
\end{align}
The bound states that for the BEC with erasure probability $\epsilon$, there exists a code (the random code) with $M$ codewords of length $n$ (and rate $r = \frac{\log_2(M)}{n}$) that has an error probability upper bounded by the above bound under ML decoding.
[2] Y. Polyanskiy, H. V. Poor and S. Verdú, "Channel coding rate in the finite blocklength regime," IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307-2359, May 2010
End of explanation |
13,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lid driven Cavity (GPU)
The following command is important to view matplotlib plots on a jupyter notebook
Step1: cf. http
Step2: Making a colorbar, making colormaps, Show colormaps, in matplotlib
cf. http
Step4: where rand(10,10) is a 10x10 array of random numbers in the range $[0.0, 1.0)$ | Python Code:
# %matplotlib inline
Explanation: Lid driven Cavity (GPU)
The following command is important to view matplotlib plots on a jupyter notebook
End of explanation
%matplotlib notebook
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import os, sys
from matplotlib.mlab import griddata
import numpy as np
Explanation: cf. http://stackoverflow.com/questions/33436221/displaying-rotatable-3d-plots-in-ipython-or-ipython-notebook
End of explanation
from pylab import *
cdict = {'red': ((0.0, 0.0, 0.0),
(0.5, 1.0, 0.7),
(1.0, 1.0, 1.0)),
'green': ((0.0, 0.0, 0.0),
(0.5, 1.0, 0.0),
(1.0, 1.0, 1.0)),
'blue': ((0.0, 0.0, 0.0),
(0.5, 1.0, 0.0),
(1.0, 0.5, 1.0))}
my_cmap = matplotlib.colors.LinearSegmentedColormap('my_colormap',cdict,256)
pcolor(rand(10,10),cmap=my_cmap)
colorbar()
Explanation: Making a colorbar, making colormaps, Show colormaps, in matplotlib
cf. http://scipy.github.io/old-wiki/pages/Cookbook/Matplotlib/Show_colormaps
i.e. SciPy Cookbook/Matplotlib/Show_colormaps
making your own color bar
End of explanation
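# Added sketch: instead of spelling out a full cdict, an interpolated colormap can be
# built from a plain list of colors with LinearSegmentedColormap.from_list.
simple_cmap = matplotlib.colors.LinearSegmentedColormap.from_list(
    'simple_cmap', ['black', 'red', 'white'], N=256)
pcolor(rand(10,10), cmap=simple_cmap)
colorbar()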
rand(10,10)
cdict2 = {'red': ((0.0, 0.0, 0.0),
(0.3, 0.5, 0.5),
(0.6, 0.7, 0.7),
(0.9, 0.8, 0.8),
(1.0, 0.8, 0.8)),
'green': ((0.0, 0.0, 0.0),
(0.3, 0.8, 0.8),
(0.6, 0.7, 0.7),
(0.9, 0.0, 0.0),
(1.0, 0.7, 0.7)),
'blue': ((0.0, 1.0, 1.0),
(0.3, 1.0, 1.0),
(0.6, 0.0, 0.0),
(0.9, 0.0, 0.0),
(1.0, 1.0, 1.0))}
cmap1 = matplotlib.colors.LinearSegmentedColormap('my_colormap2', cdict2, N=256 )
cmap2 = matplotlib.colors.LinearSegmentedColormap('my_colormap2', cdict2, N=256, gamma=0.75 )
pcolor(rand(10,10),cmap=cmap1 )
colorbar()
pcolor(rand(10,10),cmap=cmap2 )
colorbar()
cdict3 = {'red': ((0.0, 1.0, 1.0),
(0.2,1.0, 1.0 ),
(0.4,0.0,0.0),
(0.6,0.0,0.0),
(0.8,0.0,0.0),
(1.0,1.0,1.0)),
'green': ((0.0,0.0,0.0),
(0.2,1.0,1.0),
(0.4,1.0,1.0),
(0.6,1.0,1.0),
(0.8,0.0, 0.0),
(1.0,0.0,0.0)),
'blue': ((0.0,0.0,0.0),
(0.2,0.0,0.0),
(0.4,0.0,0.0),
(0.6,1.0,1.0),
(0.8,1.0,1.0),
(1.0,1.0,1.0))}
cmap3 = matplotlib.colors.LinearSegmentedColormap('my_colormap3', cdict3, N=256)
pcolor(rand(10,10),cmap=cmap3)
colorbar()
minval = -0.4
maxval = 1.6
rangeval = maxval - minval
rangeval*0.2
cdict3b = {'red': ((minval + rangeval*0.2*0, 1.0, 1.0),
(minval + rangeval*0.2*1,1.0, 1.0 ),
(minval + rangeval*0.2*2,0.0,0.0),
(minval + rangeval*0.2*3,0.0,0.0),
(minval + rangeval*0.2*4,1.0,0.0),
(minval + rangeval*0.2*5,1.0,1.0)),
'green': ((minval + rangeval*0.2*0,0.0,0.0),
(minval + rangeval*0.2*1,1.0,1.0),
(minval + rangeval*0.2*2,1.0,1.0),
(minval + rangeval*0.2*3,1.0,1.0),
(minval + rangeval*0.2*4,0.0, 0.0),
(minval + rangeval*0.2*5,0.0,0.0)),
'blue': ((minval + rangeval*0.2*0,0.0,0.0),
(minval + rangeval*0.2*1,0.0,0.0),
(minval + rangeval*0.2*2,0.0,0.0),
(minval + rangeval*0.2*3,0.0,1.0),
(minval + rangeval*0.2*4,1.0,1.0),
(minval + rangeval*0.2*5,1.0,1.0))}
cdict3b
cmap3b = matplotlib.colors.LinearSegmentedColormap('my_colormap3b', cdict3, N=256)
pcolor( np.random.uniform(minval, maxval, size=(10,10)),cmap=cmap3b)
colorbar()
Explanation: where rand(10,10) is a 10x10 array of random numbers in the range $[0.0, 1.0)$
End of explanation |
13,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting info on Priming experiment dataset that's needed for modeling
Info
Step1: Init
Step2: Loading OTU table (filter to just bulk samples)
Step3: Which gradient(s) to simulate?
Step4: Notes
Samples to simulate
Isotope
Step5: Total richness of starting (bulk-soil) community
Method
Step6: Number of taxa in all fractions corresponding to each bulk soil sample
Trying to see the difference between richness of bulk vs gradients (veil line effect)
Step7: Distribution of total sequences per fraction
Number of sequences per sample
Using all samples to assess this one
Just fraction samples
Method
Step8: Distribution fitting
Step9: Notes
Step10: Loading metadata
Step11: Determining association
Step12: Number of taxa along the gradient
Step13: Notes
Step14: For each sample, writing a table of OTU_ID and count
Step15: Making directories for simulations
Step16: Rank-abundance distribution for each sample
Step17: Taxon abundance range for each sample-fraction
Step18: Total abundance of each target taxon
Step19: For each sample, writing a table of OTU_ID and count | Python Code:
baseDir = '/home/nick/notebook/SIPSim/dev/priming_exp/'
workDir = os.path.join(baseDir, 'exp_info')
otuTableFile = '/var/seq_data/priming_exp/data/otu_table.txt'
otuTableSumFile = '/var/seq_data/priming_exp/data/otu_table_summary.txt'
metaDataFile = '/var/seq_data/priming_exp/data/allsample_metadata_nomock.txt'
#otuRepFile = '/var/seq_data/priming_exp/otusn.pick.fasta'
#otuTaxFile = '/var/seq_data/priming_exp/otusn_tax/otusn_tax_assignments.txt'
#genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
Explanation: Getting info on Priming experiment dataset that's needed for modeling
Info:
Which gradient(s) to simulate?
For each gradient to simulate:
Infer total richness of starting community
Get distribution of total OTU abundances per fraction
Number of sequences per sample
Infer total abundance of each target taxon
User variables
End of explanation
import glob
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
library(fitdistrplus)
if not os.path.isdir(workDir):
os.makedirs(workDir)
Explanation: Init
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
Explanation: Loading OTU table (filter to just bulk samples)
End of explanation
%%R -w 900 -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_count = sum(count)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
ggplot(tbl.h.s, aes(day, total_count, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme(
text = element_text(size=16)
)
%%R
tbl.h.s$sample[grepl('700', tbl.h.s$sample)] %>% as.vector %>% sort
Explanation: Which gradient(s) to simulate?
End of explanation
%%R
# bulk soil samples for gradients to simulate
samples.to.use = c(
"X12C.700.14.05.NA",
"X12C.700.28.03.NA",
"X12C.700.45.01.NA",
"X13C.700.14.08.NA",
"X13C.700.28.06.NA",
"X13C.700.45.01.NA"
)
Explanation: Notes
Samples to simulate
Isotope:
12C vs 13C
Treatment:
700
Days:
14
28
45
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(ends_with('.NA'))
tbl$OTUId = rownames(tbl)
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 1:(ncol(tbl)-1)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -w 800
tbl.s = tbl.h %>%
filter(count > 0) %>%
group_by(sample, isotope, treatment, day, rep, fraction) %>%
summarize(n_taxa = n())
ggplot(tbl.s, aes(day, n_taxa, color=rep %>% as.character)) +
geom_point() +
facet_grid(isotope ~ treatment) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R -w 800 -h 350
# filter to just target samples
tbl.s.f = tbl.s %>% filter(sample %in% samples.to.use)
ggplot(tbl.s.f, aes(day, n_taxa, fill=rep %>% as.character)) +
geom_bar(stat='identity') +
facet_grid(. ~ isotope) +
labs(y = 'Number of taxa') +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank()
)
%%R
message('Bulk soil total observed richness: ')
tbl.s.f %>% select(-fraction) %>% as.data.frame %>% print
Explanation: Total richness of starting (bulk-soil) community
Method:
Total number of OTUs in OTU table (i.e., gamma richness)
Just looking at bulk soil samples
Loading just bulk soil
End of explanation
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting all OTUs in a sample
n.OTUs = function(samples, otu.long){
otu.long.f = otu.long %>%
filter(sample %in% samples,
count > 0)
n.OTUs = otu.long.f$OTUId %>% unique %>% length
return(n.OTUs)
}
num.OTUs = lapply(fracs, n.OTUs, otu.long=tbl.h)
num.OTUs = do.call(rbind, num.OTUs) %>% as.data.frame
colnames(num.OTUs) = c('n_taxa')
num.OTUs$sample = rownames(num.OTUs)
num.OTUs
%%R
tbl.s.f %>% as.data.frame
%%R
# joining with bulk soil sample summary table
num.OTUs$data = 'fractions'
tbl.s.f$data = 'bulk_soil'
tbl.j = rbind(num.OTUs,
tbl.s.f %>% ungroup %>% select(sample, n_taxa, data)) %>%
mutate(isotope = gsub('X|\\..+', '', sample),
sample = gsub('\\.[0-9]+\\.NA', '', sample))
tbl.j
%%R -h 300 -w 800
ggplot(tbl.j, aes(sample, n_taxa, fill=data)) +
geom_bar(stat='identity', position='dodge') +
facet_grid(. ~ isotope, scales='free_x') +
labs(y = 'Number of OTUs') +
theme(
text = element_text(size=16)
# axis.text.x = element_text(angle=90)
)
Explanation: Number of taxa in all fractions corresponding to each bulk soil sample
Trying to see the difference between richness of bulk vs gradients (veil line effect)
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R -h 400
tbl.h.s = tbl.h %>%
group_by(sample) %>%
summarize(total_seqs = sum(count))
p = ggplot(tbl.h.s, aes(total_seqs)) +
theme_bw() +
theme(
text = element_text(size=16)
)
p1 = p + geom_histogram(binwidth=200)
p2 = p + geom_density()
grid.arrange(p1,p2,ncol=1)
Explanation: Distribution of total sequences per fraction
Number of sequences per sample
Using all samples to assess this one
Just fraction samples
Method:
Total number of sequences (total abundance) per sample
Loading OTU table
End of explanation
%%R -w 700 -h 350
plotdist(tbl.h.s$total_seqs)
%%R -w 450 -h 400
descdist(tbl.h.s$total_seqs, boot=1000)
%%R
f.n = fitdist(tbl.h.s$total_seqs, 'norm')
f.ln = fitdist(tbl.h.s$total_seqs, 'lnorm')
f.ll = fitdist(tbl.h.s$total_seqs, 'logis')
#f.c = fitdist(tbl.s$count, 'cauchy')
f.list = list(f.n, f.ln, f.ll)
plot.legend = c('normal', 'log-normal', 'logistic')
par(mfrow = c(2,1))
denscomp(f.list, legendtext=plot.legend)
qqcomp(f.list, legendtext=plot.legend)
%%R
gofstat(list(f.n, f.ln, f.ll), fitnames=plot.legend)
%%R
summary(f.ln)
Explanation: Distribution fitting
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
# summarize
tbl.s = tbl %>%
group_by(sample) %>%
summarize(total_count = sum(count))
tbl.s %>% head(n=3)
Explanation: Notes:
best fit:
lognormal
mean = 10.113
sd = 1.192
Does sample size correlate to buoyant density?
Loading OTU table
End of explanation
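# Added Python-side sketch (not in the original notebook): simulate per-sample library
# sizes from the fitted log-normal (meanlog = 10.113, sdlog = 1.192, natural-log scale).
import numpy as np
sim_depths = np.random.lognormal(mean=10.113, sigma=1.192, size=1000)
print(np.median(sim_depths), sim_depths.min(), sim_depths.max())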
%%R -i metaDataFile
tbl.meta = read.delim(metaDataFile, sep='\t')
tbl.meta %>% head(n=3)
Explanation: Loading metadata
End of explanation
%%R -w 700
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
ggplot(tbl.j, aes(Density, total_count, color=rep)) +
geom_point() +
facet_grid(Treatment ~ Day)
%%R -w 600 -h 350
ggplot(tbl.j, aes(Density, total_count)) +
geom_point(aes(color=Treatment)) +
geom_smooth(method='lm') +
labs(x='Buoyant density', y='Total sequences') +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: Determining association
End of explanation
%%R
tbl.s = tbl %>%
filter(count > 0) %>%
group_by(sample) %>%
summarize(n_taxa = sum(count > 0))
tbl.j = inner_join(tbl.s, tbl.meta, c('sample' = 'Sample'))
tbl.j %>% head(n=3)
%%R -w 900 -h 600
ggplot(tbl.j, aes(Density, n_taxa, fill=rep, color=rep)) +
#geom_area(stat='identity', alpha=0.5, position='dodge') +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='Number of taxa') +
facet_grid(Treatment ~ Day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: Number of taxa along the gradient
End of explanation
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
message('Number of samples: ', tbl.h$sample %>% unique %>% length)
message('Number of OTUs: ', tbl.h$OTUId %>% unique %>% length)
%%R
tbl.hs = tbl.h %>%
group_by(OTUId) %>%
summarize(
total_count = sum(count),
mean_count = mean(count),
median_count = median(count),
sd_count = sd(count)
) %>%
filter(total_count > 0)
tbl.hs %>% head
Explanation: Notes:
Many taxa out to the tails of the gradient.
It seems that the DNA fragments were quite diffuse in the gradients.
Total abundance of each target taxon: bulk soil approach
Getting relative abundances from bulk soil samples
This has the caveat of likely undersampling richness vs using all gradient fraction samples.
i.e., veil line effect
End of explanation
%%R -i workDir
setwd(workDir)
samps = tbl.h$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'OTU.txt'), collapse='_')
tbl.p = tbl.h %>%
filter(sample == samp, count > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
message('Table written: ', outFile)
message(' Number of OTUs: ', tbl.p %>% nrow)
}
Explanation: For each sample, writing a table of OTU_ID and count
End of explanation
p = os.path.join(workDir, '*_OTU.txt')
files = glob.glob(p)
baseDir = os.path.split(workDir)[0]
newDirs = [os.path.split(x)[1].rstrip('.NA_OTU.txt') for x in files]
newDirs = [os.path.join(baseDir, x) for x in newDirs]
for newDir,f in zip(newDirs, files):
if not os.path.isdir(newDir):
print 'Making new directory: {}'.format(newDir)
os.makedirs(newDir)
else:
print 'Directory exists: {}'.format(newDir)
# symlinking file
linkPath = os.path.join(newDir, os.path.split(f)[1])
if not os.path.islink(linkPath):
os.symlink(f, linkPath)
Explanation: Making directories for simulations
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(matches('OTUId'), ends_with('.NA'))
tbl %>% ncol %>% print
tbl[1:4,1:4]
%%R
# long table format w/ selecting samples of interest
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F) %>%
filter(sample %in% samples.to.use,
count > 0)
tbl.h %>% head
%%R
# ranks of relative abundances
tbl.r = tbl.h %>%
group_by(sample) %>%
mutate(perc_rel_abund = count / sum(count) * 100,
rank = row_number(-perc_rel_abund)) %>%
unite(day_rep, day, rep, sep='-')
tbl.r %>% as.data.frame %>% head(n=3)
%%R -w 900 -h 350
ggplot(tbl.r, aes(rank, perc_rel_abund)) +
geom_point() +
# labs(x='Buoyant density', y='Number of taxa') +
facet_wrap(~ day_rep) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: Rank-abundance distribution for each sample
End of explanation
%%R -i otuTableFile
tbl = read.delim(otuTableFile, sep='\t')
# filter
tbl = tbl %>%
select(-ends_with('.NA')) %>%
select(-starts_with('X0MC'))
tbl = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
mutate(sample = gsub('^X', '', sample))
tbl %>% head
%%R
tbl.ar = tbl %>%
#mutate(fraction = gsub('.+\\.', '', sample) %>% as.numeric) %>%
#mutate(treatment = gsub('(.+)\\..+', '\\1', sample)) %>%
group_by(sample) %>%
mutate(rel_abund = count / sum(count)) %>%
summarize(abund_range = max(rel_abund) - min(rel_abund)) %>%
ungroup() %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.ar %>% head(n=3)
%%R -w 800
tbl.ar = tbl.ar %>%
mutate(fraction = as.numeric(fraction))
ggplot(tbl.ar, aes(fraction, abund_range, fill=rep, color=rep)) +
geom_point() +
geom_line() +
labs(x='Buoyant density', y='relative abundanc range') +
facet_grid(treatment ~ day) +
theme_bw() +
theme(
text = element_text(size=16),
legend.position = 'none'
)
Explanation: Taxon abundance range for each sample-fraction
End of explanation
%%R -i otuTableFile
# loading OTU table
tbl = read.delim(otuTableFile, sep='\t') %>%
select(-ends_with('.NA'))
tbl.h = tbl %>%
gather('sample', 'count', 2:ncol(tbl)) %>%
separate(sample, c('isotope','treatment','day','rep','fraction'), sep='\\.', remove=F)
tbl.h %>% head
%%R
# basename of fractions
samples.to.use.base = gsub('\\.[0-9]+\\.NA', '', samples.to.use)
samps = tbl.h$sample %>% unique
fracs = sapply(samples.to.use.base, function(x) grep(x, samps, value=TRUE))
for (n in names(fracs)){
n.frac = length(fracs[[n]])
cat(n, '-->', 'Number of fraction samples: ', n.frac, '\n')
}
%%R
# function for getting mean OTU abundance from all fractions
OTU.abund = function(samples, otu.long){
otu.rel.abund = otu.long %>%
filter(sample %in% samples,
count > 0) %>%
ungroup() %>%
group_by(sample) %>%
mutate(total_count = sum(count)) %>%
ungroup() %>%
mutate(perc_abund = count / total_count * 100) %>%
group_by(OTUId) %>%
summarize(mean_perc_abund = mean(perc_abund),
median_perc_abund = median(perc_abund),
max_perc_abund = max(perc_abund))
return(otu.rel.abund)
}
## calling function
otu.rel.abund = lapply(fracs, OTU.abund, otu.long=tbl.h)
otu.rel.abund = do.call(rbind, otu.rel.abund) %>% as.data.frame
otu.rel.abund$sample = gsub('\\.[0-9]+$', '', rownames(otu.rel.abund))
otu.rel.abund %>% head
%%R -h 600 -w 900
# plotting
otu.rel.abund.l = otu.rel.abund %>%
gather('abund_stat', 'value', mean_perc_abund, median_perc_abund, max_perc_abund)
otu.rel.abund.l$OTUId = reorder(otu.rel.abund.l$OTUId, -otu.rel.abund.l$value)
ggplot(otu.rel.abund.l, aes(OTUId, value, color=abund_stat)) +
geom_point(shape='O', alpha=0.7) +
scale_y_log10() +
facet_grid(abund_stat ~ sample) +
theme_bw() +
theme(
text = element_text(size=16),
axis.text.x = element_blank(),
legend.position = 'none'
)
Explanation: Total abundance of each target taxon: all fraction samples approach
Getting relative abundances from all fraction samples for the gradient
I will need to calculate (mean|max?) relative abundances for each taxon and then re-scale so that cumsum = 1
End of explanation
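For illustration only (not from the original analysis), the re-scaling step mentioned above - dividing each taxon's mean abundance by the total so the values sum to 1 - would look roughly like this in Python/pandas; the frame and column names mirror the R object otu.rel.abund and are assumptions.
import pandas as pd
# hypothetical frame mirroring the R object otu.rel.abund
otu_rel_abund = pd.DataFrame({'OTUId': ['OTU.1', 'OTU.2', 'OTU.3'],
                              'mean_perc_abund': [2.0, 1.0, 1.0]})
# re-scale so the per-taxon relative abundances sum to 1
otu_rel_abund['rel_abund_scaled'] = (otu_rel_abund['mean_perc_abund']
                                     / otu_rel_abund['mean_perc_abund'].sum())
print(otu_rel_abund['rel_abund_scaled'].sum())   # -> 1.0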
%%R -i workDir
setwd(workDir)
# each sample is a file
samps = otu.rel.abund.l$sample %>% unique %>% as.vector
for(samp in samps){
outFile = paste(c(samp, 'frac_OTU.txt'), collapse='_')
tbl.p = otu.rel.abund %>%
filter(sample == samp, mean_perc_abund > 0)
write.table(tbl.p, outFile, sep='\t', quote=F, row.names=F)
cat('Table written: ', outFile, '\n')
cat(' Number of OTUs: ', tbl.p %>% nrow, '\n')
}
Explanation: For each sample, writing a table of OTU_ID and count
End of explanation |
13,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook 2
Step1: Download the sequence data
Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt downloaded from the SRA website for this project. It contains all of the information on the id numbers needed to download data for all samples for this project.
Project SRA
Step3: Download sequence data using the SRA information
This is a function to make wget calls to download data base on SRA IDs
Step4: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
Step5: Convert from SRA format to FASTQ format
Step6: Make a params file for pyrad analysis
Step7: Assemble in pyrad
We assemble the data set at mincov=4 and mincov=2
Step8: Results
We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples.
Raw data amounts
The average number of raw reads per sample is 1.5M.
Step9: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std here is the std in means across samples. The std of depths within individuals is much higher.
Step10: Plot the coverage for the sample with highest mean coverage
Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage.
Step11: Print final stats table
Step12: Infer ML phylogeny in raxml as an unrooted tree
Step13: Plot the tree in R using ape
Step14: Get average phylo distances (GTRgamma distance) | Python Code:
### Notebook 2
### Data set 2 (Phrynosomatidae)
### Authors: Leache et al. (2015)
### Data Location: NCBI SRA SRP063316
Explanation: Notebook 2:
This is a Jupyter/IPython notebook. Most of the code is composed of bash scripts, indicated by %%bash at the top of the cell, otherwise it is IPython code. This notebook includes code to download, assemble and analyze a published RADseq data set, and further code below to analyze missing data in that data set.
End of explanation
%%bash
## make a new directory for this analysis
mkdir -p empirical_2/fastq/
## IPython code
## import libraries
import pandas as pd
import urllib2
import os
## read in the SRA run table from public github url
## as a pandas data frame
url = "https://raw.githubusercontent.com/"+\
"dereneaton/RADmissing/master/empirical_2_SraRunTable.txt"
intable = urllib2.urlopen(url)
indata = pd.read_table(intable, sep="\t")
## print first few rows
print indata.head()
Explanation: Download the sequence data
Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt downloaded from the SRA website for this project. It contains all of the information on the id numbers needed to download data for all samples for this project.
Project SRA: SRP063316
BioProject ID: PRJNA294316
Biosample numbers: SAMN04027506 -- SAMN04027579
Runs: SRR2240500 -- SRR2240573
End of explanation
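As an optional, illustrative sanity check (not part of the original notebook), the run accessions read into indata above can be compared against the documented range SRR2240500 -- SRR2240573; the column name Run_s is taken from the code that loads the table.
# check that the accessions span the documented range (illustration only)
run_numbers = indata.Run_s.str.replace("SRR", "").astype(int)
print("runs span SRR{} - SRR{}".format(run_numbers.min(), run_numbers.max()))
print("{} runs listed in the SRA run table".format(len(indata)))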
def wget_download(SRR, outdir, outname):
"""Python function to get sra data from ncbi and write to
outdir with a new name using bash call wget"""
## get output name
output = os.path.join(outdir, outname+".sra")
## create a call string
call = "wget -q -r -nH --cut-dirs=9 -O "+output+" "+\
"ftp://ftp-trace.ncbi.nlm.nih.gov/"+\
"sra/sra-instant/reads/ByRun/sra/SRR/"+\
"{}/{}/{}.sra;".format(SRR[:6], SRR, SRR)
## call bash script
! $call
Explanation: Download sequence data using the SRA information
This is a function to make wget calls to download data based on SRA IDs
End of explanation
for ID, SRR in zip(indata.Library_Name_s, indata.Run_s):
wget_download(SRR, "empirical_2/fastq/", ID)
Explanation: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
End of explanation
%%bash
## convert sra files to fastq using fastq-dump tool
## output as gzipped into the fastq directory
fastq-dump --gzip -O empirical_2/fastq/ empirical_2/fastq/*.sra
## remove .sra files
rm empirical_2/fastq/*.sra
Explanation: Convert from SRA format to FASTQ format
End of explanation
%%bash
pyrad --version
%%bash
## create a new default params file
pyrad -n
%%bash
## substitute new parameters into file
sed -i '/## 1. /c\empirical_2/ ## 1. working directory ' params.txt
sed -i '/## 6. /c\TGCAGG ## 6. cutters ' params.txt
sed -i '/## 7. /c\30 ## 7. N processors ' params.txt
sed -i '/## 9. /c\6 ## 9. NQual ' params.txt
sed -i '/## 10./c\.85 ## 10. clust threshold ' params.txt
sed -i '/## 12./c\4 ## 12. MinCov ' params.txt
sed -i '/## 13./c\10 ## 13. maxSH ' params.txt
sed -i '/## 14./c\empirical_2_m4 ## 14. output name ' params.txt
sed -i '/## 18./c\empirical_2/fastq/*.gz ## 18. data location ' params.txt
sed -i '/## 29./c\2,2 ## 29. trim overhang ' params.txt
sed -i '/## 30./c\p,n,s ## 30. output formats ' params.txt
cat params.txt
Explanation: Make a params file for pyrad analysis
End of explanation
%%bash
pyrad -p params.txt -s 234567 >> log.txt 2>&1
%%bash
sed -i '/## 12./c\2 ## 12. MinCov ' params.txt
sed -i '/## 14./c\empirical_2_m2 ## 14. output name ' params.txt
%%bash
pyrad -p params.txt -s 7 >> log.txt 2>&1
Explanation: Assemble in pyrad
We assemble the data set at mincov=4 and mincov=2
End of explanation
## read in the data
s2dat = pd.read_table("empirical_2/stats/s2.rawedit.txt", header=0, nrows=74)
## print summary stats
print s2dat["passed.total"].describe()
## find which sample has the most raw data
maxraw = s2dat["passed.total"].max()
print "\nmost raw data in sample:"
print s2dat['sample '][s2dat['passed.total']==maxraw]
Explanation: Results
We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples.
Raw data amounts
The average number of raw reads per sample is 1.5M.
End of explanation
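To make the "raw data between any two samples" idea concrete, here is a small illustrative sketch (not in the original notebook) that builds the matrix of pairwise minimum raw-read counts from the s2 stats table loaded above.
import numpy as np
# pairwise minimum of raw read counts between samples (illustration only)
raw = s2dat["passed.total"].values
pairwise_min_raw = np.minimum.outer(raw, raw)    # one row/column per sample
print(pairwise_min_raw.shape)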
## read in the s3 results
s3dat = pd.read_table("empirical_2/stats/s3.clusters.txt", header=0, nrows=74)
## print summary stats
print "summary of means\n=================="
print s3dat['dpt.me'].describe()
## print summary stats
print "\nsummary of std\n=================="
print s3dat['dpt.sd'].describe()
## print summary stats
print "\nsummary of proportion lowdepth\n=================="
print pd.Series(1-s3dat['d>5.tot']/s3dat["total"]).describe()
## find which sample has the greatest depth of retained loci
max_hiprop = (s3dat["d>5.tot"]/s3dat["total"]).max()
print "\nhighest coverage in sample:"
print s3dat['taxa'][s3dat['d>5.tot']/s3dat["total"]==max_hiprop]
## print mean and std of coverage for the highest coverage sample
with open("empirical_2/clust.85/PHBR4.depths", 'rb') as indat:
depths = np.array(indat.read().strip().split(","), dtype=int)
print depths.mean(), depths.std()
Explanation: Look at distributions of coverage
pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std here is the std in means across samples. The std of depths within individuals is much higher.
End of explanation
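For reference, a hedged sketch (not in the original notebook) of how the within-sample spread of depths could be inspected directly from the per-sample .depths files; the glob pattern is an assumption based on the single file read below.
import glob
import numpy as np
for fname in sorted(glob.glob("empirical_2/clust.85/*.depths"))[:3]:   # first few samples only
    with open(fname) as indat:
        depths = np.array(indat.read().strip().split(","), dtype=int)
    print("{}: mean depth {:.1f}, std {:.1f}".format(fname, depths.mean(), depths.std()))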
import toyplot
import toyplot.svg
import numpy as np
## read in the depth information for this sample
with open("empirical_2/clust.85/PHBR4.depths", 'rb') as indat:
depths = np.array(indat.read().strip().split(","), dtype=int)
## make a barplot in Toyplot
canvas = toyplot.Canvas(width=350, height=300)
axes = canvas.axes(xlabel="Depth of coverage (N reads)",
ylabel="N loci",
label="dataset2/sample=PHBR4")
## select the loci with depth > 5 (kept)
keeps = depths[depths>5]
## plot kept and discarded loci
edat = np.histogram(depths, range(30)) # density=True)
kdat = np.histogram(keeps, range(30)) #, density=True)
axes.bars(edat)
axes.bars(kdat)
#toyplot.svg.render(canvas, "empirical_2_depthplot.svg")
Explanation: Plot the coverage for the sample with highest mean coverage
Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage.
End of explanation
cat empirical_2/stats/empirical_2_m4.stats
%%bash
head -n 10 empirical_2/stats/empirical_2_m2.stats
Explanation: Print final stats table
End of explanation
%%bash
## raxml argument w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_2/ \
-n empirical_2_m4 -s empirical_2/outfiles/empirical_2_m4.phy
%%bash
## raxml argument w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
-w /home/deren/Documents/RADmissing/empirical_2/ \
-n empirical_2_m2 -s empirical_2/outfiles/empirical_2_m2.phy
%%bash
head -n 20 empirical_2/RAxML_info.empirical_2_m4
%%bash
head -n 20 empirical_2/RAxML_info.empirical_2_m2
Explanation: Infer ML phylogeny in raxml as an unrooted tree
End of explanation
%load_ext rpy2.ipython
%%R -w 600 -h 800
library(ape)
tre <- read.tree("empirical_2/RAxML_bipartitions.empirical_2")
ltre <- ladderize(tre)
plot(ltre, cex=0.8, edge.width=2)
#nodelabels(ltre$node.label)
Explanation: Plot the tree in R using ape
End of explanation
%%R
mean(cophenetic.phylo(ltre))
Explanation: Get average phylo distances (GTRgamma distance)
End of explanation |
13,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FD_1D_DX4_DT4_ABS 1-D acoustic Finite-Difference modelling
GNU General Public License v3.0
Author
Step1: Input Parameter
Step2: Preparation
Step3: Create space and time vector
Step4: Source signal - Ricker-wavelet
Step5: Time stepping
Step6: Save seismograms | Python Code:
%matplotlib inline
import numpy as np
import time as tm
import matplotlib.pyplot as plt
Explanation: FD_1D_DX4_DT4_ABS 1-D acoustic Finite-Difference modelling
GNU General Public License v3.0
Author: Florian Wittkamp
Finite-Difference acoustic seismic wave simulation
Discretization of the first-order acoustic wave equation
Temporal fourth-order accuracy $O(\Delta T^4)$
Spatial fourth-order accuracy $O(\Delta X^4)$
Temporal discretization is based on the Adams-Bashforth method
Theory is available in:
Bohlen, T., & Wittkamp, F. (2016).
Three-dimensional viscoelastic time-domain finite-difference seismic modelling using the staggered Adams-Bashforth time integrator.
Geophysical Journal International, 204(3), 1781-1788.
Initialisation
End of explanation
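For orientation (an added note, derived from the coefficients that appear in the time loop further below): the staggered Adams-Bashforth update of the particle velocity implemented here reads
$$ v^{n+1} = v^{n} - \frac{\Delta t}{\rho}\left(\tfrac{13}{12}\,\partial_x p^{n} - \tfrac{5}{24}\,\partial_x p^{n-1} + \tfrac{1}{6}\,\partial_x p^{n-2} - \tfrac{1}{24}\,\partial_x p^{n-3}\right), $$
with the same weights applied to $\partial_x v$ for the pressure update, and the spatial derivative approximated by the fourth-order staggered stencil $\partial_x p \approx \frac{1}{\Delta x}\left(\tfrac{9}{8}(p_{i+1}-p_{i}) - \tfrac{1}{24}(p_{i+2}-p_{i-1})\right)$.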
# Discretization
c1=20 # Number of grid points per dominant wavelength
c2=0.5 # CFL-Number
nx=2000 # Number of grid points
T=10 # Total propagation time
# Source Signal
f0= 10 # Center frequency Ricker-wavelet
q0= 1 # Maximum amplitude Ricker-Wavelet
xscr = 100 # Source position (in grid points)
# Receiver
xrec1=400 # Position Reciever 1 (in grid points)
xrec2=800 # Position Reciever 2 (in grid points)
xrec3=1800 # Position Reciever 3 (in grid points)
# Velocity and density
modell_v = np.hstack((1000*np.ones((int(nx/2))),1500*np.ones((int(nx/2)))))
rho=np.hstack((1*np.ones((int(nx/2))),1.5*np.ones((int(nx/2)))))
Explanation: Input Parameter
End of explanation
# Init wavefields
vx=np.zeros(nx)
p=np.zeros(nx)
vx_x=np.zeros(nx)
p_x=np.zeros(nx)
vx_x2=np.zeros(nx)
p_x2=np.zeros(nx)
vx_x3=np.zeros(nx)
p_x3=np.zeros(nx)
vx_x4=np.zeros(nx)
p_x4=np.zeros(nx)
# Calculate first Lame-Parameter
l=rho * modell_v * modell_v
cmin=min(modell_v.flatten()) # Lowest P-wave velocity
cmax=max(modell_v.flatten()) # Highest P-wave velocity
fmax=2*f0 # Maximum frequency
dx=cmin/(fmax*c1) # Spatial discretization (in m)
dt=dx/(cmax)*c2 # Temporal discretization (in s)
lampda_min=cmin/fmax # Smallest wavelength
# Output model parameter:
print("Model size: x:",dx*nx,"in m")
print("Temporal discretization: ",dt," s")
print("Spatial discretization: ",dx," m")
print("Number of gridpoints per minimum wavelength: ",lampda_min/dx)
Explanation: Preparation
End of explanation
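An optional, illustrative check (not part of the original script) that the discretization just computed respects the two criteria it was built from; all names come from the cells above.
# echo the stability / dispersion targets used to construct dx and dt (illustration only)
print("CFL number actually used:", cmax * dt / dx, " (target c2 =", c2, ")")
print("Grid points per minimum wavelength:", (cmin / fmax) / dx, " (target c1 =", c1, ")")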
x=np.arange(0,dx*nx,dx) # Space vector
t=np.arange(0,T,dt) # Time vector
nt=np.size(t) # Number of time steps
# Plotting model
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.subplots_adjust(wspace=0.4,right=1.6)
ax1.plot(x,modell_v)
ax1.set_ylabel('VP in m/s')
ax1.set_xlabel('Depth in m')
ax1.set_title('P-wave velocity')
ax2.plot(x,rho)
ax2.set_ylabel('Density in g/cm^3')
ax2.set_xlabel('Depth in m')
ax2.set_title('Density');
Explanation: Create space and time vector
End of explanation
tau=np.pi*f0*(t-1.5/f0)
q=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2)
# Plotting source signal
plt.figure(3)
plt.plot(t,q)
plt.title('Source signal Ricker-Wavelet')
plt.ylabel('Amplitude')
plt.xlabel('Time in s')
plt.draw()
Explanation: Source signal - Ricker-wavelet
End of explanation
# Init Seismograms
Seismogramm=np.zeros((3,nt)); # Three seismograms
# Calculation of some coefficients
i_dx=1.0/(dx)
print("Starting time stepping...")
## Time stepping
for n in range(2,nt):
# Inject source wavelet
p[xscr]=p[xscr]+q[n]
# Update velocity
for kx in range(5,nx-4):
# Calculating spatial derivative
p_x[kx]=i_dx*9.0/8.0*(p[kx+1]-p[kx])-i_dx*1.0/24.0*(p[kx+2]-p[kx-1])
# Update velocity
vx[kx]=vx[kx]-dt/rho[kx]*(13.0/12.0*p_x[kx]-5.0/24.0*p_x2[kx]+1.0/6.0*p_x3[kx]-1.0/24.0*p_x4[kx])
# Save old spatial derivatives for the Adams-Bashforth method
np.copyto(p_x4,p_x3)
np.copyto(p_x3,p_x2)
np.copyto(p_x2,p_x)
# Update pressure
for kx in range(5,nx-4):
# Calculating spatial derivative
vx_x[kx]= i_dx*9.0/8.0*(vx[kx]-vx[kx-1])-i_dx*1.0/24.0*(vx[kx+1]-vx[kx-2])
# Update pressure
p[kx]=p[kx]-l[kx]*dt*(13.0/12.0*vx_x[kx]-5.0/24.0*vx_x2[kx]+1.0/6.0*vx_x3[kx]-1.0/24.0*vx_x4[kx])
# Save old spatial derivatives for the Adams-Bashforth method
np.copyto(vx_x4,vx_x3)
np.copyto(vx_x3,vx_x2)
np.copyto(vx_x2,vx_x)
# Save seismograms
Seismogramm[0,n]=p[xrec1]
Seismogramm[1,n]=p[xrec2]
Seismogramm[2,n]=p[xrec3]
print("Finished time stepping!")
Explanation: Time stepping
End of explanation
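As an aside (not part of the original code), the inner derivative loops above can be written equivalently with NumPy slicing, which is usually much faster; a sketch for the pressure derivative, using the same arrays:
# vectorized form of:
#   for kx in range(5, nx-4):
#       p_x[kx] = i_dx*9.0/8.0*(p[kx+1]-p[kx]) - i_dx*1.0/24.0*(p[kx+2]-p[kx-1])
p_x[5:nx-4] = i_dx * (9.0/8.0 * (p[6:nx-3] - p[5:nx-4])
                      - 1.0/24.0 * (p[7:nx-2] - p[4:nx-5]))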
## Save seismograms
np.save("Seismograms/FD_1D_DX4_DT4_ABS",Seismogramm)
## Plot seismograms
fig, (ax1, ax2, ax3) = plt.subplots(3, 1)
fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 )
ax1.plot(t,Seismogramm[0,:])
ax1.set_title('Seismogram 1')
ax1.set_ylabel('Amplitude')
ax1.set_xlabel('Time in s')
ax1.set_xlim(0, T)
ax2.plot(t,Seismogramm[1,:])
ax2.set_title('Seismogram 2')
ax2.set_ylabel('Amplitude')
ax2.set_xlabel('Time in s')
ax2.set_xlim(0, T)
ax3.plot(t,Seismogramm[2,:])
ax3.set_title('Seismogram 3')
ax3.set_ylabel('Amplitude')
ax3.set_xlabel('Time in s')
ax3.set_xlim(0, T);
Explanation: Save seismograms
End of explanation |
13,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ARDC Training
Step1: Browse the available Data Cubes
Step2: Pick a product
Use the platform and product names from the previous block to select a Data Cube.
Step3: Display Latitude-Longitude and Time Bounds of the Data Cube
Step4: Visualize Data Cube Region
Step5: Pick a smaller analysis region and display that region
Try to keep your region to less than 0.2-deg x 0.2-deg for rapid processing. You can click on the map above to find the Lat-Lon coordinates of any location. You will want to identify a region with an urban aree or some known vegetation. Pick a time window of a few months to a year so we can pick out some clear pixels.
Step6: Load the dataset and the required spectral bands or other parameters
After loading, you will view the Xarray dataset. Notice the dimensions represent the number of pixels in your latitude and longitude dimension as well as the number of time slices (time) in your time series.
Step7: Display Example Images
Single band visualization
For a quick inspection, let's look at one image. The code will allow the selection of any band (red, blue, green, nir, swir1, swir2) to produce a grey-scale image. Select the desired acquisition (time slice) in the block below. You can select from 1 to #, where the max value is the number of time slices noted in the block above. Change the comment statements below to select the bands for the first image.
Step8: Define Cloud Masking Function
Removes clouds and cloud shadows based on the Landsat pixel QA information
This is only for reference ... nothing to modify here
Step9: Set up plotting function (to be used later)
Nothing to modify here
Median Mosaic
Masks clouds from imagery using the median valued cloud-free pixels in the time series
Step10: Fractional Cover
Fractional Cover (FC) is used for landcover type estimation (vegetation, non-green vegetation, bare soil) of each pixel. We use a model from CSIRO (Juan Gerschmann) and apply it to a median mosaic.
Step11: Plotting Fractional Cover Results
Plot Bare Soil (bs), Photosynthetic Vegetation (pv) or Non Photosynthetic Vegetation (npv)
<br>
Plot a False Color RGB result where RGB = bs/pv/npv.
Step12: Spectral Indices
NDVI (vegetation) and NDBI (urbanization)
NDVI = Normalized Difference Vegetation Index
A derived index that correlates well with the existance of vegetation.
$$ NDVI = \frac{(NIR - RED)}{(NIR + RED)}$$
Step13: NDBI = Normalized Difference Build-Up Index
A derived index that correlates well with the existance of urbanization.
<br>
$$NDBI = \frac{(SWIR1 - NIR)}{(SWIR1 + NIR)}$$
Step14: Create a threshold plot
First we will define a minimum threshold and a maximum threshold. Then you will create a plot that colors the region between the threshold a single color (e.g. red) and the region outside the threshold will be BLACK or WHITE. Also, we will calculate the % of pixels and the number of pixels in the threshold range.
Step15: Plot NDVI Mosaic
Step16: Plot NDVI at time t
Step17: Select a single pixel and plot an index value over time | Python Code:
import datacube
import utils.data_cube_utilities.data_access_api as dc_api
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
api = dc_api.DataAccessApi()
dc = datacube.Datacube(app = 'ardc_task_c')
api.dc = dc
Explanation: ARDC Training: Python Notebooks
Task-C: Fractional Cover (FC) and Spectral Indices (NDBI and NDVI)
Import the Datacube Configuration
End of explanation
list_of_products = dc.list_products()
netCDF_products = list_of_products[list_of_products['format'] == 'NetCDF']
netCDF_products
Explanation: Browse the available Data Cubes
End of explanation
# Change the data platform and data cube here
platform = 'LANDSAT_7'
product = 'ls7_usgs_sr_scene'
Explanation: Pick a product
Use the platform and product names from the previous block to select a Data Cube.
End of explanation
from utils.data_cube_utilities.dc_time import _n64_to_datetime, dt_to_str
extents = api.get_full_dataset_extent(platform = platform, product = product, measurements=[])
latitude_extents = (min(extents['latitude'].values),max(extents['latitude'].values))
longitude_extents = (min(extents['longitude'].values),max(extents['longitude'].values))
time_extents = (min(extents['time'].values),max(extents['time'].values))
print("Latitude Extents:", latitude_extents)
print("Longitude Extents:", longitude_extents)
print("Time Extents:", list(map(dt_to_str, map(_n64_to_datetime, time_extents))))
Explanation: Display Latitude-Longitude and Time Bounds of the Data Cube
End of explanation
## The code below renders a map that can be used to orient yourself with the region.
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude = latitude_extents, longitude = longitude_extents)
Explanation: Visualize Data Cube Region
End of explanation
## Vietnam - Central Lam Dong Province ##
# longitude_extents = (105.2, 105.3)
# latitude_extents = (11.25, 11.35)
## Sierra Leone - Freetown
latitude_extents = (8.35, 8.51)
longitude_extents = (-13.30, -13.13)
time_extents = ('2015-01-01', '2015-12-31')
display_map(latitude = latitude_extents, longitude = longitude_extents)
Explanation: Pick a smaller analysis region and display that region
Try to keep your region to less than 0.2-deg x 0.2-deg for rapid processing. You can click on the map above to find the Lat-Lon coordinates of any location. You will want to identify a region with an urban area or some known vegetation. Pick a time window of a few months to a year so we can pick out some clear pixels.
End of explanation
landsat_dataset = dc.load(latitude = latitude_extents,
longitude = longitude_extents,
platform = platform,
time = time_extents,
product = product,
measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2', 'pixel_qa'])
landsat_dataset
#view the dimensions and sample content from the cube
Explanation: Load the dataset and the required spectral bands or other parameters
After loading, you will view the Xarray dataset. Notice the dimensions represent the number of pixels in your latitude and longitude dimension as well as the number of time slices (time) in your time series.
End of explanation
acquisition_number = 0
# select an acquisition number from 0 (first time layer) to "time" using the array limits above
%matplotlib inline
#landsat_dataset.red.isel(time = acquisition_number).plot(cmap = "Greys")
landsat_dataset.green.isel(time = acquisition_number).plot(cmap = "Greys")
#landsat_dataset.blue.isel(time = acquisition_number).plot(cmap = "Greys")
#landsat_dataset.nir.isel(time = acquisition_number).plot(cmap = "Greys")
#landsat_dataset.swir1.isel(time = acquisition_number).plot(cmap = "Greys")
#landsat_dataset.swir2.isel(time = acquisition_number).plot(cmap = "Greys")
Explanation: Display Example Images
Single band visualization
For a quick inspection, let's look at one image. The code will allow the selection of any band (red, blue, green, nir, swir1, swir2) to produce a grey-scale image. Select the desired acquisition (time slice) in the block below. You can select from 1 to #, where the max value is the number of time slices noted in the block above. Change the comment statements below to select the bands for the first image.
End of explanation
import numpy as np
def generate_cloud_mask(dataset, include_shadows = False):
#Create boolean Masks for clear and water pixels
clear_pixels = dataset.pixel_qa.values == 2 + 64
water_pixels = dataset.pixel_qa.values == 4 + 64
shadow_pixels= dataset.pixel_qa.values == 8 + 64
a_clean_mask = np.logical_or(clear_pixels, water_pixels)
if include_shadows:
a_clean_mask = np.logical_or(a_clean_mask, shadow_pixels)
return np.invert(a_clean_mask)
def remove_clouds(dataset, include_shadows = False):
#Create boolean Masks for clear and water pixels
clear_pixels = dataset.pixel_qa.values == 2 + 64
water_pixels = dataset.pixel_qa.values == 4 + 64
shadow_pixels= dataset.pixel_qa.values == 8 + 64
a_clean_mask = np.logical_or(clear_pixels, water_pixels)
if include_shadows:
a_clean_mask = np.logical_or(a_clean_mask, shadow_pixels)
return dataset.where(a_clean_mask)
cloud_mask = generate_cloud_mask(landsat_dataset)
cloudless = remove_clouds(landsat_dataset) #landsat_dataset.where(image_is_clean)
Explanation: Define Cloud Masking Function
Removes clouds and cloud shadows based on the Landsat pixel QA information
This is only for reference ... nothing to modify here
End of explanation
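A small illustrative follow-up (not in the original notebook): since cloud_mask is a boolean array with True marking cloudy or shadowed pixels, the overall fraction of clear observations can be checked in one line.
# fraction of observations that pass the cloud/shadow mask (illustration only)
clear_fraction = 1.0 - cloud_mask.mean()
print("Fraction of clear observations:", round(float(clear_fraction), 3))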
from utils.data_cube_utilities.dc_mosaic import create_median_mosaic
def median_mosaic(dataset):
# The mask here is based on pixel_qa products. It comes bundled in with most Landsat Products.
cloud_free_boolean_mask = np.invert(generate_cloud_mask(dataset))
return create_median_mosaic(dataset, clean_mask = cloud_free_boolean_mask)
median_composite = median_mosaic(landsat_dataset)
median_composite.nir.plot(cmap = "Greys")
from utils.data_cube_utilities.dc_rgb import rgb
rgb(median_composite)
Explanation: Set up plotting function (to be used later)
Nothing to modify here
Median Mosaic
Masks clouds from imagery using the median valued cloud-free pixels in the time series
End of explanation
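For reference, a plain-xarray sketch of the same idea (the notebook itself uses the create_median_mosaic utility above); this is an assumption-level illustration, not the utility's actual implementation.
# mask cloudy pixels, then take the per-pixel median over time (illustration only)
simple_median_mosaic = remove_clouds(landsat_dataset).median(dim='time', skipna=True)
simple_median_mosaic.nir.plot(cmap="Greys")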
from utils.data_cube_utilities.dc_fractional_coverage_classifier import frac_coverage_classify
frac_classes = frac_coverage_classify(median_composite, clean_mask = np.ones(median_composite.pixel_qa.shape).astype(np.bool))
Explanation: Fractional Cover
Fractional Cover (FC) is used for landcover type estimation (vegetation, non-green vegetation, bare soil) of each pixel. We use a model from CSIRO (Juan Gerschmann) and apply it to a median mosaic.
End of explanation
frac_classes.bs.plot(cmap = "Greys")
#frac_classes.pv.plot(cmap = "Greys")
#frac_classes.npv.plot(cmap = "Greys")
rgb(frac_classes, bands = ['bs', 'pv', 'npv'])
Explanation: Plotting Fractional Cover Results
Plot Bare Soil (bs), Photosynthetic Vegetation (pv) or Non Photosynthetic Vegetation (npv)
<br>
Plot a False Color RGB result where RGB = bs/pv/npv.
End of explanation
def NDVI(dataset):
return (dataset.nir - dataset.red)/(dataset.nir + dataset.red)
Explanation: Spectral Indices
NDVI (vegetation) and NDBI (urbanization)
NDVI = Normalized Difference Vegetation Index
A derived index that correlates well with the existence of vegetation.
$$ NDVI = \frac{(NIR - RED)}{(NIR + RED)}$$
End of explanation
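A quick worked example with made-up reflectance values (illustration only):
# e.g. a well-vegetated pixel: NIR = 0.50, Red = 0.10
nir, red = 0.50, 0.10
print((nir - red) / (nir + red))   # ~0.67; bare soil and water give values near or below 0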
def NDBI(dataset):
return (dataset.swir1 - dataset.nir)/(dataset.swir1 + dataset.nir)
landsat_mosaic = median_mosaic(landsat_dataset)
ndbi = NDBI(landsat_mosaic) # Urbanization - Reds
ndvi_mosaic = NDVI(landsat_mosaic) # Dense Vegetation - Greens
(ndbi).plot(cmap = "Reds")
(ndvi_mosaic).plot(cmap = "Greens")
Explanation: NDBI = Normalized Difference Build-Up Index
A derived index that correlates well with the existence of urbanization.
<br>
$$NDBI = \frac{(SWIR1 - NIR)}{(SWIR1 + NIR)}$$
End of explanation
# Select the time slice for the NDVI output (first slice=0)
t = 0
ndvi_dataset_at_time_t = NDVI(landsat_dataset).isel(time = t)
mask_at_time_t = generate_cloud_mask(landsat_dataset.isel(time = t))
# Define the threshold region bounds
minimum_threshold = 0.6
maximum_threshold = 0.9
from matplotlib.ticker import FuncFormatter
import matplotlib.pyplot as plt
def threshold_plot(da, min_threshold, max_threshold, mask = None, width = 10, *args, **kwargs):
color_in = np.array([255,0,0])
color_out = np.array([0,0,0])
color_cloud = np.array([255,255,255])
array = np.zeros((*da.values.shape, 3)).astype(np.int16)
inside = np.logical_and(da.values > min_threshold, da.values < max_threshold)
outside = np.invert(inside)
masked = np.zeros(da.values.shape).astype(bool) if mask is None else mask
array[inside] = color_in
array[outside] = color_out
array[masked] = color_cloud
def figure_ratio(ds, fixed_width = 10):
width = fixed_width
height = len(ds.latitude) * (fixed_width / len(ds.longitude))
return (width, height)
fig, ax = plt.subplots(figsize = figure_ratio(da,fixed_width = width))
lat_formatter = FuncFormatter(lambda y_val, tick_pos: "{0:.3f}".format(da.latitude.values[tick_pos] ))
lon_formatter = FuncFormatter(lambda x_val, tick_pos: "{0:.3f}".format(da.longitude.values[tick_pos]))
ax.xaxis.set_major_formatter(lon_formatter)
ax.yaxis.set_major_formatter(lat_formatter)
plt.title("Threshold: {} < x < {}".format(min_threshold, max_threshold))
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.imshow(array, *args, **kwargs)
plt.show()
Explanation: Create a threshold plot
First we will define a minimum threshold and a maximum threshold. Then you will create a plot that colors the region between the threshold a single color (e.g. red) and the region outside the threshold will be BLACK or WHITE. Also, we will calculate the % of pixels and the number of pixels in the threshold range.
End of explanation
# Plot the NDVI threshold product using a cloud-filtered mosaic
threshold_plot(ndvi_mosaic, minimum_threshold, maximum_threshold, width = 10)
Explanation: Plot NDVI Mosaic
End of explanation
# Plot the NDVI threshold product using a single time slice (one scene)
threshold_plot(ndvi_dataset_at_time_t, minimum_threshold, maximum_threshold, width = 10, mask = mask_at_time_t,)
def threshold_count(da, min_threshold, max_threshold, mask = None):
def count_not_nans(arr):
return np.count_nonzero(~np.isnan(arr))
in_threshold = np.logical_and( da.values > min_threshold, da.values < max_threshold)
total_non_cloudy = count_not_nans(da.values) if mask is None else np.sum(mask)
return dict(total = np.size(da.values),
total_non_cloudy = total_non_cloudy,
inside = np.nansum(in_threshold),
outside = total_non_cloudy - np.nansum(in_threshold)
)
def threshold_percentage(da, min_threshold, max_threshold, mask = None):
counts = threshold_count(da, min_threshold, max_threshold, mask = mask)
return dict(percent_inside_threshold = (counts["inside"] / counts["total"]) * 100.0,
percent_outside_threshold = (counts["outside"] / counts["total"]) * 100.0,
percent_clouds = ( 100.0-counts["total_non_cloudy"] / counts["total"] * 100.0))
threshold_count(ndvi_mosaic,
minimum_threshold,
maximum_threshold)
threshold_percentage(ndvi_mosaic,
minimum_threshold,
maximum_threshold)
threshold_count(ndvi_dataset_at_time_t,
minimum_threshold,
maximum_threshold,
mask = mask_at_time_t)
threshold_percentage(ndvi_dataset_at_time_t,
minimum_threshold,
maximum_threshold,
mask = mask_at_time_t )
Explanation: Plot NDVI at time t
End of explanation
pixel_lat = 11.45
pixel_lon = 105.40
pixel = NDVI(remove_clouds(landsat_dataset)).sel(latitude = pixel_lat,
longitude = pixel_lon,
method = 'nearest') # nearest neighbor selection
plt.figure(figsize = (20,5))
plt.scatter(pixel.time.values, pixel.values)
Explanation: Select a single pixel and plot an index value over time
End of explanation |
13,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting <span style="font-variant
Step1: Imports
We will use the package ply to remove the
<span style="font-variant
Step2: Token Declarations
We begin by declaring the tokens. Note that the variable tokens is a keyword of ply to define the names of the token classes. In this case, we have declared nine different tokens.
- HEAD_START will match the tag <head> that starts the definition of the
<span style="font-variant
Step3: Definition of the States
Once we are inside an <span style="font-variant
Step4: Token Definitions
We proceed to give the definition of the tokens. Note that none of the function defined below
returns a token. Rather all of these function print the transformation of the
<span style="font-variant
Step5: The Definition of the Token SCRIPT_START
Once the scanner reads the opening tag <script> it switches into the state script. In this state it will continue to read and discard characters until it sees the closing tag /script>.
Step6: The Definition of the Token `LINEBREAK``
Groups of newline characters are condensed into a single newline character.
As we are not interested in the variable t.lexer.lineno in this example, we don't have to count the newlines.
This token is active in any state.
Step7: The Definition of the Token TAG
The token TAG is defined as any string that starts with the character < and ends with the character
>. Betweens these two characters there has to be a nonzero number of characters that are different from
the character >. The text of the token is discarded.
Step8: The Definition of the Token NAMED_ENTITY
In order to support named <span style="font-variant
Step9: The regular expression &[a-zA-Z]+;? searches for <span style="font-variant
Step10: The Definition of the Token UNICODE
The regular expression &\#[0-9]+;? searches for <span style="font-variant
Step11: The Definition of the Token ANY
The regular expression . matches any character that is different from a newline character. These characters are printed unmodified. Note that the scanner tries the regular expressions for a given state in the order that they are defined in this notebook. Therefore, it is crucial that the function t_ANY is defined after all other token definitions for the <em style="color
Step12: The Definition of the Token HEAD_END
The regular expression </head> matches the closing head tag. Note that is regular expression is only
active in state header as the name of this function starts with t_header. Once the closing tag has been found, the function lexer.begin switches the lexer back into the state INITIAL, which is the
<em style="color
Step13: The Definition of the Token SCRIPT_END
If the scanner is either in the state script, the function
t_header_script_END recognizes the matching closing tag and switches back to the state
INITIAL.
The regular expression </script> matches the closing script tag. Note that this regular expression is only
active in state script. Once the closing tag has been found, the function lexer.begin switches the lexer back into the state INITIAL, which is the start state of the scanner.
Step14: The Definition of the Token ANY
If the scanner is either in the state header or the state script, the function
t_header_script_ANY eats up all characters without echoing them.
Step15: Error Handling
The function t_error is called when a substring at the beginning of the input can not be matched by any of the regular expressions defined in the various tokens. In our implementation we print the first character that could not be matched, discard this character and continue.
Step16: The function t_header_error is called when a substring at the beginning of the input can not be matched by any of the regular expressions defined in the various tokens and the scanner is in state header.
Step17: The function t_script_error is called when a substring at the beginning of the input can not be matched by any of the regular expressions defined in the various tokens and the scanner is in state script.
Step18: Running the Scanner
The line below is necessary to trick ply.lex into assuming this program is written in an ordinary python file instead of a Jupyter notebook.
Step19: The line below generates the scanner. Because the option debug=True is set, we can see the regular expression that is generated for scanning.
Step20: Next, we feed our input string into the generated scanner.
Step21: In order to scan the data that we provided in the last line, we iterate over all tokens generated by our scanner. | Python Code:
data = \
'''
<html>
<head>
<meta charset="utf-8">
<title>Homepage of Prof. Dr. Karl Stroetmann</title>
<link type="text/css" rel="stylesheet" href="style.css" />
<link href="http://fonts.googleapis.com/css?family=Rochester&subset=latin,latin-ext"
rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Pacifico&subset=latin,latin-ext"
rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Cabin+Sketch&subset=latin,latin-ext" rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Sacramento" rel="stylesheet" type="text/css">
</head>
<body>
<hr/>
<div id="table">
<header>
<h1 id="name">Prof. Dr. Karl Stroetmann</h1>
</header>
<div id="row1">
<div class="right">
<a id="dhbw" href="http://www.ba-stuttgart.de">Duale Hochschule Baden-Württemberg</a>
<br/>Coblitzallee 1-9
<br/>68163 Mannheim
<br/>Germany
<br>
<br/>Office: Raum 344B
<br/>Phone: +49 621 4105-1376
<br/>Fax: +49 621 4105-1194
<br/>Skype: karlstroetmann
</div>
<div id="links">
<strong class="some">Some links:</strong>
<ul class="inlink">
<li class="inlink">
My <a class="inlink" href="https://github.com/karlstroetmann?tab=repositories">lecture notes</a>,
as well as the programs presented in class, can be found
at <br>
<a class="inlink" href="https://github.com/karlstroetmann?tab=repositories">https://github.com/karlstroetmann</a>.
</li>
<li class="inlink">Most of my papers can be found at <a class="inlink" href="https://www.researchgate.net/">researchgate.net</a>.</li>
<li class="inlink">The programming language SetlX can be downloaded at <br>
<a href="http://randoom.org/Software/SetlX"><tt class="inlink">http://randoom.org/Software/SetlX</tt></a>.
</li>
</ul>
</div>
</div>
</div>
<div id="intro">
As I am getting old and wise, I have to accept the limits of
my own capabilities. I have condensed these deep philosophical
insights into a most beautiful pearl of poetry. I would like
to share these humble words of wisdom:
<div class="poetry">
I am a teacher by profession, <br>
mostly really by obsession; <br>
But even though I boldly try, <br>
I just cannot teach <a href="http://img1.wikia.nocookie.net/__cb20070831020747/uncyclopedia/images/a/a2/Flying_Pig.jpg" id="fp">pigs</a> to fly.</br>
Instead, I slaughter them and fry.
</div>
<div class="citation">
<div class="quote">
Any sufficiently advanced poetry is indistinguishable from divine wisdom.
</div>
<div id="sign">His holiness Pope Hugo Ⅻ.</div>
</div>
</div>
</div>
</body>
</html>
'''
Explanation: Converting <span style="font-variant:small-caps;">Html</span> to Text
This notebook shows how we can use the package ply
to extract the text that is embedded in an <span style="font-variant:small-caps;">Html</span> file.
In order to be concise, it only supports a small subset of
<span style="font-variant:small-caps;">Html</span>. Below is the content of my old
<a href="http://wwwlehre.dhbw-stuttgart.de/~stroetma/">web page</a> that I had used when I still worked at the DHBW Stuttgart. The goal of this notebook is to write
a scanner that is able to extract the text from this web page.
End of explanation
import ply.lex as lex
Explanation: Imports
We will use the package ply to remove the
<span style="font-variant:small-caps;">Html</span> tags and extract the text that
is embedded in the <span style="font-variant:small-caps;">Html</span> shown above.
In this example, we will only use the scanner that is provided by the module ply.lex.
Hence we import the module ply.lex that contains the scanner generator from ply.
End of explanation
tokens = [ 'HEAD_START',
'HEAD_END',
'SCRIPT_START',
'SCRIPT_END',
'TAG',
'LINEBREAK',
'NAMED_ENTITY',
'UNICODE',
'ANY'
]
Explanation: Token Declarations
We begin by declaring the tokens. Note that the variable tokens is a keyword of ply to define the names of the token classes. In this case, we have declared nine different tokens.
- HEAD_START will match the tag <head> that starts the definition of the
<span style="font-variant:small-caps;">Html</span> header.
- HEAD_END will match the tag </head> that ends the definition of the
<span style="font-variant:small-caps;">Html</span> header.
- SCRIPT_START will match the tag <script> that starts embedded JavaScript code.
- SCRIPT_END will match the tag </script> that ends embedded JavaScript code.
- TAG is a token that represents arbitrary <span style="font-variant:small-caps;">Html</span> tags.
- LINEBREAK is a token that will match the newline character \n at the end of a line.
- NAMED_ENTITY is a token that represents named
<span style="font-variant:small-caps;">Html5</span> entities.
- UNICODE is a token that represents a unicode entity.
- ANY is a token that matches any character.
End of explanation
states = [ ('header', 'exclusive'),
('script', 'exclusive')
]
Explanation: Definition of the States
Once we are inside an <span style="font-variant:small-caps;">Html</span> header or inside of some
JavaScript code the rules of the scanning game change. Therefore, we declare two new <em style="color:blue">exclusive scanner states</em>:
- header is the state the scanner is in while it is scanning an
<span style="font-variant:small-caps;">Html</span> header.
- script is the state of the scanner while scanning JavaScript code.
These states are exclusive states and hence the other token definitions do not apply in these
states.
End of explanation
def t_HEAD_START(t):
r'<head>'
t.lexer.begin('header')
Explanation: Token Definitions
We proceed to give the definition of the tokens. Note that none of the function defined below
returns a token. Rather all of these function print the transformation of the
<span style="font-variant:small-caps;">Html</span> that they have matched.
The Definition of the Token HEAD_START
Once the scanner reads the opening tag <head> it switches into the state header. The function begin of the lexer can be used to switch into a different scanner state. In the state header, the scanner continues to read and discard characters until the closing tag </head> is encountered. Note that this token is only recognized in the state START.
End of explanation
def t_SCRIPT_START(t):
r'<script[^>]+>'
t.lexer.begin('script')
Explanation: The Definition of the Token SCRIPT_START
Once the scanner reads the opening tag <script> it switches into the state script. In this state it will continue to read and discard characters until it sees the closing tag /script>.
End of explanation
def t_LINEBREAK(t):
r'\n+'
print()
Explanation: The Definition of the Token LINEBREAK
Groups of newline characters are condensed into a single newline character.
As we are not interested in the variable t.lexer.lineno in this example, we don't have to count the newlines.
Note that, because header and script are declared as exclusive states, this rule is only active in the default INITIAL state; newline characters inside those states are consumed by the state-specific rule defined further below.
End of explanation
def t_TAG(t):
r'<[^>]+>'
pass
Explanation: The Definition of the Token TAG
The token TAG is defined as any string that starts with the character < and ends with the character
>. Between these two characters there has to be a nonzero number of characters that are different from
the character >. The text of the token is discarded.
End of explanation
from html.entities import html5
html5['auml']
Explanation: The Definition of the Token NAMED_ENTITY
In order to support named <span style="font-variant:small-caps;">Html</span> entities we need to import
the dictionary html5 from the module html.entities. For every named
<span style="font-variant:small-caps;">Html</span> entity e, html[e] is the unicode symbol that is specified by e.
End of explanation
def t_NAMED_ENTITY(t):
r'&[a-zA-Z]+;?'
if t.value[-1] == ';': # ';' is not part of the entity name
entity_name = t.value[1:-1] # so chop it off
else:
entity_name = t.value[1:]
unicode_char = html5[entity_name]
print(unicode_char, end='')
Explanation: The regular expression &[a-zA-Z]+;? searches for <span style="font-variant:small-caps;">Html</span>
entity names. These are strings that start with the character & followed by the name of the entity, optionally followed by the character ;. If a Unicode entity name is found, the corresponding character is printed.
End of explanation
def t_UNICODE(t):
r'&\#[0-9]+;?'
if t.value[-1] == ';':
number = t.value[2:-1]
else:
number = t.value[2:]
print(chr(int(number)), end='')
chr(int('8555'))
Explanation: The Definition of the Token UNICODE
The regular expression &\#[0-9]+;? searches for <span style="font-variant:small-caps;">Html</span> entities that specify a unicode character numerically. The corresponding strings start with the character &
followed by the character # followed by digits and are optionally ended by the character ;.
Note that we had to escape the character # with a backslash because otherwise this character would signal the begin of a comment.
Note further that the function chr takes a number and returns the corresponding unicode character.
For example, chr(int(128034)) returns the character '🐢'.
End of explanation
def t_ANY(t):
r'.'
print(t.value, end='')
Explanation: The Definition of the Token ANY
The regular expression . matches any character that is different from a newline character. These characters are printed unmodified. Note that the scanner tries the regular expressions for a given state in the order that they are defined in this notebook. Therefore, it is crucial that the function t_ANY is defined after all other token definitions for the <em style="color:blue">start state</em> are given. The start state is the default state of the scanner and therefore the state the scanner is in when it starts scanning.
End of explanation
def t_header_HEAD_END(t):
r'</head>'
t.lexer.begin('INITIAL')
Explanation: The Definition of the Token HEAD_END
The regular expression </head> matches the closing head tag. Note that this regular expression is only
active in state header as the name of this function starts with t_header. Once the closing tag has been found, the function lexer.begin switches the lexer back into the state INITIAL, which is the
<em style="color:blue">start state</em> of the scanner. In the state INITIAL, all token definitions are active, that do not start with either t_header or t_script.
End of explanation
def t_script_SCRIPT_END(t):
r'</script>'
t.lexer.begin('INITIAL')
Explanation: The Definition of the Token SCRIPT_END
If the scanner is in the state script, the function
t_script_SCRIPT_END recognizes the matching closing tag and switches back to the state
INITIAL.
The regular expression </script> matches the closing script tag. Note that this regular expression is only
active in state script. Once the closing tag has been found, the function lexer.begin switches the lexer back into the state INITIAL, which is the start state of the scanner.
End of explanation
def t_header_script_ANY(t):
r'.|\n'
pass
Explanation: The Definition of the Token ANY
If the scanner is either in the state header or the state script, the function
t_header_script_ANY eats up all characters without echoing them.
End of explanation
def t_error(t):
print(f"Illegal character: '{t.value[0]}'")
t.lexer.skip(1)
Explanation: Error Handling
The function t_error is called when a substring at the beginning of the input can not be matched by any of the regular expressions defined in the various tokens. In our implementation we print the first character that could not be matched, discard this character and continue.
End of explanation
def t_header_error(t):
print(f"Illegal character in state 'header': '{t.value[0]}'")
t.lexer.skip(1)
Explanation: The function t_header_error is called when a substring at the beginning of the input can not be matched by any of the regular expressions defined in the various tokens and the scanner is in state header.
End of explanation
def t_script_error(t):
print(f"Illegal character in state 'script': '{t.value[0]}'")
t.lexer.skip(1)
Explanation: The function t_script_error is called when a substring at the beginning of the input can not be matched by any of the regular expressions defined in the various tokens and the scanner is in state script.
End of explanation
__file__ = 'main'
Explanation: Running the Scanner
The line below is necessary to trick ply.lex into assuming this program is written in an ordinary python file instead of a Jupyter notebook.
End of explanation
lexer = lex.lex(debug=True)
Explanation: The line below generates the scanner. Because the option debug=True is set, we can see the regular expression that is generated for scanning.
End of explanation
lexer.input(data)
Explanation: Next, we feed our input string into the generated scanner.
End of explanation
def scan(lexer):
for t in lexer:
pass
scan(lexer)
Explanation: In order to scan the data that we provided in the last line, we iterate over all tokens generated by our scanner.
End of explanation |
13,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook is based on one put together by Mark Krumholz and has been modified to suit the purposes of this course, including expansion/modification of explanations and additional exercises. The original can be found at https
Step1: 1. Plotting 2D Data
1.1 Generating 2D data
Thus far we have dealt mostly with 1D data
Step2: The first two lines should look familiar
Step3: The first two lines compute the r and phi coordinates in a polar coordinate system. The second two lines define two functions that we'll play with. The function z1 is just the sinc function in 2D, where we take sinc(r) instead of sinc(x). We've put a factor of 2 inside the sin to change the period from 2 pi to pi. The second function z2 just takes z1 and multiplies it by a factor that varies between 1.5 and 0.5 depending on the angle phi, with a period of pi/2 radians in the phi direction.
1.2 Contour plots
How can we represent data like this? One simple way is through a contour plot. The archetypical contour plot is a topographic map
Step4: The output looks like this
Step5: Let's break down these commands so we can understand what they're doing. First, gca() stands for get current axes. This function lets us grab the axes we're currently plotting to -- something that will become important when we get to multiple axes in a few minutes. Then the command set_aspect('equal'), as applied to these axes, tells pyplot that we want the aspect ratio of the plot to be such that the spacing of points on the axes is equal, even if that means not filling the whole plot window.
The result of this procedure should look something like this
Step6: We can also label the contours using the clabel() command. Let's remake this plot using some labelled contours, placing the contours starting at -0.4 and going in intervals of 0.4.
Step7: Note that, to add labels, we assign a variable to the output of the contour function, and then pass that variable to the clabel function. That way we can have multiple sets of contours at once, and label them separately, since each will be referred to by its own set of variables.
Here is the result
Step8: Here is the output
Step9: Note that the colors option to the contour command specifies that all the contours should be drawn in white. The default colors will just blend in with the color fillings already used, so we won't be able to see the lines. Also, by default contours have the nice feature that, for contours that are all the same color, the negative ones will be drawn dashed. Here's the result
Step10: Notice that the syntax of imshow() is a little different than contour, and its operation is a bit different.
For imshow, you don't give arrays of x and y values. This is for two reasons. First, imshow is often used as a way of displaying images, in which case there aren't really x and y coordinates, just pixels values. Second, for contour, you are allowed to have x and y points that aren't evenly spaced, whereas for imshow() you aren't. Thus you don't need to specify every x and y point, just the minimum and maximum.
The mechanism for specifying the minimum and maximum is the extent keyword, which you set equal to a tuple of 4 numbers (xmin, xmax, ymin, ymax). If you don't specify the extent keyword, xmin and ymin are taken to be 0, and xmax and ymax to the number of elements in the data in the x and y directions.
The aspect ratio can be set in the imshow command itself, as opposed to through modifying the axes. This is done via the keyword aspect. Setting it to equal forces equal spacing on the two axes.
Finally, the origin keyword specifies where (0,0) is placed. This is another legacy of imshow also being used for image display. When displaying images, the normal convention is that pixel (0,0) is in the top left corner, (0,1) is right below it, (0,2) is right below that, etc. This is the default for imshow(). To display things as we normally plot data, where (0,0) is the origin, (0,1) is above it, etc., we specify origin='lower', which says to place the origin in the lower left rather than upper left corner.
Here's the result
Step11: The output looks like this
Step12: Note that although the underlying data are the same, changing vmin and vmax has the effect of highlighting different aspects of the data. Choice of colorbars and colorscales therefore has aesthetic and, in some cases, ethical effects (for example, you could obscure something that you didn't want a viewer to see with a calculated colorbar choice).
More information on imshow can be found at http
Step13: <div class=hw>
### Exercise 2
--------------------
Play around with the example above by changing various things unitl you understand everything its doing. Then, use what you learned to create a 2x2 set of raster plots generated by your radial array function from exercise 1. Each should have a colorbar, axis labels and a title, and the entire plot should have a title at the top. | Python Code:
from numpy import *
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: <small><i>This notebook is based on one put together by Mark Krumholz and has been modified to suit the purposes of this course, including expansion/modification of explanations and additional exercises. The original can be found at https://sites.google.com/a/ucsc.edu/krumholz/teaching-and-courses/ast119_w15/class-6.</i></small>
Names: [Insert Your Names Here]
Lab 8 - Plotting in 2D
Lab 8 Contents
Plotting 2D Data
Generating 2D data
Contour plots
Raster, or "heatmap", plots
Multipanel Plots
End of explanation
x=arange(-2*pi,2*pi,0.01)
y=arange(-2*pi,2*pi,0.01)
xx, yy = meshgrid(x, y, indexing='ij')
# the .shape attribute is very useful with 2D arrays
x.shape
xx.shape
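A small added check (for illustration) of what meshgrid with indexing='ij' returns: xx[0, :] is filled with x[0] and yy[:, 0] with y[0], matching the convention described in this notebook.
# illustration of the indexing='ij' convention
print(xx.shape, yy.shape)              # both (len(x), len(y))
print((xx[0, :] == x[0]).all())        # True
print((yy[:, 0] == y[0]).all())        # True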
Explanation: 1. Plotting 2D Data
1.1 Generating 2D data
Thus far we have dealt mostly with 1D data: something that can be described as a function y(x). However, we also often have 2D data, which is represented as a function of two variables z(x,y). Matplotlib also provides mechanisms to visually represent data of this sort. To start with, let's make some 2D data that we can play with. Here's the first step:
End of explanation
r = sqrt(xx**2 + yy**2)
r.shape
phi = arccos(xx/r)   # use the 2D xx so phi is defined on the same grid as r
z1 = sin(2*r) / r
z2 = z1 * (1 + 0.5*cos(4*phi))
Explanation: The first two lines should look familiar: they just create two arrays, x and y, that go from -2 pi to 2 pi with a spacing of 0.01. The third line invokes a function you probably haven't seen before, called meshgrid, which is part of the numpy library. The meshgrid function does something very useful. In this case we've passed it two one-dimensional arrays, representing x and y coordinates. We can think of these two arrays as defining a 2D grid of points, with formed by combining all the possible x values with all the possible y values. The meshgrid function gives us back a pair of two-dimensional arrays, xx and yy, that represent the x and y coordinates of these mesh points. Specifically, xx[0,:] = x[0], xx[1,:] = x[1], xx[2,:] = x[2], ..., and similarly yy[:,0] = y[0], yy[:,1] = y[1], yy[:,2] = y[2], and so forth. The keyword indexing='ij' specifies that the x coordinate goes with the first dimension of the output arrays, and the y coordinate with the second. Finally, note that, though we've used it in 2D, the meshgrid command will work for an arbitrary number of dimensions.
Next, let's make some 2D functions from this data. The utility of meshgrid becomes clear as soon as we want to do this:
End of explanation
plt.contour(xx, yy, z1)
Explanation: The first two lines compute the r and phi coordinates in a polar coordinate system. The second two lines define two functions that we'll play with. The function z1 is just the sinc function in 2D, where we take sinc(r) instead of sinc(x). We've put a factor of 2 inside the sin to change the period from 2 pi to pi. The second function z2 just takes z1 and multiplies it by a factor that varies between 1.5 and 0.5 depending on the angle phi, with a period of pi/2 radians in the phi direction.
1.2 Contour plots
How can we represent data like this? One simple way is through a contour plot. The archetypical contour plot is a topographic map: it shows curves of constant height, or more generally equal z value, as a function of (x,y) position. Matplotlib produces contour plots using the contour() command. Basic usage is simple: contour(xx, yy, z). The first two arguments are the x and y coordinates, and these can be either 1D arrays whose size matches the corresponding size of the z array, or 2D arrays whose shape matches that of the z array. Let's see what happens when we do this:
End of explanation
plt.gca().set_aspect('equal')
plt.contour(xx, yy, z1)
Explanation: The output looks like this:
This is a contour plot of the data. One minor annoyance is that the spacing on the x and y axes isn't equal, so things look stretched. We can fix that by changing the aspect ratio of the axes. To do that, we use the commands
End of explanation
plt.gca().set_aspect('equal')
plt.contour(xx, yy, z1, arange(-0.4, 2.0, 0.1))
Explanation: Let's break down these commands so we can understand what they're doing. First, gca() stands for get current axes. This function lets us grab the axes we're currently plotting to -- something that will become important when we get to multiple axes in a few minutes. Then the command set_aspect('equal'), as applied to these axes, tells pyplot that we want the aspect ratio of the plot to be such that the spacing of points on the axes is equal, even if that means not filling the whole plot window.
The result of this procedure should look something like this:
For this to be more useful in a quantitative sense, it's helpful to have some control over the values at which contours are placed. Fortunately, this is easy to do with an optional additional argument to the contour function. After x, y, and z, one can pass in a fourth argument describing the contours. This can be either a single number, which just specifies how many contours to use, or an array giving the exact values to use for the contours. For example, for our sinc function we know that the maximum is 2.0, and the minimum is -0.43 (you can show this analytically, or just check by doing amax(z1) and amin(z1)), so we can choose contours to go from -0.4 to 1.0 in a spacing of 0.1.
End of explanation
plt.gca().set_aspect('equal')
cs = plt.contour(xx, yy, z1, arange(-0.4, 2.0, 0.4))
plt.clabel(cs)
Explanation: We can also label the contours using the clabel() command. Let's remake this plot using some labelled contours, placing the contours starting at -0.4 and going in intervals of 0.4.
End of explanation
plt.gca().set_aspect('equal')
plt.contourf(xx, yy, z2, arange(-0.7, 3.21, 0.5))
Explanation: Note that, to add labels, we assign a variable to the output of the contour function, and then pass that variable to the clabel function. That way we can have multiple sets of contours at once, and label them separately, since each will be referred to by its own set of variables.
Here is the result:
As usual, there are numerous options to control every aspect of the contours, including line styles and thicknesses, colors, label placement, font, etc. For details on contour, see http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.contour. For details on clabel, see http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.clabel.
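For instance, a hedged sketch of a few of those options (the keyword names are standard matplotlib; the particular values are arbitrary choices):
cs = plt.contour(xx, yy, z1, arange(-0.4, 2.0, 0.4),
                 colors='black', linewidths=2, linestyles='solid')
plt.clabel(cs, fmt='%.1f', fontsize=8)   # choose the label number format and font size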
We can also make filled contours, which may be a little easier to view. Filled contours are made by the contourf() command, which acts almost exactly like contour. To make filled contours for our function z2, we can do
End of explanation
plt.gca().set_aspect('equal')
plt.contourf(xx, yy, z2, arange(-0.7, 3.21, 0.5))
cs=plt.contour(xx, yy, z2, arange(-0.7,3.21,0.5), colors='white')
plt.clabel(cs)
Explanation: Here is the output:
Note that the contourf command doesn't draw contour lines, it just fills the space in between. If you want to label the contours, you can do that by calling contour after calling contourf. For example,
End of explanation
plt.imshow(z2, aspect='equal', origin='lower', extent=(-2*pi, 2*pi, -2*pi, 2*pi))
Explanation: Note that the colors option to the contour command specifies that all the contours should be drawn in white. The default colors will just blend in with the color fillings already used, so we won't be able to see the lines. Also, by default contours have the nice feature that, for contours that are all the same color, the negative ones will be drawn dashed. Here's the result:
1.3 Raster, or "heatmap", plots
A second useful way of representing quantitative information is with raster plots, also sometimes called heatmap or colormap plots. In such a plot, we assign every z value to a color, so that the color at a given (x,y) position is determined by the value of z there. It's something like a filled contour plot, but with colors assigned continuously rather than in discrete blocks.
The pyplot tool for making raster plots is called imshow(), short for image-show. Basic usage is as follows:
End of explanation
plt.imshow(z2, aspect='equal', origin='lower', extent=(-2*pi, 2*pi, -2*pi, 2*pi))
plt.colorbar()
Explanation: Notice that the syntax of imshow() is a little different than contour, and its operation is a bit different.
For imshow, you don't give arrays of x and y values. This is for two reasons. First, imshow is often used as a way of displaying images, in which case there aren't really x and y coordinates, just pixels values. Second, for contour, you are allowed to have x and y points that aren't evenly spaced, whereas for imshow() you aren't. Thus you don't need to specify every x and y point, just the minimum and maximum.
The mechanism for specifying the minimum and maximum is the extent keyword, which you set equal to a tuple of 4 numbers (xmin, xmax, ymin, ymax). If you don't specify the extent keyword, xmin and ymin are taken to be 0, and xmax and ymax to the number of elements in the data in the x and y directions.
The aspect ratio can be set in the imshow command itself, as opposed to through modifying the axes. This is done via the keyword aspect. Setting it to equal forces equal spacing on the two axes.
Finally, the origin keyword specifies where (0,0) is placed. This is another legacy of imshow also being used for image display. When displaying images, the normal convention is that pixel (0,0) is in the top left corner, (0,1) is right below it, (0,2) is right below that, etc. This is the default for imshow(). To display things as we normally plot data, where (0,0) is the origin, (0,1) is above it, etc., we specify origin='lower', which says to place the origin in the lower left rather than upper left corner.
Here's the result:
This is useful, but even more useful is if we can add some information on what the colors mean. The tool for doing this is a color bar, which can be added via the colorbar() command:
End of explanation
plt.imshow(z2, aspect='equal', origin='lower', extent=(-2*pi,2*pi,-2*pi,2*pi), vmin=-1, vmax=3)
plt.colorbar()
Explanation: The output looks like this:
It is also possible to control the color scale using the vmin and vmax keywords in the imshow command. These allow one to specify the minimum and maximum values for the color mapping by hand; by default the minimum used is the minimum of the input data, and the maximum is the maximum of the input data. Here's an example:
End of explanation
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(10,3), sharey=True)
f.suptitle('Three plots')
ax1.set_aspect('equal')   # set_aspect returns None, so there is no need to store its return value
ax1.contourf(xx, yy, z2, arange(-0.7, 3.21, 0.5))
cs=ax1.contour(xx, yy, z2, arange(-0.7,3.21,0.5), colors='white')
ax1.clabel(cs)
ax1.set_xlabel("distance from center")
ax1.set_ylabel("distance from center")
im2 = ax2.imshow(z2, aspect='equal', origin='lower', extent=(-2*pi, 2*pi, -2*pi, 2*pi))
f.colorbar(im2, ax=ax2)
im3 = ax3.imshow(z2, aspect='equal', origin='lower', extent=(-2*pi,2*pi,-2*pi,2*pi), vmin=-1, vmax=3)
f.colorbar(im3, ax=ax3)
plt.tight_layout()
Explanation: Note that although the underlying data are the same, changing vmin and vmax has the effect of highlighting different aspects of the data. Choice of colorbars and colorscales therefore has aesthetic and, in some cases, ethical effects (for example, you could obscure something that you didn't want a viewer to see with a calculated colorbar choice).
More information on imshow can be found at http://matplotlib.org/users/image_tutorial.html and http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.imshow.
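As an illustrative aside (not part of the original lab; 'gray' and 'hot' are standard matplotlib colormap names), the same data can read very differently under different colormaps:
plt.imshow(z2, aspect='equal', origin='lower', extent=(-2*pi, 2*pi, -2*pi, 2*pi), cmap='gray')
plt.colorbar()
plt.figure()
plt.imshow(z2, aspect='equal', origin='lower', extent=(-2*pi, 2*pi, -2*pi, 2*pi), cmap='hot')
plt.colorbar()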
<div class=hw>
### Exercise 1
---------------------
Write a function that creates a 2D array where the value of each pixel is the distance from some specified central pixel to that pixel. The function should assume an array size of 100x100, but should allow for specification of larger arrays. It should also assume that the centermost pixel of the array is the "central pixel" from which to calculate distances, unless otherwise specified. Once you have it working, create contour plots with labeled contour levels AND raster plots with colorbars for each of the following:
a) a 100x100 array with the central pixel as the reference pixel
b) a 200x200 array with the lower left corner as the reference pixel
c) a 100x300 array with pixel (75,200) as the reference pixel
## 2. Multi-Panel Plots
Thus far we have been dealing with a single figure, containing a single set of axes. However, it is often desirable to work with multiple plots at once, either in separate figures, or inside a single figure. Matplotlib makes it possible for us to do this. To understand how this works, we need to start with some basic terminology and concepts. The highest level object that a user of pyplot usually deals with is a figure. One can think of a figure as representing a single plotting window, or, if we're writing output to files, a single file.
At the next level down, most things that we can plot inside a figure will go into axes. Axes are exactly what they sound like: a pair of x and y axes, which are characterized by having some range, as well as auxiliary data like tick marks, axis labels, titles, legends, etc. A figure can contain multiple axes, in which case multiple things will be drawn inside the same plotting window. Axes also contain information about their position within a figure. Figures and axes are both examples of graphics containers: they are things into which graphical elements can be placed.
At the next level down are things like lines, filled polygons (filled regions between lines, for example), text, etc. These are known as graphics primitives. Collectively, primitives and containers are known as Artists. They are the basic graphical elements out of which plots are built. Commands like plot, fill_between, etc., produce primitives and attach them to containers.
One figure and one axis are active at any given time. The active figure and axis are the ones into which the lines or other graphics primitives created by commands like plot will be placed. It is possible to place graphics in other figures and axes than the active one by manually specifying where they should go; the active set simply gives the default location.
Figures and the figure command
The easiest case is working with multiple figures. These can be created using the plt.figure() command as follows
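The code cell that originally followed this sentence is not included in this excerpt, so here is a minimal sketch of the kind of thing it would contain (the particular plots are just placeholders):
fig1 = plt.figure()        # create a first figure; it becomes the active one
plt.plot(x, sin(x))        # this plot lands in figure 1
fig2 = plt.figure()        # create a second figure; it is now the active one
plt.plot(x, cos(x))        # this plot lands in figure 2
plt.figure(fig1.number)    # switch back so figure 1 is active again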
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: <div class=hw>
### Exercise 2
--------------------
Play around with the example above by changing various things until you understand everything it's doing. Then, use what you learned to create a 2x2 set of raster plots generated by your radial array function from exercise 1. Each should have a colorbar, axis labels and a title, and the entire plot should have a title at the top.
End of explanation |
13,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NOTICE
Step1: The starting point was a list of 326,716 Portuguese words that I compiled from several sources. Here I read the file, check that all the words are unique, and show a sample with the first 10 and the last 10
Step2: The first filter was to keep only words of up to 5 characters
Step3: Next, I kept only the words made up exclusively of lowercase letters from "a" to "z", with no accents or hyphens
Step4: Examining the list obtained so far, I noticed there were many words that were just plurals, such as "acres", "botas", etc. With this code I removed the words that are identical to other words except for an "s" tacked onto the end. For example, I removed "abios" because the list already has the word "abio".
Step5: We are getting close to the 7776 words we need. Here I kept only the words with 4 or 5 letters
Step6: We still need to discard 74 words. I searched the Web for a list of swear words and found one on this page. I loaded the words, converting them to lowercase and building a set. The {… for … in …} notation is a set comprehension and produces a set object. The result was a set of 301 swear words.
Step7: Turning the tam4ou5 list into a set makes it very easy to remove the swear words with a simple set subtraction. I converted the result back into a list so I could shuffle it next.
Step8: We still have 32 words too many. In the page layout I did later, the word "mamum" caused problems because it has 3 "m"s and so ends up wider than the others. Let's look at every word with more than one "m"
Step9: OK, let's remove this one from the list
Step10: There are still 31 left over. To keep exactly the right amount, I shuffled the limpas list and took a slice with the first 6**5 words of the list.
Step11: Let's check once more that we have the right number of words and that none is repeated
Step12: With the final list in hand, all that is left is to generate the keys from 11111 to 66666 to build an index that is easy to use with 5 six-sided dice.
Step13: The last step is to write the 7776palavras.txt file. Using the wc command in the shell, I check that the file has the expected number of lines | Python Code:
6**5
Explanation: NOTICE: This project has moved to the repository https://github.com/ramalho/dadoware
Diceware: a secure method for generating passphrases
Source: The Diceware Passphrase Home Page
Compiling the word list
In this notebook I started with a large list of Portuguese words, which I filtered successively with several criteria until I got close to the number required by the diceware method. I then shuffled the 7808 remaining words and sliced the list to keep the exact count: 7776. In the last step, I generated the 7776palavras.txt file with lines from "11111 abaci" to "66666 zurpa".
To begin, I confirmed the number of words needed when choosing words with 5 dice:
End of explanation
completa = [lin.strip() for lin in open('palavras.txt').readlines()]
len(completa), len(completa) == len(set(completa))
completa[:10], completa[-10:]
Explanation: The starting point was a list of 326,716 Portuguese words that I compiled from several sources. Here I read the file, check that all the words are unique, and show a sample with the first 10 and the last 10:
End of explanation
ate5 = [p for p in completa if len(p) <= 5]
len(ate5)
Explanation: The first filter was to keep only words of up to 5 characters:
End of explanation
import string
so_ascii = []
for palavra in ate5:
if all(c in string.ascii_lowercase for c in palavra):
so_ascii.append(palavra)
len(so_ascii)
so_ascii[:10], so_ascii[-10:]
Explanation: Next, I kept only the words made up exclusively of lowercase letters from "a" to "z", with no accents or hyphens:
End of explanation
ocorrencias = set(so_ascii)
singulares = []
print('Plurais removidas:')
for pal in so_ascii:
if pal[-1] == 's' and pal[:-1] in ocorrencias:
print(pal, end=' ')
else:
singulares.append(pal)
len(singulares)
Explanation: Examining the list obtained so far, I noticed there were many words that were just plurals, such as "acres", "botas", etc. With this code I removed the words that are identical to other words except for an "s" tacked onto the end. For example, I removed "abios" because the list already has the word "abio".
End of explanation
tam4ou5 = [p for p in singulares if len(p) in (4,5)]
len(tam4ou5)
sobrando = len(tam4ou5) - 6**5
sobrando
Explanation: We are getting close to the 7776 words we need. Here I kept only the words with 4 or 5 letters:
End of explanation
palavroes = {lin.strip().lower() for lin in open('palavroes.txt', encoding='latin1').readlines()}
len(palavroes)
Explanation: We still need to discard 74 words. I searched the Web for a list of swear words and found one on this page. I loaded the words, converting them to lowercase and building a set. The {… for … in …} notation is a set comprehension and produces a set object. The result was a set of 301 swear words.
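A tiny illustration of that notation (not from the original notebook):
{w.lower() for w in ['Ab', 'ab', 'CD']}   # -> {'ab', 'cd'}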
End of explanation
limpas = list(set(tam4ou5) - palavroes)
len(limpas), len(limpas) - 6**5
Explanation: Turning the tam4ou5 list into a set makes it very easy to remove the swear words with a simple set subtraction. I converted the result back into a list so I could shuffle it next.
End of explanation
import collections
mmm = []
for palavra in limpas:
if collections.Counter(palavra)['m'] > 1:
mmm.append(palavra)
mmm
Explanation: We still have 32 words too many. In the page layout I did later, the word "mamum" caused problems because it has 3 "m"s and so ends up wider than the others. Let's look at every word with more than one "m":
End of explanation
limpas.remove('mamum')
len(limpas), len(limpas) - 6**5
Explanation: OK, let's remove this one from the list:
End of explanation
import random
random.shuffle(limpas)
final = sorted(limpas[:6**5])
len(final), final[:10], final[-10:]
Explanation: There are still 31 left over. To keep exactly the right amount, I shuffled the limpas list and took a slice with the first 6**5 words of the list.
End of explanation
len(final), len(final) == len(set(final))
Explanation: Let's check once more that we have the right number of words and that none is repeated:
End of explanation
import itertools
dados5 = list(''.join(dados) for dados in itertools.product('123456', repeat=5))
len(dados5)
pares = list(zip(dados5, final))
pares[:10], pares[-10:]
Explanation: With the final list in hand, all that is left is to generate the keys from 11111 to 66666 to build an index that is easy to use with 5 six-sided dice.
End of explanation
with open('7776palavras.txt', 'wt', encoding='ascii') as saida:
for par in pares:
saida.write('%s %s\n' % par)
!wc '7776palavras.txt'
Explanation: The last step is to write the 7776palavras.txt file. Using the wc command in the shell, I check that the file has the expected number of lines:
End of explanation |
13,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
version 1.0.2
Introduction to Machine Learning with Apache Spark
Predicting Movie Ratings
One of the most common uses of big data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. This lab will demonstrate how we can use Apache Spark to recommend movies to a user. We will start with some basic techniques, and then use the Spark MLlib library's Alternating Least Squares method to make more sophisticated predictions.
For this lab, we will use a subset dataset of 500,000 ratings we have included for you into your VM (and on Databricks) from the movielens 10M stable benchmark rating dataset. However, the same code you write will work for the full dataset, or their latest dataset of 21 million ratings.
In this lab
Step3: Part 0
Step4: In this lab we will be examining subsets of the tuples we create (e.g., the top rated movies by users). Whenever we examine only a subset of a large dataset, there is the potential that the result will depend on the order we perform operations, such as joins, or how the data is partitioned across the workers. What we want to guarantee is that we always see the same results for a subset, independent of how we manipulate or store the data.
We can do that by sorting before we examine a subset. You might think that the most obvious choice when dealing with an RDD of tuples would be to use the sortByKey() method. However this choice is problematic, as we can still end up with different results if the key is not unique.
Note
Step6: Even though the two lists contain identical tuples, the difference in ordering sometimes yields a different ordering for the sorted RDD (try running the cell repeatedly and see if the results change or the assertion fails). If we only examined the first two elements of the RDD (e.g., using take(2)), then we would observe different answers - that is a really bad outcome as we want identical input data to always yield identical output. A better technique is to sort the RDD by both the key and value, which we can do by combining the key and value into a single string and then sorting on that string. Since the key is an integer and the value is a unicode string, we can use a function to combine them into a single unicode string (e.g., unicode('%.3f' % key) + ' ' + value) before sorting the RDD using sortBy().
Step7: If we just want to look at the first few elements of the RDD in sorted order, we can use the takeOrdered method with the sortFunction we defined.
Step9: Part 1
Step10: (1b) Movies with Highest Average Ratings
Now that we have a way to calculate the average ratings, we will use the getCountsAndAverages() helper function with Spark to determine movies with highest average ratings.
The steps you should perform are
Step11: (1c) Movies with Highest Average Ratings and more than 500 reviews
Now that we have an RDD of the movies with highest average ratings, we can use Spark to determine the 20 movies with highest average ratings and more than 500 reviews.
Apply a single RDD transformation to movieNameWithAvgRatingsRDD to limit the results to movies with ratings from more than 500 people. We then use the sortFunction() helper function to sort by the average rating to get the movies in order of their rating (highest rating first). You will end up with an RDD of the form
Step12: Using a threshold on the number of reviews is one way to improve the recommendations, but there are many other good ways to improve quality. For example, you could weight ratings by the number of ratings.
Part 2
Step14: After splitting the dataset, your training set has about 293,000 entries and the validation and test sets each have about 97,000 entries (the exact number of entries in each dataset varies slightly due to the random nature of the randomSplit() transformation.
(2b) Root Mean Square Error (RMSE)
In the next part, you will generate a few different models, and will need a way to decide which model is best. We will use the Root Mean Square Error (RMSE) or Root Mean Square Deviation (RMSD) to compute the error of each model. RMSE is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSE serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSE is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent.
The RMSE is the square root of the average value of the square of (actual rating - predicted rating) for all users and movies for which we have the actual rating. Versions of Spark MLlib beginning with Spark 1.4 include a RegressionMetrics modiule that can be used to compute the RMSE. However, since we are using Spark 1.3.1, we will write our own function.
Write a function to compute the sum of squared error given predictedRDD and actualRDD RDDs. Both RDDs consist of tuples of the form (UserID, MovieID, Rating)
Given two ratings RDDs, x and y of size n, we define RMSE as follows
Step15: (2c) Using ALS.train()
In this part, we will use the MLlib implementation of Alternating Least Squares, ALS.train(). ALS takes a training dataset (RDD) and several parameters that control the model creation process. To determine the best values for the parameters, we will use ALS to train several models, and then we will select the best model and use the parameters from that model in the rest of this lab exercise.
The process we will use for determining the best model is as follows
Step16: (2d) Testing Your Model
So far, we used the trainingRDD and validationRDD datasets to select the best model. Since we used these two datasets to determine what model is best, we cannot use them to test how good the model is - otherwise we would be very vulnerable to overfitting. To decide how good our model is, we need to use the testRDD dataset. We will use the bestRank you determined in part (2c) to create a model for predicting the ratings for the test dataset and then we will compute the RMSE.
The steps you should perform are
Step17: (2e) Comparing Your Model
Looking at the RMSE for the results predicted by the model versus the values in the test set is one way to evaluate the quality of our model. Another way to evaluate the model is to evaluate the error from a test set where every rating is the average rating for the training set.
The steps you should perform are
Step18: You now have code to predict how users will rate movies!
Part 3
Step19: The user ID 0 is unassigned, so we will use it for your ratings. We set the variable myUserID to 0 for you. Next, create a new RDD myRatingsRDD with your ratings for at least 10 movie ratings. Each entry should be formatted as (myUserID, movieID, rating) (i.e., each entry should be formatted in the same way as trainingRDD). As in the original dataset, ratings should be between 1 and 5 (inclusive). If you have not seen at least 10 of these movies, you can increase the parameter passed to take() in the above cell until there are 10 movies that you have seen (or you can also guess what your rating would be for movies you have not seen).
Step20: (3b) Add Your Movies to Training Dataset
Now that you have ratings for yourself, you need to add your ratings to the training dataset so that the model you train will incorporate your preferences. Spark's union() transformation combines two RDDs; use union() to create a new training dataset that includes your ratings and the data in the original training dataset.
Step21: (3c) Train a Model with Your Ratings
Now, train a model with your ratings added and the parameters you used in part (2c)
Step22: (3d) Check RMSE for the New Model with Your Ratings
Compute the RMSE for this new model on the test set.
For the prediction step, we reuse testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extracted from testRDD. The RDD has the form
Step23: (3e) Predict Your Ratings
So far, we have only used the predictAll method to compute the error of the model. Here, use the predictAll to predict what ratings you would give to the movies that you did not already provide ratings for.
The steps you should perform are
Step24: (3f) Predict Your Ratings
We have our predicted ratings. Now we can print out the 25 movies with the highest predicted ratings.
The steps you should perform are | Python Code:
import sys
import os
from test_helper import Test
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab4', 'small')
ratingsFilename = os.path.join(baseDir, inputPath, 'ratings.dat.gz')
moviesFilename = os.path.join(baseDir, inputPath, 'movies.dat')
Explanation: version 1.0.2
Introduction to Machine Learning with Apache Spark
Predicting Movie Ratings
One of the most common uses of big data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. This lab will demonstrate how we can use Apache Spark to recommend movies to a user. We will start with some basic techniques, and then use the Spark MLlib library's Alternating Least Squares method to make more sophisticated predictions.
For this lab, we will use a subset dataset of 500,000 ratings we have included for you into your VM (and on Databricks) from the movielens 10M stable benchmark rating dataset. However, the same code you write will work for the full dataset, or their latest dataset of 21 million ratings.
In this lab:
Part 0: Preliminaries
Part 1: Basic Recommendations
Part 2: Collaborative Filtering
Part 3: Predictions for Yourself
As mentioned during the first Learning Spark lab, think carefully before calling collect() on any datasets. When you are using a small dataset, calling collect() and then using Python to get a sense for the data locally (in the driver program) will work fine, but this will not work when you are using a large dataset that doesn't fit in memory on one machine. Solutions that call collect() and do local analysis that could have been done with Spark will likely fail in the autograder and not receive full credit.
Code
This assignment can be completed using basic Python and pySpark Transformations and Actions. Libraries other than math are not necessary. With the exception of the ML functions that we introduce in this assignment, you should be able to complete all parts of this homework using only the Spark functions you have used in prior lab exercises (although you are welcome to use more features of Spark if you like!).
End of explanation
numPartitions = 2
rawRatings = sc.textFile(ratingsFilename).repartition(numPartitions)
rawMovies = sc.textFile(moviesFilename)
def get_ratings_tuple(entry):
Parse a line in the ratings dataset
Args:
entry (str): a line in the ratings dataset in the form of UserID::MovieID::Rating::Timestamp
Returns:
tuple: (UserID, MovieID, Rating)
items = entry.split('::')
return int(items[0]), int(items[1]), float(items[2])
def get_movie_tuple(entry):
Parse a line in the movies dataset
Args:
entry (str): a line in the movies dataset in the form of MovieID::Title::Genres
Returns:
tuple: (MovieID, Title)
items = entry.split('::')
return int(items[0]), items[1]
ratingsRDD = rawRatings.map(get_ratings_tuple).cache()
moviesRDD = rawMovies.map(get_movie_tuple).cache()
ratingsCount = ratingsRDD.count()
moviesCount = moviesRDD.count()
print 'There are %s ratings and %s movies in the datasets' % (ratingsCount, moviesCount)
print 'Ratings: %s' % ratingsRDD.take(3)
print 'Movies: %s' % moviesRDD.take(3)
assert ratingsCount == 487650
assert moviesCount == 3883
assert moviesRDD.filter(lambda (id, title): title == 'Toy Story (1995)').count() == 1
assert (ratingsRDD.takeOrdered(1, key=lambda (user, movie, rating): movie)
== [(1, 1, 5.0)])
Explanation: Part 0: Preliminaries
We read in each of the files and create an RDD consisting of parsed lines.
Each line in the ratings dataset (ratings.dat.gz) is formatted as:
UserID::MovieID::Rating::Timestamp
Each line in the movies (movies.dat) dataset is formatted as:
MovieID::Title::Genres
The Genres field has the format
Genres1|Genres2|Genres3|...
The format of these files is uniform and simple, so we can use Python split() to parse their lines.
Parsing the two files yields two RDDS
For each line in the ratings dataset, we create a tuple of (UserID, MovieID, Rating). We drop the timestamp because we do not need it for this exercise.
For each line in the movies dataset, we create a tuple of (MovieID, Title). We drop the Genres because we do not need them for this exercise.
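As a quick, hedged illustration (the sample lines below are made up to match the formats described above rather than read from the files), the two helpers defined in the cell above behave like this:
print get_ratings_tuple(u'1::914::3.0::978301968')                           # -> (1, 914, 3.0)
print get_movie_tuple(u"1::Toy Story (1995)::Animation|Children's|Comedy")   # -> (1, u'Toy Story (1995)')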
End of explanation
tmp1 = [(1, u'alpha'), (2, u'alpha'), (2, u'beta'), (3, u'alpha'), (1, u'epsilon'), (1, u'delta')]
tmp2 = [(1, u'delta'), (2, u'alpha'), (2, u'beta'), (3, u'alpha'), (1, u'epsilon'), (1, u'alpha')]
oneRDD = sc.parallelize(tmp1)
twoRDD = sc.parallelize(tmp2)
oneSorted = oneRDD.sortByKey(True).collect()
twoSorted = twoRDD.sortByKey(True).collect()
print oneSorted
print twoSorted
assert set(oneSorted) == set(twoSorted) # Note that both lists have the same elements
assert twoSorted[0][0] < twoSorted.pop()[0] # Check that it is sorted by the keys
assert oneSorted[0:2] != twoSorted[0:2] # Note that the subset consisting of the first two elements does not match
Explanation: In this lab we will be examining subsets of the tuples we create (e.g., the top rated movies by users). Whenever we examine only a subset of a large dataset, there is the potential that the result will depend on the order we perform operations, such as joins, or how the data is partitioned across the workers. What we want to guarantee is that we always see the same results for a subset, independent of how we manipulate or store the data.
We can do that by sorting before we examine a subset. You might think that the most obvious choice when dealing with an RDD of tuples would be to use the sortByKey() method. However this choice is problematic, as we can still end up with different results if the key is not unique.
Note: It is important to use the unicode type instead of the string type as the titles are in unicode characters.
Consider the following example, and note that while the sets are equal, the printed lists are usually in different order by value, although they may randomly match up from time to time.
You can try running this multiple times. If the last assertion fails, don't worry about it: that was just the luck of the draw. And note that in some environments the results may be more deterministic.
End of explanation
def sortFunction(tuple):
Construct the sort string (does not perform actual sorting)
Args:
tuple: (rating, MovieName)
Returns:
sortString: the value to sort with, 'rating MovieName'
key = unicode('%.3f' % tuple[0])
value = tuple[1]
return (key + ' ' + value)
print oneRDD.sortBy(sortFunction, True).collect()
print twoRDD.sortBy(sortFunction, True).collect()
Explanation: Even though the two lists contain identical tuples, the difference in ordering sometimes yields a different ordering for the sorted RDD (try running the cell repeatedly and see if the results change or the assertion fails). If we only examined the first two elements of the RDD (e.g., using take(2)), then we would observe different answers - that is a really bad outcome as we want identical input data to always yield identical output. A better technique is to sort the RDD by both the key and value, which we can do by combining the key and value into a single string and then sorting on that string. Since the key is an integer and the value is a unicode string, we can use a function to combine them into a single unicode string (e.g., unicode('%.3f' % key) + ' ' + value) before sorting the RDD using sortBy().
End of explanation
oneSorted1 = oneRDD.takeOrdered(oneRDD.count(),key=sortFunction)
twoSorted1 = twoRDD.takeOrdered(twoRDD.count(),key=sortFunction)
print 'one is %s' % oneSorted1
print 'two is %s' % twoSorted1
assert oneSorted1 == twoSorted1
Explanation: If we just want to look at the first few elements of the RDD in sorted order, we can use the takeOrdered method with the sortFunction we defined.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# First, implement a helper function `getCountsAndAverages` using only Python
def getCountsAndAverages(IDandRatingsTuple):
Calculate average rating
Args:
IDandRatingsTuple: a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...))
Returns:
tuple: a tuple of (MovieID, (number of ratings, averageRating))
pid = IDandRatingsTuple[0]
cnt = len(IDandRatingsTuple[1])
avg = float(sum(IDandRatingsTuple[1]))/cnt
return (pid, (cnt, avg))
# TEST Number of Ratings and Average Ratings for a Movie (1a)
Test.assertEquals(getCountsAndAverages((1, (1, 2, 3, 4))), (1, (4, 2.5)),
'incorrect getCountsAndAverages() with integer list')
Test.assertEquals(getCountsAndAverages((100, (10.0, 20.0, 30.0))), (100, (3, 20.0)),
'incorrect getCountsAndAverages() with float list')
Test.assertEquals(getCountsAndAverages((110, xrange(20))), (110, (20, 9.5)),
'incorrect getCountsAndAverages() with xrange')
Explanation: Part 1: Basic Recommendations
One way to recommend movies is to always recommend the movies with the highest average rating. In this part, we will use Spark to find the name, number of ratings, and the average rating of the 20 movies with the highest average rating and more than 500 reviews. We want to filter our movies with high ratings but fewer than or equal to 500 reviews because movies with few reviews may not have broad appeal to everyone.
(1a) Number of Ratings and Average Ratings for a Movie
Using only Python, implement a helper function getCountsAndAverages() that takes a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...)) and returns a tuple of (MovieID, (number of ratings, averageRating)). For example, given the tuple (100, (10.0, 20.0, 30.0)), your function should return (100, (3, 20.0))
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# From ratingsRDD with tuples of (UserID, MovieID, Rating) create an RDD with tuples of
# the (MovieID, iterable of Ratings for that MovieID)
movieIDsWithRatingsRDD = (ratingsRDD
.map(lambda (u_id,m_id,rating) : (m_id,rating))
.groupByKey())
print 'movieIDsWithRatingsRDD: %s\n' % movieIDsWithRatingsRDD.take(3)
# Using `movieIDsWithRatingsRDD`, compute the number of ratings and average rating for each movie to
# yield tuples of the form (MovieID, (number of ratings, average rating))
movieIDsWithAvgRatingsRDD = movieIDsWithRatingsRDD.map(getCountsAndAverages)
print 'movieIDsWithAvgRatingsRDD: %s\n' % movieIDsWithAvgRatingsRDD.take(3)
# To `movieIDsWithAvgRatingsRDD`, apply RDD transformations that use `moviesRDD` to get the movie
# names for `movieIDsWithAvgRatingsRDD`, yielding tuples of the form
# (average rating, movie name, number of ratings)
movieNameWithAvgRatingsRDD = (moviesRDD
.join(movieIDsWithAvgRatingsRDD)
.map(lambda (id, (name, (num, avg))): (avg, name, num)))
print 'movieNameWithAvgRatingsRDD: %s\n' % movieNameWithAvgRatingsRDD.take(3)
# TEST Movies with Highest Average Ratings (1b)
Test.assertEquals(movieIDsWithRatingsRDD.count(), 3615,
'incorrect movieIDsWithRatingsRDD.count() (expected 3615)')
movieIDsWithRatingsTakeOrdered = movieIDsWithRatingsRDD.takeOrdered(3)
Test.assertTrue(movieIDsWithRatingsTakeOrdered[0][0] == 1 and
len(list(movieIDsWithRatingsTakeOrdered[0][1])) == 993,
'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[0] (expected 993)')
Test.assertTrue(movieIDsWithRatingsTakeOrdered[1][0] == 2 and
len(list(movieIDsWithRatingsTakeOrdered[1][1])) == 332,
'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[1] (expected 332)')
Test.assertTrue(movieIDsWithRatingsTakeOrdered[2][0] == 3 and
len(list(movieIDsWithRatingsTakeOrdered[2][1])) == 299,
'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[2] (expected 299)')
Test.assertEquals(movieIDsWithAvgRatingsRDD.count(), 3615,
'incorrect movieIDsWithAvgRatingsRDD.count() (expected 3615)')
Test.assertEquals(movieIDsWithAvgRatingsRDD.takeOrdered(3),
[(1, (993, 4.145015105740181)), (2, (332, 3.174698795180723)),
(3, (299, 3.0468227424749164))],
'incorrect movieIDsWithAvgRatingsRDD.takeOrdered(3)')
Test.assertEquals(movieNameWithAvgRatingsRDD.count(), 3615,
'incorrect movieNameWithAvgRatingsRDD.count() (expected 3615)')
Test.assertEquals(movieNameWithAvgRatingsRDD.takeOrdered(3),
[(1.0, u'Autopsy (Macchie Solari) (1975)', 1), (1.0, u'Better Living (1998)', 1),
(1.0, u'Big Squeeze, The (1996)', 3)],
'incorrect movieNameWithAvgRatingsRDD.takeOrdered(3)')
Explanation: (1b) Movies with Highest Average Ratings
Now that we have a way to calculate the average ratings, we will use the getCountsAndAverages() helper function with Spark to determine movies with highest average ratings.
The steps you should perform are:
Recall that the ratingsRDD contains tuples of the form (UserID, MovieID, Rating). From ratingsRDD create an RDD with tuples of the form (MovieID, Python iterable of Ratings for that MovieID). This transformation will yield an RDD of the form: [(1, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e7c90>), (2, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e79d0>), (3, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e7610>)]. Note that you will only need to perform two Spark transformations to do this step.
Using movieIDsWithRatingsRDD and your getCountsAndAverages() helper function, compute the number of ratings and average rating for each movie to yield tuples of the form (MovieID, (number of ratings, average rating)). This transformation will yield an RDD of the form: [(1, (993, 4.145015105740181)), (2, (332, 3.174698795180723)), (3, (299, 3.0468227424749164))]. You can do this step with one Spark transformation
We want to see movie names, instead of movie IDs. To moviesRDD, apply RDD transformations that use movieIDsWithAvgRatingsRDD to get the movie names for movieIDsWithAvgRatingsRDD, yielding tuples of the form (average rating, movie name, number of ratings). This set of transformations will yield an RDD of the form: [(1.0, u'Autopsy (Macchie Solari) (1975)', 1), (1.0, u'Better Living (1998)', 1), (1.0, u'Big Squeeze, The (1996)', 3)]. You will need to do two Spark transformations to complete this step: first use the moviesRDD with movieIDsWithAvgRatingsRDD to create a new RDD with Movie names matched to Movie IDs, then convert that RDD into the form of (average rating, movie name, number of ratings). These transformations will yield an RDD that looks like: [(3.6818181818181817, u'Happiest Millionaire, The (1967)', 22), (3.0468227424749164, u'Grumpier Old Men (1995)', 299), (2.882978723404255, u'Hocus Pocus (1993)', 94)]
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Apply an RDD transformation to `movieNameWithAvgRatingsRDD` to limit the results to movies with
# ratings from more than 500 people. We then use the `sortFunction()` helper function to sort by the
# average rating to get the movies in order of their rating (highest rating first)
movieLimitedAndSortedByRatingRDD = (movieNameWithAvgRatingsRDD
.filter(lambda (avg, name, num) : num > 500)
.sortBy(sortFunction, False))
print 'Movies with highest ratings: %s' % movieLimitedAndSortedByRatingRDD.take(20)
# TEST Movies with Highest Average Ratings and more than 500 Reviews (1c)
Test.assertEquals(movieLimitedAndSortedByRatingRDD.count(), 194,
'incorrect movieLimitedAndSortedByRatingRDD.count()')
Test.assertEquals(movieLimitedAndSortedByRatingRDD.take(20),
[(4.5349264705882355, u'Shawshank Redemption, The (1994)', 1088),
(4.515798462852263, u"Schindler's List (1993)", 1171),
(4.512893982808023, u'Godfather, The (1972)', 1047),
(4.510460251046025, u'Raiders of the Lost Ark (1981)', 1195),
(4.505415162454874, u'Usual Suspects, The (1995)', 831),
(4.457256461232604, u'Rear Window (1954)', 503),
(4.45468509984639, u'Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1963)', 651),
(4.43953006219765, u'Star Wars: Episode IV - A New Hope (1977)', 1447),
(4.4, u'Sixth Sense, The (1999)', 1110), (4.394285714285714, u'North by Northwest (1959)', 700),
(4.379506641366224, u'Citizen Kane (1941)', 527), (4.375, u'Casablanca (1942)', 776),
(4.363975155279503, u'Godfather: Part II, The (1974)', 805),
(4.358816276202219, u"One Flew Over the Cuckoo's Nest (1975)", 811),
(4.358173076923077, u'Silence of the Lambs, The (1991)', 1248),
(4.335826477187734, u'Saving Private Ryan (1998)', 1337),
(4.326241134751773, u'Chinatown (1974)', 564),
(4.325383304940375, u'Life Is Beautiful (La Vita \ufffd bella) (1997)', 587),
(4.324110671936759, u'Monty Python and the Holy Grail (1974)', 759),
(4.3096, u'Matrix, The (1999)', 1250)], 'incorrect sortedByRatingRDD.take(20)')
Explanation: (1c) Movies with Highest Average Ratings and more than 500 reviews
Now that we have an RDD of the movies with highest average ratings, we can use Spark to determine the 20 movies with highest average ratings and more than 500 reviews.
Apply a single RDD transformation to movieNameWithAvgRatingsRDD to limit the results to movies with ratings from more than 500 people. We then use the sortFunction() helper function to sort by the average rating to get the movies in order of their rating (highest rating first). You will end up with an RDD of the form: [(4.5349264705882355, u'Shawshank Redemption, The (1994)', 1088), (4.515798462852263, u"Schindler's List (1993)", 1171), (4.512893982808023, u'Godfather, The (1972)', 1047)]
End of explanation
trainingRDD, validationRDD, testRDD = ratingsRDD.randomSplit([6, 2, 2], seed=0L)
print 'Training: %s, validation: %s, test: %s\n' % (trainingRDD.count(),
validationRDD.count(),
testRDD.count())
print trainingRDD.take(3)
print validationRDD.take(3)
print testRDD.take(3)
assert trainingRDD.count() == 292716
assert validationRDD.count() == 96902
assert testRDD.count() == 98032
assert trainingRDD.filter(lambda t: t == (1, 914, 3.0)).count() == 1
assert trainingRDD.filter(lambda t: t == (1, 2355, 5.0)).count() == 1
assert trainingRDD.filter(lambda t: t == (1, 595, 5.0)).count() == 1
assert validationRDD.filter(lambda t: t == (1, 1287, 5.0)).count() == 1
assert validationRDD.filter(lambda t: t == (1, 594, 4.0)).count() == 1
assert validationRDD.filter(lambda t: t == (1, 1270, 5.0)).count() == 1
assert testRDD.filter(lambda t: t == (1, 1193, 5.0)).count() == 1
assert testRDD.filter(lambda t: t == (1, 2398, 4.0)).count() == 1
assert testRDD.filter(lambda t: t == (1, 1035, 5.0)).count() == 1
Explanation: Using a threshold on the number of reviews is one way to improve the recommendations, but there are many other good ways to improve quality. For example, you could weight ratings by the number of ratings.
Part 2: Collaborative Filtering
In this course, you have learned about many of the basic transformations and actions that Spark allows us to apply to distributed datasets. Spark also exposes some higher level functionality; in particular, Machine Learning using a component of Spark called MLlib. In this part, you will learn how to use MLlib to make personalized movie recommendations using the movie data we have been analyzing.
We are going to use a technique called collaborative filtering. Collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). The underlying assumption of the collaborative filtering approach is that if a person A has the same opinion as a person B on an issue, A is more likely to have B's opinion on a different issue x than to have the opinion on x of a person chosen randomly. You can read more about collaborative filtering here.
The image below (from Wikipedia) shows an example of predicting a user's rating using collaborative filtering. At first, people rate different items (like videos, images, games). After that, the system makes predictions about a user's rating for an item which the user has not rated yet. These predictions are built upon the existing ratings of other users who have similar ratings to the active user. For instance, in the image below the system has predicted that the active user will not like the video.
For movie recommendations, we start with a matrix whose entries are movie ratings by users (shown in red in the diagram below). Each column represents a user (shown in green) and each row represents a particular movie (shown in blue).
Since not all users have rated all movies, we do not know all of the entries in this matrix, which is precisely why we need collaborative filtering. For each user, we have ratings for only a subset of the movies. With collaborative filtering, the idea is to approximate the ratings matrix by factorizing it as the product of two matrices: one that describes properties of each user (shown in green), and one that describes properties of each movie (shown in blue).
We want to select these two matrices such that the error for the users/movie pairs where we know the correct ratings is minimized. The Alternating Least Squares algorithm does this by first randomly filling the users matrix with values and then optimizing the values of the movies matrix such that the error is minimized. Then, it holds the movies matrix constant and optimizes the values of the users matrix. This alternation between which matrix to optimize is the reason for the "alternating" in the name.
This optimization is what's being shown on the right in the image above. Given a fixed set of user factors (i.e., values in the users matrix), we use the known ratings to find the best values for the movie factors using the optimization written at the bottom of the figure. Then we "alternate" and pick the best user factors given fixed movie factors.
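To make the factorization idea concrete, here is a tiny standalone sketch in plain NumPy (not part of the lab; the sizes and random values are made up): a 4-user by 3-movie ratings matrix approximated as the product of rank-2 user and movie factor matrices.
import numpy as np
np.random.seed(0)
U = np.random.rand(4, 2)   # user factors (the "green" matrix): 2 numbers per user
M = np.random.rand(2, 3)   # movie factors (the "blue" matrix): 2 numbers per movie
approx = U.dot(M)          # each predicted rating is the dot product of a user row with a movie column
print approx.shape         # (4, 3): one predicted rating for every (user, movie) pair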
For a simple example of what the users and movies matrices might look like, check out the videos from Lecture 8 or the slides from Lecture 8
(2a) Creating a Training Set
Before we jump into using machine learning, we need to break up the ratingsRDD dataset into three pieces:
A training set (RDD), which we will use to train models
A validation set (RDD), which we will use to choose the best model
A test set (RDD), which we will use for our experiments
To randomly split the dataset into multiple groups, we can use the pySpark randomSplit() transformation. randomSplit() takes a set of splits and a seed and returns multiple RDDs.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
import math
def computeError(predictedRDD, actualRDD):
Compute the root mean squared error between predicted and actual
Args:
predictedRDD: predicted ratings for each movie and each user where each entry is in the form
(UserID, MovieID, Rating)
actualRDD: actual ratings where each entry is in the form (UserID, MovieID, Rating)
Returns:
RSME (float): computed RSME value
# Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating)
predictedReformattedRDD = predictedRDD.map(lambda (u_Id, m_Id, rating): ((u_Id, m_Id), rating))
# Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating)
actualReformattedRDD = actualRDD.map(lambda (u_Id, m_Id, rating): ((u_Id, m_Id), rating))
# Compute the squared error for each matching entry (i.e., the same (User ID, Movie ID) in each
# RDD) in the reformatted RDDs using RDD transformtions - do not use collect()
squaredErrorsRDD = (predictedReformattedRDD
.join(actualReformattedRDD)
.map(lambda (k, (v1,v2)): (k, (v1-v2)**2)))
# Compute the total squared error - do not use collect()
totalError = squaredErrorsRDD.reduce(lambda (k1,v1), (k2,v2): (k1,v1+v2))
# Count the number of entries for which you computed the total squared error
numRatings = squaredErrorsRDD.count()
# Using the total squared error and the number of entries, compute the RSME
return math.sqrt(float(totalError[1])/numRatings)
# sc.parallelize turns a Python list into a Spark RDD.
testPredicted = sc.parallelize([
(1, 1, 5),
(1, 2, 3),
(1, 3, 4),
(2, 1, 3),
(2, 2, 2),
(2, 3, 4)])
testActual = sc.parallelize([
(1, 2, 3),
(1, 3, 5),
(2, 1, 5),
(2, 2, 1)])
testPredicted2 = sc.parallelize([
(2, 2, 5),
(1, 2, 5)])
testError = computeError(testPredicted, testActual)
print 'Error for test dataset (should be 1.22474487139): %s' % testError
testError2 = computeError(testPredicted2, testActual)
print 'Error for test dataset2 (should be 3.16227766017): %s' % testError2
testError3 = computeError(testActual, testActual)
print 'Error for testActual dataset (should be 0.0): %s' % testError3
# TEST Root Mean Square Error (2b)
Test.assertTrue(abs(testError - 1.22474487139) < 0.00000001,
'incorrect testError (expected 1.22474487139)')
Test.assertTrue(abs(testError2 - 3.16227766017) < 0.00000001,
'incorrect testError2 result (expected 3.16227766017)')
Test.assertTrue(abs(testError3 - 0.0) < 0.00000001,
'incorrect testActual result (expected 0.0)')
Explanation: After splitting the dataset, your training set has about 293,000 entries and the validation and test sets each have about 97,000 entries (the exact number of entries in each dataset varies slightly due to the random nature of the randomSplit() transformation).
(2b) Root Mean Square Error (RMSE)
In the next part, you will generate a few different models, and will need a way to decide which model is best. We will use the Root Mean Square Error (RMSE) or Root Mean Square Deviation (RMSD) to compute the error of each model. RMSE is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSE serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSE is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent.
The RMSE is the square root of the average value of the square of (actual rating - predicted rating) for all users and movies for which we have the actual rating. Versions of Spark MLlib beginning with Spark 1.4 include a RegressionMetrics module that can be used to compute the RMSE. However, since we are using Spark 1.3.1, we will write our own function.
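For reference only, a hedged sketch of what that would look like on Spark 1.4 or later (it is not used in this lab, and the sample prediction/observation pairs are made up):
from pyspark.mllib.evaluation import RegressionMetrics
predObs = sc.parallelize([(3.0, 3.0), (4.0, 5.0), (3.0, 5.0), (2.0, 1.0)])   # (prediction, observation) pairs
print RegressionMetrics(predObs).rootMeanSquaredError                        # sqrt(6/4) ~= 1.2247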
Write a function to compute the sum of squared error given predictedRDD and actualRDD RDDs. Both RDDs consist of tuples of the form (UserID, MovieID, Rating)
Given two ratings RDDs, x and y of size n, we define RMSE as follows: $ RMSE = \sqrt{\frac{\sum_{i = 1}^{n} (x_i - y_i)^2}{n}}$
To calculate RMSE, the steps you should perform are:
Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating). For example, tuples like [((1, 1), 5), ((1, 2), 3), ((1, 3), 4), ((2, 1), 3), ((2, 2), 2), ((2, 3), 4)]. You can perform this step with a single Spark transformation.
Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating). For example, tuples like [((1, 2), 3), ((1, 3), 5), ((2, 1), 5), ((2, 2), 1)]. You can perform this step with a single Spark transformation.
Using only RDD transformations (you only need to perform two transformations), compute the squared error for each matching entry (i.e., the same (UserID, MovieID) in each RDD) in the reformatted RDDs - do not use collect() to perform this step. Note that not every (UserID, MovieID) pair will appear in both RDDs - if a pair does not appear in both RDDs, then it does not contribute to the RMSE. You will end up with an RDD with entries of the form $ (x_i - y_i)^2$ You might want to check out Python's math module to see how to compute these values
Using an RDD action (but not collect()), compute the total squared error: $ SE = \sum_{i = 1}^{n} (x_i - y_i)^2 $
Compute n by using an RDD action (but not collect()), to count the number of pairs for which you computed the total squared error
Using the total squared error and the number of pairs, compute the RMSE. Make sure you compute this value as a float.
Note: Your solution must only use transformations and actions on RDDs. Do not call collect() on either RDD.
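As a concrete arithmetic check against the small test RDDs used in the cell above: the (UserID, MovieID) pairs present in both testPredicted and testActual are (1, 2), (1, 3), (2, 1) and (2, 2), with squared errors 0, 1, 4 and 1, so $ RMSE = \sqrt{(0 + 1 + 4 + 1)/4} = \sqrt{1.5} \approx 1.2247 $, which is exactly the value the test cell expects.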
End of explanation
# TODO: Replace <FILL IN> with appropriate code
from pyspark.mllib.recommendation import ALS
validationForPredictRDD = validationRDD.map(lambda (userID, movieID, rating) : (userID, movieID))
seed = 5L
iterations = 5
regularizationParameter = 0.1
ranks = [4, 8, 12]
errors = [0, 0, 0]
err = 0
tolerance = 0.03
minError = float('inf')
bestRank = -1
bestIteration = -1
for rank in ranks:
model = ALS.train(trainingRDD, rank, seed=seed, iterations=iterations,
lambda_=regularizationParameter)
predictedRatingsRDD = model.predictAll(validationForPredictRDD)
error = computeError(predictedRatingsRDD, validationRDD)
errors[err] = error
err += 1
print 'For rank %s the RMSE is %s' % (rank, error)
if error < minError:
minError = error
bestRank = rank
print 'The best model was trained with rank %s' % bestRank
# TEST Using ALS.train (2c)
Test.assertEquals(trainingRDD.getNumPartitions(), 2,
'incorrect number of partitions for trainingRDD (expected 2)')
Test.assertEquals(validationForPredictRDD.count(), 96902,
'incorrect size for validationForPredictRDD (expected 96902)')
Test.assertEquals(validationForPredictRDD.filter(lambda t: t == (1, 1907)).count(), 1,
'incorrect content for validationForPredictRDD')
Test.assertTrue(abs(errors[0] - 0.883710109497) < tolerance, 'incorrect errors[0]')
Test.assertTrue(abs(errors[1] - 0.878486305621) < tolerance, 'incorrect errors[1]')
Test.assertTrue(abs(errors[2] - 0.876832795659) < tolerance, 'incorrect errors[2]')
Explanation: (2c) Using ALS.train()
In this part, we will use the MLlib implementation of Alternating Least Squares, ALS.train(). ALS takes a training dataset (RDD) and several parameters that control the model creation process. To determine the best values for the parameters, we will use ALS to train several models, and then we will select the best model and use the parameters from that model in the rest of this lab exercise.
The process we will use for determining the best model is as follows:
Pick a set of model parameters. The most important parameter to ALS.train() is the rank, which is the number of rows in the Users matrix (green in the diagram above) or the number of columns in the Movies matrix (blue in the diagram above). (In general, a lower rank will mean higher error on the training dataset, but a high rank may lead to overfitting.) We will train models with ranks of 4, 8, and 12 using the trainingRDD dataset.
Create a model using ALS.train(trainingRDD, rank, seed=seed, iterations=iterations, lambda_=regularizationParameter) with the following parameters: an RDD consisting of tuples of the form (UserID, MovieID, rating) used to train the model, an integer rank (4, 8, or 12), a number of iterations to execute (we will use 5 for the iterations parameter), and a regularization coefficient (we will use 0.1 for the regularizationParameter).
For the prediction step, create an input RDD, validationForPredictRDD, consisting of (UserID, MovieID) pairs that you extract from validationRDD. You will end up with an RDD of the form: [(1, 1287), (1, 594), (1, 1270)]
Using the model and validationForPredictRDD, we can predict rating values by calling model.predictAll() with the validationForPredictRDD dataset, where model is the model we generated with ALS.train(). predictAll accepts an RDD with each entry in the format (userID, movieID) and outputs an RDD with each entry in the format (userID, movieID, rating).
Evaluate the quality of the model by using the computeError() function you wrote in part (2b) to compute the error between the predicted ratings and the actual ratings in validationRDD.
Which rank produces the best model, based on the RMSE with the validationRDD dataset?
Note: It is likely that this operation will take a noticeable amount of time (around a minute in our VM); you can observe its progress on the Spark Web UI. Probably most of the time will be spent running your computeError() function, since, unlike the Spark ALS implementation (and the Spark 1.4 RegressionMetrics module), this does not use a fast linear algebra library and needs to run some Python code for all 100k entries.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
myModel = ALS.train(trainingRDD, rank=bestRank, seed=seed, iterations=iterations, lambda_=regularizationParameter)
testForPredictingRDD = testRDD.map(lambda (userID, movieID, rating) : (userID, movieID))
predictedTestRDD = myModel.predictAll(testForPredictingRDD)
testRMSE = computeError(testRDD, predictedTestRDD)
print 'The model had a RMSE on the test set of %s' % testRMSE
# TEST Testing Your Model (2d)
Test.assertTrue(abs(testRMSE - 0.87809838344) < tolerance, 'incorrect testRMSE')
Explanation: (2d) Testing Your Model
So far, we used the trainingRDD and validationRDD datasets to select the best model. Since we used these two datasets to determine what model is best, we cannot use them to test how good the model is - otherwise we would be very vulnerable to overfitting. To decide how good our model is, we need to use the testRDD dataset. We will use the bestRank you determined in part (2c) to create a model for predicting the ratings for the test dataset and then we will compute the RMSE.
The steps you should perform are:
Train a model, using the trainingRDD, bestRank from part (2c), and the parameters you used in in part (2c): seed=seed, iterations=iterations, and lambda_=regularizationParameter - make sure you include all of the parameters.
For the prediction step, create an input RDD, testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extract from testRDD. You will end up with an RDD of the form: [(1, 1287), (1, 594), (1, 1270)]
Use myModel.predictAll() to predict rating values for the test dataset.
For validation, use the testRDD and your computeError function to compute the RMSE between testRDD and the predictedTestRDD from the model.
Evaluate the quality of the model by using the computeError() function you wrote in part (2b) to compute the error between the predicted ratings and the actual ratings in testRDD.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
trainingAvgRating = trainingRDD.map(lambda (userID, movieID, rating): rating).reduce(lambda r1, r2: (r1+r2))/trainingRDD.count()
print 'The average rating for movies in the training set is %s' % trainingAvgRating
testForAvgRDD = testRDD.map(lambda (userID, movieID, rating): (userID, movieID, trainingAvgRating))
testAvgRMSE = computeError(testRDD, testForAvgRDD)
print 'The RMSE on the average set is %s' % testAvgRMSE
# TEST Comparing Your Model (2e)
Test.assertTrue(abs(trainingAvgRating - 3.57409571052) < 0.000001,
'incorrect trainingAvgRating (expected 3.57409571052)')
Test.assertTrue(abs(testAvgRMSE - 1.12036693569) < 0.000001,
'incorrect testAvgRMSE (expected 1.12036693569)')
Explanation: (2e) Comparing Your Model
Looking at the RMSE for the results predicted by the model versus the values in the test set is one way to evaluate the quality of our model. Another way to evaluate the model is to evaluate the error from a test set where every rating is the average rating for the training set.
The steps you should perform are:
Use the trainingRDD to compute the average rating across all movies in that training dataset.
Use the average rating that you just determined and the testRDD to create an RDD with entries of the form (userID, movieID, average rating).
Use your computeError function to compute the RMSE between the testRDD and the testForAvgRDD that you just created.
End of explanation
print 'Most rated movies:'
print '(average rating, movie name, number of reviews)'
for ratingsTuple in movieLimitedAndSortedByRatingRDD.take(50):
print ratingsTuple
Explanation: You now have code to predict how users will rate movies!
Part 3: Predictions for Yourself
The ultimate goal of this lab exercise is to predict what movies to recommend to yourself. In order to do that, you will first need to add ratings for yourself to the ratingsRDD dataset.
(3a) Your Movie Ratings
To help you provide ratings for yourself, we have included the following code to list the names and movie IDs of the 50 highest-rated movies from movieLimitedAndSortedByRatingRDD which we created in part 1 of the lab.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
myUserID = 0
# Note that the movie IDs are the *last* number on each line. A common error was to use the number of ratings as the movie ID.
myRatedMovies = [
(0, 516, 5),
(0, 553, 5),
(0, 811, 4),
(0, 817, 2),
(0, 539, 3),
(0,848, 5),
(0, 1300, 3),
(0, 7895, 5),
(0, 551, 2),
(0, 750, 1)
# The format of each line is (myUserID, movie ID, your rating)
# For example, to give the movie "Star Wars: Episode IV - A New Hope (1977)" a five rating, you would add the following line:
# (myUserID, 260, 5),
]
myRatingsRDD = sc.parallelize(myRatedMovies)
print 'My movie ratings: %s' % myRatingsRDD.take(10)
Explanation: The user ID 0 is unassigned, so we will use it for your ratings. We set the variable myUserID to 0 for you. Next, create a new RDD myRatingsRDD with your ratings for at least 10 movie ratings. Each entry should be formatted as (myUserID, movieID, rating) (i.e., each entry should be formatted in the same way as trainingRDD). As in the original dataset, ratings should be between 1 and 5 (inclusive). If you have not seen at least 10 of these movies, you can increase the parameter passed to take() in the above cell until there are 10 movies that you have seen (or you can also guess what your rating would be for movies you have not seen).
End of explanation
# TODO: Replace <FILL IN> with appropriate code
trainingWithMyRatingsRDD = trainingRDD.union(myRatingsRDD)
print ('The training dataset now has %s more entries than the original training dataset' %
(trainingWithMyRatingsRDD.count() - trainingRDD.count()))
assert (trainingWithMyRatingsRDD.count() - trainingRDD.count()) == myRatingsRDD.count()
Explanation: (3b) Add Your Movies to Training Dataset
Now that you have ratings for yourself, you need to add your ratings to the training dataset so that the model you train will incorporate your preferences. Spark's union() transformation combines two RDDs; use union() to create a new training dataset that includes your ratings and the data in the original training dataset.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
myRatingsModel = ALS.train(trainingWithMyRatingsRDD, bestRank, seed=seed, iterations=iterations, lambda_=regularizationParameter)
Explanation: (3c) Train a Model with Your Ratings
Now, train a model with your ratings added and the parameters you used in in part (2c): bestRank, seed=seed, iterations=iterations, and lambda_=regularizationParameter - make sure you include all of the parameters.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
predictedTestMyRatingsRDD = myRatingsModel.predictAll(testForPredictingRDD)
testRMSEMyRatings = computeError(testRDD, predictedTestMyRatingsRDD)
print 'The model had a RMSE on the test set of %s' % testRMSEMyRatings
Explanation: (3d) Check RMSE for the New Model with Your Ratings
Compute the RMSE for this new model on the test set.
For the prediction step, we reuse testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extracted from testRDD. The RDD has the form: [(1, 1287), (1, 594), (1, 1270)]
Use myRatingsModel.predictAll() to predict rating values for the testForPredictingRDD test dataset, set this as predictedTestMyRatingsRDD
For validation, use the testRDD and your computeError function to compute the RMSE between testRDD and the predictedTestMyRatingsRDD from the model.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Use the Python list myRatedMovies to transform the moviesRDD into an RDD with entries that are pairs of the form (myUserID, Movie ID) and that does not contain any movies that you have rated.
# Build a set of the IDs of the movies you have already rated (for fast membership tests)
myRatedMovieIDs = set([movieID for (myID, movieID, rating) in myRatedMovies])
myUnratedMoviesRDD = (moviesRDD
                      .map(lambda (movieID, title): (myUserID, movieID))
                      .filter(lambda (userID, movieID): movieID not in myRatedMovieIDs))
# Use the input RDD, myUnratedMoviesRDD, with myRatingsModel.predictAll() to predict your ratings for the movies
predictedRatingsRDD = myRatingsModel.predictAll(myUnratedMoviesRDD)
Explanation: (3e) Predict Your Ratings
So far, we have only used the predictAll method to compute the error of the model. Here, use the predictAll to predict what ratings you would give to the movies that you did not already provide ratings for.
The steps you should perform are:
Use the Python list myRatedMovies to transform the moviesRDD into an RDD with entries that are pairs of the form (myUserID, Movie ID) and that does not contain any movies that you have rated. This transformation will yield an RDD of the form: [(0, 1), (0, 2), (0, 3), (0, 4)]. Note that you can do this step with one RDD transformation.
For the prediction step, use the input RDD, myUnratedMoviesRDD, with myRatingsModel.predictAll() to predict your ratings for the movies.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Transform movieIDsWithAvgRatingsRDD from part (1b), which has the form (MovieID, (number of ratings, average rating)), into and RDD of the form (MovieID, number of ratings)
movieCountsRDD = movieIDsWithAvgRatingsRDD.map(lambda (movieID, (num, avg)): (movieID, num))
# Transform predictedRatingsRDD into an RDD with entries that are pairs of the form (Movie ID, Predicted Rating)
predictedRDD = predictedRatingsRDD.map(lambda (myID, movieID, rating): (movieID, rating))
# Use RDD transformations with predictedRDD and movieCountsRDD to yield an RDD with tuples of the form (Movie ID, (Predicted Rating, number of ratings))
predictedWithCountsRDD = (predictedRDD
.join(movieCountsRDD))
# Use RDD transformations with PredictedWithCountsRDD and moviesRDD to yield an RDD with tuples of the form (Predicted Rating, Movie Name, number of ratings), for movies with more than 75 ratings
ratingsWithNamesRDD = (predictedWithCountsRDD
.join(moviesRDD)
.map(lambda (movieID,((PredictedRating, num), name)) : (PredictedRating, name, num))
.filter(lambda (rating, name, num): num>75))
predictedHighestRatedMovies = ratingsWithNamesRDD.takeOrdered(20, key=lambda x: -x[0])
print ('My highest rated movies as predicted (for movies with more than 75 reviews):\n%s' %
'\n'.join(map(str, predictedHighestRatedMovies)))
Explanation: (3f) Predict Your Ratings
We have our predicted ratings. Now we can print out the 20 movies with the highest predicted ratings.
The steps you should perform are:
From Parts (1b) and (1c), we know that we should look at movies with a reasonable number of reviews (e.g., more than 75 reviews). You can experiment with a lower threshold, but fewer ratings for a movie may yield higher prediction errors. Transform movieIDsWithAvgRatingsRDD from Part (1b), which has the form (MovieID, (number of ratings, average rating)), into an RDD of the form (MovieID, number of ratings): [(2, 332), (4, 71), (6, 442)]
We want to see movie names, instead of movie IDs. Transform predictedRatingsRDD into an RDD with entries that are pairs of the form (Movie ID, Predicted Rating): [(3456, -0.5501005376936687), (1080, 1.5885892024487962), (320, -3.7952255522487865)]
Use RDD transformations with predictedRDD and movieCountsRDD to yield an RDD with tuples of the form (Movie ID, (Predicted Rating, number of ratings)): [(2050, (0.6694097486155939, 44)), (10, (5.29762541533513, 418)), (2060, (0.5055259373841172, 97))]
Use RDD transformations with predictedWithCountsRDD and moviesRDD to yield an RDD with tuples of the form (Predicted Rating, Movie Name, number of ratings), for movies with more than 75 ratings. For example: [(7.983121900375243, u'Under Siege (1992)'), (7.9769201864261285, u'Fifth Element, The (1997)')]
End of explanation |
13,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Setting the scale
This recipe demonstrates how the scale of the Sankey diagram is set.
By default the scale is calculated for each diagram to achieve a certain whitespace-to-flow ratio within the height that is given. But in some cases, you may want to set the scale explicitly.
For demonstration, the CSV data is written directly in the cell below -- in practice you would want to load data a file.
Step2: If we draw the flow for the year 2020 and the year 2025 separately, they appear the same
Step3: But in fact they have different scales
Step4: The units of the scale are units-of-value per pixel.
If we draw the Sankeys again while setting the scale, we can see that the flow indeed has changed between years | Python Code:
import pandas as pd
from io import StringIO
flows = pd.read_csv(StringIO("""
year,source,target,value
2020,A,B,10
2025,A,B,20
"""))
flows
from floweaver import *
# Set the default size to fit the documentation better.
size = dict(width=100, height=100,
margins=dict(left=20, right=20, top=10, bottom=10))
nodes = {
'A': ProcessGroup(['A']),
'B': ProcessGroup(['B']),
}
bundles = [
Bundle('A', 'B'),
]
ordering = [['A'], ['B']]
sdd = SankeyDefinition(nodes, bundles, ordering)
Explanation: Setting the scale
This recipe demonstrates how the scale of the Sankey diagram is set.
By default the scale is calculated for each diagram to achieve a certain whitespace-to-flow ratio within the height that is given. But in some cases, you may want to set the scale explicitly.
For demonstration, the CSV data is written directly in the cell below -- in practice you would want to load the data from a file.
End of explanation
w1 = weave(sdd, flows.query('year == 2020')).to_widget(**size)
w1
w2 = weave(sdd, flows.query('year == 2025')).to_widget(**size)
w2
Explanation: If we draw the flow for the year 2020 and the year 2025 separately, they appear the same:
End of explanation
w1.scale, w2.scale
Explanation: But in fact they have different scales:
End of explanation
SCALE = 2.0
from ipywidgets import HBox
w1 = weave(sdd, flows.query('year == 2020')).to_widget(**size)
w2 = weave(sdd, flows.query('year == 2025')).to_widget(**size)
w1.scale = w2.scale = SCALE
HBox([w1, w2])
Explanation: The units of the scale are units-of-value per pixel.
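For example, instead of hard-coding a value, the scale could be derived from the data. The snippet below is only a sketch: the target height of 40 pixels is an arbitrary choice, and it reuses the flows DataFrame defined above.
# Choose a scale (value units per pixel) so the largest flow is about 40 px tall
max_flow = flows['value'].max()
SCALE = max_flow / 40.0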
If we draw the Sankeys again while setting the scale, we can see that the flow indeed has changed between years:
End of explanation |
13,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Additional forces
REBOUND is a gravitational N-body integrator. But you can also use it to integrate systems with additional, non-gravitational forces.
This tutorial gives you a very quick overview of how that works.
Stark problem
We'll start by adding two particles, the Sun and an Earth-like planet, to REBOUND.
Step1: We could integrate this system and the planet would go around the star at a fixed orbit with $a=1$ forever. Let's add an additional constant force acting on the planet and pointing in one direction, $F_x = m\cdot c$, where $m$ is the planet's mass and $c$ a constant. This is called the Stark problem. In Python we can describe this with the following function
Step2: Next, we need to tell REBOUND about this function.
Step3: Now we can just integrate as usual. Let's keep track of the eccentricity as we integrate as it will change due to the additional force.
Step4: And let's plot the result.
Step5: You can see that the eccentricity is oscillating between 0 and almost 1.
Non-conservative forces
The previous example assumed a conservative force, i.e. one that is velocity independent and can be described by a potential. Now, let's assume we have a velocity-dependent force. This could be a migration force in a protoplanetary disk or PR drag. We'll start from scratch and add the same two particles as before.
Step6: But we change the additional force to be
Step7: We need to let REBOUND know that our force is velocity dependent. Otherwise, REBOUND will not update the velocities of the particles.
Step8: Now, we integrate as before. But this time we keep track of the semi-major axis instead of the eccentricity. | Python Code:
import rebound
sim = rebound.Simulation()
sim.integrator = "whfast"
sim.add(m=1.)
sim.add(m=1e-6,a=1.)
sim.move_to_com() # Moves to the center of momentum frame
Explanation: Additional forces
REBOUND is a gravitational N-body integrator. But you can also use it to integrate systems with additional, non-gravitational forces.
This tutorial gives you a very quick overview of how that works.
Stark problem
We'll start by adding two particles, the Sun and an Earth-like planet, to REBOUND.
End of explanation
ps = sim.particles
c = 0.01
def starkForce(reb_sim):
ps[1].ax += c
Explanation: We could integrate this system and the planet would go around the star at a fixed orbit with $a=1$ forever. Let's add an additional constant force acting on the planet and pointing in one direction, $F_x = m\cdot c$, where $m$ is the planet's mass and $c$ a constant. This is called the Stark problem. In Python we can describe this with the following function
End of explanation
sim.additional_forces = starkForce
Explanation: Next, we need to tell REBOUND about this function.
End of explanation
import numpy as np
Nout = 1000
es = np.zeros(Nout)
times = np.linspace(0.,100.*2.*np.pi,Nout)
for i, time in enumerate(times):
sim.integrate(time)
es[i] = sim.calculate_orbits()[0].e
Explanation: Now we can just integrate as usual. Let's keep track of the eccentricity as we integrate as it will change due to the additional force.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
plt.plot(times, es);
Explanation: And let's plot the result.
End of explanation
sim = rebound.Simulation()
sim.integrator = "ias15"
sim.add(m=1.)
sim.add(m=1e-6,a=1.)
sim.move_to_com() # Moves to the center of momentum frame
Explanation: You can see that the eccentricity is oscillating between 0 and almost 1.
Non-conservative forces
The previous example assumed a conservative force, i.e. one that is velocity independent and can be described by a potential. Now, let's assume we have a velocity-dependent force. This could be a migration force in a protoplanetary disk or PR drag. We'll start from scratch and add the same two particles as before.
End of explanation
ps = sim.particles
tau = 1000.
def migrationForce(reb_sim):
ps[1].ax -= ps[1].vx/tau
ps[1].ay -= ps[1].vy/tau
ps[1].az -= ps[1].vz/tau
Explanation: But we change the additional force to be
End of explanation
sim.additional_forces = migrationForce
sim.force_is_velocity_dependent = 1
Explanation: We need to let REBOUND know that our force is velocity dependent. Otherwise, REBOUND will not update the velocities of the particles.
End of explanation
Nout = 1000
a_s = np.zeros(Nout)
times = np.linspace(0.,100.*2.*np.pi,Nout)
for i, time in enumerate(times):
sim.integrate(time)
a_s[i] = sim.calculate_orbits()[0].a
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.set_xlabel("time")
ax.set_ylabel("semi-major axis")
plt.plot(times, a_s);
Explanation: Now, we integrate as before. But this time we keep track of the semi-major axis instead of the eccentricity.
End of explanation |
13,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Parametric Regression
Notebook version
Step1: 1. Model-based parametric regression
1.1. The regression problem.
Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, ${{\bf x}^{(k)}, s^{(k)}}_{k=1}^K$ is available.
The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the test set) of labelled samples.
NOTE
Step2: 2.2. Summary
Summarizing, the steps to design a Bayesian parametric regresion algorithm are the following
Step3: Fit a Bayesian linear regression model assuming ${\bf z}={\bf x}$ and
Step4: To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1,2,4,8,\ldots 128$. Draw all these posteriors along with the prior distribution in the same plot.
Step5: Exercise 2
Step6: 3.5 Maximum likelihood vs Bayesian Inference. Making predictions
Following an <b>ML approach</b>, we retain a single model, ${\bf w}{ML} = \arg \max{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as
Step7: Posterior distribution of the target
Since $f^ = f({\bf x}^) = {\bf w}^\top{\bf z}$, $f^*$ is also a Gaussian variable whose posterior mean and variance can be calculated as follows
Step8: Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions.
4 Maximum evidence model selection
We have already addressed with Bayesian Inference the following two issues
Step9: The above curve may change the position of its maximum from run to run.
We conclude the notebook by plotting the result of the Bayesian inference for M=6 | Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
import pylab
import time
Explanation: Bayesian Parametric Regression
Notebook version: 1.3 (Sep 26, 2016)
Author: Jerónimo Arenas García ([email protected])
Jesús Cid-Sueiro ([email protected])
Changes: v.1.0 - First version
v.1.1 - ML Model selection included
v.1.2 - Some typos corrected
v.1.3 - Rewriting text, reorganizing content, some exercises.
Pending changes: * Include regression on the stock data
End of explanation
n_points = 20
n_grid = 200
frec = 3
std_n = 0.2
degree = 3
nplots = 20
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = 0.03 ### Try increasing this value
var_w = sigma_p**2 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
xmin = np.min(X_tr)
xmax = np.max(X_tr)
X_grid = np.linspace(xmin-0.2*(xmax-xmin), xmax+0.2*(xmax-xmin),n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
for k in range(nplots):
#Draw weigths fromt the prior distribution
w_iter = np.random.multivariate_normal(mean_w, var_w)
S_grid_iter = np.polyval(w_iter,X_grid)
ax.plot(X_grid,S_grid_iter,'g-')
ax.set_xlim(xmin-0.2*(xmax-xmin), xmax+0.2*(xmax-xmin))
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.set_xlabel('$x$')
ax.set_ylabel('$s$')
plt.show()
Explanation: 1. Model-based parametric regression
1.1. The regression problem.
Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, ${{\bf x}^{(k)}, s^{(k)}}_{k=1}^K$ is available.
The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the test set) of labelled samples.
NOTE: In the following, we will use capital letters, ${\bf X}$, $S$, ..., to denote random variables, and lower-case letters ${\bf x}$, s, ..., to denote the values they can take. When there is no ambiguity, we will remove subindices of the density functions, $p_{{\bf X}, S}({\bf x}, s)= p({\bf x}, s)$, to simplify the mathematical notation.
1.2. Model-based parametric regression
Model-based regression methods assume that all data in the training and test dataset have been generated by some stochastic process. In parametric regression, we assume that the probability distribution generating the data has a known parametric form, but the values of some parameters are unknown.
In particular, in this notebook we will assume the target variables in all pairs $({\bf x}^{(k)}, s^{(k)})$ from the training and test sets have been generated independently from some posterior distribution $p(s| {\bf x}, {\bf w})$, where ${\bf w}$ is some unknown parameter. The training dataset is used to estimate ${\bf w}$.
Once $p(s|{\bf x},{\bf w})$ is known or can be estimated, Estimation Theory can be applied to estimate $s$ for any input ${\bf x}$. For instance, any of these classical estimates can be used:
Maximum A Posteriori (MAP): $\qquad\hat{s}_{\text{MAP}} = \arg\max_s p(s| {\bf x}, {\bf w})$
Minimum Mean Square Error (MSE): $\qquad\hat{s}_{\text{MSE}} = \mathbb{E}\{s |{\bf x}, {\bf w}\}$
<img src="figs/ParametricReg.png", width=300>
1.3.1. Maximum Likelihood (ML) parameter estimation
One way to estimate ${\bf w}$ is to apply the maximum likelihood principle: take the value ${\bf w}_\text{ML}$ maximizing the joint distribution of the target variables given the inputs and given ${\bf w}$, i.e.
$$
{\bf w}_\text{ML} = \arg\max_{\bf w} p({\bf s}|{\bf X}, {\bf w})
$$
where ${\bf s} = \left(s^{(1)}, \dots, s^{(K)}\right)^\top$ is the vector of target variables and ${\bf X} = \left({\bf x}^{(1)}, \dots, {\bf x}^{(K)}\right)^\top$ is the input matrix.
NOTE: Since the training data inputs are known, all probability density functions and expectations in the remainder of this notebook will be conditioned on ${\bf X}$. To simplify the mathematical notation, from now on we will remove ${\bf X}$ from all conditions. Keep in mind that, in any case, all probabilities and expectations may depend on ${\bf X}$ implicitly.
1.3.2. The Gaussian case
A particularly interesting case arises when the data model is Gaussian:
$$p(s|{\bf x}, {\bf w}) =
\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}
\exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right)
$$
where ${\bf z}=T({\bf x})$ is a vector with components which can be computed directly from the observed variables. Such expression includes a linear regression model, where ${\bf z} = [1; {\bf x}]$, as well as any other non-linear model as long as it can be expressed as a <i>"linear in the parameters"</i> model.
In that case, it can be shown that the likelihood function $p({\bf s}| {\bf w})$ ($\equiv p({\bf s}| {\bf X}, {\bf w})$) is given by
$$
p({\bf s}| {\bf w})
= \left(\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}\right)^K
\exp\left(-\frac{1}{2\sigma_\varepsilon^2}\|{\bf s}-{\bf Z}{\bf w}\|^2\right)
$$
which is maximum for the Least Squares solution
$$
{\bf w}_{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}
$$
1.4. Limitations of the ML estimators.
Since the ML estimation is equivalent to the LS solution under a Gaussian data model, it has the same drawbacks as LS regression. In particular, ML estimation is prone to overfitting. In general, if the number of parameters (i.e. the dimension of ${\bf w}$) is large in relation to the size of the training data, the predictor based on the ML estimate may have a small square error over the training set but a large error over the test set. Therefore, in practice, some cross-validation procedure is required to keep the complexity of the predictor function under control depending on the size of the training set.
2. Bayesian Regression
One of the reasons why the ML estimate is prone to overfitting is that the prediction function uses ${\bf w}_\text{ML}$ without taking into account how uncertain the true value of ${\bf w}$ is.
Bayesian methods utilize such information by considering ${\bf w}$ as a random variable with some prior distribution $p({\bf w})$. The posterior distribution $p({\bf w}|{\bf s})$ will be our measure of the uncertainty about the true value of the model parameters.
In fact, this posterior distribution is a key component of the predictor function. Indeed, the minimum MSE estimate can be computed as
$$
\hat{s}_\text{MSE}
= \mathbb{E}\{s|{\bf s}, {\bf x}\}
= \int \mathbb{E}\{s|{\bf w}, {\bf s}, {\bf x}\} p({\bf w}|{\bf s}) d{\bf w}
$$
Since the samples are i.i.d., $\mathbb{E}\{s|{\bf w}, {\bf s}, {\bf x}\} = \mathbb{E}\{s|{\bf w}, {\bf x}\}$ and, thus,
$$
\hat{s}_\text{MSE}
= \int \mathbb{E}\{s|{\bf w}, {\bf x}\} p({\bf w}|{\bf s}) d{\bf w}
$$
Noting that $\mathbb{E}\{s|{\bf w}, {\bf s}, {\bf x}\}$ is the minimum MSE prediction for a given value of ${\bf w}$, we observe that the Bayesian predictor is a weighted sum of these predictions, weighted by their posterior probability (density) of being the correct one.
2.1. Posterior weight distribution
We will express our <i>a priori</i> belief of models using a prior distribution $p({\bf w})$. Then we can infer the <i>a posteriori</i> distribution using Bayes' rule:
$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$
Where:
- $p({\bf s}|{\bf w})$: is the likelihood function
- $p({\bf w})$: is the <i>prior</i> distribution of the weights (assumptions are needed here)
- $p({\bf s})$: is the <i>marginal</i> distribution of the observed data, which could be obtained integrating the expression in the numerator
The previous expression can be interpreted in a rather intuitive way:
Since ${\bf w}$ are the parameters of the model, $p({\bf w})$ express our belief about which models should be preferred over others before we see any data. For instance, since parameter vectors with small norms produce smoother curves, we could assign (<i>a priori</i>) a larger pdf value to models with smaller norms
The likelihood function $p({\bf s}|{\bf w})$ tells us how well the observations can be explained by a particular model
Finally, the posterior distribution $p({\bf w}|{\bf s})$ expresses the estimated goodness of each model (i.e., each parameter vector ${\bf w}$) taking into consideration both the prior and the likelihood of $\bf w$. Thus, a model with large $p({\bf w})$ would have a low posterior value if it offers a poor explanation of the data (i.e., if $p({\bf s}|{\bf w})$ is small), whereas models that fit well with the observations would get emphasized
The posterior distribution of weights opens the door to working with several models at once. Rather than keeping the estimated best model according to a certain criterion, we can now use all models parameterized by ${\bf w}$, assigning them different degrees of confidence according to $p({\bf w}|{\bf s})$.
2.1.1. A Gaussian Prior
Since each value of ${\bf w}$ determines a regression function, by stating a prior distribution over the weights we also state a prior distribution over the space of regression functions.
For instance, we will consider a particular example in which we assume a Gaussian prior for the weights given by:
$${\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)$$
Example
Assume that the true target variable is related to the input observations through the equation
$$
s = {\bf w}^\top{\bf z} + \varepsilon
$$
where ${\bf z} = T({\bf x})$ is a polynomial transformation of the input, $\varepsilon$ is a Gaussian noise variable and ${\bf w}$ some unknown parameter vector.
Assume a Gaussian prior weight distribution, ${\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)$. For each parameter vector ${\bf w}$, there is a polynomial $f({\bf x}) = {\bf w}^\top {\bf z}$ associated to it. Thus, by drawing samples from $p({\bf w})$ we can generate and plot their associated polynomial functions. This is carried out in the following example.
You can check the effect of modifying the variance of the prior distribution.
End of explanation
# True data parameters
w_true = 3
std_n = 0.4
# Generate the whole dataset
n_max = 64
X_tr = 3 * np.random.random((n_max,1)) - 0.5
S_tr = w_true * X_tr + std_n * np.random.randn(n_max,1)
Explanation: 2.2. Summary
Summarizing, the steps to design a Bayesian parametric regression algorithm are the following:
Assume a parametric data model $p(s| {\bf x},{\bf w})$ and a prior distribution $p({\bf w})$.
Using the data model and the i.i.d. assumption, compute $p({\bf s}|{\bf w})$.
Applying Bayes' rule, compute the posterior distribution $p({\bf w}|{\bf s})$.
Compute the MSE estimate of $s$ given ${\bf x}$.
3. Bayesian regression for a Gaussian model.
We will apply the above steps to derive a Bayesian regression algorithm for a Gaussian model.
3.1. Step 1: The Gaussian model.
Let us assume that the likelihood function is given by the Gaussian model described in Sec. 1.3.2.
$$
s~|~{\bf w} \sim {\cal N}\left({\bf z}^\top{\bf w}, \sigma_\varepsilon^2 {\bf I} \right)
$$
and that the prior is also Gaussian
$$
{\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)
$$
3.2. Step 2: Complete data likelihood
Using the i.i.d. assumption,
$$
{\bf s}~|~{\bf w} \sim {\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right)
$$
3.3. Step 3: Posterior weight distribution
The posterior distribution of the weights can be computed using the Bayes rule
$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$
Since both $p({\bf s}|{\bf w})$ and $p({\bf w})$ follow a Gaussian distribution, we know also that the joint distribution and the posterior distribution of ${\bf w}$ given ${\bf s}$ are also Gaussian. Therefore,
$${\bf w}~|~{\bf s} \sim {\cal N}\left({\bf w}_\text{MSE}, {\bf V}_{\bf w}\right)$$
After some algebra, it can be shown that the mean and the covariance matrix of the distribution are:
$${\bf V}_{\bf w} = \left[\frac{1}{\sigma_\varepsilon^2} {\bf Z}^{\top}{\bf Z}
+ {\bf V}_p^{-1}\right]^{-1}$$
$${\bf w}_\text{MSE} = {\sigma_\varepsilon^{-2}} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$
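As an illustrative aside (not part of the original notebook), these two expressions map almost line by line onto NumPy. The sketch below assumes numpy has been imported as np (as above), a design matrix Z of shape (K, M), a K x 1 target vector s, a noise standard deviation sigma_eps and a prior covariance matrix V_p:
def posterior_weights(Z, s, sigma_eps, V_p):
    # V_w = [ Z^T Z / sigma_eps^2 + V_p^{-1} ]^{-1}
    V_w = np.linalg.inv(Z.T.dot(Z) / sigma_eps**2 + np.linalg.inv(V_p))
    # w_MSE = sigma_eps^{-2} V_w Z^T s
    w_MSE = V_w.dot(Z.T).dot(s) / sigma_eps**2
    return w_MSE, V_w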
Exercise 1:
Consider the dataset with one-dimensional inputs given by
End of explanation
# Model parameters
sigma_eps = 0.4
mean_w = np.zeros((1,))
sigma_p = 1e6
Var_p = sigma_p**2* np.eye(1)
Explanation: Fit a Bayesian linear regression model assuming ${\bf z}={\bf x}$ and
End of explanation
# No. of points to analyze
n_points = [1, 2, 4, 8, 16, 32, 64]
# Prepare plots
w_grid = np.linspace(2.7, 3.4, 5000) # Sample the w axis
plt.figure()
# Compute the prior distribution over the grid points in w_grid
# p = <FILL IN>
p = 1.0/(sigma_p*np.sqrt(2*np.pi)) * np.exp(-(w_grid**2)/(2*sigma_p**2))
plt.plot(w_grid, p,'g-')
for k in n_points:
# Select the first k samples
Zk = X_tr[0:k, :]
Sk = S_tr[0:k]
# Parameters of the posterior distribution
# 1. Compute the posterior variance.
# (Make sure that the resulting variable, Var_w, is a 1x1 numpy array.)
# Var_w = <FILL IN>
Var_w = np.linalg.inv(np.dot(Zk.T, Zk)/(sigma_eps**2) + np.linalg.inv(Var_p))
# 2. Compute the posterior mean.
# (Make sure that the resulting variable, w_MSE, is a scalar)
# w_MSE = <FILL IN>
w_MSE = (Var_w.dot(Zk.T).dot(Sk)/(sigma_eps**2)).flatten()
# Compute the posterior distribution over the grid points in w_grid
sigma_w = np.sqrt(Var_w.flatten()) # First we take a scalar standard deviation
# p = <FILL IN>
p = 1.0/(sigma_w*np.sqrt(2*np.pi)) * np.exp(-((w_grid-w_MSE)**2)/(2*sigma_w**2))
plt.plot(w_grid, p,'g-')
plt.fill_between(w_grid, 0, p, alpha=0.8, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=1, antialiased=True)
plt.xlim(w_grid[0], w_grid[-1])
plt.ylim(0, np.max(p))
plt.xlabel('$w$')
plt.ylabel('$p(w|s)$')
display.clear_output(wait=True)
display.display(plt.gcf())
time.sleep(2.0)
# Remove the temporary plots and fix the last one
display.clear_output(wait=True)
plt.show()
Explanation: To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1, 2, 4, 8, \ldots, 64$. Draw all these posteriors along with the prior distribution in the same plot.
End of explanation
# <SOL>
x = np.array([-1.0, 3.0])
s_pred = w_MSE * x
plt.figure()
plt.plot(X_tr, S_tr,'b.')
plt.plot(x, s_pred)
plt.show()
# </SOL>
Explanation: Exercise 2:
Note that, in the example above, the model assumptions are correct: the target variables have been generated by a linear model with noise standard deviation std_n, which is exactly equal to the value assumed by the model, stored in variable sigma_eps. Check what happens if we take sigma_eps=4*std_n or sigma_eps=std_n/4.
Does the algorithm fail in those cases?
What differences can you observe with respect to the ideal case sigma_eps=std_n?
3.4. Step 4: MSE estimate
Noting that
$$
\mathbb{E}\{s|{\bf w}, {\bf x}\} = {\bf w}^\top {\bf z}
$$
we can write
$$
\hat{s}_\text{MSE}
= \int {\bf w}^\top {\bf z} p({\bf w}|{\bf s}) d{\bf w}
= \left(\int {\bf w} p({\bf w}|{\bf s}) d{\bf w}\right)^\top {\bf z}
= {\bf w}_\text{MSE}^\top {\bf z}
$$
where
$$
{\bf w}_\text{MSE}
= \int {\bf w} p({\bf w}|{\bf s}) d{\bf w}
= {\sigma_\varepsilon^{-2}} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}
$$
Therefore, in the Gaussian case, the weighted integration of prediction functions is equivalent to applying a single model with weights ${\bf w}_\text{MSE}$.
Exercise 3:
Plot the minimum MSE predictions of $s$ for inputs $x$ in the interval [-1, 3].
End of explanation
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 12
nplots = 6
# Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5
Var_p = sigma_p**2 * np.eye(degree+1)
# Data generation
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-.5,2.5,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
# Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z = np.asmatrix(Z)
#Compute posterior distribution parameters
Var_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(Var_p))
posterior_mean = Var_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
for k in range(nplots):
# Draw weights from the posterior distribution
w_iter = np.random.multivariate_normal(posterior_mean, Var_w)
# Note that polyval assumes the first element of weight vector is the coefficient of
# the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(w_iter[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-')
# We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid, S_grid_iter, 'm-', label='LS regression')
ax.set_xlim(-.5, 2.5)
ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2)
ax.legend(loc='best')
plt.show()
Explanation: 3.5 Maximum likelihood vs Bayesian Inference. Making predictions
Following an <b>ML approach</b>, we retain a single model, ${\bf w}_{ML} = \arg \max_{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as:
$$p({s^*}|{\bf w}_{ML},{\bf x}^*) $$
For the generative model of Section 3.1.2 (additive i.i.d. Gaussian noise), this distribution is:
$$p({s^*}|{\bf w}_{ML},{\bf x}^*) = \frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}_{ML}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$$
* The mean of $s^*$ is just the same as the prediction of the LS model, and the same uncertainty is assumed independently of the observation vector (i.e., the variance of the noise of the model).
* If a single value is to be kept, we would probably keep the mean of the distribution, which is equivalent to the LS prediction.
Using <b>Bayesian inference</b>, we retain all models. Then, the inference of the value $s^* = s({\bf x}^*)$ is carried out by mixing all models, according to the weights given by the posterior distribution.
\begin{align}p({s^*}|{\bf x}^*,{\bf s})
& = \int p({s^*}~|~{\bf w},{\bf x}^*) p({\bf w}~|~{\bf s}) d{\bf w}\end{align}
where:
* $p({s^*}|{\bf w},{\bf x}^*) = \displaystyle\frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$
* $p({\bf w}~|~{\bf s})$: Is the posterior distribution of the weights, that can be computed using Bayes' Theorem.
The following fragment of code draws random vectors from $p({\bf w}|{\bf s})$, and plots the corresponding regression curves along with the training points. Compare these curves with those extracted from the prior distribution of ${\bf w}$ and with the LS solution.
End of explanation
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 12
nplots = 6
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-1,3,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
#Plot the posterior mean
#Note that polyval assumes the first element of weight vector is the coefficient of
#the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(posterior_mean[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI')
#Plot confidence intervals for the Bayesian Inference
std_x = []
for el in X_grid:
x_ast = np.array([el**k for k in range(degree+1)])
std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0]))
std_x = np.array(std_x)
plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x,
alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=4, linestyle='dashdot', antialiased=True)
#We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')
ax.set_xlim(-1,3)
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.legend(loc='best')
Explanation: Posterior distribution of the target
Since $f^* = f({\bf x}^*) = {\bf w}^\top{\bf z}^*$, $f^*$ is also a Gaussian variable whose posterior mean and variance can be calculated as follows:
$$\mathbb{E}\{{{\bf z}^*}^\top {\bf w}~|~{\bf s}, {\bf z}^*\} =
{{\bf z}^*}^\top \mathbb{E}\{{\bf w}|{\bf s}\} =
{\sigma_\varepsilon^{-2}} {{\bf z}^*}^\top {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$
$$\text{Cov}\left[{{\bf z}^*}^\top {\bf w}~|~{\bf s}, {\bf z}^*\right] =
{{\bf z}^*}^\top \text{Cov}\left[{\bf w}~|~{\bf s}\right] {{\bf z}^*} =
{{\bf z}^*}^\top {\bf V}_{\bf w} {{\bf z}^*}$$
Therefore, $f^*~|~{\bf s}, {\bf x}^* \sim {\cal N}\left({\sigma_\varepsilon^{-2}} {{\bf z}^*}^\top {\bf V}_{\bf w} {\bf Z}^\top {\bf s}, {{\bf z}^*}^\top {\bf V}_{\bf w} {{\bf z}^*} \right)$
Finally, for $s^* = f^* + \varepsilon^*$, the posterior distribution is $s^*~|~{\bf s}, {\bf z}^* \sim {\cal N}\left({\sigma_\varepsilon^{-2}} {{\bf z}^*}^\top {\bf V}_{\bf w} {\bf Z}^\top {\bf s}, {{\bf z}^*}^\top {\bf V}_{\bf w} {{\bf z}^*} + \sigma_\varepsilon^2\right)$
End of explanation
from math import pi
n_points = 15
frec = 3
std_n = 0.2
max_degree = 12
#Prior distribution parameters
sigma_eps = 0.2
mean_w = np.zeros((degree+1,))
sigma_p = 0.5
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Evaluate the posterior evidence
logE = []
for deg in range(max_degree):
Z_iter = Z[:,:deg+1]
logE_iter = -((deg+1)*np.log(2*pi)/2) \
-np.log(np.linalg.det((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points)))/2 \
-S_tr.T.dot(np.linalg.inv((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points))).dot(S_tr)/2
logE.append(logE_iter[0,0])
plt.plot(np.array(range(max_degree))+1,logE)
plt.xlabel('Polynomia degree')
plt.ylabel('log evidence')
plt.show()
Explanation: Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions.
4 Maximum evidence model selection
We have already addressed with Bayesian Inference the following two issues:
For a given degree, how do we choose the weights?
Should we focus on just one model, or can we use several models at once?
However, we still needed some assumptions: a parametric model (i.e., polynomial function and <i>a priori</i> degree selection) and several parameters needed to be adjusted.
Though we can recur to cross-validation, Bayesian inference opens the door to other strategies.
We could argue that rather than keeping single selections of these parameters, we could use simultaneously several sets of parameters (and/or several parametric forms), and average them in a probabilistic way ... (like we did with the models)
We will follow a simpler strategy, selecting just the most likely set of parameters according to an ML criterion
4.1 Model evidence
The evidence of a model is defined as
$$L = p({\bf s}~|~{\cal M})$$
where ${\cal M}$ denotes the model itself and any free parameters it may have. For instance, for the polynomial model we have assumed so far, ${\cal M}$ would represent the degree of the polynomia, the variance of the additive noise, and the <i>a priori</i> covariance matrix of the weights
Applying the Theorem of Total probability, we can compute the evidence of the model as
$$L = \int p({\bf s}~|~{\bf f},{\cal M}) p({\bf f}~|~{\cal M}) d{\bf f} $$
For the linear model $f({\bf x}) = {\bf w}^\top{\bf z}$, the evidence can be computed as
$$L = \int p({\bf s}~|~{\bf w},{\cal M}) p({\bf w}~|~{\cal M}) d{\bf w} $$
It is important to notice that these probability density functions are exactly the ones we computed on the previous section. We are just making explicit that they depend on a particular model and the selection of its parameters. Therefore:
$p({\bf s}~|~{\bf w},{\cal M})$ is the likelihood of ${\bf w}$
$p({\bf w}~|~{\cal M})$ is the <i>a priori</i> distribution of the weights
4.2 Model selection via evidence maximization
As we have already mentioned, we could propose a prior distribution for the model parameters, $p({\cal M})$, and use it to infer the posterior. However, this can be very involved (usually no closed-form expressions can be derived)
Alternatively, maximizing the evidence is normally good enough
$${\cal M}{ML} = \arg\max{\cal M} p(s~|~{\cal M})$$
Note that we are using the subscript 'ML' because the evidence can also be referred to as the likelihood of the model
4.3 Example: Selection of the degree of the polynomia
For the previous example we had (we consider a spherical Gaussian for the weights):
${\bf s}~|~{\bf w},{\cal M}~\sim~{\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right)$
${\bf w}~|~{\cal M}~\sim~{\cal N}\left({\bf 0},\sigma_p^2 {\bf I} \right)$
In this case, $p({\bf s}~|~{\cal M})$ follows also a Gaussian distribution, and it can be shown that
$L = p({\bf s}~|~{\cal M}) = {\cal N}\left({\bf 0},\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I} \right)$
If we just pursue the maximization of $L$, this is equivalent to maximizing the log of the evidence
$$\log(L) = -\frac{M}{2} \log(2\pi) -{\frac{1}{2}}\log\mid\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\mid - \frac{1}{2} {\bf s}^\top \left(\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\right)^{-1} {\bf s}$$
where $M$ denotes the length of vector ${\bf z}$ (the degree of the polynomia minus 1).
The following fragment of code evaluates the evidence of the model as a function of the degree of the polynomia
End of explanation
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 5 #M-1
nplots = 6
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-1,3,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
#Plot the posterior mean
#Note that polyval assumes the first element of weight vector is the coefficient of
#the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(posterior_mean[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI')
#Plot confidence intervals for the Bayesian Inference
std_x = []
for el in X_grid:
x_ast = np.array([el**k for k in range(degree+1)])
std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0]))
std_x = np.array(std_x)
plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x,
alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=4, linestyle='dashdot', antialiased=True)
#We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')
ax.set_xlim(-1,3)
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.legend(loc='best')
plt.show()
Explanation: The above curve may change the position of its maximum from run to run.
We conclude the notebook by plotting the result of the Bayesian inference for M=6
End of explanation |
13,097 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatial queries
Step1: Let's check if those points are within the polygon
Step2: Okey, so we can see that the first point seems to be inside that polygon and the other one doesn't.
In fact, the first point is close to the center of the polygon as we can see
Step3: It is also possible to do PIP other way around, i.e. to check if polygon contains a point
Step4: Thus, both ways has the same results.
Which one should you use then? Well, it depends
Step5: Let's see if they intersect
Step6: Do they also touch each other?
Step7: Indeed, they do and we can see this by plotting the features together
Step8: Thus, the line_b continues from the same node ( (1,1) ) where line_a ends.
However, if the lines overlap fully, they don't touch, as we can see | Python Code:
from shapely.geometry import Point, Polygon
# Create Point objects
p1 = Point(24.952242, 60.1696017)
p2 = Point(24.976567, 60.1612500)
# Create a Polygon
coords = [(24.950899, 60.169158), (24.953492, 60.169158), (24.953510, 60.170104), (24.950958, 60.169990)]
poly = Polygon(coords)
# Let's check what we have
print(p1)
print(p2)
print(poly)
Explanation: Spatial queries: Point in Polygon & Intersect
Finding out if a certain point is located inside or outside of an area, or finding out if a line intersects with another line or polygon are fundamental geospatial operations that are often used e.g. to select data based on location. Such spatial queries are one of the typical first steps of the workflow when doing spatial analysis. Performing a spatial join (will be introduced later) between two spatial datasets is one of the most typical applications where Point in Polygon (PIP) query is used.
How to check if point is inside a polygon?
Computationally, detecting if a point is inside a Polygon is most commonly done using a specific formula called the Ray Casting algorithm. Luckily, we do not need to create such a function ourselves for conducting the Point in Polygon (PIP) query. Instead, we can take advantage of Shapely's binary predicates that can evaluate the topological relationships between geographical objects, such as the PIP we are interested in here.
There are basically two ways of conducting PIP in Shapely:
using a function called .within() that checks if a point is within a polygon
using a function called .contains() that checks if a polygon contains a point
Notice: even though we are talking here about Point in Polygon operation, it is also possible to check if a LineString or Polygon is inside another Polygon.
Let's first create a Polygon using a list of coordinate-tuples and a couple of Point objects
End of explanation
# Check if p1 is within the polygon using the within function
p1_within = p1.within(poly)
# Check if p2 is within the polygon
p2_within = p2.within(poly)
# Print the results
print("Is p1 within the polygon?: ", p1_within)
print("Is p2 within the polygon?: ", p2_within)
Explanation: Let's check if those points are within the polygon
End of explanation
print(p1)
print(poly.centroid)
Explanation: Okay, so we can see that the first point seems to be inside that polygon and the other one doesn't.
In fact, the first point is close to the center of the polygon as we can see:
End of explanation
# Does polygon contain point 1
print("Does polygon contain p1?: ", poly.contains(p1))
# What about the other point?
print("Does polygon contain p2?: ", poly.contains(p2))
Explanation: It is also possible to do a PIP query the other way around, i.e. to check if a polygon contains a point:
End of explanation
from shapely.geometry import LineString, MultiLineString
# Create two lines
line_a = LineString([(0, 0), (1, 1)])
line_b = LineString([(1, 1), (0, 2)])
Explanation: Thus, both ways give the same result.
Which one should you use then? Well, it depends (a small sketch of the first case follows this list):
if you have many points and just one polygon and you try to find out which one of them is inside the polygon:
you need to iterate over the points and check one at a time if it is within() the polygon specified
if you have many polygons and just one point and you want to find out which polygon contains the point
you need to iterate over the polygons until you find a polygon that contains() the point specified (assuming there are no overlapping polygons)
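For instance, a minimal sketch of the first case (many points, one polygon), reusing the p1, p2 and poly objects created above:
# Keep only the points that fall within the polygon
points = [p1, p2]
points_inside = [point for point in points if point.within(poly)]
print(points_inside)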
Intersect
Another typical geospatial operation is to see if a geometry intersects or touches another one. The difference between these two is that:
if objects intersect, the boundary and interior of an object needs to intersect in any way with those of the other.
If an object touches the other one, it is only necessary to have (at least) a single point of their boundaries in common but their interiors shoud NOT intersect.
Let's try these out.
Let's create two LineStrings
End of explanation
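# A small sketch of the two iteration patterns described above (illustrative
# only; `polygons` is a hypothetical list of Shapely Polygon objects):
points_inside = [point for point in (p1, p2) if point.within(poly)]

def find_containing_polygon(point, polygons):
    # Return the first polygon that contains the point (None if there is none).
    for candidate in polygons:
        if candidate.contains(point):
            return candidate
    return None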
line_a.intersects(line_b)
Explanation: Let's see if they intersect
End of explanation
line_a.touches(line_b)
Explanation: Do they also touch each other?
End of explanation
# Create a MultiLineString
multi_line = MultiLineString([line_a, line_b])
multi_line
Explanation: Indeed, they do and we can see this by plotting the features together
End of explanation
# Check if line_a touches itself
print("Touches?: ", line_a.touches(line_a))
# However, it does intersect
print("Intersects?: ", line_a.intersects(line_a))
Explanation: Thus, line_b continues from the same node ((1, 1)) where line_a ends.
However, if the lines overlap fully, they don't touch, as we can see:
End of explanation |
13,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Document retrieval from wikipedia data
Fire up GraphLab Create
Step1: Load some text data - from wikipedia, pages on people
Step2: Data contains
Step3: Explore the dataset and check out the text it contains
Exploring the entry for president Obama
Step4: Exploring the entry for actor George Clooney
Step5: Get the word counts for Obama article
Step6: Sort the word counts for the Obama article
Turning dictionary of word counts into a table
Step7: Sorting the word counts to show most common words at the top
Step8: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
Step9: Examine the TF-IDF for the Obama article
Step10: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
Step11: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
Step12: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
Step13: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
Step14: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval | Python Code:
import graphlab
Explanation: Document retrieval from wikipedia data
Fire up GraphLab Create
End of explanation
people = graphlab.SFrame('people_wiki.gl/')
Explanation: Load some text data - from wikipedia, pages on people
End of explanation
people.head()
len(people)
Explanation: Data contains: link to wikipedia article, name of person, text of article.
End of explanation
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
Explanation: Explore the dataset and check out the text it contains
Exploring the entry for president Obama
End of explanation
clooney = people[people['name'] == 'George Clooney']
clooney['text']
Explanation: Exploring the entry for actor George Clooney
End of explanation
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
print(obama['word_count'])
Explanation: Get the word counts for Obama article
End of explanation
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
Explanation: Sort the word counts for the Obama article
Turning dictionary of word counts into a table
End of explanation
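# In plain Python terms, stack() is building one (word, count) row per
# dictionary entry, roughly like the following (sketch for intuition only):
rows = [(word, count) for word, count in obama['word_count'][0].items()]
rows[:5]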
obama_word_count_table.head()
obama_word_count_table.sort('count',ascending=False)
Explanation: Sorting the word counts to show most common words at the top
End of explanation
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
people.head()
tfidf = graphlab.text_analytics.tf_idf(people['word_count'])
# Earlier versions of GraphLab Create returned an SFrame rather than a single SArray
# This notebook was created using Graphlab Create version 1.7.1
if graphlab.version <= '1.6.1':
tfidf = tfidf['docs']
tfidf
people['tfidf'] = tfidf
people.head()
Explanation: Most common words include uninformative words like "the", "in", "and",...
Compute TF-IDF for the corpus
To give more weight to informative words, we weigh them by their TF-IDF scores.
End of explanation
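# For intuition, here is a minimal TF-IDF sketch over plain Python dicts.
# This is only an approximation (GraphLab Create's exact weighting and
# normalization may differ), but it shows the idea: words that appear in many
# documents are down-weighted by the log(N / document-frequency) term.
import math

def tf_idf_sketch(doc_word_counts):
    n_docs = len(doc_word_counts)
    # document frequency: in how many documents does each word appear?
    doc_freq = {}
    for counts in doc_word_counts:
        for word in counts:
            doc_freq[word] = doc_freq.get(word, 0) + 1
    # tf-idf(word, doc) = count(word, doc) * log(n_docs / df(word))
    return [{word: count * math.log(float(n_docs) / doc_freq[word])
             for word, count in counts.items()}
            for counts in doc_word_counts]

toy_docs = [{'the': 3, 'obama': 2}, {'the': 4, 'football': 1}]
tf_idf_sketch(toy_docs)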
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
Explanation: Examine the TF-IDF for the Obama article
End of explanation
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
Explanation: Words with highest TF-IDF are much more informative.
Manually compute distances between a few people
Let's manually compare the distances between the articles for a few famous people.
End of explanation
graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
Explanation: Is Obama closer to Clinton than to Beckham?
We will use cosine distance, which is given by
(1-cosine_similarity)
and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham.
End of explanation
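# A minimal sketch of what graphlab.distances.cosine computes above:
# 1 - cosine similarity between two sparse {word: weight} dictionaries
# (assuming the tfidf column holds plain word -> weight mappings).
import math

def cosine_distance_sketch(vec_a, vec_b):
    dot = sum(weight * vec_b.get(word, 0.0) for word, weight in vec_a.items())
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 1.0
    return 1.0 - dot / (norm_a * norm_b)

cosine_distance_sketch(obama['tfidf'][0], clinton['tfidf'][0])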
knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name')
knn_model.summary()
Explanation: Build a nearest neighbor model for document retrieval
We now create a nearest-neighbors model and apply it to document retrieval.
End of explanation
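# For intuition, the same retrieval idea by brute force: score every article
# against the query vector and keep the k smallest distances. This reuses the
# cosine_distance_sketch defined above and is far slower than the built-in
# model, so the call is left commented out.
def brute_force_neighbors(query_vec, people_sframe, k=5):
    scored = [(cosine_distance_sketch(query_vec, row['tfidf']), row['name'])
              for row in people_sframe]
    return sorted(scored)[:k]

# brute_force_neighbors(obama['tfidf'][0], people)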
knn_model.query(obama)
Explanation: Applying the nearest-neighbors model for retrieval
Who is closest to Obama?
End of explanation
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
elton = people[people['name'] == 'Elton John']
elton
elton[['word_count']].stack('word_count', new_column_name = ['word','count']).sort('count',ascending=False)
elton[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
victoria = people[people['name'] == 'Victoria Beckham']
graphlab.distances.cosine(elton['tfidf'][0],victoria['tfidf'][0])
paul = people[people['name'] == 'Paul McCartney']
graphlab.distances.cosine(elton['tfidf'][0],paul['tfidf'][0])
knn_model_counts_cosine = graphlab.nearest_neighbors.create(people,features=['word_count'],label='name',distance='cosine')
knn_model_tfidf_cosine = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name',distance='cosine')
knn_model_counts_cosine.query(elton)
knn_model_tfidf_cosine.query(elton)
knn_model_counts_cosine.query(victoria)
knn_model_tfidf_cosine.query(victoria)
Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians.
Other examples of document retrieval
End of explanation |
13,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: This notebook is to test the inheritance machinery in gemini.
Families are drawn in an image where
Affection
white fill means unaffected
gray fill means unknown
black fill means affected
genotype status is labelled in the image.
See this issue for discussion along with this gist
Step2: Autosomal Recessive
Step3: Autosomal Dominant
Step5: with unknowns
Step6: De Novo | Python Code:
import gemini.tests
from gemini.tests.test_inheritance import TestFamily, family
Sample = family.Sample
HOM_REF, HET, UNKNOWN, HOM_ALT = range(4)
fam = TestFamily("""
#family_id sample_id paternal_id maternal_id sex phenotype
1 dad 0 0 1 1
1 mom grandpa grandma 2 1
1 kid dad mom 1 2
1 kid2 dad mom 1 1
1 grandma 0 0 2 1
1 grandpa 0 0 1 1""")
fam.gt_types = [HET, HET, HOM_ALT, HET, HET, HET]
fam.gt_depths = [20, 20, 20, 20, 20, 20]
fam.draw(tests=("auto_rec", ))
Explanation: This notebook is to test the inheritance machinery in gemini.
Families are drawn in an image where
Affection
white fill means unaffected
gray fill means unknown
black fill means affected
genotype status is labelled in the image.
See this issue for discussion along with this gist
End of explanation
# if we set anyone else to HOM_ALT, it's no longer auto_rec unless only-affected is False:
# set grandpa to HOM_ALT
fam.gt_types[5] = HOM_ALT
fam.draw(tests=("auto_rec",))
# set grandpa back to het
fam.gt_types[5] = HET
# can require a greater read depth (note we set all samples to have depth 20 above)
fam.auto_rec(min_depth=12), fam.auto_rec(min_depth=22)
# if we set someone else to affected... the sibling. it can never be auto_rec:
fam.subjects[3].affected = True
fam.draw(tests=("auto_rec",))
Explanation: Autosomal Recessive
End of explanation
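# Rough sketch of the rule the auto_rec test above is exercising. This is a
# heavy simplification (gemini's real check also requires unaffected HET
# carrier parents, honors min_depth, unknown phenotypes and the only-affected
# option), but it captures the core genotype constraints.
def looks_auto_rec(subjects, gt_types):
    for subject, gt in zip(subjects, gt_types):
        if subject.affected and gt != HOM_ALT:
            return False  # every affected sample must be hom-alt
        if not subject.affected and gt == HOM_ALT:
            return False  # no unaffected sample may be hom-alt
    return True

looks_auto_rec(fam.subjects, fam.gt_types)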
fam.gt_types = [HET, HET, HOM_ALT, HET, HOM_REF, HOM_REF]
fam.draw(tests=("auto_dom",))
# parents aren't affected...
fam.de_novo(strict=False) # even though mom is a de-novo?
for s in fam.subjects: s.affected = True
fam.draw(tests=("auto_dom",))
fam.subjects[5].affected = fam.subjects[4].affected = False
fam.draw()
# still not auto_dom because affected is homalt
fam.gt_types[2] = HET
fam.draw(tests=("auto_dom",))
# TODO: check this
ffd = family.Family([Sample("mom", True, "female"), Sample("dad", False, "male"),
Sample("kid", True, "male")], "fam")
ffd.subjects[2].mom = ffd.subjects[0]
ffd.subjects[2].dad = ffd.subjects[1]
ffd = TestFamily(ffd)
ffd.gt_types = [HET, HOM_ALT, HET]
# dad is homalt, so only works under only_affected=False
ffd.draw(tests=("auto_dom",))
ffd.gt_types = [HET, HOM_REF, HET]
ffd.draw(tests=("auto_dom",))
Explanation: Autosomal Dominant
End of explanation
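# Rough sketch of the dominant-inheritance constraints exercised above
# (simplified: the real auto_dom check also requires an affected, transmitting
# parent for each affected child and handles unknowns, depth and only-affected):
def looks_auto_dom(subjects, gt_types):
    for subject, gt in zip(subjects, gt_types):
        if subject.affected and gt != HET:
            return False  # affected samples are expected to be het
        if not subject.affected and gt in (HET, HOM_ALT):
            return False  # unaffected samples should not carry the variant
    return True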
fam3 = TestFamily("""
#family_id sample_id paternal_id maternal_id sex phenotype
1 dad 0 0 1 2
1 mom grandpa grandma 2 -9
1 kid dad mom 1 2
""")
fam3.gt_types = [HET, HET, HET]
fam3.gt_depths = [20, 20, 20]
fam3.draw(tests=("auto_dom",))
Explanation: with unknowns
End of explanation
# not a de_novo because parents have it.
fam3.draw(tests=("de_novo",))
# should never be de_novo, because mom has it
fam3.gt_types = [HOM_REF, HOM_ALT, HET]
fam3.subjects[0].affected = False
fam3.subjects[1].affected = False
fam3.subjects[2].affected = True
fam3.draw(tests=("de_novo",))
# should never be auto_dom (at least not this generation, since it's de_novo).
fam3.gt_types = [HOM_REF, HOM_REF, HET]
fam3.draw(tests=("auto_dom",))
ff = family.Family([Sample("mom", False, "female"), Sample("dad", False, "male"),
Sample("kid", True, "male")], "fam")
ff.subjects[2].mom = ff.subjects[0]
ff.subjects[2].dad = ff.subjects[1]
ff = TestFamily(ff)
ff.gt_types = [HOM_ALT, HOM_ALT, HET]
ff.draw(tests=("de_novo",))
# unaffected sibling is a HET.
ff = family.Family([Sample("mom", False, "female"), Sample("dad", False, "male"),
Sample("kid", True, "male"), Sample("kid2", False, "male")], "fam")
ff.subjects[2].mom = ff.subjects[0]
ff.subjects[2].dad = ff.subjects[1]
ff.subjects[3].mom = ff.subjects[0]
ff.subjects[3].dad = ff.subjects[1]
ff = TestFamily(ff)
ff.gt_types = [HOM_ALT, HOM_ALT, HET, HET]
ff.draw(tests=("de_novo",))
Explanation: De Novo
End of explanation |
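# Rough sketch of the de novo idea tested above (simplified: the real check
# also considers affection status, depth and the strict / only-affected
# options). A candidate de novo variant is one the child carries while both
# parents are homozygous reference.
def looks_de_novo(kid_gt, mom_gt, dad_gt):
    return mom_gt == HOM_REF and dad_gt == HOM_REF and kid_gt in (HET, HOM_ALT)

looks_de_novo(ff.gt_types[2], ff.gt_types[0], ff.gt_types[1])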