Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---|
9,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Critical Radii
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Detached Systems
Detached systems are the default case for default_binary. The requiv_max parameter is constrained to show the maximum value for requiv before the system will begin overflowing at periastron.
Step3: We can see that the default system is well within this critical value by printing all radii and critical radii.
Step4: If we increase 'requiv' past the critical point, we'll receive a warning from the logger and would get an error if attempting to call b.run_compute(). | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Critical Radii: Detached Systems
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b['requiv_max@component@primary']
b['requiv_max@constraint@primary']
Explanation: Detached Systems
Detached systems are the default case for default_binary. The requiv_max parameter is constrained to show the maximum value for requiv before the system will begin overflowing at periastron.
End of explanation
print(b.filter(qualifier='requiv*', context='component'))
Explanation: We can see that the default system is well within this critical value by printing all radii and critical radii.
End of explanation
b['requiv@primary'] = 2.2
print(b.run_checks())
Explanation: If we increase 'requiv' past the critical point, we'll receive a warning from the logger and would get an error if attempting to call b.run_compute().
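A minimal recovery sketch (not part of the original tutorial, using only calls already shown above): set requiv back below the critical value and re-run the checks before attempting b.run_compute().
# Illustrative only: move requiv back under requiv_max, then confirm the checks pass again.
b['requiv@primary'] = 1.0
print(b.run_checks())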
End of explanation |
9,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustering with K-means
In the unsupervised setting, one of the most straightforward tasks we can perform is to find groups of data instances which are similar to each other. We call such groups of data points clusters.
We position ourselves in the setting where we have access to a dataset $D$ that consists of instances $x \in \mathbb{R}^n$. For example, if our instances have two features $x_1$ and $x_2$ we are in the $\mathbb{R}^2$ space. For simplicity and visualization purposes in this session, we assume our data to be 2-dimensional. That said, the method (as well as the implementation) generalizes to more dimensions in a straightforward way.
$k$-Means is one of the most popular and representative "clustering" algorithms. $k$-means stores $k$ centroids, that is points in the $n$-dimensional space which are then used to define clusters. A point is considered to be in a particular cluster if it is closer to that cluster's centroid than any other centroid.
The optimization algorithm
The most common algorithm uses an iterative refinement technique. $k$-means can be viewed as a special case of the Expectation-Maximization algorithm for clustering; it is also referred to as Lloyd's algorithm.
Given an initial set of $k$ centroids $m_1(1), \ldots, m_k(1)$ , the algorithm proceeds by alternating between two steps
Step1: Visualisation with PCA
Wikipedia
Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of distinct principal components is equal to the smaller of the number of original variables or the number of observations minus one. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors are an uncorrelated orthogonal basis set.
Basically we will only use PCA for a visualisation purpose. Our goal is to get a 2D visualisation of a 5717-dimensional dataset. Just keep in mind that PCA relies on the assumption that the data are linearly separable, but it's not always the case!
TO DO
Step2: Running K-means
We are now ready to run Kmeans! To this end, we will use the implementation given by scikit learn.
As for the previous session, we first initialise the algorithm then fit it to our data.
TO DO
Step3: Initialisation
Step4: How to set k ?
The Elbow method
The Elbow method is a method of interpretation and validation of consistency within cluster analysis, designed to help find the appropriate number of clusters in a dataset. This method looks at the percentage of variance explained as a function of the number of clusters. One should choose a number of clusters such that adding another cluster doesn't give much better modeling of the data.
If one plots the percentage of variance explained by the clusters against the number of clusters, the first clusters will add much information (explain a lot of variance), but at some point the marginal gain in explained variance will drop, giving an angle in the graph. The number of clusters is chosen at this point, hence the "elbow criterion".
Step6: Gap Statistic
The Gap Statistic was developed by researchers from Stanford and compares the within-cluster dispersion to the dispersion expected under a reference (null) distribution. For more details, see Tibshirani, Walther and Hastie (2001).
To compute it we will use the implementation of https://github.com/milesgranger/gap_statistic
Step7: What about mixture model ?
In the probabilistic framework of mixture models, we assume that the data are generated according to a mixture of probability density functions, with cluster-specific parameters.
$$ p(x,\theta) = \sum_k \pi_k f(x,\theta_k)$$
where, $\pi_k$ can be interpreted as the proportion of each cluster and $\theta_k$ is the set of parameters. For instance, in the case of a Gaussian mixture we have $\theta_k = (\mu_k,\sigma_k)$.
Then, the goal is to estimate the set of parameters $\theta_k$ and to compute the partition of the objects, which is assumed to be a hidden variable of the model. | Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv('NAm2.txt', sep=" ")
print(df.head())
print(df.shape)
# List of populations/tribes
tribes = df.Pop.unique()
country = df.Country.unique()
print(tribes)
print(country)
# The features that we need for clustering starts from the 9th one
# Subset of the dataframe
df_micro = df.iloc[0:494,8:5717]
df_micro.shape
Explanation: Clustering with K-means
In the unsupervised setting, one of the most straightforward tasks we can perform is to find groups of data instances which are similar to each other. We call such groups of data points clusters.
We position ourselves in the setting where we have access to a dataset $D$ that consists of instances $x \in \mathbb{R}^n$. For example, if our instances have two features $x_1$ and $x_2$ we are in the $\mathbb{R}^2$ space. For simplicity and visualization purposes in this session, we assume our data to be 2-dimensional. That said, the method (as well as the implementation) generalizes to more dimensions in a straightforward way.
$k$-Means is one of the most popular and representative "clustering" algorithms. $k$-means stores $k$ centroids, that is points in the $n$-dimensional space which are then used to define clusters. A point is considered to be in a particular cluster if it is closer to that cluster's centroid than any other centroid.
The optimization algorithm
The most common algorithm uses an iterative refinement technique. $k$-means can be viewed as a special case of the Expectation-Maximization algorithm for clustering; it is also referred to as Lloyd's algorithm.
Given an initial set of $k$ centroids $m_1(1), \ldots, m_k(1)$ , the algorithm proceeds by alternating between two steps:
Assignment step: Assign each observation to the cluster whose mean yields the least within-cluster sum of squares (WCSS). Since the sum of squares is the squared Euclidean distance, this is intuitively the "nearest" mean.
Update step: Calculate the new means to be the centroids of the observations in the new clusters. Since the arithmetic mean is a least-squares estimator, this also minimizes the within-cluster sum of squares (WCSS) objective.
The algorithm has converged when the assignments no longer change. Since both steps optimize the WCSS objective, and there only exists a finite number of such partitionings, the algorithm must converge to a (local) optimum. There is no guarantee that the global optimum is found using this algorithm.
The algorithm is often presented as assigning objects to the nearest cluster by distance. The standard algorithm aims at minimizing the WCSS objective, and thus assigns by "least sum of squares", which is exactly equivalent to assigning by the smallest Euclidean distance. Using a different distance function other than (squared) Euclidean distance may stop the algorithm from converging.
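As an illustration of the two steps described above, here is a minimal NumPy sketch of one Lloyd iteration (a toy implementation for clarity, not the scikit-learn code used later; the function name lloyd_step and the array shapes are assumptions):
import numpy as np

def lloyd_step(X, centroids):
    # Assignment step: index of the nearest centroid for every sample
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    # Update step: each centroid moves to the mean of the samples assigned to it
    new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(len(centroids))])
    return labels, new_centroids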
Illustration of training
To make it easier to understand, the figure belows illustrates the process.
The figure depicts the k-means algorithm (Images courtesy of Michael Jordan and adapted from http://stanford.edu/~cpiech/cs221/handouts/kmeans.html). The training examples are shown as dots, and the cluster centroids are shown as crosses. (a) the dataset, (b) random initial cluster centroids -- one may initialize the algorithm using data points as centroids also, (c-f) illustration of running two iterations of k-means. In each iteration, we assign each training example to the closest cluster centroid (shown by "painting" the training examples the same color as the cluster centroid to which is assigned); then we move each cluster centroid to the mean of the points assigned to it.
Today
Our goal today is to run K-means on a real dataset. This dataset was first created to study genetic diversity across America and consists of 494 individuals coming from 27 different tribes (across 10 countries). These individuals are described by their genetic profiles in terms of microsatellites. In addition we have information about the precise location of the tribes, given by the latitude and longitude features.
TO DO :
Import the data
* import the data NAm2.txt into a pandas DataFrame that you will name df.
* print the first lines of df and its dimensions
* Create two lists containing the names of the tribes (Pop) and the countries (Country). -> see unique() from pandas
Pre-processing
* create a subset of df by only keeping the genetic features. This new dataframe is named df_micro.
* do you need to scale the data?
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
Y_sklearn = pca.fit(df_micro)
projected = pca.fit_transform(df_micro)
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 6)
plt.scatter(projected[:, 0], projected[:, 1],
c=df.Pop.astype('category').cat.codes, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
Explanation: Visualisation with PCA
Wikipedia
Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of distinct principal components is equal to the smaller of the number of original variables or the number of observations minus one. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors are an uncorrelated orthogonal basis set.
Basically we will only use PCA for a visualisation purpose. Our goal is to get a 2D visualisation of a 5717-dimensional dataset. Just keep in mind that PCA relies on the assumption that the data are linearly separable, but it's not always the case!
TO DO : execute the following code!
End of explanation
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=10)
res = kmeans.fit(df_micro)
labels = res.labels_
res.cluster_centers_
plt.scatter(projected[:, 0], projected[:, 1],
c=labels, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
Explanation: Running K-means
We are now ready to run Kmeans! To this end, we will use the implementation given by scikit learn.
As for the previous session, we first initialise the algorithm then fit it to our data.
TO DO :
* run Kmeans and set the number of clusters to 10
* describe the obtained labels (distribution of objects among them)
* print the obtained centroids
* use the pca plot to visualise the labels obtained with Kmeans. To this end, you just need to change the parameters c in the previous scatter plot and to replace the current one with the obtained labels.
End of explanation
from sklearn import metrics
# 1 random initialisation
kmeans = KMeans(n_clusters=10, init='random')
res = kmeans.fit(df_micro)
label_1 = res.labels_
centroids_1 = res.cluster_centers_
kmeans = KMeans(n_clusters=10, init='random')
res = kmeans.fit(df_micro)
label_2 = res.labels_
centroids_2 = res.cluster_centers_
metrics.adjusted_rand_score(label_1, label_2)
# 50 random initialisations
kmeans = KMeans(n_clusters=10, init='random', n_init=50)
res = kmeans.fit(df_micro)
label_1 = res.labels_
centroids_1 = res.cluster_centers_
kmeans = KMeans(n_clusters=10, init='random',n_init=50)
res = kmeans.fit(df_micro)
label_2 = res.labels_
centroids_2 = res.cluster_centers_
metrics.adjusted_rand_score(label_1, label_2)
# 50 initialisations and improved (k-means++) strategy
kmeans = KMeans(n_clusters=10, init='k-means++', n_init=50)
res = kmeans.fit(df_micro)
label_1 = res.labels_
centroids_1 = res.cluster_centers_
kmeans = KMeans(n_clusters=10, init='k-means++',n_init=50)
res = kmeans.fit(df_micro)
label_2 = res.labels_
centroids_2 = res.cluster_centers_
metrics.adjusted_rand_score(label_1, label_2)
Explanation: Initialisation : be careful!
The initialisation step requires one to set the number of clusters K. To this end, one can either use a priori information and set it manually, but there also exist several approaches to determine it, including, for instance, the Elbow method and the Gap Statistic.
In addition, one needs to initialise the centroids. Commonly used initialization methods are Forgy and Random Partition. The Forgy method randomly chooses $k$ observations from the data set and uses them as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds to the update step, thus computing the initial mean to be the centroid of the cluster's randomly assigned points. The Forgy method tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. For expectation maximization and standard k-means algorithms, the Forgy method of initialization is preferable.
TO DO :
* run Kmeans twice (with K=10 to speed things up) on the df_micro data with random initialisations, then compare the labels obtained from both runs with the adjusted rand index from the metrics library (available in sklearn).
same as before, but this time set the number of initialisations to 50.
switch the initialisation method to k-means++ and run the previous experiments once again
End of explanation
cluster_range = range(1,50)
cluster_errors = []
for num_clusters in cluster_range:
clust = KMeans(n_clusters=num_clusters, random_state=0, n_init=10)
clust.fit(df_micro)
cluster_errors.append(clust.inertia_)
clusters_df = pd.DataFrame( { "num_clusters":cluster_range, "cluster_errors": cluster_errors } )
clusters_df[0:10]
plt.figure(figsize=(12,6))
plt.plot( clusters_df.num_clusters, clusters_df.cluster_errors, marker = "o" )
Explanation: How to set k ?
The Elbow method
The Elbow method is a method of interpretation and validation of consistency within cluster analysis, designed to help find the appropriate number of clusters in a dataset. This method looks at the percentage of variance explained as a function of the number of clusters. One should choose a number of clusters such that adding another cluster doesn't give much better modeling of the data.
If one plots the percentage of variance explained by the clusters against the number of clusters, the first clusters will add much information (explain a lot of variance), but at some point the marginal gain in explained variance will drop, giving an angle in the graph. The number of clusters is chosen at this point, hence the "elbow criterion".
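The cell above plots the raw inertia (WCSS). A small sketch, assuming df_micro, cluster_range and cluster_errors from the cell above, of how the same curve can be expressed as the fraction of variance explained mentioned in the text:
# Total sum of squares of the data (equal to the inertia for a single cluster)
total_ss = ((df_micro - df_micro.mean())**2).sum().sum()
# Fraction of variance explained = 1 - WCSS / total sum of squares
explained = [1.0 - wcss/total_ss for wcss in cluster_errors]
plt.figure(figsize=(12,6))
plt.plot(list(cluster_range), explained, marker="o")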
End of explanation
def optimalK(data, nrefs=3, maxClusters=15):
Calculates KMeans optimal K using Gap Statistic from Tibshirani, Walther, Hastie
Params:
data: ndarray of shape (n_samples, n_features)
nrefs: number of sample reference datasets to create
maxClusters: Maximum number of clusters to test for
Returns: (gaps, optimalK)
gaps = np.zeros((len(range(1, maxClusters)),))
resultsdf = pd.DataFrame({'clusterCount':[], 'gap':[]})
for gap_index, k in enumerate(range(1, maxClusters)):
# Holder for reference dispersion results
refDisps = np.zeros(nrefs)
# For n references, generate random sample and perform kmeans getting resulting dispersion of each loop
for i in range(nrefs):
# Create new random reference set
randomReference = np.random.random_sample(size=data.shape)
# Fit to it
km = KMeans(k)
km.fit(randomReference)
refDisp = km.inertia_
refDisps[i] = refDisp
# Fit cluster to original data and create dispersion
km = KMeans(k)
km.fit(data)
origDisp = km.inertia_
# Calculate gap statistic
gap = np.log(np.mean(refDisps)) - np.log(origDisp)
# Assign this loop's gap statistic to gaps
gaps[gap_index] = gap
resultsdf = resultsdf.append({'clusterCount':k, 'gap':gap}, ignore_index=True)
return (gaps.argmax() + 1, resultsdf) # Plus 1 because index of 0 means 1 cluster is optimal, index 2 = 3 clusters are optimal
k, gapdf = optimalK(df_micro, nrefs=5, maxClusters=30)
print ('Optimal k is: ', k)
plt.plot(gapdf.clusterCount, gapdf.gap, linewidth=3)
plt.scatter(gapdf[gapdf.clusterCount == k].clusterCount, gapdf[gapdf.clusterCount == k].gap, s=250, c='r')
plt.xlabel('Cluster Count')
plt.ylabel('Gap Value')
plt.title('Gap Values by Cluster Count')
plt.show()
Explanation: Gap Statistic
The Gap Statistic was developed by researchers from Stanford and compares the within-cluster dispersion to the dispersion expected under a reference (null) distribution. For more details, see Tibshirani, Walther and Hastie (2001).
To compute it we will use the implementation of https://github.com/milesgranger/gap_statistic
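For reference (not stated in the original text), Tibshirani, Walther and Hastie define the statistic roughly as
$$ \mathrm{Gap}(k) = \mathbb{E}^{*}\left[\log W_k\right] - \log W_k $$
where $W_k$ is the within-cluster dispersion (the KMeans inertia) and the expectation is taken over reference datasets drawn from a uniform distribution; the optimalK implementation above approximates the first term with the logarithm of the mean reference dispersion.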
TO DO
* Install the library gap statistic
* Run the optimalK function on df_micro with a maximum number of clusters set to 30
End of explanation
# 1 run of the Gaussian mixture models
from sklearn import mixture
gmm = mixture.GaussianMixture(n_components=10).fit(df_micro)
labels_gmm = gmm.predict(df_micro)
Explanation: What about mixture model ?
In the probabilistic framework of mixture models, we assume that the data are generated according to a mixture of probability density functions, with cluster-specific parameters.
$$ p(x,\theta) = \sum_k \pi_k f(x,\theta_k)$$
where, $\pi_k$ can be interpreted as the proportion of each cluster and $\theta_k$ is the set of parameters. For instance, in the case of a Gaussian mixture we have $\theta_k = (\mu_k,\sigma_k)$.
Then, the goal is to estimate the set of parameters $\theta_k$ and to compute the partition of the objects, which is assumed to be a hidden variable of the model.
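For completeness (a standard result, not spelled out in the original text): once the parameters have been estimated, typically with the EM algorithm, each observation $x_i$ can be assigned to the cluster with the largest posterior probability (responsibility)
$$ \gamma_{ik} = \frac{\pi_k f(x_i,\theta_k)}{\sum_{j} \pi_j f(x_i,\theta_j)}, $$
which is what gmm.predict does in the cell above.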
End of explanation |
9,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a machine learning model with scikit-learn
From the video series
Step1: scikit-learn 4-step modeling pattern
Step 1
Step2: Step 2
Step3: Name of the object does not matter
Can specify tuning parameters (aka "hyperparameters") during this step
All parameters not specified are set to their defaults
Step4: Step 3
Step5: Step 4
Step6: Returns a NumPy array
Can predict for multiple observations at once
Step7: Using a different value for K
Step8: Using a different classification model | Python Code:
# import load_iris function from datasets module
from sklearn.datasets import load_iris
# save "bunch" object containing iris dataset and its attributes
iris = load_iris()
# store feature matrix in "X"
X = iris.data
# store response vector in "y"
y = iris.target
# print the shapes of X and y
print(X.shape)
print(y.shape)
Explanation: Training a machine learning model with scikit-learn
From the video series: Introduction to machine learning with scikit-learn
jupyter notebook 04_model_training.ipynb
Agenda
What is the K-nearest neighbors classification model?
What are the four steps for model training and prediction in scikit-learn?
How can I apply this pattern to other machine learning models?
K-nearest neighbors (KNN) classification
Pick a value for K.
Search for the K observations in the training data that are "nearest" to the measurements of the unknown iris.
Use the most popular response value from the K nearest neighbors as the predicted response value for the unknown iris.
Example training data
KNN classification map (K=1)
KNN classification map (K=5)
Image Credits: Data3classes, Map1NN, Map5NN by Agor153. Licensed under CC BY-SA 3.0
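A small illustrative sketch (not part of the original notebook) of the prediction rule described above, written directly with NumPy; the function name knn_predict is an assumption, and scikit-learn's KNeighborsClassifier used below implements the same idea far more efficiently.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=5):
    # distances from the new observation to every training example
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # indices of the k nearest training examples
    nearest = np.argsort(dists)[:k]
    # most popular response value among the k nearest neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]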
Loading the data
End of explanation
from sklearn.neighbors import KNeighborsClassifier
Explanation: scikit-learn 4-step modeling pattern
Step 1: Import the class you plan to use
End of explanation
knn = KNeighborsClassifier(n_neighbors=1)
Explanation: Step 2: "Instantiate" the "estimator"
"Estimator" is scikit-learn's term for model
"Instantiate" means "make an instance of"
End of explanation
print(knn)
Explanation: Name of the object does not matter
Can specify tuning parameters (aka "hyperparameters") during this step
All parameters not specified are set to their defaults
End of explanation
knn.fit(X, y)
Explanation: Step 3: Fit the model with data (aka "model training")
Model is learning the relationship between X and y
Occurs in-place
End of explanation
print(knn.predict([[3, 5, 4, 2]]))
Explanation: Step 4: Predict the response for a new observation
New observations are called "out-of-sample" data
Uses the information it learned during the model training process
End of explanation
X_new = [[3, 5, 4, 2], [5, 4, 3, 2]]
knn.predict(X_new)
Explanation: Returns a NumPy array
Can predict for multiple observations at once
End of explanation
# instantiate the model (using the value K=5)
knn = KNeighborsClassifier(n_neighbors=5)
# fit the model with data
knn.fit(X, y)
# predict the response for new observations
knn.predict(X_new)
Explanation: Using a different value for K
End of explanation
# import the class
from sklearn.linear_model import LogisticRegression
# instantiate the model (using the default parameters)
logreg = LogisticRegression()
# fit the model with data
logreg.fit(X, y)
# predict the response for new observations
logreg.predict(X_new)
Explanation: Using a different classification model
End of explanation |
9,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistical Thinking (2nd edition) exercises (thinkstats2.com, think-stat.xwmooc.org)<br>
Allen Downey / 이광춘(xwMOOC)
Step1: Print the value counts for <tt>birthord</tt> and compare them with the results published in the codebook.
Step2: Print the value counts for <tt>prglngth</tt> and compare them with the results published in the codebook.
Step3: Print the value counts for <tt>agepreg</tt> and compare them with the results published in the codebook.
As you look at this data, remember the author's comments about the obligation to approach data with consideration for the context and respect for the respondents.
Step4: Compute the mean birth weight.
Step5: Create a new column named <tt>totalwgt_kg</tt> that contains the birth weight in kilograms, and compute its mean as well. Remember that when you create a new column, you have to use dictionary syntax, not dot notation.
Step6: Look through the codebook and find a variable, other than the ones mentioned in the book, that you find interesting. Compute its value counts, mean, and other statistics.
Step7: Create a boolean Series.
Step8: Use the boolean Series to select the records for pregnancies that ended in live birth.
Step9: Count the number of live births with <tt>birthwgt_lb</tt> between 0 and 5 pounds (inclusive). The result should be 1125.
Step10: Count the number of live births with <tt>birthwgt_lb</tt> between 9 and 95 pounds (inclusive). The result should be 798.
Step11: Use the <tt>birthord</tt> variable to select the records for first babies and others. How many of each are there?
Step12: Compute the mean weight for first babies and others.
Step13: Using the <tt>prglngth</tt> variable, compute the mean pregnancy length for first babies and others. Compute the difference in means, expressed in hours. | Python Code:
import nsfg
df = nsfg.ReadFemPreg()
df
Explanation: Statistical Thinking (2nd edition) exercises (thinkstats2.com, think-stat.xwmooc.org)<br>
Allen Downey / 이광춘(xwMOOC)
End of explanation
df.birthord.value_counts().sort_index()
Explanation: Print the value counts for <tt>birthord</tt> and compare them with the results published in the codebook.
End of explanation
df.prglngth.value_counts().sort_index()
Explanation: Print the value counts for <tt>prglngth</tt> and compare them with the results published in the codebook.
End of explanation
df.agepreg.value_counts().sort_index()
Explanation: Print the value counts for <tt>agepreg</tt> and compare them with the results published in the codebook.
As you look at this data, remember the author's comments about the obligation to approach data with consideration for the context and respect for the respondents.
End of explanation
df.totalwgt_lb.mean()
Explanation: Compute the mean birth weight.
End of explanation
df['totalwgt_kg'] = df.totalwgt_lb / 2.2
df.totalwgt_kg.mean()
Explanation: Create a new column named <tt>totalwgt_kg</tt> that contains the birth weight in kilograms, and compute its mean as well. Remember that when you create a new column, you have to use dictionary syntax, not dot notation.
End of explanation
df.columns
Explanation: Look through the codebook and find a variable, other than the ones mentioned in the book, that you find interesting. Compute its value counts, mean, and other statistics.
End of explanation
print('Count:', df.npostsmk.value_counts().sort_index()) ## smoking during pregnancy
print('Mean:', df.npostsmk.mean())
Explanation: Create a boolean Series.
End of explanation
live = df[df.outcome == 1]
len(live)
Explanation: Use the boolean Series to select the records for pregnancies that ended in live birth.
End of explanation
len(live[(live.birthwgt_lb >= 0) & (live.birthwgt_lb <= 5)])
Explanation: Count the number of live births with <tt>birthwgt_lb</tt> between 0 and 5 pounds (inclusive). The result should be 1125.
End of explanation
len(live[(live.birthwgt_lb >=9)&(live.birthwgt_lb <=95)])
Explanation: Count the number of live births with <tt>birthwgt_lb</tt> between 9 and 95 pounds (inclusive). The result should be 798.
End of explanation
firsts = df[df.birthord==1]
others = df[df.birthord>1]
len(firsts), len(others)
Explanation: Use the <tt>birthord</tt> variable to select the records for first babies and others. How many of each are there?
End of explanation
firsts.totalwgt_lb.mean()
others.totalwgt_lb.mean()
Explanation: Compute the mean weight for first babies and others.
End of explanation
print('Firsts Mean: ', firsts.prglngth.mean())
print('Others Mean: ', others.prglngth.mean())
print('Diff: ', firsts.prglngth.mean()-others.prglngth.mean())
Explanation: Using the <tt>prglngth</tt> variable, compute the mean pregnancy length for first babies and others. Compute the difference in means, expressed in hours.
End of explanation |
9,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of @map_e, @fmap_e and map_element
This notebook has examples of map from IoTPy/IoTPy/agent_types/op.py
You can create an agent that maps an input stream to an output stream using map_element, or the decorators @map_e or @fmap_e.
Note
Step1: Specify streams
<b>x = Stream('x')</b>
<br>
specifies a stream x called 'x'.
Specify terminating function that is wrapped to create non-terminating agent
<b>def f(v)
Step2: Example illustrating the decorator @map_e
The undecorated function f takes a single argument and returns a single value.
The decorated function f has two arguments, an in_stream and an out_stream, and may have additional keyword arguments.
Step3: Example illustrating the decorator @fmap_e
This is the functional form of @map_e
The output stream y doesn't have to be declared. f(x) creates and returns a stream.
Step4: Building Networks of Agents by Connecting Output Streams to Input Streams
You can build networks of agents by connecting the output stream of agents to input streams of agents as shown in the next example.
Step5: A Network of Agents is an Agent
You can use functions, in the usual way, to specify a function consisting of a network of agents. This function is itself a persistent agent
Step6: Function Composition of Agents
You can use fmap_e for a functional form, e.g., g(f(x)) in the exampe below.
Step7: Keyword arguments of an Agent
(Note
Step8: State of an Agent
The agent saves its state between successive calls to its wrapped function.
In the next example, the function f has two arguments, an input element and the state. The function may have additional keyword arguments. The function returns an output element and the next state. The initial state is specified in the call to map_element. In this example, the initial state is 0 because of the call map_element(func=f, in_stream=x, out_stream=y, state=0). Note that the call to map_element must have the keyword argument 'state'.
Example
Step9: Example of decorator @map_e with state
Step10: Example of function composition using decorator @fmap_e
Step11: Agents with both State and Keyword Arguments
The function that is encapsulated can have both state and additional keyword arguments. Note that the call to map_element must have keyword arguments 'state' and the additional keywords. In the following example the call to map_element specifies the initial state (state=0) and the value of the keyword argument (POWER=2).
Step12: Saving the State of an Agent in an Argument of the Function
In the following example, the state of the agent is stored in a dict, s. The output of the example is the Fibonacci sequence. In this example, s[0] is the next output of the sequence and s[1] is the element following s[0].
Step13: Storing the State and Arguments of an Agent in a Class
The next example shows how you can save the state and arguments in a class. In this example, the state is running_sum which is the sum of the values read on the input stream, and multiplicand is an argument. | Python Code:
import os
import sys
sys.path.append("../")
from IoTPy.core.stream import Stream, run
from IoTPy.agent_types.op import map_element
from IoTPy.helper_functions.recent_values import recent_values
Explanation: Examples of @map_e, @fmap_e and map_element
This notebook has examples of map from IoTPy/IoTPy/agent_types/op.py
You can create an agent that maps an input stream to an output stream using map_element, or the decorators @map_e or @fmap_e.
Note: The function map_element and the decorators @map_e and @fmap_e are essentially equivalent. Use the form that you find convenient.
End of explanation
def simple_example_of_map_element():
# Specify encapsulated functions
def f(v): return v+10
# Specify streams
x = Stream('x')
y = Stream('y')
# Create agent with input stream x and output stream y.
map_element(func=f, in_stream=x, out_stream=y)
# y[n] = f(x[n])
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
simple_example_of_map_element()
Explanation: Specify streams
<b>x = Stream('x')</b>
<br>
specifies a stream x called 'x'.
Specify terminating function that is wrapped to create non-terminating agent
<b>def f(v): return 2 * v </b>
<br>
takes a single input argument, <i>v</i> returns a single value --- <i>2*v</i> and terminates.
Create a non-terminating agent that wraps f and reads stream x and extends stream y.
<b>y[n] = f(x[n]), all n</b>
<br>
<b>map_element(func=f, in_stream=x, out_stream=y)</b>
Example of map_element
End of explanation
from IoTPy.agent_types.basics import map_e
def simple_example_of_map_e():
# Decorate terminating function to specify non-terminating agent.
@map_e
def f(v): return v + 10
# Specify streams
x = Stream(name='x')
y = Stream(name='y')
# Create agent with input stream x and output stream y
f(in_stream=x, out_stream=y)
# y[n] = x[n]+10
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
simple_example_of_map_e()
Explanation: Example illustrating the decorator @map_e
The undecorated function f takes a single argument and returns a single value.
The decorated function f has two arguments, an in_stream and an out_stream, and may have additional keyword arguments.
End of explanation
from IoTPy.agent_types.basics import fmap_e
def simple_example_of_fmap_e():
# Specify streams
x = Stream('x')
# Decorate terminating function to specify non-terminating agent.
@fmap_e
def f(v): return v+10
# Create agent with input stream x and output stream y
y=f(x)
# y[n] = x[n]+10
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
simple_example_of_fmap_e()
Explanation: Example illustrating the decorator @fmap_e
This is the functional form of @map_e
The output stream y doesn't have to be declared. f(x) creates and returns a stream.
End of explanation
def example_of_concatentation_with_map_element():
# Specify streams.
x = Stream('x')
y = Stream('y')
w = Stream('w')
# Specify encapsulated functions
def f(v): return v+10
def g(w): return w*2
# Create agent with input stream x and output stream w.
map_element(func=f, in_stream=x, out_stream=w)
# y[n] = x[n]+10
# Create agent with input stream w and output stream y
map_element(func=g, in_stream=w, out_stream=y)
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_concatentation_with_map_element()
Explanation: Building Networks of Agents by Connecting Output Streams to Input Streams
You can build networks of agents by connecting the output stream of agents to input streams of agents as shown in the next example.
End of explanation
def example_of_network_of_agents_is_an_agent():
# Specify an agent h with is a network of two agents
# h has an input stream x, and an output stream y.
def h(x, y):
# Specify encapsulated functions local to h
def f(v): return v+10
def g(w): return w*2
# Specify an internal stream of h
w = Stream('w')
# Specify agents local to h
map_element(f, x, w)
map_element(g, w, y)
# Specify streams.
x = Stream('x')
y = Stream('y')
# Create agent h which is a network of agents
h(x, y)
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_network_of_agents_is_an_agent()
Explanation: A Network of Agents is an Agent
You can use functions, in the usual way, to specify a function consisting of a network of agents. This function is itself a persistent agent: It reads its input streams and extends its output streams.
In the example below, we specify an agent h with two parameters, x and y, where x is an input stream and y is an output stream. Agent h is composed of two agents --- map_element(f, x, w) and map_element(g, w, y).
End of explanation
def example_of_concatenating_fmap_e():
# Specify streams
x = Stream('x')
# Decorate terminating function to specify non-terminating agent.
@fmap_e
def f(v): return v+10
@fmap_e
def g(w): return w * 2
# Create agent with input stream x and output stream y
y=g(f(x))
# y[n] = (v+10)*2
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_concatenating_fmap_e()
Explanation: Function Composition of Agents
You can use fmap_e for a functional form, e.g., g(f(x)) in the exampe below.
End of explanation
def example_of_keyword_arg_with_map_element():
# Specify streams
x = Stream('x')
y = Stream('y')
# Specify encapsulated functions
# This function operates on a variable v and has an
# additional argument ADDEND which will be a keyword
# argument to specify the agent.
def add_constant(v, ADDEND): return v + ADDEND
# Specify agents with keyword arguments
map_element(func=add_constant, in_stream=x, out_stream=y,
ADDEND=10)
# y[n] = x[n] + 10
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_keyword_arg_with_map_element()
def example_of_keyword_arg_with_map_e():
# Specify streams
x = Stream('x')
y = Stream('y')
# Decorate terminating function to specify non-terminating agent.
@map_e
def add_constant(v, ADDEND): return v + ADDEND
# Create agent with input stream x and output stream y with keyword
# argument
add_constant(in_stream=x, out_stream=y, ADDEND=10)
# y[n] = x[n] + 10
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_keyword_arg_with_map_e()
Explanation: Keyword arguments of an Agent
(Note: You can also use a class to store keyword arguments and the state.)
In the example below, the function add_constant has two parameters, v and ADDEND where v is an element of the input stream of the agent and ADDEND is a keyword parameter. The function returns a single value which is an element of the output stream of the agent. The call to map_element must have the keyword parameter, ADDEND.
End of explanation
def example_of_state_with_map_element():
# Specify encapsulated functions
def f(input_element, state):
next_output = input_element - state
next_state = input_element
return next_output, next_state
# Specify streams
x = Stream(name='x')
y = Stream(name='y')
# Create agents with input stream x and output stream y
# and initial state of 0
map_element(func=f, in_stream=x, out_stream=y, state=0)
# state[0] = 0, state[n+1] = x[n]
# y[0] = x[0], y[n+1] = x[n+1] - x[n]
# Put test values in the input streams.
x.extend([10, 20, 40, 80])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_state_with_map_element()
Explanation: State of an Agent
The agent saves its state between successive calls to its wrapped function.
In the next example, the function f has two arguments, an input element and the state. The function may have additional keyword arguments. The function returns an output element and the next state. The initial state is specified in the call to map_element. In this example, the initial state is 0 because of the call map_element(func=f, in_stream=x, out_stream=y, state=0). Note that the call to map_element must have the keyword argument 'state'.
Example: map_element with state
End of explanation
def example_of_state_with_map_e():
# Decorate encapsulated functions
@map_e
def f(input_element, state):
next_output = input_element - state
next_state = input_element
return next_output, next_state
# Specify streams
x = Stream(name='x')
y = Stream(name='y')
# Create agents with input stream x and output stream y
# and initial state of 0
f(in_stream=x, out_stream=y, state=0)
# state[0] = 0, state[n+1] = x[n] - state[n]
# y[0] = x[0], y[n+1] = x[n+1] - x[n]
# Put test values in the input streams.
x.extend([10, 20, 40, 80])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_state_with_map_e()
Explanation: Example of decorator @map_e with state
End of explanation
def example_of_state_with_concatenation_of_fmap_e():
# Decorate encapsulated functions
@fmap_e
def f(input_element, state):
next_output = input_element - state
next_state = input_element
return next_output, next_state
@fmap_e
def g(v): return v*2
# Specify streams
x = Stream('x')
# Create agents with input stream x and output stream y
# and initial state of 0
# Example of function composition
y = g(f(x, state=0))
# state[0] = 0, state[n+1] = x[n] - state[n]
# y[0] = x[0], y[n+1] = x[n+1] - x[n]
# Put test values in the input streams.
x.extend([10, 20, 40, 80])
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_state_with_concatenation_of_fmap_e()
Explanation: Example of function composition using decorator @fmap_e
End of explanation
def example_of_state_with_keyword_arg_with_map_element():
# Specify streams
x = Stream(name='x')
y = Stream(name='y')
# Specify encapsulated functions
def f(input_element, state, POWER):
next_output = input_element**POWER + state
next_state = input_element + state
return next_output, next_state
# Create agents with input stream x and output stream y
# and initial state of 0, and keyword arg POWER with value 2.
map_element(func=f, in_stream=x, out_stream=y, state=0, POWER=2)
# state[0] = 0, state[n+1] = x[0] + ... + x[n]
# y[0] = x[0]**2, y[n+1] = x[n+1]**2 + state[n]
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_state_with_keyword_arg_with_map_element()
def example_of_state_with_keyword_arg_with_map_e():
# Specify streams
x = Stream(name='x')
y = Stream(name='y')
# Decorate encapsulated functions
@map_e
def f(input_element, state, POWER):
next_output = input_element**POWER + state
next_state = input_element + state
return next_output, next_state
# Create agents with input stream x and output stream y
# and initial state of 0, and keyword arg POWER with value 2.
f(in_stream=x, out_stream=y, state=0, POWER=2)
# state[0] = 0, state[n+1] = x[0] + ... + x[n]
# y[0] = x[0]**2, y[n+1] = x[n+1]**2 + state[n]
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_state_with_keyword_arg_with_map_e()
def example_of_state_with_keyword_arg_with_fmap_e():
# Specify streams
x = Stream('x')
# Decorate encapsulated functions
@fmap_e
def f(input_element, state, POWER):
next_output = input_element**POWER + state
next_state = input_element + state
return next_output, next_state
# Create agents with input stream x and output stream y
# and initial state of 0, and keyword arg POWER with value 2.
y = f(x, state=0, POWER=2)
# state[0] = 0, state[n+1] = x[0] + ... + x[n]
# y[0] = x[0]**2, y[n+1] = x[n+1]**2 + state[n]
# Put test values in the input streams.
x.extend(list(range(5)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
example_of_state_with_keyword_arg_with_fmap_e()
Explanation: Agents with both State and Keyword Arguments
The function that is encapsulated can have both state and additional keyword arguments. Note that the call to map_element must have keyword arguments 'state' and the additional keywords. In the following example the call to map_element specifies the initial state (state=0) and the value of the keyword argument (POWER=2).
End of explanation
def example_of_saving_state_in_argument():
# Specify streams
x = Stream('x')
y = Stream('y')
s = {'a':0, 'b':1}
# Specify encapsulated functions
def f(v, s):
final, prefinal = s['a'], s['b']
post_final = final + prefinal
# In the next state: prefinal becomes final
# final becomes next_output
s['a'], s['b'] = post_final, final
return final
# Create agent with input stream x and output stream y and
# keyword argument s
map_element(f, x, y, s=s)
# Put test values in the input stream.
# The values of x aren't relevant in this example
# because they merely drive the next step of the agent.
x.extend(list(range(10)))
# Execute a step
run()
# Look at recent values of streams.
print ('recent values of stream y are')
print (recent_values(y))
assert recent_values(y) == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
example_of_saving_state_in_argument()
Explanation: Saving the State of an Agent in an Argument of the Function
In the following example, the state of the agent is stored in a dict, s. The output of the example is the Fibonacci sequence. In this example, s[0] is the next output of the sequence and s[1] is the element following s[0].
End of explanation
def example_class_to_save_state_and_args():
class example(object):
def __init__(self, multiplicand):
self.multiplicand = multiplicand
self.running_sum = 0
def step(self, v):
result = v * self.multiplicand + self.running_sum
self.running_sum += v
return result
x = Stream()
y = Stream()
eg = example(multiplicand=2)
map_element(func=eg.step, in_stream=x, out_stream=y)
x.extend(list(range(5)))
run()
print ('recent values of stream y are')
print (recent_values(y))
example_class_to_save_state_and_args()
from IoTPy.agent_types.op import filter_element
def examples_filter_element():
x = Stream('x')
#----------------------------------------------------------------
# Filter to only have even numbers
#----------------------------------------------------------------
even = Stream()
filter_element(func=lambda v: not v%2, in_stream=x, out_stream=even)
# Example: If x = [0, 1, 2, 3, ... ] then y is [0, 2, 4, ...]
#----------------------------------------------------------------
# Filter to only have odd numbers
#----------------------------------------------------------------
odd = Stream()
filter_element(func=lambda v: v%2, in_stream=x, out_stream=odd)
#----------------------------------------------------------------
# Filter to only have negative numbers
#----------------------------------------------------------------
neg = Stream('negative')
filter_element(func=lambda v: v < 0, in_stream=x, out_stream=neg)
#----------------------------------------------------------------
# Filter to only have non_negativenumbers
#----------------------------------------------------------------
non_neg = Stream('non_negative')
filter_element(func=lambda v: v >= 0, in_stream=x, out_stream=non_neg)
#----------------------------------------------------------------
# filter_element with state and no additional arguments
#----------------------------------------------------------------
def less_than_n(v, state):
next_output_element = (v <= state)
next_state = state+1
return next_output_element, next_state
y = Stream('y')
less = Stream()
filter_element(func=less_than_n, in_stream=y, out_stream=less, state=0)
# State on j-th step is j.
# less_than_n(v, state) returns (v < j) on the j-th step.
# less filters out all elements v for which v > j
# So if y is [1, 5, 0, 2, 6, 3] then since states are [ 0, 1, 2, 3, 4,..]
# then since not(y[0] <= 0), not(y[1] <= 1),
# y[2] <= 2, y[3] <=3, .... the sequence of outputs of the function
# less_than_v are [(False, 0), (False, 1), (True, 2), (True, 3), ...]. So
# the output stream contains y[2], y[3], ... or [0, 2, ...]
#----------------------------------------------------------------
# filter_element with state and with additional keyword arguments
#----------------------------------------------------------------
# The keyword argument is addend.
def less_than_n_plus_addend(v, state, addend):
# return pair: boolean filter, next state
return v <= state+addend, state+1
z = Stream('z')
less_addend = Stream()
filter_element(func=less_than_n_plus_addend, in_stream=z,
out_stream=less_addend, state=0, addend=3)
# State on j-th step is j.
# Stream less contains z[j] if and only if z[j] <= j+3
# For example, if z = [2, 3, 3, 4, 10, 15, 7, .....] then the
# output stream is [2, 3, 3, 4, 7, ...]
#----------------------------------------------------------------
# filter out numbers above the threshold
#----------------------------------------------------------------
def threshold(v, threshold): return v > threshold
above_threshold = Stream('above threshold')
filter_element(func=threshold, in_stream=x,
out_stream=above_threshold, threshold=0)
# Put data into input streams and run.
DATA_x = list(range(-5, 5, 1))
x.extend(DATA_x)
DATA_y = [1, 5, 0, 2, 6, 3]
y.extend(DATA_y)
DATA_z = [2, 3, 3, 4, 10, 15, 7]
z.extend(DATA_z)
run()
# Inspect output
assert recent_values(even) == [-4, -2, 0, 2, 4]
assert recent_values(odd) == [-5, -3, -1, 1, 3]
assert recent_values(non_neg) == [0, 1, 2, 3, 4]
assert recent_values(neg) == [-5, -4, -3, -2, -1]
assert recent_values(less) == [0, 2, 3]
assert recent_values(less_addend) == [2, 3, 3, 4, 7]
assert recent_values(above_threshold) == [1, 2, 3, 4]
print (recent_values(even))
print (recent_values(odd))
print (recent_values(non_neg))
print (recent_values(neg))
print (recent_values(less))
print (recent_values(less_addend))
print (recent_values(above_threshold))
examples_filter_element()
Explanation: Storing the State and Arguments of an Agent in a Class
The next example shows how you can save the state and arguments in a class. In this example, the state is running_sum which is the sum of the values read on the input stream, and multiplicand is an argument.
End of explanation |
9,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Computer Programming
Sebastián Flores
http
Step1: Text processing
Tabs
The string \t corresponds to a single character, which represents a tab.
Step2: Text processing
Important
Step3: Text processing
Splitting a string
To split a string we have 2 options
Step4: Text processing
Joining a list of strings
To join a list of strings you need to use the join method
Step5: Joining a list of strings
Remark
Step6: Text processing
Joining a sequence of values (non-strings) v2
It is also possible to use map, which generates a new list by applying the function passed as an argument to each element of the original list.
Step7: Text processing
Interpolating values by position
Step8: Text processing
Interpolating values by name
Step9: Text processing
Uppercase and lowercase
To change the capitalization of a string, the following methods can be used
Step10: Text processing
Motivating example
We want to know which words are the most common in a language. To do that, we need to know how many times each word appears in a sentence. Develop a function contar_palabras that, when applied to a string, returns a dictionary with the words and the number of times each one appears in the sentence. Omit spaces and punctuation and exclamation marks.
t = 'El sobre, en el aula, esta sobre el pupitre.'
contar_palabras(t)
{'el'
Step11: Text processing
Motivation
Step12: Text processing
Exercise 2
Write a program with the following behaviour
Step13: Text processing
DNA processing
A DNA strand is a sequence of nitrogenous bases called adenine, cytosine, thymine and guanine.
In a program, a strand is represented as a string of the characters 'a', 'c', 't' and 'g'.
Each strand has a complementary strand, which is obtained by swapping the adenines with the thymines and the cytosines with the guanines
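A minimal sketch of that rule (an illustrative helper, not the course's official solution; it assumes a lowercase string such as 'gattaca'):
def complement(dna):
    # swap a<->t and c<->g, position by position
    pairs = {'a': 't', 't': 'a', 'c': 'g', 'g': 'c'}
    return ''.join(pairs[base] for base in dna)
# complement('gattaca') returns 'ctaatgt'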
Step14: DNA processing
1.1 Solution: random sequence v1
Step15: DNA processing
1.1 Solution: random sequence v2
Step16: Text processing
DNA processing
Step17: Text processing
Solution: complementary sequence v1
Step18: Text processing
Solution: complementary sequence v2
Step19: Text processing
Solution: complementary sequence v3 | Python Code:
print len("\n")
a1 = 'casa\narbol\npatio'
print a1
print len(a1)
a2 = '''casa
arbol
patio'''
print a2
print len(a2)
print a1==a2
b = 'a\nb\nc'
print b
print len(b)
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Computer Programming
Sebastián Flores
http://progra.usm.cl/
https://www.github.com/usantamaria/iwi131
Fechas
Actividad 05: Miércoles 6 Enero 2016 (8:00).
Certamen 3: Viernes 8 Enero 2016 (15:30).
Certamen Recuperativo: Lunes 18 Enero 2016 (8:00).
Clases
Mie 23 Dic 2016: Procesamiento de Texto.
Lun 28 Dic 2016: Escribir y leer archivos.
Mie 30 Dic 2016: Ejercicios tipo certamen.
Lun 04 Ene 2016: Ejercicios tipo certamen.
Mie 06 Ene 2016: Actividad 5.
Consejo: Baje el libro del curso, lea, aprenda y practique.
What content will we learn?
Text processing
Why will we learn this content?
Text processing
A crucial skill for solving a wide variety of problems.
Motivation
We want to know which words are the most common in a language. To do that, we need to know how many times each word appears in a sentence. Develop a function contar_palabras that, when applied to a string, returns a dictionary with the words and the number of times each one appears in the sentence. Omit spaces and punctuation and exclamation marks.
t = 'El sobre, en el aula, esta sobre el pupitre.'
contar_palabras(t)
{'el': 3, 'en': 1, 'esta': 1, 'aula': 1,
'sobre': 2, 'pupitre': 1}
How would you carry out this difficult task?
Tips
Text processing uses:
Pattern recognition: you must recognise which patterns repeat and can be exploited to process the text.
Use of specific functions: the string data type has a rich collection of methods that you must master to simplify the task of processing text.
Remember that every string is immutable, so applying the various functions always produces a new string.
Text processing
Line breaks
The string \n corresponds to a single character, which represents the line break.
End of explanation
print len("\t")
a = 'casa\n\tarbol\n\tpatio'
print a
b = 'a\tb\tc'
print b
print len(b)
Explanation: Text processing
Tabs
The string \t corresponds to a single character, which represents a tab.
End of explanation
palabra = 'cara'
palabra2 = palabra.replace('r', 's')
print palabra
print palabra2
print palabra2.replace('ca', 'pa')
print palabra2.replace('a', 'e', 1)
print palabra2.replace('c', '').replace('a', 'o') # Method chaining
print palabra
Explanation: Text processing
Important: \n and \t appear frequently when we analyse files read from the hard disk.
Text processing
Replacing sections of a string
The function mi_string.replace(s1, s2) searches for every occurrence of the substring s1 in mi_string and replaces it with s2.
The function mi_string.replace(s1, s2, n) searches for the first n occurrences of the substring s1 in mi_string and replaces them with s2.
The function mi_string.replace(s1, s2) returns a new string; the original string is not modified.
End of explanation
oracion = 'taca taca'
print list(oracion)
print set(oracion)
print oracion.split()
print oracion.split("a")
print oracion.split("t")
print oracion.split("ac")
Explanation: Text processing
Splitting a string
To split a string we have 2 options:
* Split into characters, using list(mi_string), which generates a list with the characters of mi_string in order.
* Split into words, using mi_string.split(s), which generates a list of "words" that were separated by the string s. The string s will not appear in any of the substrings of the list. By default, s is the space character " ".
End of explanation
mi_lista = ['Ex', 'umbra', 'in', 'solem']
print ' '.join(mi_lista)
print ''.join(mi_lista)
print ' -> '.join(mi_lista)
mi_conjunto = {'Ex', 'umbra', 'in', 'solem'}
print mi_conjunto
print ' '.join(mi_conjunto)
print ''.join(mi_conjunto)
print ' -> '.join(mi_conjunto)
Explanation: Text processing
Joining a list of strings
To join a list of strings you need to use the join method:
Python
s.join(lista_de_strings)
It returns a single string in which the elements of the list have been "glued" together using the string s.
End of explanation
lista_de_strings = ["1", "2", "3"]
print ", ".join(lista_de_strings)
lista_de_ints = [1, 2, 3]
print ", ".join(lista_de_ints)
lista_de_ints = range(10)
lista_de_strings = []
for x in lista_de_ints:
lista_de_strings.append(str(x))
print ", ".join(lista_de_strings)
Explanation: Joining a list of strings
Remark: join only works on a list of strings. If you want to glue numbers together, you must convert them to strings first.
End of explanation
numeros = range(10)
print numeros
def f(x):
return 2.*x + 1./(x+1)
print map(str, numeros)
print map(float, numeros)
print map(f, numeros)
print ', '.join(map(str, numeros))
#
print "-".join("1,2,3,4".split(","))
Explanation: Text processing
Joining a sequence of values (not strings), v2
It is also possible to use map, which generates a new list by applying the function passed as an argument to each element of the original list.
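An equivalent alternative (a sketch) is a list comprehension, which avoids map entirely:
numeros = range(10)
print(', '.join([str(x) for x in numeros]))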
End of explanation
s = 'Soy {0} y vivo en {1}'
print s.format('Perico', 'Valparaiso')
print s.format('Erika', 'Berlin')
print s.format('Wang Dawei', 'Beijing')
Explanation: Text processing
Interpolating values by position
End of explanation
s = '{nombre} estudia en la {u}'
# Datos pueden pasarse ordenados
print s.format(nombre='Perico', u='UTFSM')
print s.format(nombre='Fulana', u='PUCV')
# También es posible cambiar el orden
print s.format(u='UPLA', nombre='Yayita')
# O con magia (conocimiento avanzado)
d = {"nombre":"Mago Merlin", "u":"Camelot University"}
print s.format(**d)
Explanation: Text processing
Interpolating values by name
End of explanation
palabra = '1. raMo de ProGra'
print palabra.upper()
print palabra.lower()
print palabra.swapcase()
print palabra.capitalize()
Explanation: Text processing
Uppercase and lowercase
To change the capitalization of a string, you can use the following methods:
.upper(): EVERYTHING IN UPPERCASE.
.lower(): everything in lowercase.
.swapcase(): swaps the capitalization the string had.
.capitalize(): Capitalizes only the first letter of the string.
End of explanation
def contar_palabras(s):
return s
t = 'El sobre, en el aula, esta sobre el pupitre.'
contar_palabras(t)
Explanation: Text processing
Motivating example
We want to know which words are the most common in a language. For that, we need to know how many times each word appears in a sentence. Write a function contar_palabras that, applied to a string, returns a dictionary with the words and the number of times each one appears in the sentence. Ignore spaces, punctuation and exclamation marks.
t = 'El sobre, en el aula, esta sobre el pupitre.'
contar_palabras(t)
{'el': 3, 'en': 1, 'esta': 1, 'aula': 1, 'sobre': 2, 'pupitre': 1}
How would you tackle this tricky task now?
Text processing
Tips
Break the problem into smaller tasks:
* How do we remove the unwanted symbols?
* How do we separate the words?
* How do we count the words?
End of explanation
def contar_palabras(s):
s = s.lower()
for signo in [",",".",";","!","?","'",'"']:
s = s.replace(signo,"")
palabras = s.split()
contador = {}
for palabra_sucia in palabras:
palabra = palabra_sucia
if palabra in contador:
contador[palabra] += 1 # Aumentamos
else:
contador[palabra] = 1 # Inicializamos
return contador
t = 'El sobre, en el aula, !! Esta sobre el pupitre.'
contar_palabras(t)
Explanation: Text processing
Motivation: Solution
INPUT:
t = 'El sobre, en el aula, esta sobre el pupitre.'
contar_palabras(t)
OUTPUT:
{'el': 3, 'en': 1, 'esta': 1, 'aula': 1,
'sobre': 2, 'pupitre': 1}
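A shorter variant (a sketch, not part of the original course solution) delegates the counting to collections.Counter:
from collections import Counter
def contar_palabras_v2(s):
    s = s.lower()
    for signo in [",", ".", ";", "!", "?", "'", '"']:
        s = s.replace(signo, "")
    return dict(Counter(s.split()))
print(contar_palabras_v2('El sobre, en el aula, esta sobre el pupitre.'))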
End of explanation
# Solución Alumnos
# Solución
# Guardar datos
N = int(raw_input("Numero de alumnos: "))
notas_alumnos = []
for i in range(N):
nombre = raw_input("Nombre alumno {0}:".format(i+1))
nombre_pila = nombre.split(" ")[0]
notas_str = raw_input("Ingrese las notas de {0}: ".format(nombre_pila))
notas_int = []
for nota in notas_str.split(" "):
notas_int.append(int(nota))
promedio = sum(notas_int)/float(len(notas_int))
notas_alumnos.append( (nombre_pila, promedio) )
# Imprimir promedios
for nombre, promedio in notas_alumnos:
print "El promedio de {0} es {1:.2f}".format(nombre, promedio)
Explanation: Text processing
Exercise 2
Write a program with the following behavior:
INPUT:
Numero de alumnos: 3
Nombre alumno 1: Isaac Newton
Ingrese las notas de Isaac: 98 94 77
Nombre alumno 2: Nikola Tesla
Ingrese las notas de Nikola: 100 68 94 88
Nombre alumno 3: Albert Einstein
Ingrese las notas de Albert: 83 85
OUTPUT:
El promedio de Isaac es 89.67
El promedio de Nikola es 87.50
El promedio de Albert es 84.00
Text processing
Exercise 2: Analysis
Which tasks are needed?
Text processing
Exercise 2: Solution
The tasks to perform are:
* Read the number of students
* For each student, read the name and grades.
* Process the grades to obtain the average.
* Store the name and grades.
* Separate the first name from the last name.
* Print the results appropriately.
End of explanation
# Definicion de funcion
from random import choice
def cadena_al_azar(n):
bases_n=''
for i in range(n):
base=choice('atgc')
bases_n+=base
return bases_n
# Casos de uso
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(10)
print cadena_al_azar(10)
print cadena_al_azar(10)
print cadena_al_azar(10)
Explanation: Text processing
DNA processing
A DNA strand is a sequence of nitrogenous bases called adenine, cytosine, thymine and guanine.
In a program, a strand is represented as a string of the characters 'a', 'c', 't' and 'g'.
Every strand has a complementary strand, obtained by swapping the adenines with the thymines, and the cytosines with the guanines:
cadena = 'cagcccatgaggcagggtg'
complemento = 'gtcgggtactccgtcccac'
DNA processing
1.1 DNA processing: Random sequence
Write the function cadena_al_azar(n) that generates a random DNA string of length n:
Usage example:
cadena_al_azar(10)
may return 'acgtccgcct', 'tgttcgcatt', etc.
Hint:
from random import choice
choice('atcg') returns one of the letters of "atcg" at random
DNA processing
1.1 Random sequence: Analysis
Which tasks are needed?
End of explanation
from random import choice
# Definicion de funcion
def cadena_al_azar(n):
adn = ""
for i in range(n):
adn += choice("acgt")
return adn
# Casos de uso
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(10)
print cadena_al_azar(10)
print cadena_al_azar(10)
print cadena_al_azar(10)
Explanation: DNA processing
1.1 Random sequence solution, v1
End of explanation
from random import choice
# Definicion de funcion
def cadena_al_azar(n):
bases = []
for i in range(n):
bases.append(choice("acgt"))
adn = "".join(bases)
return adn
# Casos de uso
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(1)
print cadena_al_azar(10)
print cadena_al_azar(10)
print cadena_al_azar(10)
print cadena_al_azar(10)
Explanation: DNA processing
1.1 Random sequence solution, v2
End of explanation
# Solucion estudiantes
def cadena_(n):
adn = ""
for i in range(n):
adn += choice("acgt")
return adn
Explanation: Text processing
DNA processing: Complementary sequence
Write the function complementaria(s) that returns the complementary strand of s: the complement of "a" is "t" (and vice versa), and the complement of "c" is "g" (and vice versa).
Python
cadena = 'cagcccatgaggcagggtg'
print complementaria(cadena)
'gtcgggtactccgtcccac'
Text processing
DNA processing: Complementary sequence
Tasks?
End of explanation
def complementaria(adn):
rna = ""
for base in adn:
if base=="a":
rna += "t"
elif base=="t":
rna += "a"
elif base=="c":
rna += "g"
else:
rna += "c"
return rna
adn = cadena_al_azar(20)
print adn
print complementaria(adn)
Explanation: Text processing
Complementary sequence solution, v1
End of explanation
def complementaria(adn):
pares = {"a":"t", "t":"a", "c":"g", "g":"c"}
rna = ""
for base in adn:
rna += pares[base]
return rna
adn = cadena_al_azar(20)
print adn
print complementaria(adn)
Explanation: Text processing
Complementary sequence solution, v2
End of explanation
def complementaria(adn):
rna = adn.replace("a","T").replace("t","A").replace("c","G").replace("g","C")
return rna.lower()
adn = cadena_al_azar(20)
print adn
print complementaria(adn)
Explanation: Text processing
Complementary sequence solution, v3
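One more possible variant (a sketch, assuming Python 2's string.maketrans; in Python 3 the equivalent is str.maketrans) builds a translation table instead of chaining replace calls:
from string import maketrans
def complementaria_v4(adn):
    tabla = maketrans("atcg", "tagc")   # a<->t, c<->g
    return adn.translate(tabla)
print(complementaria_v4('cagcccatgaggcagggtg'))   # gtcgggtactccgtcccac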
End of explanation |
9,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning
This notebook serves as supporting material for topics covered in Chapter 18 - Learning from Examples , Chapter 19 - Knowledge in Learning, Chapter 20 - Learning Probabilistic Models from the book Artificial Intelligence
Step1: Contents
Review
Explanations of learning module
Practical Machine Learning Task
MNIST handwritten digits classification
Loading and Visualising digits data
kNN classifier
Review
Native implementation from Learning module
Faster implementation using NumPy
Overfitting and how to avoid it
Train-Test split
Crossvalidation
Regularisation
Sub-sampling
Fine tuning parameters to get better results
Introduction to Scikit-Learn
Email spam detector
Review
In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences.
An agent is learning if it improves its performance on future tasks after making observations about the world.
There are three types of feedback that determine the three main types of learning
Step2: The function load_MNIST() loads MNIST data from files saved in aima-data/MNIST. It returns four numpy arrays that we are gonna use to train & classify hand-written digits in various learning approaches.
Step3: Check the shape of these NumPy arrays to make sure we have loaded the database correctly.
Each 28x28 pixel image is flattened to 784x1 array and we should have 60,000 of them in training data. Similarly we should have 10,000 of those 784x1 arrays in testing data.
Step4: Visualizing MNIST digits data
To get a better understanding of the dataset, let's visualize some random images for each class from training & testing datasets.
Step5: Let's have a look at average of all the images of training and testing data.
Step6: k-Nearest Neighbours (kNN) classifier
Review
k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are gonna use this to classify MNIST handwritten digits. More about kNN on Scholarpedia.
Let's see how kNN works with a simple plot shown in the above picture. There are two classes named Class A yellow color dots and Class B violet color dots. Every point in this plot has two features i.e. (X<sub>2</sub>, X<sub>1</sub>) values of that particular point which we used to plot. Now, let's say we have a new point, a red star and we want to know which class this red star belongs. Solving this problem by predicting the class of this new red star is out current classification problem.
We have co-ordinates (we call them features in ML) of this red star and we need to predict its class using kNN algorithm. In this algorithm, the value of k is arbitary. k is one of the hyper parameters for kNN algorithm. We choose this number based on our dataset and choosing a particular number is known as hyper parameter tuning/optimising. We learn more about this in coming topics.
Let's put k = 3. It means you need to find 3-Nearest Neighbors of this red star and classify this new point into majority class. Observe that smaller circle which containg 3 points other that test point (red star). As there are two violet points, which is majority, we predict the class of red star as violet- Class B.
Similarly if we put k = 5, you can observe that there are 4 yellow points, which is majority. So, we classify our test point as yellow- Class A.
In practical tasks, we iterate through a bunch of values for k (like [1, 2, 5, 10, 20, 50, 100]) and see how it performs and select the best one.
Native implementations from Learning module
Let's classify MNIST data in this method. Similar to these points, our images in MNIST data also have features. These points have two features as (2, 3) which represents co-ordinates of the point in 2-dimentional plane. Our images have 28x28 pixel values and we treat them as features for this particular task.
Next couple of cells help you understand some useful definitions from learning module.
Step7: class DataSet explanation goes here
Step8: Nearest NeighborLearner explanation goes here
Now, let us convert this raw data into Dataset.examples to run our NearestNeighborLearner(dataset, k=1) defined in learning.py. Every image is represented by 784 numbers (28x28 pixels) and we append them with its label or class to make them work with our implementations in learning module.
Step9: Now, we will initialize DataSet with our training examples. Call NearestNeighbor Learner on this dataset. Predict the class of a test image.
Step10: Choose a number from 0 to 9999 for test_img_choice and we are going to predict the class of that test image.
Step11: To make sure that the output we got is correct, let's plot that image along with its label.
Step12: Hurray! We've got it correct. Don't worry if our algorithm predicted a wrong class. With this techinique we have only ~97% accuracy on this dataset. Let's try with a different test image and hope we get it this time.
You might have recognized that our algorithm took ~20 seconds to predict a single image. How would we even predict all 10,000 test images? Yeah, the implementations we have in our learning module are not optimized to run on this particular dataset. We will have an optimised version below in NumPy which is nearly ~50-100 times faster than our native implementation.
Faster implementation using NumPy
Here we calculate manhattan distance between two images faster than our native implementation. Which in turn make predicting labels for test images far efficient.
Step13: Let's print the shapes of data to make sure everything's on track.
Step14: Let us predict the classes of first 100 test images.
Step15: Let's compare the performances of both implementations. It took 20 Secs. to predict one image using our native implementations and 17 Secs. to predict 100 images in faster implementations. That's 110 times faster.
Now, test the accuracy of our predictions
Step16: Introduction to Scikit-Learn
In this section we will solve this MNIST problem using Scikit-Learn. Learn more about Scikit-Learn here. As we are using this library, we don't need to define our own functions (kNN or Support Vector Machines aka SVMs) to classify digits.
Let's start by importing necessary modules for kNN and SVM. | Python Code:
from learning import *
Explanation: Learning
This notebook serves as supporting material for topics covered in Chapter 18 - Learning from Examples , Chapter 19 - Knowledge in Learning, Chapter 20 - Learning Probabilistic Models from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from learning.py. Let's start by importing everything from learning module.
End of explanation
import os, struct
import array
import numpy as np
import matplotlib.pyplot as plt
from collections import Counter
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
def load_MNIST(path="aima-data/MNIST"):
"helper function to load MNIST data"
train_img_file = open(os.path.join(path, "train-images-idx3-ubyte"), "rb")
train_lbl_file = open(os.path.join(path, "train-labels-idx1-ubyte"), "rb")
test_img_file = open(os.path.join(path, "t10k-images-idx3-ubyte"), "rb")
test_lbl_file = open(os.path.join(path, 't10k-labels-idx1-ubyte'), "rb")
magic_nr, tr_size, tr_rows, tr_cols = struct.unpack(">IIII", train_img_file.read(16))
tr_img = array.array("B", train_img_file.read())
train_img_file.close()
magic_nr, tr_size = struct.unpack(">II", train_lbl_file.read(8))
tr_lbl = array.array("b", train_lbl_file.read())
train_lbl_file.close()
magic_nr, te_size, te_rows, te_cols = struct.unpack(">IIII", test_img_file.read(16))
te_img = array.array("B", test_img_file.read())
test_img_file.close()
magic_nr, te_size = struct.unpack(">II", test_lbl_file.read(8))
te_lbl = array.array("b", test_lbl_file.read())
test_lbl_file.close()
# print(len(tr_img), len(tr_lbl), tr_size)
# print(len(te_img), len(te_lbl), te_size)
train_img = np.zeros((tr_size, tr_rows*tr_cols), dtype=np.int16)
train_lbl = np.zeros((tr_size,), dtype=np.int8)
for i in range(tr_size):
train_img[i] = np.array(tr_img[i*tr_rows*tr_cols : (i+1)*tr_rows*tr_cols]).reshape((tr_rows*te_cols))
train_lbl[i] = tr_lbl[i]
test_img = np.zeros((te_size, te_rows*te_cols), dtype=np.int16)
test_lbl = np.zeros((te_size,), dtype=np.int8)
for i in range(te_size):
test_img[i] = np.array(te_img[i*te_rows*te_cols : (i+1)*te_rows*te_cols]).reshape((te_rows*te_cols))
test_lbl[i] = te_lbl[i]
return(train_img, train_lbl, test_img, test_lbl)
Explanation: Contents
Review
Explanations of learning module
Practical Machine Learning Task
MNIST handwritten digits classification
Loading and Visualising digits data
kNN classifier
Review
Native implementation from Learning module
Faster implementation using NumPy
Overfitting and how to avoid it
Train-Test split
Crossvalidation
Regularisation
Sub-sampling
Fine tuning parameters to get better results
Introduction to Scikit-Learn
Email spam detector
Review
In this notebook, we learn about agents that can improve their behavior through diligent study of their own experiences.
An agent is learning if it improves its performance on future tasks after making observations about the world.
There are three types of feedback that determine the three main types of learning:
Supervised Learning:
In Supervised Learning the agent observes some example input-output pairs and learns a function that maps from input to output.
Example: Let's think of an agent to classify images containing cats or dogs. If we provide an image containing a cat or a dog, this agent should output a string "cat" or "dog" for that particular image. To teach this agent, we will give a lot of input-output pairs like {cat image-"cat"}, {dog image-"dog"} to the agent. The agent then learns a function that maps from an input image to one of those strings.
Unsupervised Learning:
In Unsupervised Learning the agent learns patterns in the input even though no explicit feedback is supplied. The most common type is clustering: detecting potentially useful clusters of input examples.
Example: A taxi agent would develop a concept of good traffic days and bad traffic days without ever being given labeled examples.
Reinforcement Learning:
In Reinforcement Learning the agent learns from a series of reinforcements—rewards or punishments.
Example: Let's talk about an agent to play the popular Atari game—Pong. We will reward a point for every correct move and deduct a point for every wrong move from the agent. Eventually, the agent will figure out which of its actions prior to the reinforcement were most responsible for it.
Explanations of learning module goes here
Practical Machine Learning Task
MNIST handwritten digits classification
The MNIST database, available from this page, is a large database of handwritten digits that is commonly used for training and testing/validating machine learning models.
The dataset has 60,000 training images each of size 28x28 pixels with labels and 10,000 testing images of size 28x28 pixels with labels.
In this section, we will use this database to compare performances of these different learning algorithms:
* kNN (k-Nearest Neighbour) classifier
* Single-hidden-layer Neural Network classifier
* SVMs (Support Vector Machines)
It is estimated that humans have an error rate of about 0.2% on this problem. Let's see how our algorithms perform!
Loading MNIST digits data
Let's start by loading MNIST data into numpy arrays.
End of explanation
train_img, train_lbl, test_img, test_lbl = load_MNIST()
Explanation: The function load_MNIST() loads MNIST data from files saved in aima-data/MNIST. It returns four numpy arrays that we are gonna use to train & classify hand-written digits in various learning approaches.
End of explanation
print("Training images size:", train_img.shape)
print("Training labels size:", train_lbl.shape)
print("Testing images size:", test_img.shape)
print("Training labels size:", test_lbl.shape)
Explanation: Check the shape of these NumPy arrays to make sure we have loaded the database correctly.
Each 28x28 pixel image is flattened to 784x1 array and we should have 60,000 of them in training data. Similarly we should have 10,000 of those 784x1 arrays in testing data.
End of explanation
classes = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
num_classes = len(classes)
def show_MNIST(dataset, samples=8):
if dataset == "training":
labels = train_lbl
images = train_img
elif dataset == "testing":
labels = test_lbl
images = test_img
else:
raise ValueError("dataset must be 'testing' or 'training'!")
for y, cls in enumerate(classes):
idxs = np.nonzero([i == y for i in labels])
idxs = np.random.choice(idxs[0], samples, replace=False)
for i , idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples, num_classes, plt_idx)
plt.imshow(images[idx].reshape((28, 28)))
plt.axis("off")
if i == 0:
plt.title(cls)
plt.show()
# takes 5-10 secs. to execute the cell
show_MNIST("training")
# takes 5-10 secs. to execute the cell
show_MNIST("testing")
Explanation: Visualizing MNIST digits data
To get a better understanding of the dataset, let's visualize some random images for each class from training & testing datasets.
End of explanation
classes = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
num_classes = len(classes)
def show_ave_MNIST(dataset):
if dataset == "training":
print("Average of all images in training dataset.")
labels = train_lbl
images = train_img
elif dataset == "testing":
print("Average of all images in testing dataset.")
labels = test_lbl
images = test_img
else:
raise ValueError("dataset must be 'testing' or 'training'!")
for y, cls in enumerate(classes):
idxs = np.nonzero([i == y for i in labels])
print("Digit", y, ":", len(idxs[0]), "images.")
ave_img = np.mean(np.vstack([images[i] for i in idxs[0]]), axis = 0)
# print(ave_img.shape)
plt.subplot(1, num_classes, y+1)
plt.imshow(ave_img.reshape((28, 28)))
plt.axis("off")
plt.title(cls)
plt.show()
show_ave_MNIST("training")
show_ave_MNIST("testing")
Explanation: Let's have a look at average of all the images of training and testing data.
End of explanation
%psource DataSet
Explanation: k-Nearest Neighbours (kNN) classifier
Review
k-Nearest Neighbors algorithm is a non-parametric method used for classification and regression. We are gonna use this to classify MNIST handwritten digits. More about kNN on Scholarpedia.
Let's see how kNN works with a simple plot shown in the above picture. There are two classes named Class A (yellow dots) and Class B (violet dots). Every point in this plot has two features, i.e. the (X<sub>2</sub>, X<sub>1</sub>) values of that particular point, which we used to plot it. Now, let's say we have a new point, a red star, and we want to know which class this red star belongs to. Solving this problem by predicting the class of this new red star is our current classification problem.
We have the co-ordinates (we call them features in ML) of this red star and we need to predict its class using the kNN algorithm. In this algorithm, the value of k is arbitrary. k is one of the hyperparameters of the kNN algorithm. We choose this number based on our dataset, and choosing a particular number is known as hyperparameter tuning/optimising. We will learn more about this in the coming topics.
Let's put k = 3. It means you need to find the 3 nearest neighbors of this red star and classify this new point into the majority class. Observe the smaller circle, which contains 3 points other than the test point (the red star). As there are two violet points, which is the majority, we predict the class of the red star as violet - Class B.
Similarly if we put k = 5, you can observe that there are 4 yellow points, which is majority. So, we classify our test point as yellow- Class A.
In practical tasks, we iterate through a bunch of values for k (like [1, 2, 5, 10, 20, 50, 100]) and see how it performs and select the best one.
Native implementations from Learning module
Let's classify the MNIST data with this method. Similar to these points, our images in the MNIST data also have features. These points have two features, such as (2, 3), which represent the co-ordinates of the point in the 2-dimensional plane. Our images have 28x28 pixel values and we treat them as features for this particular task.
Next couple of cells help you understand some useful definitions from learning module.
End of explanation
%psource NearestNeighborLearner
Explanation: class DataSet explanation goes here
End of explanation
print(train_img.shape, train_lbl.shape)
temp_train_lbl = train_lbl.reshape((60000,1))
training_examples = np.hstack((train_img, temp_train_lbl))
print(training_examples.shape)
Explanation: Nearest NeighborLearner explanation goes here
Now, let us convert this raw data into Dataset.examples to run our NearestNeighborLearner(dataset, k=1) defined in learning.py. Every image is represented by 784 numbers (28x28 pixels) and we append them with its label or class to make them work with our implementations in learning module.
End of explanation
# takes ~8 Secs. to execute this cell
MNIST_DataSet = DataSet(examples=training_examples, distance=manhattan_distance)
kNN_Learner = NearestNeighborLearner(MNIST_DataSet)
Explanation: Now, we will initialize DataSet with our training examples. Call NearestNeighbor Learner on this dataset. Predict the class of a test image.
End of explanation
# takes ~20 Secs. to execute this cell
test_img_choice = 2311
predicted_class = kNN_Learner(test_img[test_img_choice])
print("Predicted class of test image:", predicted_class)
Explanation: Choose a number from 0 to 9999 for test_img_choice and we are going to predict the class of that test image.
End of explanation
print("Actual class of test image:", test_lbl[test_img_choice])
plt.imshow(test_img[test_img_choice].reshape((28,28)))
Explanation: To make sure that the output we got is correct, let's plot that image along with its label.
End of explanation
class kNN_learner:
"Simple kNN learner with manhattan distance"
def __init__(self):
pass
def train(self, train_img, train_lbl):
self.train_img = train_img
self.train_lbl = train_lbl
def predict_labels(self, test_img, k=1, distance="manhattan"):
if distance == "manhattan":
distances = self.compute_manhattan_distances(test_img)
num_test = distances.shape[0]
predictions = np.zeros(num_test, dtype=np.uint8)
for i in range(num_test):
k_best_labels = self.train_lbl[np.argsort(distances[i])].flatten()[:k]
predictions[i] = mode(k_best_labels)
return predictions
def compute_manhattan_distances(self, test_img):
num_test = test_img.shape[0]
num_train = self.train_img.shape[0]
# print(num_test, num_train)
dists = np.zeros((num_test, num_train))
for i in range(num_test):
dists[i] = np.sum(abs(self.train_img - test_img[i]), axis = 1)
return(dists)
Explanation: Hurray! We've got it correct. Don't worry if our algorithm predicted a wrong class. With this technique we have only ~97% accuracy on this dataset. Let's try with a different test image and hope we get it this time.
You might have recognized that our algorithm took ~20 seconds to predict a single image. How would we even predict all 10,000 test images? Yeah, the implementations we have in our learning module are not optimized to run on this particular dataset. We will have an optimised version below in NumPy which is nearly ~50-100 times faster than our native implementation.
Faster implementation using NumPy
Here we calculate the Manhattan distance between two images faster than our native implementation, which in turn makes predicting labels for the test images far more efficient.
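As a side note, the same distance matrix can be obtained with SciPy (a sketch, assuming SciPy is installed):
from scipy.spatial.distance import cdist
# dists[i, j] = Manhattan distance between test image i and training image j
dists = cdist(test_img[:100], train_img, metric='cityblock')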
End of explanation
print("Training images size:", train_img.shape)
print("Training labels size:", train_lbl.shape)
print("Testing images size:", test_img.shape)
print("Training labels size:", test_lbl.shape)
learner = kNN_learner()
learner.train(train_img, train_lbl)
Explanation: Let's print the shapes of data to make sure everything's on track.
End of explanation
# takes ~17 Secs. to execute this cell
num_test = 100
predictions = learner.predict_labels(test_img[:num_test], k=3)
Explanation: Let us predict the classes of first 100 test images.
End of explanation
# print(predictions)
# print(test_lbl[:num_test])
num_correct = np.sum([predictions == test_lbl[:num_test]])
num_accuracy = (float(num_correct) / num_test) * 100
print("Accuracy of predictions:", num_accuracy, "%")
Explanation: Let's compare the performances of both implementations. It took 20 Secs. to predict one image using our native implementations and 17 Secs. to predict 100 images in faster implementations. That's 110 times faster.
Now, test the accuracy of our predictions:
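To tune k in practice, we can simply sweep a few values and keep the most accurate one (a sketch using the helpers defined above):
for k in [1, 3, 5, 10]:
    preds = learner.predict_labels(test_img[:num_test], k=k)
    acc = 100.0 * np.sum(preds == test_lbl[:num_test]) / num_test
    print("k =", k, "accuracy =", acc, "%")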
End of explanation
from sklearn.neighbors import NearestNeighbors
from sklearn import svm
# takes ~3 mins to execute the cell
SVMclf = svm.LinearSVC()
SVMclf.fit(train_img, train_lbl)
predictions = SVMclf.predict(test_img)
num_correct = np.sum(predictions == test_lbl)
num_accuracy = (float(num_correct)/len(test_lbl)) * 100
print("Accuracy of predictions:", num_accuracy, "%")
Explanation: Introduction to Scikit-Learn
In this section we will solve this MNIST problem using Scikit-Learn. Learn more about Scikit-Learn here. As we are using this library, we don't need to define our own functions (kNN or Support Vector Machines aka SVMs) to classify digits.
Let's start by importing necessary modules for kNN and SVM.
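For completeness, here is a hedged sketch of the kNN counterpart with scikit-learn's KNeighborsClassifier (fitting on a subset to keep the running time reasonable):
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_neighbors=3)
knn_clf.fit(train_img[:10000], train_lbl[:10000])
print("Accuracy of kNN predictions:", knn_clf.score(test_img[:1000], test_lbl[:1000]) * 100, "%")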
End of explanation |
9,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
INS-GPS Integration
This notebook shows an idealized example of loose INS-GPS integration.
Creating a trajectory and generating inertial readings
First we need to generate a trajectory. To keep things simple we generate sort of a random walk trajectory by summing random displacements.
Step1: Assume that each step is done in 10 seconds and define time stamps
Step2: Add displacements to initial latitude and longitude
Step3: We also append 20 minutes of INS being at rest
Step4: Set pitch and roll angles to zeros
Step5: Sensor sampling period is set to 0.1
Step6: Run the simulation routine which will interpolate the trajectory and generate inertial readings
Step7: The final trajectory is drawn below, the initial point is marked with a cross.
Step8: Integrating ideal data
Just to check that everything is correct we want to integrate the generated gyro and accel readings.
Step9: First we apply coning and sculling corrections
Step10: And the run the integration.
Step11: Compute integration error using a convenience function
Step12: We see that attitude and velocity errors are vanishingly small. The position errors are less than 3 meters during 3 hours of operations, which is completely negligible compared to errors of even the most accurate INS.
Step13: Integrating "real" data
Now we will run the navigation using inertial sensors with errors.
The error will be a sum of a random bias and additive white noise. We define magnitudes typical for moderately accurate navigation grade sensors.
Step14: Compute biases as a random constants. To avoid a "bad case" in this example we generated biases uniformly within $[-2 \sigma, 2 \sigma]$.
Step15: Now we apply errors to inertial readings
Step16: Compute coning and sculling corrections
Step17: An INS operation have to start with the self alignment. We devote 15 minutes of the initial rest for it
Step18: Split the readings into alignment and navigation parts
Step19: Compare estimated attitude angles with the true angles.
Step20: Assume that the initial position is known with the accuracy typical to GPS receivers
Step21: Assume that it is known that the navigation starts at rest ans set initial velocities to 0
Step22: Now we can run the integration
Step23: We see that even with very accurate gyros pure INS performance is not that good.
Step24: Aiding from GPS
Now we will use idealize GPS position observations for INS errors estimation and correction.
We assume that GPS is available every second and use known exact values of latitude and longitude
Step25: We will use an idealized model that GPS observations contain only additive normal errors with a standard deviation of 10 meters (note that in reality errors in outputs from GPS receivers behave much worse).
Step26: To use GPS measurements in a navigation Kalman filter we wrap this data into a special object
Step27: Also define gyro and accelerometer models using parameters defined above
Step28: Now we can run a navigation Kalman filter which will blend INS and GPS data. In this example INS errors didn't grow very large, thus we can use a feedforward filter.
Step29: We create a filter by passing sampling period and computed trajectory. To initialize the covariance matrix we pass standard deviations of the initial errors.
Currently the covariance matrix is initialized as diagonal, even though it can be done more rigorously, i.e consider correlations between sensor biases and attitude errors. But my view is that a reliable filter should not depend on such fine details, otherwise it is likely to fail in real conditions. So for the sake of simplicity it is implemented like this for now (can be changed later).
Theoretical attitude accuracy (sd values) from static gyrocompassing in our case is
Step30: We run the filter and pass available measurements to it. The return value is the INS trajectory corrected by estimated errors.
Step31: Now we want to investigate errors in the filtered trajectory.
Step32: Obviously performance in terms of position and velocity accuracy is very good, but this is sort of expected because GPS provides coordinates directly.
Attitude angle errors are generally decreasing as well, but the picture is less clear. We want to plot their standard deviation bounds estimated by the filter as well.
Step33: The return value of FeedforwardFilter contains attributes err, sd, gyro_err, gyro_sd, accel_err, accel_sd for estimated trajectory errors and inertial sensor states and their standard deviations. Below we plot true errors for heading, pitch and roll with their 1-sigma bounds provided by the filter.
Generally we see that the filter's performance is adequate. It can be measured more precisely by Monte-Carlo simulation, but this will not be included in this example.
Step34: Also it is interesting to assess the filter's sensor bias estimation. Plots below show $\pm \sigma$ bands of gyro bias estimates, the straight line depicts the true value. We see that estimation of gyro biases is quite successful.
Step35: Below the same done for accelerometer biases. Horizontal accelerometer biases are less observable on the given trajectory than gyro biases, and the vertical bias is not observable at all because pitch and roll are held zero. | Python Code:
from pyins import sim
from pyins.coord import perturb_ll
def generate_trajectory(n_points, min_step, max_step, angle_spread, random_state=0):
rng = np.random.RandomState(random_state)
xy = [np.zeros(2)]
angle = rng.uniform(2 * np.pi)
heading = [90 - angle]
angle_spread = np.deg2rad(angle_spread)
for i in range(n_points - 1):
step = rng.uniform(min_step, max_step)
xy.append(xy[-1] + step * np.array([np.cos(angle), np.sin(angle)]))
angle += rng.uniform(-angle_spread, angle_spread)
heading.append(90 - angle)
return np.asarray(xy), np.asarray(heading)
xy, h = generate_trajectory(1000, 70, 100, 20, random_state=1)
Explanation: INS-GPS Integration
This notebook shows an idealized example of loose INS-GPS integration.
Creating a trajectory and generating inertial readings
First we need to generate a trajectory. To keep things simple we generate sort of a random walk trajectory by summing random displacements.
End of explanation
t = np.arange(1000) * 10
Explanation: Assume that each step is done in 10 seconds and define time stamps:
End of explanation
lat0 = 58
lon0 = 56
lat, lon = perturb_ll(lat0, lon0, xy[:, 1], xy[:, 0])
Explanation: Add displacements to initial latitude and longitude:
End of explanation
t = np.hstack((-1200, t))
lat = np.hstack((lat[0], lat))
lon = np.hstack((lon[0], lon))
h = np.hstack((h[0], h))
Explanation: We also append 20 minutes of INS being at rest:
End of explanation
p = np.zeros_like(h)
r = np.zeros_like(h)
Explanation: Set pitch and roll angles to zeros:
End of explanation
dt = 0.1
Explanation: Sensor sampling period is set to 0.1:
End of explanation
traj_ref, gyro, accel = sim.from_position(dt, lat, lon, t, h=h, p=p, r=r)
Explanation: Run the simulation routine which will interpolate the trajectory and generate inertial readings:
End of explanation
plt.plot(traj_ref.lon, traj_ref.lat)
plt.plot(traj_ref.lon[0], traj_ref.lat[0], 'kx', markersize=12)
plt.xlabel("lon, deg")
plt.ylabel("lat, deg")
Explanation: The final trajectory is drawn below, the initial point is marked with a cross.
End of explanation
from pyins.integrate import coning_sculling, integrate
from pyins.filt import traj_diff
Explanation: Integrating ideal data
Just to check that everything is correct we want to integrate the generated gyro and accel readings.
End of explanation
theta, dv = coning_sculling(gyro, accel)
Explanation: First we apply coning and sculling corrections:
End of explanation
traj_ideal = integrate(dt, *traj_ref.iloc[0], theta, dv)
Explanation: And the run the integration.
End of explanation
err_ideal = traj_diff(traj_ideal, traj_ref)
def plot_errors(dt, err, step=1000):
plt.figure(figsize=(15, 10))
plt.subplot(331)
err = err.iloc[::step]
t = err.index * dt / 3600
plt.plot(t, err.lat, label='lat')
plt.xlabel("time, h")
plt.ylabel("m")
plt.legend(loc='best')
plt.subplot(334)
plt.plot(t, err.lon, label='lon')
plt.xlabel("time, h")
plt.ylabel("m")
plt.legend(loc='best')
plt.subplot(332)
plt.plot(t, err.VE, label='VE')
plt.xlabel("time, h")
plt.ylabel("m/s")
plt.legend(loc='best')
plt.subplot(335)
plt.plot(t, err.VN, label='VN')
plt.xlabel("time, h")
plt.ylabel("m/s")
plt.legend(loc='best')
plt.subplot(333)
plt.plot(t, err.h, label='heading')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.legend(loc='best')
plt.subplot(336)
plt.plot(t, err.p, label='pitch')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.legend(loc='best')
plt.subplot(339)
plt.plot(t, err.r, label='roll')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.legend(loc='best')
plt.tight_layout()
Explanation: Compute integration error using a convenience function:
End of explanation
plot_errors(dt, err_ideal)
Explanation: We see that attitude and velocity errors are vanishingly small. The position errors are less than 3 meters during 3 hours of operations, which is completely negligible compared to errors of even the most accurate INS.
End of explanation
gyro_bias_sd = np.deg2rad(0.05) / 3600 # 0.05 d/h
accel_bias_sd = 5e-3
gyro_bias_sd
gyro_noise = 1e-6 # rad / s^0.5
accel_noise = 3e-4 # m / s^1.5
Explanation: Integrating "real" data
Now we will run the navigation using inertial sensors with errors.
The error will be a sum of a random bias and additive white noise. We define magnitudes typical for moderately accurate navigation grade sensors.
End of explanation
np.random.seed(1)
gyro_bias = gyro_bias_sd * np.random.uniform(-2, 2, 3)
accel_bias = accel_bias_sd * np.random.uniform(-2, 2, 3)
gyro_bias, accel_bias
from pyins import earth
Explanation: Compute biases as a random constants. To avoid a "bad case" in this example we generated biases uniformly within $[-2 \sigma, 2 \sigma]$.
End of explanation
gyro_e = gyro + gyro_bias * dt + gyro_noise * np.random.randn(*gyro.shape) * dt**0.5
accel_e = accel + accel_bias * dt + accel_noise * np.random.randn(*accel.shape) * dt**0.5
Explanation: Now we apply errors to inertial readings:
End of explanation
theta, dv = coning_sculling(gyro_e, accel_e)
Explanation: Compute coning and sculling corrections:
End of explanation
t_align = 15 * 60
align_samples = int(t_align / dt)
Explanation: An INS operation have to start with the self alignment. We devote 15 minutes of the initial rest for it:
End of explanation
theta_align = theta[:align_samples]
theta_nav = theta[align_samples:]
dv_align = dv[:align_samples]
dv_nav = dv[align_samples:]
from pyins.align import align_wahba
(h0, p0, r0), P_align = align_wahba(dt, theta_align, dv_align, 58)
Explanation: Split the readings into alignment and navigation parts:
End of explanation
h0 - traj_ref.h.loc[align_samples], p0 - traj_ref.p.loc[align_samples], r0 - traj_ref.r.loc[align_samples]
Explanation: Compare estimated attitude angles with the true angles.
End of explanation
lat0, lon0 = perturb_ll(traj_ref.lat.loc[align_samples], traj_ref.lon.loc[align_samples],
10 * np.random.randn(1), 10 * np.random.randn(1))
Explanation: Assume that the initial position is known with the accuracy typical of GPS receivers:
End of explanation
VE0 = 0
VN0 = 0
Explanation: Assume that it is known that the navigation starts at rest and set the initial velocities to 0:
End of explanation
traj_real = integrate(dt, lat0, lon0, VE0, VN0, h0, p0, r0, theta_nav, dv_nav, stamp=align_samples)
traj_error = traj_diff(traj_real, traj_ref)
Explanation: Now we can run the integration:
End of explanation
plot_errors(dt, traj_error)
Explanation: We see that even with very accurate gyros pure INS performance is not that good.
End of explanation
gps_data = pd.DataFrame(index=traj_ref.index[::10])
gps_data['lat'] = traj_ref.lat[::10]
gps_data['lon'] = traj_ref.lon[::10]
Explanation: Aiding from GPS
Now we will use idealized GPS position observations for INS error estimation and correction.
We assume that GPS is available every second and use known exact values of latitude and longitude:
End of explanation
gps_pos_sd = 10
gps_data['lat'], gps_data['lon'] = perturb_ll(gps_data.lat, gps_data.lon,
gps_pos_sd * np.random.randn(*gps_data.lat.shape),
gps_pos_sd * np.random.randn(*gps_data.lon.shape))
Explanation: We will use an idealized model that GPS observations contain only additive normal errors with a standard deviation of 10 meters (note that in reality errors in outputs from GPS receivers behave much worse).
End of explanation
from pyins.filt import LatLonObs
gps_obs = LatLonObs(gps_data, gps_pos_sd)
Explanation: To use GPS measurements in a navigation Kalman filter we wrap this data into a special object:
End of explanation
from pyins.filt import InertialSensor
gyro_model = InertialSensor(bias=gyro_bias_sd, noise=gyro_noise)
accel_model = InertialSensor(bias=accel_bias_sd, noise=accel_noise)
Explanation: Also define gyro and accelerometer models using parameters defined above:
End of explanation
from pyins.filt import FeedforwardFilter
Explanation: Now we can run a navigation Kalman filter which will blend INS and GPS data. In this example INS errors didn't grow very large, thus we can use a feedforward filter.
End of explanation
ff_filt = FeedforwardFilter(dt, traj_real,
pos_sd=10, vel_sd=0.1, azimuth_sd=0.5, level_sd=0.05,
gyro_model=gyro_model, accel_model=accel_model)
Explanation: We create a filter by passing sampling period and computed trajectory. To initialize the covariance matrix we pass standard deviations of the initial errors.
Currently the covariance matrix is initialized as diagonal, even though it can be done more rigorously, i.e consider correlations between sensor biases and attitude errors. But my view is that a reliable filter should not depend on such fine details, otherwise it is likely to fail in real conditions. So for the sake of simplicity it is implemented like this for now (can be changed later).
Theoretical attitude accuracy (sd values) from static gyrocompassing in our case is: 0.35 deg for heading (azimuth_sd) and 0.03 deg for pitch and roll (level_sd). Here we set values slightly higher to account for a non-perfect alignment:
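These numbers can be sanity-checked with the usual static-alignment approximations (a sketch; heading error ≈ gyro bias / (Earth rate × cos(latitude)), level error ≈ accel bias / g):
earth_rate = np.deg2rad(15.04) / 3600                      # rad/s
heading_sd = gyro_bias_sd / (earth_rate * np.cos(np.deg2rad(58)))
level_sd = accel_bias_sd / 9.81
print(np.rad2deg(heading_sd), np.rad2deg(level_sd))        # roughly 0.36 deg and 0.03 deg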
End of explanation
ff_res = ff_filt.run(observations=[gps_obs])
Explanation: We run the filter and pass available measurements to it. The return value is the INS trajectory corrected by estimated errors.
End of explanation
filt_error = traj_diff(ff_res.traj, traj_ref)
Explanation: Now we want to investigate errors in the filtered trajectory.
End of explanation
plot_errors(dt, filt_error, step=10)
Explanation: Obviously performance in terms of position and velocity accuracy is very good, but this is sort of expected because GPS provides coordinates directly.
Attitude angle errors are generally decreasing as well, but the picture is less clear. We want to plot their standard deviation bounds estimated by the filter as well.
End of explanation
plt.figure(figsize=(15, 5))
t_plot = filt_error.index * dt / 3600
plt.subplot(131)
plt.plot(t_plot, filt_error.h, 'b')
plt.plot(t_plot, ff_res.sd.h, 'b--')
plt.plot(t_plot, -ff_res.sd.h, 'b--')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.title("heading error")
plt.subplot(132)
plt.plot(t_plot, filt_error.p, 'b')
plt.plot(t_plot, ff_res.sd.p, 'b--')
plt.plot(t_plot, -ff_res.sd.p, 'b--')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.title("pitch error")
plt.subplot(133)
plt.plot(t_plot, filt_error.r, 'b')
plt.plot(t_plot, ff_res.sd.r, 'b--')
plt.plot(t_plot, -ff_res.sd.r, 'b--')
plt.xlabel("time, h")
plt.ylabel("deg")
plt.title("roll error")
plt.tight_layout()
Explanation: The return value of FeedforwardFilter contains attributes err, sd, gyro_err, gyro_sd, accel_err, accel_sd for estimated trajectory errors and inertial sensor states and their standard deviations. Below we plot true errors for heading, pitch and roll with their 1-sigma bounds provided by the filter.
Generally we see that the filter's performance is adequate. It can be measured more precisely by Monte-Carlo simulation, but this will not be included in this example.
End of explanation
plt.figure(figsize=(15, 5))
t_plot = filt_error.index[::10] * dt / 3600
gyro_err = ff_res.gyro_err.iloc[::10]
gyro_sd = ff_res.gyro_sd.iloc[::10]
plt.subplot(131)
plt.plot(t_plot, gyro_err.BIAS_1 + gyro_sd.BIAS_1, 'b')
plt.plot(t_plot, gyro_err.BIAS_1 - gyro_sd.BIAS_1, 'b')
plt.hlines(gyro_bias[0], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Gyro 1 bias")
plt.subplot(132)
plt.plot(t_plot, gyro_err.BIAS_2 + gyro_sd.BIAS_2, 'b')
plt.plot(t_plot, gyro_err.BIAS_2 - gyro_sd.BIAS_2, 'b')
plt.hlines(gyro_bias[1], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Gyro 2 bias")
plt.subplot(133)
plt.plot(t_plot, gyro_err.BIAS_3 + gyro_sd.BIAS_3, 'b')
plt.plot(t_plot, gyro_err.BIAS_3 - gyro_sd.BIAS_3, 'b')
plt.hlines(gyro_bias[2], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Gyro 3 bias")
plt.tight_layout()
Explanation: Also it is interesting to assess the filter's sensor bias estimation. Plots below show $\pm \sigma$ bands of gyro bias estimates, the straight line depicts the true value. We see that estimation of gyro biases is quite successful.
End of explanation
plt.figure(figsize=(15, 5))
t_plot = filt_error.index[::10] * dt / 3600
accel_err = ff_res.accel_err.iloc[::10]
accel_sd = ff_res.accel_sd.iloc[::10]
plt.subplot(131)
plt.plot(t_plot, accel_err.BIAS_1 + accel_sd.BIAS_1, 'b')
plt.plot(t_plot, accel_err.BIAS_1 - accel_sd.BIAS_1, 'b')
plt.hlines(accel_bias[0], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Accel 1 bias")
plt.subplot(132)
plt.plot(t_plot, accel_err.BIAS_2 + accel_sd.BIAS_2, 'b')
plt.plot(t_plot, accel_err.BIAS_2 - accel_sd.BIAS_2, 'b')
plt.hlines(accel_bias[1], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Accel 2 bias")
plt.subplot(133)
plt.plot(t_plot, accel_err.BIAS_3 + accel_sd.BIAS_3, 'b')
plt.plot(t_plot, accel_err.BIAS_3 - accel_sd.BIAS_3, 'b')
plt.hlines(accel_bias[2], *plt.xlim())
plt.xlabel("time, h")
plt.ylabel("rad/s")
plt.title("Accel 3 bias")
plt.tight_layout()
Explanation: Below, the same is done for the accelerometer biases. Horizontal accelerometer biases are less observable on the given trajectory than gyro biases, and the vertical bias is not observable at all because pitch and roll are held zero.
End of explanation |
9,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tufte
A Jupyter notebook with examples of how to use tufte.
Introduction
Currently, there are four supported plot types
Step1: tufte plots can take inputs of several types
Step2: You'll notice that the default Tufte line style includes circle markers with gaps between line segments. You are also able to specify the figure size directly to the line function.
There are several other differences. We'll create another plot below as an example.
Step3: First, we use Tufte's range-frame concept, which aims to make the frame (axis) lines "effective data-communicating element[s]" by showing the minimum and maximum values in each axis. This way, the tick labels are more informative. In this example, the range of the outcome variable is 96.9 units (407.1 - 310.2). Similarly, this data covers the years 1967 through 1977, inclusive.
The range-frame is applied to both axes for line and scatter plots.
Step4: You'll also notice that tufte.scatter() returns figure and axis objects. This is true for all tufte plots. With this, we can add a title to the figure and a label to the x-axis, for example. tufte plots are meant to be able to interact with matplotlib functions and methods.
When you need to create a bar plot, do the following.
Step5: A feature of the bar() function is the ability for x-axis labels to auto-rotate. We can see this when we change the one of the labels.
Step6: Tufte's boxplot is, perhaps, the most radical redesign of an existing plot. His approach is to maximize data-ink, the "non-erasable core of a graphic," by removing unnecessary elements. The boxplot removes boxes (which is why we refer to it as bplot()) and caps and simply shows a dot between two lines. This plot currently only takes a list, np.ndarray, or pd.DataFrame.
Let's create a DataFrame.
Step7: The dot represents the median and the lines correspond to the top and bottom 25% of the data. The empty space between the lines is the interquartile range.
Issues
Range-Frame
You may have noticed—if you cloned this repo and ran the notebook—that the range-frame feature isn;t perfect. It is possible, for example, for a minimum or maximum value to be too close to an existing tick label, causing overlap.
Additionally, in cases where the data in a given dimension (x or y) contains float values, the tick labels are converted to float. (This isn't the issue.) | Python Code:
%matplotlib inline
import string
import random
from collections import defaultdict
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import tufte
Explanation: Tufte
A Jupyter notebook with examples of how to use tufte.
Introduction
Currently, there are four supported plot types:
* bar
* boxplot
* line
* scatter
The designs are based on Edward R. Tufte's designs in The Visual Display of Quantitative Information.
This module is built on top of matplotlib, which means that it's possible to use those functions or methods in conjunction with tufte plots. In addition, an effort has been made to keep most changes to matplotlibrc properties contained within the module. That is, we try not to make global changes that will affect other plots.
Use
Let's start by importing several libraries.
End of explanation
tufte.line(range(3), range(3), figsize=(5, 5))
Explanation: tufte plots can take inputs of several types: list, np.ndarray, pd.Series, and, in some cases, pd.DataFrame.
To create a line plot, do the following. (Note: if you'd like higher resolution plots, use mpl.rc('savefig', dpi=200).)
End of explanation
x = range(1967, 1977 + 1)
y = [310.2, 330, 375, 385, 385.6, 395, 387.5, 380, 392, 407.1, 380]
tufte.line(x, y, figsize=(8, 4))
Explanation: You'll notice that the default Tufte line style includes circle markers with gaps between line segments. You are also able to specify the figure size directly to the line function.
There are several other differences. We'll create another plot below as an example.
End of explanation
np.random.seed(8675309)
fig, ax = tufte.scatter(np.random.randint(5, 95, 100), np.random.randint(1000, 1234, 100), figsize=(8, 4))
plt.title('Title')
ax.set_xlabel('x-axis')
Explanation: First, we use Tufte's range-frame concept, which aims to make the frame (axis) lines "effective data-communicating element[s]" by showing the minimum and maximum values in each axis. This way, the tick labels are more informative. In this example, the range of the outcome variable is 96.9 units (407.1 - 310.2). Similarly, this data covers the years 1967 through 1977, inclusive.
The range-frame is applied to both axes for line and scatter plots.
End of explanation
np.random.seed(8675309)
tufte.bar(range(10),
np.random.randint(1, 25, 10),
label=['First', 'Second', 'Third', 'Fourth', 'Fifth',
'Sixth', 'Seventh', 'Eight', 'Ninth', 'Tenth'],
figsize=(8, 4))
Explanation: You'll also notice that tufte.scatter() returns figure and axis objects. This is true for all tufte plots. With this, we can add a title to the figure and a label to the x-axis, for example. tufte plots are meant to be able to interact with matplotlib functions and methods.
When you need to create a bar plot, do the following.
End of explanation
np.random.seed(8675309)
tufte.bar(range(10),
np.random.randint(1, 25, 10),
label=['First', 'Second', 'Third', 'Fourth', 'Fifth',
'Sixth', 'Lucky 7th', 'Eight', 'Ninth', 'Tenth'],
figsize=(8, 4))
Explanation: A feature of the bar() function is the ability for x-axis labels to auto-rotate. We can see this when we change the one of the labels.
End of explanation
n_cols = 10 # Must be less than or equal to 26
size = 100
letters = string.ascii_lowercase
df_dict = defaultdict(list)
for c in letters[:n_cols]:
df_dict[c] = np.random.randint(random.randint(25, 50), random.randint(75, 100), size)
df = pd.DataFrame(df_dict)
tufte.bplot(df, figsize=(8, 4))
Explanation: Tufte's boxplot is, perhaps, the most radical redesign of an existing plot. His approach is to maximize data-ink, the "non-erasable core of a graphic," by removing unnecessary elements. The boxplot removes boxes (which is why we refer to it as bplot()) and caps and simply shows a dot between two lines. This plot currently only takes a list, np.ndarray, or pd.DataFrame.
Let's create a DataFrame.
End of explanation
np.random.seed(8675309)
tufte.scatter(np.random.randn(100), np.random.randn(100), figsize=(8, 4))
Explanation: The dot represents the median and the lines correspond to the top and bottom 25% of the data. The empty space between the lines is the interquartile range.
Issues
Range-Frame
You may have noticed—if you cloned this repo and ran the notebook—that the range-frame feature isn't perfect. It is possible, for example, for a minimum or maximum value to be too close to an existing tick label, causing overlap.
Additionally, in cases where the data in a given dimension (x or y) contains float values, the tick labels are converted to float. (This isn't the issue.)
End of explanation |
9,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Image_Augmentation
 The following function takes the 8-bit grayscale images that we are using and performs a series of affine transformations on the images. There are vertical and horizontal flips along with rotations of 90, 270, 15, 30, and 45 degrees. Also included is a function for generating the rotations. Image augmentation needs to be performed before running the VGG_Prep function.
When calling the Image_Augmentation function setting the various flags to True will cause the transformation to be performed.
Step4: VGG_Prep
 The following function takes the 8-bit grayscale images that we are using and converts them to 8-bit RGB while at the same time rescaling the pixels to the 0-255 range. These image parameters are required by the VGG_16 model.
Step5: VGG_16 Bottleneck
The following function leverages Daniel's image loader function and performs the following
Step6: Running the model on the Train, Test, and Validation Data
1) The first test is on the rescaled and squared off images maintaining aspect ratio without the artifacts removed.
Step7: Train Top Model
 This function takes the bottleneck features from the bottleneck function and applies a shallow CNN to these features to classify the images. The function needs to be pointed at the locations of the training and test features along with the training and test labels. You can use the epoch and batch size variables to control the number of images shown to the model and the number of training epochs. The model save variable allows saving of the final model weights. A hedged sketch of such a top model is shown below.
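The sketch below is only an assumption of what such a Keras top model might look like; the layer sizes, optimizer and file name are illustrative, not the exact implementation used in this notebook:
from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout

def train_top_model_sketch(train_feats, train_labels, test_feats, test_labels,
                           epochs=50, batch_size=32, model_save='top_model_weights.h5'):
    model = Sequential()
    model.add(Flatten(input_shape=train_feats.shape[1:]))
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1, activation='sigmoid'))          # binary normal/abnormal output
    model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(train_feats, train_labels, epochs=epochs, batch_size=batch_size,
              validation_data=(test_feats, test_labels))
    model.save_weights(model_save)
    return model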
Step8: Confusion Matrix
The function below takes a data set that has been run through the VGG16 model, the corresponding labels, and a pre-trained weights file and creates a confusion matrix using Daniel's helper function.
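For reference, a minimal sketch of building such a matrix with scikit-learn (y_true and y_pred are placeholder names for the true labels and the labels predicted by the loaded top model):
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred)
print(cm)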
Step9: Running the Top Model
The following runs the top model classifier on the bottleneck features.
Step10: Running the Top Model on the Fully Augmented Data
In this run we will be using the bottleneck features from taking the training data and augmenting it with the following transformations: Vertical Flip, Horizontal Flip, 90 and 270 degree rotation, and 15, 30, and 45 degree rotation in both directions.
Step11: Running the Top Model at 224x224
In this next experiment we run the model with transformations on the data at a size of 224x224
Step12: Generating the Confusion Matrix for the Balanced 224x224 Run
Step13: 224x224 With Flips
Step14: Thresholded Images at 224x224 with no Augmentations
Step15: 224x224 DDSM - Two Categories
Attempting to learn the difference between normal and abnormal.
Step16: 224x224 DDSM Thresholded Images - Two Categories | Python Code:
# Function for rotating the image files.
def Image_Rotate(img, angle):
    """Rotates a given image the requested angle. Returns the rotated image."""
rows,cols = img.shape
M = cv2.getRotationMatrix2D((cols/2,rows/2), angle, 1)
return(cv2.warpAffine(img,M,(cols,rows)))
# Function for augmenting the images
def Image_Augment(X, Y, vflip=False, hflip=False, major_rotate=False, minor_rotate=False):
    """
    :param X: np.array of images
    :param Y: np.array of labels
    :param vflip, hflip, major_rotate, minor_rotate: set to True to perform the augmentations
    :return: the set of augmented images and their corresponding labels
    """
if len(X) != len(Y):
print('Data and Label arrays not of the same length.')
n = vflip + hflip + 2*major_rotate + 6*minor_rotate
augmented = np.zeros([len(X) + n*len(X), X.shape[1], X.shape[2]])
label = np.zeros([len(Y) + n*len(Y), 1])
count = 0
for i in range(0, len(X)):
augmented[count] = X[i]
label[count] = Y[i]
count += 1
if vflip:
aug = cv2.flip(X[i], 0)
augmented[count] = aug
label[count] = Y[i]
count += 1
if hflip:
aug = cv2.flip(X[i], 1)
augmented[count] = aug
label[count] = Y[i]
count +=1
if major_rotate:
angles = [90, 270]
for angle in angles:
aug = Image_Rotate(X[i], angle)
augmented[count] = aug
label[count] = Y[i]
count += 1
if minor_rotate:
angles = [-45,-30,-15,15,30,45]
for angle in angles:
aug = Image_Rotate(X[i], angle)
augmented[count] = aug
label[count] = Y[i]
count += 1
return(augmented, label)
Explanation: Image_Augmentation
The following function takes the 8-bit grayscale images that we are using and performs a series of affine transformations on the images. There are vertical and horizontal flips along with rotations of 90, 270, 15, 30, and 45 degrees. Also included is a function for generating the rotations. Image augmentation needs to be performed before running the VGG_Prep function.
When calling the Image_Augmentation function, setting the various flags to True will cause the corresponding transformations to be performed.
End of explanation
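As a quick usage illustration, the call below runs Image_Augment on a small dummy array (the real pipeline instead feeds in the DDSM images loaded further down); with only the two flips enabled, the image count triples.
import cv2          # Image_Rotate / Image_Augment above rely on OpenCV
import numpy as np  # and on NumPy

X_dummy = np.random.rand(4, 64, 64)       # four fake grayscale images in [0, 1]
Y_dummy = np.array([[0], [1], [2], [0]])  # matching labels
X_aug, Y_aug = Image_Augment(X=X_dummy, Y=Y_dummy, vflip=True, hflip=True)
print(X_aug.shape, Y_aug.shape)           # (12, 64, 64) (12, 1): original + vflip + hflip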
def VGG_Prep(img_data):
    """
    :param img_data: training or test images of shape [#images, height, width]
    :return: the array transformed to the correct shape for the VGG network,
        shape = [#images, height, width, 3]; transforms to RGB and reshapes
    """
images = np.zeros([len(img_data), img_data.shape[1], img_data.shape[2], 3])
for i in range(0, len(img_data)):
        im = 255 - (img_data[i] * 255)  # Original ImageNet images were not rescaled
im = color.gray2rgb(im)
images[i] = im
return(images)
Explanation: VGG_Prep
The following function takes the 8-bit grayscale images that we are using and converts them to 8-bit RGB while at the same time rescaling the pixels to a range of 0 to 255. These image parameters are required by the VGG_16 model.
End of explanation
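A small usage sketch with dummy data: VGG_Prep assumes the grayscale pixels are already scaled to [0, 1] (it computes 255 - img * 255, which inverts the image while mapping it onto the 0-255 range) and returns a 3-channel array.
from skimage import color  # VGG_Prep above uses color.gray2rgb
import numpy as np

X_gray = np.random.rand(2, 64, 64)  # dummy grayscale images in [0, 1]
X_rgb = VGG_Prep(X_gray)
print(X_rgb.shape)                  # (2, 64, 64, 3)
print(X_rgb.min() >= 0, X_rgb.max() <= 255)  # True True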
def vgg16_bottleneck(trainPath, testPath, imagePath, modelPath, size, balance = True, verbose = True,
verboseFreq = 50, valPath = 'None', transform = False, binary = False):
categories = bc.bcNormVsAbnormNumerics()
# Loading data
metaTr, metaTr2, mCountsTr = bc.load_training_metadata(trainPath, balance, verbose)
lenTrain = len(metaTr)
X_train, Y_train = bc.load_data(trainPath, imagePath, maxData = lenTrain,
categories=categories,
verboseFreq = verboseFreq,
imgResize=size,
normalVsAbnormal=binary)
metaTest, meataT2, mCountsT = bc.load_training_metadata(testPath, balance, verbose)
lenTest = len(metaTest)
    X_test, Y_test = bc.load_data(testPath, imagePath, maxData = lenTest,
categories=categories,
verboseFreq = verboseFreq,
imgResize=size,
normalVsAbnormal=binary)
if transform:
print('Transforming the Training Data')
X_train, Y_train = Image_Augment(X=X_train, Y=Y_train, hflip=True, vflip=True, minor_rotate=False, major_rotate=False)
print('Preparing the Training Data for the VGG_16 Model.')
X_train = VGG_Prep(X_train)
print('Preparing the Test Data for the VGG_16 Model')
X_test = VGG_Prep(X_test)
print('Loading the VGG_16 Model')
model = applications.VGG16(include_top=False, weights='imagenet')
# Generating the bottleneck features for the training data
print('Evaluating the VGG_16 Model on the Training Data')
bottleneck_features_train = model.predict(X_train)
# Saving the bottleneck features for the training data
featuresTrain = os.path.join(modelPath, 'bottleneck_features_train.npy')
labelsTrain = os.path.join(modelPath, 'labels_train.npy')
print('Saving the Training Data Bottleneck Features.')
np.save(open(featuresTrain, 'wb'), bottleneck_features_train)
np.save(open(labelsTrain, 'wb'), Y_train)
# Generating the bottleneck features for the test data
print('Evaluating the VGG_16 Model on the Test Data')
bottleneck_features_test = model.predict(X_test)
# Saving the bottleneck features for the test data
featuresTest = os.path.join(modelPath, 'bottleneck_features_test.npy')
labelsTest = os.path.join(modelPath, 'labels_test.npy')
print('Saving the Test Data Bottleneck Feaures.')
np.save(open(featuresTest, 'wb'), bottleneck_features_test)
np.save(open(labelsTest, 'wb'), Y_test)
if valPath != 'None':
metaVal, metaV2, mCountsV = bc.load_training_metadata(valPath, verbose = verbose, balanceViaRemoval = False)
lenVal = len(metaVal)
X_val, Y_val = bc.load_data(valPath, imagePath, maxData = lenVal, verboseFreq = verboseFreq, imgResize=size)
X_val = VGG_Prep(X_val)
# Generating the bottleneck features for the test data
print('Evaluating the VGG_16 Model on the Validataion Data')
bottleneck_features_val = model.predict(X_val)
# Saving the bottleneck features for the test data
featuresVal = os.path.join(modelPath, 'bottleneck_features_validation.npy')
labelsVal = os.path.join(modelPath, 'labels_validation.npy')
print('Saving the Validation Data Bottleneck Features.')
np.save(open(featuresVal, 'wb'), bottleneck_features_val)
np.save(open(labelsVal, 'wb'), Y_val)
Explanation: VGG_16 Bottleneck
The following function leverages Daniel's image loader function and performs the following:
1. Loads in the images using the train, test, and validation csv files.
2. Prepares the images using the VGG_Prep function
3. Loads the VGG_16 model with the cassification layers removed.
4. Runs each of the images for the training, test, and validation sets (if included) through the model.
5. Saves out .npy files containing the bottleneck features from the VGG_16 model predictions and the corresponding labels.
End of explanation
# global variables for loading the data
imagePath = '../images/threshold/DDSM/'
trainDataPath = '../images/ddsm/ddsm_train.csv'
testDataPath = '../images/ddsm/ddsm_test.csv'
valDataPath = '../images/ddsm/ddsm_val.csv'
imgResize = (224, 224) # can go up to (224, 224)
modelPath = '../model/'
vgg16_bottleneck(trainDataPath, testDataPath, imagePath, modelPath, imgResize,
balance = True, verbose = True, verboseFreq = 50, valPath = valDataPath,
transform = False, binary = True)
class LossHistory(cb.Callback):
def on_train_begin(self, logs={}):
self.losses = []
def on_batch_end(self, batch, logs={}):
batch_loss = logs.get('loss')
self.losses.append(batch_loss)
Explanation: Running the model on the Train, Test, and Validation Data
1) The first test is on the rescaled and squared off images maintaining aspect ratio without the artifacts removed.
End of explanation
def train_top_model(train_feats, train_lab, test_feats, test_lab, model_path, model_save, epoch = 50, batch = 64):
train_bottleneck = os.path.join(model_path, train_feats)
train_labels = os.path.join(model_path, train_lab)
test_bottleneck = os.path.join(model_path, test_feats)
test_labels = os.path.join(model_path, test_lab)
history = LossHistory()
X_train = np.load(train_bottleneck)
Y_train = np.load(train_labels)
#Y_train = np_utils.to_categorical(Y_train, nb_classes=3)
Y_train = np_utils.to_categorical(Y_train, nb_classes=2)
X_test = np.load(test_bottleneck)
Y_test = np.load(test_labels)
#Y_test = np_utils.to_categorical(Y_test, nb_classes=3)
Y_test = np_utils.to_categorical(Y_test, nb_classes=2)
print(X_train.shape)
noise = 0.01
model = Sequential()
model.add( GaussianNoise(noise, input_shape=X_train.shape[1:]))
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add(Dropout(0.7))
model.add( Dense(256, activation = 'relu') )
model.add(Dropout(0.5))
#model.add(Dense(3))
model.add(Dense(2))
model.add(Activation('softmax'))
#loss = 'categorical_crossentropy'
model.compile(optimizer='adadelta',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(X_train, Y_train,
nb_epoch=epoch,
batch_size=batch,
callbacks=[history],
validation_data=(X_test, Y_test),
verbose=2)
score = model.evaluate(X_test, Y_test, batch_size=16, verbose=0)
print "Network's test score [loss, accuracy]: {0}".format(score)
model.save_weights(os.path.join(model_path, model_save))
Explanation: Train Top Model
This function takes the bottleneck features from the bottleneck function and applies a shallow CNN to these features to classify the images. The function needs to be pointed at the locations of the training and test features along with the training and test labels. You can use the epoch and batch size variables to control the number of images to show to the model and the number of training epochs. The model save variable allows for saving of the final model weights.
End of explanation
def cf_Matrix(data, label, weights, path, save):
data = os.path.join(path, data)
label = os.path.join(path, label)
categories = bc.bcNormVsAbnormNumerics()
X = np.load(data)
Y = np.load(label)
#Y = np_utils.to_categorical(Y, nb_classes=3)
# Loading and preping the model
model = Sequential()
model.add(Flatten(input_shape=X.shape[1:]))
model.add(Dropout(0.7))
model.add(Dense(256))
    model.add(Activation('relu'))
model.add(Dropout(0.5))
#model.add(Dense(3))
model.add(Dense(2))
model.add(Activation('softmax'))
model.load_weights(os.path.join('../model/', weights))
# try Adadelta and Adam
model.compile(optimizer='adadelta',
loss='categorical_crossentropy',
metrics=['accuracy'])
predictOutput = model.predict(X, batch_size=64, verbose=2)
#numBC = bc.numericBC()
numBC = bc.reverseDict(categories)
predClasses = []
for i in range(len(predictOutput)):
arPred = np.array(predictOutput[i])
predictionProb = arPred.max()
predictionNdx = arPred.argmax()
predClassName = numBC[predictionNdx]
predClasses.append(predictionNdx)
# Use sklearn's helper method to generate the confusion matrix
cnf_matrix = skm.confusion_matrix(Y, predClasses)
# Ploting the confusion matrix
class_names = numBC.values()
np.set_printoptions(precision=2)
fileCfMatrix = '../figures/confusion_matrix-' + save + '.png'
plt.figure()
bc.plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, \n' + save)
plt.savefig(fileCfMatrix)
plt.show()
Explanation: Confusion Matrix
The function below takes a data set that has been run through the VGG16 model, the corresponding labels, and a pre-trained weights file and creates a confusion matrix using Daniel's helper function.
End of explanation
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train.npy'
train_labels = 'labels_train.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights02.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '150_test_val/bottleneck_features_test.npy'
feats_labs = '150_test_val/labels_test.npy'
weight = 'balanced150run2/top_weights02.h5'
saveFile = 'balanced150'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
Explanation: Running the Top Model
The following runs the top model classifier on the bottleneck features.
End of explanation
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_150fulltrans_train.npy'
train_labels = 'labels_150fulltrans_train.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_150fulltrans.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '150_test_val/bottleneck_features_test.npy'
feats_labs = '150_test_val/labels_test.npy'
weight = 'balanced150FullTrans/top_weights_150fulltrans.h5'
saveFile = 'balanced150FullTrans'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
Explanation: Running the Top Model on the Fully Augmented Data
In this run we will be using the bottleneck features from taking the training data and augmenting it with the following transformations: Vertical Flip, Horizontal Flip, 90 and 270 degree rotation, and 15, 30, and 45 degree rotation in both directions.
End of explanation
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train_224.npy'
train_labels = 'labels_train_224.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_224.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
Explanation: Running the Top Model at 224x224
In this next experiment we run the model with transformations on the data at a size of 224x224
End of explanation
feats_loc = '224_test_val/bottleneck_features_test.npy'
feats_labs = '224_test_val/labels_test.npy'
weight = 'balanced224/top_weights_224.h5'
saveFile = 'balanced224'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
Explanation: Generating the Confusion Matrix for the Balanced 224x224 Run
End of explanation
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'Balanced224flips/bottleneck_features_train_224flip.npy'
train_labels = 'Balanced224flips/labels_train_224flip.npy'
test_bottleneck = '224_test_val/bottleneck_features_test.npy'
test_labels = '224_test_val/labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'Balanced224flips/top_weights_224flip.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '224_test_val/bottleneck_features_test.npy'
feats_labs = '224_test_val/labels_test.npy'
weight = 'balanced224flips/top_weights_224flip.h5'
saveFile = 'balanced224flip'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
Explanation: 224x224 With Flips
End of explanation
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train_224th.npy'
train_labels = 'labels_train_224th.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_224th.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '224_threshold/bottleneck_features_test.npy'
feats_labs = '224_threshold/labels_test.npy'
weight = 'balanced224Threshold/top_weights_224th.h5'
saveFile = 'balanced224Threshold'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
Explanation: Thresholded Images at 224x224 with no Augmentations
End of explanation
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'Balanced224Binary/bottleneck_features_train_224twoclass.npy'
train_labels = 'Balanced224Binary/labels_train_224twoclass.npy'
test_bottleneck = '224_binary/bottleneck_features_test.npy'
test_labels = '224_binary/labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'Balanced224Binary/top_weights_224twoclass.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path, epoch = 100)
feats_loc = '224_binary/bottleneck_features_test.npy'
feats_labs = '224_binary/labels_test.npy'
weight = 'balanced224Binary/top_weights_224twoclass.h5'
saveFile = 'balanced224Twoclass'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
Explanation: 224x224 DDSM - Two Categories
Attempting to learn the difference between normal and abnormal.
End of explanation
# Locations for the bottleneck and labels files that we need
modelPath = '../model/'
train_bottleneck = 'bottleneck_features_train_224th_twoclass.npy'
train_labels = 'labels_train_224th_twoclass.npy'
test_bottleneck = 'bottleneck_features_test.npy'
test_labels = 'labels_test.npy'
validation_bottleneck = 'bottleneck_features_valdation.npy'
validation_label = 'labels_validation.npy'
top_model_weights_path = 'top_weights_224th_twoclass.h5'
train_top_model(train_feats=train_bottleneck, train_lab=train_labels, test_feats=test_bottleneck, test_lab=test_labels,
model_path=modelPath, model_save=top_model_weights_path)
feats_loc = '224_binary/bottleneck_features_test.npy'
feats_labs = '224_binary/labels_test.npy'
weight = 'balanced224Th_Binary/top_weights_224th_twoclass.h5'
saveFile = 'balanced224Th_Twoclass'
cf_Matrix(data=feats_loc, label=feats_labs, weights=weight, path=modelPath, save=saveFile)
Explanation: 224x224 DDSM Thresholded Images - Two Categories
End of explanation |
9,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: BERT Question Answer with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import the required packages.
Step3: The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail.
Choose a model_spec that represents a model for question answer
Each model_spec object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.
Supported Model | Name of model_spec | Model Description
--- | --- | ---
MobileBERT | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.
MobileBERT-SQuAD | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on SQuAD1.1.
BERT-Base | 'bert_qa' | Standard BERT model that is widely used in NLP tasks.
In this tutorial, MobileBERT-SQuAD is used as an example. Since the model is already retrained on SQuAD1.1, it could converge faster for the question answer task.
Step4: Load Input Data Specific to an On-device ML App and Preprocess the Data
The TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.
To load the data, convert the TriviaQA dataset to the SQuAD1.1 format by running the converter Python script with --sample_size=8000 and a set of web data. Modify the conversion code a little bit by
Step5: You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.
<img src="https
Step6: Customize the TensorFlow Model
Create a custom question answer model based on the loaded data. The create function comprises the following steps
Step7: Have a look at the detailed model structure.
Step8: Evaluate the Customized Model
Evaluate the model on the validation data and get a dict of metrics including f1 score and exact match etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
Step9: Export to TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application.
Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration
Step10: Export the quantized TFLite model according to the quantization config with metadata. The default TFLite model filename is model.tflite.
Step11: You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab.
The allowed export formats can be one or a list of the following
Step12: You can also evaluate the tflite model with the evaluate_tflite method. This step is expected to take a long time.
Step13: Advanced Usage
The create function is the critical part of this library in which the model_spec parameter defines the model specification. The BertQAModelSpec class is currently supported. There are 2 models | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install tflite-model-maker
Explanation: BERT Question Answer with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_question_answer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task.
Introduction to BERT Question Answer Task
The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer.
<p align="center"><img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_squad_showcase.png" width="500"></p>
<p align="center">
<em>Answers are spans in the passage (image credit: <a href="https://rajpurkar.github.io/mlx/qa-and-squad/">SQuAD blog</a>) </em>
</p>
As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.
The size of input could be set and adjusted according to the length of passage and question.
End-to-End Overview
The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.
```python
Chooses a model specification that represents the model.
spec = model_spec.get('mobilebert_qa')
Gets the training data and validation data.
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
Fine-tunes the model.
model = question_answer.create(train_data, model_spec=spec)
Gets the evaluation result.
metric = model.evaluate(validation_data)
Exports the model to the TensorFlow Lite format with metadata in the export directory.
model.export(export_dir)
```
The following sections explain the code in more detail.
Prerequisites
To run this example, install the required packages, including the Model Maker package from the GitHub repo.
End of explanation
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import ExportFormat
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker import QuestionAnswerDataLoader
Explanation: Import the required packages.
End of explanation
spec = model_spec.get('mobilebert_qa_squad')
Explanation: The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail.
Choose a model_spec that represents a model for question answer
Each model_spec object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.
Supported Model | Name of model_spec | Model Description
--- | --- | ---
MobileBERT | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.
MobileBERT-SQuAD | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on SQuAD1.1.
BERT-Base | 'bert_qa' | Standard BERT model that is widely used in NLP tasks.
In this tutorial, MobileBERT-SQuAD is used as an example. Since the model is already retrained on SQuAD1.1, it could converge faster for the question answer task.
End of explanation
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
Explanation: Load Input Data Specific to an On-device ML App and Preprocess the Data
The TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.
To load the data, convert the TriviaQA dataset to the SQuAD1.1 format by running the converter Python script with --sample_size=8000 and a set of web data. Modify the conversion code a little bit by:
* Skipping the samples that couldn't find any answer in the context document;
* Getting the original answer in the context without uppercase or lowercase.
Download the archived version of the already converted dataset.
End of explanation
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False)
Explanation: You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_question_answer.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your data to the cloud, you can also run the library offline by following the guide.
Use the QuestionAnswerDataLoader.from_squad method to load and preprocess the SQuAD format data according to a specific model_spec. You can use either SQuAD2.0 or SQuAD1.1 formats. Setting parameter version_2_with_negative as True means the formats is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, version_2_with_negative is False.
End of explanation
model = question_answer.create(train_data, model_spec=spec)
Explanation: Customize the TensorFlow Model
Create a custom question answer model based on the loaded data. The create function comprises the following steps:
Creates the model for question answer according to model_spec.
Train the question answer model. The default epochs and the default batch size are set according to two variables default_training_epochs and default_batch_size in the model_spec object.
End of explanation
model.summary()
Explanation: Have a look at the detailed model structure.
End of explanation
model.evaluate(validation_data)
Explanation: Evaluate the Customized Model
Evaluate the model on the validation data and get a dict of metrics including f1 score and exact match etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.
End of explanation
config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY])
config._experimental_new_quantizer = True
Explanation: Export to TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application.
Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration:
End of explanation
model.export(export_dir='.', quantization_config=config)
Explanation: Export the quantized TFLite model according to the quantization config with metadata. The default TFLite model filename is model.tflite.
End of explanation
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
Explanation: You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab.
The allowed export formats can be one or a list of the following:
ExportFormat.TFLITE
ExportFormat.VOCAB
ExportFormat.SAVED_MODEL
By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:
End of explanation
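The other formats listed above can presumably be requested the same way; for example, the SavedModel variant (treat the line below as an illustration of that pattern, not as a required step):
model.export(export_dir='.', export_format=ExportFormat.SAVED_MODEL)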
model.evaluate_tflite('model.tflite', validation_data)
Explanation: You can also evaluate the tflite model with the evaluate_tflite method. This step is expected to take a long time.
End of explanation
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
Explanation: Advanced Usage
The create function is the critical part of this library in which the model_spec parameter defines the model specification. The BertQAModelSpec class is currently supported. There are 2 models: MobileBERT model, BERT-Base model. The create function comprises the following steps:
Creates the model for question answer according to model_spec.
Train the question answer model.
This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc.
Adjust the model
You can adjust the model infrastructure like parameters seq_len and query_len in the BertQAModelSpec class.
Adjustable parameters for model:
seq_len: Length of the passage to feed into the model.
query_len: Length of the question to feed into the model.
doc_stride: The stride when doing a sliding window approach to take chunks of the documents.
initializer_range: The stdev of the truncated_normal_initializer for initializing all weight matrices.
trainable: Boolean, whether pre-trained layer is trainable.
Adjustable parameters for training pipeline:
model_dir: The location of the model checkpoint files. If not set, temporary directory will be used.
dropout_rate: The rate for dropout.
learning_rate: The initial learning rate for Adam.
predict_batch_size: Batch size for prediction.
tpu: TPU address to connect to. Only used if using tpu.
For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new model_spec.
End of explanation |
9,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fractional optimization
This notebook shows how to solve a simple concave fractional problem, in which the objective is to maximize the ratio of a nonnegative concave function and a positive
convex function. Concave fractional problems are quasiconvex programs (QCPs). They can be specified using disciplined quasiconvex programming (DQCP), and hence can be solved using CVXPY.
Step1: Our goal is to maximize the function
$$\frac{\sqrt{x}}{\exp(x)}.$$
This function is not concave, but it is quasiconcave, as can be seen by inspecting its graph.
Step2: The below code specifies and solves the QCP, using DQCP. The concave fraction function is DQCP-compliant, because the ratio atom is quasiconcave (actually, quasilinear), increasing in the numerator when the denominator is positive, and decreasing in the denominator when the numerator is nonnegative. | Python Code:
!pip install --upgrade cvxpy
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt
Explanation: Fractional optimization
This notebook shows how to solve a simple concave fractional problem, in which the objective is to maximize the ratio of a nonnegative concave function and a positive
convex function. Concave fractional problems are quasiconvex programs (QCPs). They can be specified using disciplined quasiconvex programming (DQCP), and hence can be solved using CVXPY.
End of explanation
plt.plot([np.sqrt(y) / np.exp(y) for y in np.linspace(0, 10)])
plt.show()
Explanation: Our goal is to maximize the function
$$\frac{\sqrt{x}}{\exp(x)}.$$
This function is not concave, but it is quasiconcave, as can be seen by inspecting its graph.
End of explanation
x = cp.Variable()
concave_fractional_fn = cp.sqrt(x) / cp.exp(x)
problem = cp.Problem(cp.Maximize(concave_fractional_fn))
assert problem.is_dqcp()
problem.solve(qcp=True)
x.value
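As a sanity check, setting the derivative $\frac{d}{dx}\left(\sqrt{x}\,e^{-x}\right) = e^{-x}\left(\frac{1}{2\sqrt{x}} - \sqrt{x}\right)$ to zero gives the analytic maximizer $x = 1/2$, so the solver's answer should land close to 0.5. Recent CVXPY versions also expose curvature queries directly on expressions, which the sketch below uses.
# Quick check against the analytic maximizer x = 1/2 of sqrt(x) * exp(-x).
print(concave_fractional_fn.is_quasiconcave())  # True: nonnegative concave numerator, positive convex denominator
print(problem.value, x.value)                   # optimal value and argmax; x.value should be near 0.5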
Explanation: The below code specifies and solves the QCP, using DQCP. The concave fraction function is DQCP-compliant, because the ratio atom is quasiconcave (actually, quasilinear), increasing in the numerator when the denominator is positive, and decreasing in the denominator when the numerator is nonnegative.
End of explanation |
9,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing MC and LHS methods for sampling from a uniform distribution
This note compares the moments of the empirical uniform distribution sampled using Latin Hypercube Sampling with Multi-Dimensional Uniformity (LHSMDU) and with the NumPy random number generator against the theoretical moments of a uniform distribution.
Step1: Params
Step2: Theoretical values
Step3: Empirical mean ($\mu$) and standard deviation ($\sigma$) estimates
Step4: Plotting mean estimates
Step5: Plotting standard deviation estimates
Step6: Across different number of samples
Step7: Plotting mean estimates
Step8: Plotting standard deviation estimates | Python Code:
import numpy as np
import lhsmdu
import matplotlib.pyplot as plt
def simpleaxis(axes, every=False):
if not isinstance(axes, (list, np.ndarray)):
axes = [axes]
for ax in axes:
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
if every:
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.set_title('')
Explanation: Comparing MC and LHS methods for sampling from a uniform distribution
This note compares the moments of the empirical uniform distribution sampled using Latin Hypercube Sampling with Multi-Dimensional Uniformity (LHSMDU) and with the NumPy random number generator against the theoretical moments of a uniform distribution.
End of explanation
seed = 1
np.random.seed(seed)
lhsmdu.setRandomSeed(seed)
numDimensions = 2
numSamples = 100
numIterations = 100
Explanation: Params
End of explanation
theoretical_mean = 0.5
theoretical_std = np.sqrt(1./12)
theoretical_skew = 0.
Explanation: Theoretical values
End of explanation
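These constants follow directly from the standard uniform distribution $U(0,1)$:
$$\mu = \int_0^1 x\,dx = \frac{1}{2}, \qquad \sigma^2 = \int_0^1 x^2\,dx - \mu^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}, \qquad \sigma = \sqrt{\tfrac{1}{12}} \approx 0.2887,$$
and the skewness is $0$ because the density is symmetric about $\mu$.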
mc_Mean, lhs_Mean = [], []
mc_Std, lhs_Std = [], []
for iterate in range(numIterations):
a = np.random.random((numDimensions,numSamples))
b = lhsmdu.sample(numDimensions,numSamples)
mc_Mean.append(np.mean(a))
lhs_Mean.append(np.mean(b))
mc_Std.append(np.std(a))
lhs_Std.append(np.std(b))
Explanation: Empirical mean ($\mu$) and standard deviation ($\sigma$) estimates
End of explanation
fig, ax = plt.subplots()
ax.plot(range(numIterations), mc_Mean, 'ko', label='numpy')
ax.plot(range(numIterations), lhs_Mean, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_mean, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Iteration #")
ax.set_ylabel("$\mu$")
ax.legend(frameon=False)
simpleaxis(ax)
plt.show()
Explanation: Plotting mean estimates
End of explanation
fig, ax = plt.subplots()
ax.plot(range(numIterations), mc_Std, 'ko', label='numpy')
ax.plot(range(numIterations), lhs_Std, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_std, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Iteration #")
ax.set_ylabel("$\sigma$")
ax.legend(frameon=False)
simpleaxis(ax)
plt.show()
Explanation: Plotting standard deviation estimates
End of explanation
mc_Std, lhs_Std = [], []
mc_Mean, lhs_Mean = [], []
numSamples = range(1,numIterations)
for iterate in numSamples:
a = np.random.random((numDimensions,iterate))
b = lhsmdu.sample(numDimensions,iterate)
mc_Mean.append(np.mean(a))
lhs_Mean.append(np.mean(b))
mc_Std.append(np.std(a))
lhs_Std.append(np.std(b))
Explanation: Across different number of samples
End of explanation
fig, ax = plt.subplots()
ax.plot(numSamples, mc_Mean, 'ko', label='numpy')
ax.plot(numSamples, lhs_Mean, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_mean, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Number of Samples")
ax.set_ylabel("$\mu$")
ax.legend(frameon=False)
simpleaxis(ax)
plt.show()
Explanation: Plotting mean estimates
End of explanation
fig, ax = plt.subplots()
ax.plot(numSamples, mc_Std, 'ko', label='numpy')
ax.plot(numSamples, lhs_Std, 'o', c='orange', label='lhsmdu')
ax.hlines(xmin=0, xmax=numIterations, y=theoretical_std, linestyles='--', label='theoretical value', zorder=3)
ax.set_xlabel("Number of Samples")
ax.set_ylabel("$\sigma$")
ax.legend(frameon=False)
simpleaxis(ax)
plt.show()
Explanation: Plotting standard deviation estimates
End of explanation |
9,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The expression 0 <= seconds <= 59 is a boolean expression and has the value True or False.
Step1: That is, the function above is equivalent to
Step2: when 0 <= seconds <= 59 is True, and to
Step3: when 0 <= seconds <= 59 is False.
The easier way is for the function to simply return the value of the boolean expression as its result. | Python Code:
seconds = 30
0 <= seconds <= 59
seconds = -1
0 <= seconds <= 59
Explanation: The expression 0 <= seconds <= 59 is a boolean expression and has the value True or False.
End of explanation
def valid_seconds(seconds):
if True:
return True
else:
return False
Explanation: That is, the function above is equivalent to:
End of explanation
def valid_seconds(seconds):
if False:
return True
else:
return False
Explanation: when 0 <= seconds <= 59 is True, and to:
End of explanation
def valid_seconds(seconds):
return 0 <= seconds <= 59
valid_seconds(30)
Explanation: when 0 <= seconds <= 59 is False.
The easier way is for the function to simply return the value of the boolean expression as its result.
End of explanation |
9,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: Note
Step2: Include the input file that contains all input parameters needed for all components. This file can either be a Python dictionary or a text file that can be converted into a Python dictionary. If a text file is provided, it will be converted to a Python dictionary. Here we use an existing text file prepared for this exercise.
Step3: Instantiate Landlab components to simulate corresponding attributes. In this example, we shall demonstrate the use of seasonal rainfall and PFT-specific potential evapotranspiration. The instantiated objects are
Step4: Lets look at the initial organization of PFTs
Step5: Specify an approximate number of years for the model to run.
IMPORTANT
Step6: Create empty arrays to store spatio-temporal data over multiple iterations. The captured data can be used for plotting model outputs.
Step7: To reduce computational overhead, we shall create a lookup array for plant-specific PET values for each day of the year, and slope and aspect grid.
Step8: Specify current_time (in years). current_time is the current time in the simulation.
Step9: The loop below couples the components introduced above in a for loop until all "n" number of storms are generated. Time is advanced by the soil moisture object based on storm and interstorm durations that are estimated by the storm generator object. The ecohydrologic model is run on each storm, whereas the cellular automaton vegetation component is run once every year.
Note
Step10: Time_Consumed is an optional variable that gives information about computer running time
Step11: Save the outputs using numpy.save(). These files have '.nc' extension, which can be loaded using numpy.load().
Step12: Lets look at outputs.
Plots of the cellular field of PFT at specified year step can be found below where
Step13: If you run this model for around 900 years, you will observe patterns of PFTs. For example, you will find more trees on north facing slopes and mostly shrubs and grass on south facing slopes, as shown below | Python Code:
from __future__ import print_function
%matplotlib inline
import time
import numpy as np
from landlab.io import read_esri_ascii
from landlab import RasterModelGrid as rmg
from landlab import load_params
from Ecohyd_functions_DEM import (
Initialize_,
Empty_arrays,
Create_PET_lookup,
Save_,
Plot_,
)
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
WARNING: This tutorial has not been updated to work with Landlab 2.0 and is thus not tested to verify that it will run.
Tutorial For Cellular Automaton Vegetation Model Coupled With Ecohydrologic Model
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/v2_dev/user_guide/tutorials.html">https://landlab.readthedocs.io/en/v2_dev/user_guide/tutorials.html</a></small>
<hr>
This tutorial demonstrates implementation of the Cellular Automaton Tree-GRass-Shrub Simulator (CATGRaSS) [Zhou et al., 2013] on a digital elevation model (DEM). This model is built using components from the Landlab component library. CATGRaSS is a spatially explicit model of plant coexistence. It simulates local ecohydrologic dynamics (soil moisture, transpiration, biomass) and spatial evolution of tree, grass, and shrub Plant Functional Types (PFT) driven by rainfall and solar radiation.
Each cell in the model grid can hold a single PFT or remain empty. Tree and shrub plants disperse seeds to their neighbors. Grass seeds are assumed to be available at each cell. Establishment of plants in empty cells is determined probabilistically based on water stress for each PFT. Plants with lower water stress have higher probability of establishment. Plant mortality is simulated probabilistically as a result of aging and drought stress. Fires and grazing will be added to this model soon.
This model (driver) contains:
- A local vegetation dynamics model that simulates storm and inter-storm water balance and ecohydrologic fluxes (ET, runoff), and plant biomass dynamics by coupling the following components:
- PrecipitationDistribution
- Radiation
- PotentialEvapotranspiration
- SoilMoisture
- Vegetation
A spatially explicit probabilistic cellular automaton component that simulates plant competition by tracking establishment and mortality of plants based on soil moisture stress:
VegCA
To run this Jupyter notebook, please make sure that the following files are in the same folder:
- cellular_automaton_vegetation_DEM.ipynb (this notebook)
- Inputs_Vegetation_CA.txt (Input parameters for the model)
- Ecohyd_functions_DEM.py (Utility functions)
[Ref: Zhou, X, E. Istanbulluoglu, and E.R. Vivoni. "Modeling the ecohydrological role of aspect-controlled radiation on tree-grass-shrub coexistence in a semiarid climate." Water Resources Research 49.5 (2013): 2872-2895]
In this tutorial, we are going to work with a landscape in central New Mexico, USA, where aspect controls the organization of PFTs. The climate in this area is semi-arid with Mean Annual Precipitation (MAP) of 254 mm [Zhou et. al 2013].
We will do the following:
- Import a landscape
- Initialize the landscape with random distribution of PFTs
- Run the coupled Ecohydrology and cellular automata plant competition model for 50 years
- Visualize and examine outputs
Let's walk through the code:
Import the required libraries:
End of explanation
(grid, elevation) = read_esri_ascii("DEM_10m.asc") # Read the DEM
grid1 = rmg((5, 4), xy_spacing=(5.0, 5.0)) # Representative grid
Explanation: Note: 'Ecohyd_functions_DEM.py' is a utility script that contains 'functions', which instantiates components and manages inputs and outputs, and help keep this driver concise. Contents of 'Ecohyd_functions_DEM.py' can be a part of this driver (current file), however left out to keep driver concise.
We will use two grids in this driver. One grid will represent the actual landscape or domain (e.g., created from a DEM). Another grid, with one cell for each of the plant functional types (PFTs), will be used to create Potential Evapotranspiration (PET) lookup tables.
grid: This grid represents the actual landscape. Each cell can be occupied by a single PFT such as tree, shrub, grass, or can be empty (bare). In this example we assume that the elevation field and the vegetation field have the same resolution.
grid1: This grid will be used to compute plant-specific PET at a point. Spatially distributed PET Lookup arrays (for all days of the year) will be created for each PFT based on these point values.
Note: In this tutorial, the physical ecohydrological components and cellular automata plant competition will be run on grids with same resolution. To develop differential spatial resolutions for the two models, see the tutorial 'cellular_automaton_vegetation_flat.ipynb'.
End of explanation
InputFile = "Inputs_Vegetation_CA_DEM.txt"
data = load_params(InputFile) # Creates dictionary that holds the inputs
Explanation: Include the input file that contains all input parameters needed for all components. This file can either be a Python dictionary or a text file that can be converted into a Python dictionary. If a text file is provided, it will be converted to a Python dictionary. Here we use an existing text file prepared for this exercise.
End of explanation
PD_D, PD_W, Rad, Rad_PET, PET_Tree, PET_Shrub, PET_Grass, SM, VEG, vegca = Initialize_(
data, grid, grid1, elevation
)
Explanation: Instantiate Landlab components to simulate corresponding attributes. In this example, we shall demonstrate the use of seasonal rainfall and PFT-specific potential evapotranspiration. The instantiated objects are:
- PD_D: object for dry season rainfall,
- PD_W: object for wet season rainfall,
- Rad: Radiation object computes radiation factor defined as the ratio of total shortwave radiation incident on a sloped surface to total shortwave radiation incident on a flat surface.
- Rad_PET: Representative radiation object which is used only as an input for PET.
- PET_PFT: Plant specific PET objects (we use a cosine function, fitted to calculated PET, as a function of Day Of the Year (DOY) to reduce computation overhead). This value is spatially distributed by using a radiation factor.
- SM: Soil Moisture object simulates root-zone average soil moisture at each cell using inputs of potential evapotranspiration, live leaf area index, and vegetation cover.
- VEG: Vegetation dynamics object simulates net primary productivity and biomass and thus leaf area index at each cell based on inputs of root-zone average soil moisture.
- vegca: Cellular Automaton plant competition object. This object simulates the spatial dynamics of PFTs. It is run once every year at the end of the growing season. This object is initialized with a random cellular field of PFT. Each year, this object updates the field of PFTs based on probabilistic establishment and mortality rules employed at each cell of the modeled DEM.
Note: Almost every component in Landlab is coded as a 'class' (to harness the advantages of object oriented programming). An 'object' is the instantiation of the 'class' (for more information, please refer any object oriented programming book). A 'field' refers to a Landlab field (please refer to the Landlab documentation to learn more about Landlab fields).
Now let's instantiate all Landlab components that we are going to use for this tutorial:
End of explanation
import matplotlib.pyplot as plt
import matplotlib as mpl
cmap = mpl.colors.ListedColormap(["green", "red", "black", "white", "red", "black"])
bounds = [-0.5, 0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
description = "green: grass; red: shrub; black: tree; white: bare"
plt.figure(101)
grid.imshow(
"vegetation__plant_functional_type",
at="cell",
cmap=cmap,
grid_units=("m", "m"),
norm=norm,
limits=[0, 5],
allow_colorbar=False,
)
plt.figtext(0.2, 0.0, description, weight="bold", fontsize=10)
Explanation: Lets look at the initial organization of PFTs
End of explanation
n_years = 50 # Approx number of years for model to run
# Calculate approximate number of storms per year
fraction_wet = (data["doy__end_of_monsoon"] - data["doy__start_of_monsoon"]) / 365.0
fraction_dry = 1 - fraction_wet
no_of_storms_wet = (
8760 * (fraction_wet) / (data["mean_interstorm_wet"] + data["mean_storm_wet"])
)
no_of_storms_dry = (
8760 * (fraction_dry) / (data["mean_interstorm_dry"] + data["mean_storm_dry"])
)
n = int(n_years * (no_of_storms_wet + no_of_storms_dry))
Explanation: Specify an approximate number of years for the model to run.
IMPORTANT:
This code is computationally intensive. It might take an hour or more to run this simulation for 300 years. It is suggested to run the simulation for 50 years, which might take less than 7 minutes to execute.
End of explanation
P, Tb, Tr, Time, VegType, PET_, Rad_Factor, EP30, PET_threshold = Empty_arrays(
n, n_years, grid, grid1
)
Explanation: Create empty arrays to store spatio-temporal data over multiple iterations. The captured data can be used for plotting model outputs.
End of explanation
Create_PET_lookup(
Rad, PET_Tree, PET_Shrub, PET_Grass, PET_, Rad_Factor, EP30, Rad_PET, grid
)
Explanation: To reduce computational overhead, we shall create a lookup array for plant-specific PET values for each day of the year, and slope and aspect grid.
End of explanation
# # Represent current time in years
current_time = 0 # Start from first day of Jan
# Keep track of run time for simulation—optional
Start_time = time.clock() # Recording time taken for simulation
# declaring few variables that will be used in storm loop
time_check = 0.0 # Buffer to store current_time at previous storm
yrs = 0 # Keep track of number of years passed
WS = 0.0 # Buffer for Water Stress
Tg = 365 # Growing season in days
Explanation: Specify current_time (in years). current_time is the current time in the simulation.
End of explanation
# # Run storm Loop
for i in range(0, n):
# # Update objects
# Calculate Day of Year (DOY)
Julian = int(np.floor((current_time - np.floor(current_time)) * 365.0))
# Generate seasonal storms
# for Dry season
if Julian < data["doy__start_of_monsoon"] or Julian > data["doy__end_of_monsoon"]:
PD_D.update()
P[i] = PD_D.get_storm_depth()
Tr[i] = PD_D.get_precipitation_event_duration()
Tb[i] = PD_D.get_interstorm_event_duration()
# Wet Season—Jul to Sep—NA Monsoon
else:
PD_W.update()
P[i] = PD_W.get_storm_depth()
Tr[i] = PD_W.get_precipitation_event_duration()
Tb[i] = PD_W.get_interstorm_event_duration()
# Spatially distribute PET and its 30-day-mean (analogous to degree day)
grid["cell"]["surface__potential_evapotranspiration_rate"] = (
np.choose(grid["cell"]["vegetation__plant_functional_type"], PET_[Julian])
) * Rad_Factor[Julian]
grid["cell"]["surface__potential_evapotranspiration_30day_mean"] = (
np.choose(grid["cell"]["vegetation__plant_functional_type"], EP30[Julian])
) * Rad_Factor[Julian]
# Assign spatial rainfall data
grid["cell"]["rainfall__daily_depth"] = P[i] * np.ones(grid.number_of_cells)
# Update soil moisture component
current_time = SM.update(current_time, Tr=Tr[i], Tb=Tb[i])
# Decide whether its growing season or not
if Julian != 364:
if EP30[Julian + 1, 0] > EP30[Julian, 0]:
PET_threshold = 1
# 1 corresponds to ETThresholdup (begin growing season)
else:
PET_threshold = 0
# 0 corresponds to ETThresholddown (end growing season)
# Update vegetation component
VEG.update(PETthreshold_switch=PET_threshold, Tb=Tb[i], Tr=Tr[i])
# Update yearly cumulative water stress data
WS += (grid["cell"]["vegetation__water_stress"]) * Tb[i] / 24.0
# Record time (optional)
Time[i] = current_time
# Cellular Automata
if (current_time - time_check) >= 1.0:
if yrs % 5 == 0:
print("Elapsed time = {time} years".format(time=yrs))
VegType[yrs] = grid["cell"]["vegetation__plant_functional_type"]
grid["cell"]["vegetation__cumulative_water_stress"] = WS / Tg
vegca.update()
SM.initialize()
VEG.initialize()
time_check = current_time
WS = 0
yrs += 1
VegType[yrs] = grid["cell"]["vegetation__plant_functional_type"]
Explanation: The loop below couples the components introduced above in a for loop until all "n" number of storms are generated. Time is advanced by the soil moisture object based on storm and interstorm durations that are estimated by the storm generator object. The ecohydrologic model is run on each storm, whereas the cellular automaton vegetation component is run once every year.
Note: This loop might take around 10 minutes (depending on your computer) to run for a 50 year simulation. Ignore any warnings you might see.
End of explanation
Final_time = time.clock()
Time_Consumed = (Final_time - Start_time) / 60.0 # in minutes
print("Time_consumed = {time} minutes".format(time=Time_Consumed))
Explanation: Time_Consumed is an optional variable that gives information about computer running time
End of explanation
# # Saving
sim = "VegCA_DEM_26Jul16_"
# Save_(sim, Tb, Tr, P, VegType, yrs, Time_Consumed, Time)
Explanation: Save the outputs using numpy.save(). These files have the '.npy' extension and can be loaded later with numpy.load().
End of explanation
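In case the Save_ helper above is unavailable, a minimal sketch of saving the main arrays directly with numpy.save() could look like the following; the file names are only placeholders and the arrays are the ones allocated earlier in this notebook:
# Minimal alternative: save the main output arrays individually with numpy.save().
np.save(sim + "P.npy", P)              # storm depths
np.save(sim + "Tr.npy", Tr)            # storm durations
np.save(sim + "Tb.npy", Tb)            # interstorm durations
np.save(sim + "VegType.npy", VegType)  # yearly PFT maps
np.save(sim + "Time.npy", Time)        # simulation time stamps
# Reload later with, e.g., VegType = np.load(sim + "VegType.npy")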
Plot_(grid, VegType, yrs, yr_step=10)
Explanation: Let's look at the outputs.
Plots of the cellular field of PFTs at the specified year steps are shown below, where:
GRASS = Green; SHRUB = Red; TREE = Black; BARE = White;
At the end, percentage cover for each PFT is plotted with respect to time.
End of explanation
from IPython.display import Image
Image(filename="presentation.png")
Explanation: If you run this model for around 900 years, you will observe patterns of PFTs. For example, you will find more trees on north-facing slopes and mostly shrubs and grass on south-facing slopes, as shown below:
End of explanation |
9,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial #02
Convolutional Neural Network
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
The previous tutorial showed that a simple linear model had about 91% classification accuracy for recognizing hand-written digits in the MNIST data-set.
In this tutorial we will implement a simple Convolutional Neural Network in TensorFlow which has a classification accuracy of about 99%, or more if you make some of the suggested exercises.
Convolutional Networks work by moving small filters across the input image. This means the filters are re-used for recognizing patterns throughout the entire input image. This makes the Convolutional Networks much more powerful than Fully-Connected networks with the same number of variables. This in turn makes the Convolutional Networks faster to train.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. Beginners to TensorFlow may also want to study the first tutorial before proceeding to this one.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below.
Step1: The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.
These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels.
The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.
The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.
These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.
Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow.
Convolutional Layer
The following chart shows the basic idea of processing an image in the first convolutional layer. The input image depicts the number 7 and four copies of the image are shown here, so we can see more clearly how the filter is being moved to different positions of the image. For each position of the filter, the dot-product is being calculated between the filter and the image pixels under the filter, which results in a single pixel in the output image. So moving the filter across the entire input image results in a new image being generated.
The red filter-weights mean that the filter has a positive reaction to black pixels in the input image, while the blue filter-weights mean that it has a negative reaction to black pixels.
In this case it appears that the filter recognizes the horizontal line of the 7-digit, as can be seen from its stronger reaction to that line in the output image.
Step2: The step-size for moving the filter across the input is called the stride. There is a stride for moving the filter horizontally (x-axis) and another stride for moving vertically (y-axis).
In the source-code below, the stride is set to 1 in both directions, which means the filter starts in the upper left corner of the input image and is being moved 1 pixel to the right in each step. When the filter reaches the end of the image to the right, then the filter is moved back to the left side and 1 pixel down the image. This continues until the filter has reached the lower right corner of the input image and the entire output image has been generated.
When the filter reaches the end of the right-side as well as the bottom of the input image, then it can be padded with zeroes (white pixels). This causes the output image to be of the exact same dimension as the input image.
Furthermore, the output of the convolution may be passed through a so-called Rectified Linear Unit (ReLU), which merely ensures that the output is positive because negative values are set to zero. The output may also be down-sampled by so-called max-pooling, which considers small windows of 2x2 pixels and only keeps the largest of those pixels. This halves the resolution of the input image e.g. from 28x28 to 14x14 pixels.
Note that the second convolutional layer is more complicated because it takes 16 input channels. We want a separate filter for each input channel, so we need 16 filters instead of just one. Furthermore, we want 36 output channels from the second convolutional layer, so in total we need 16 x 36 = 576 filters for the second convolutional layer. It can be a bit challenging to understand how this works.
Imports
Step3: This was developed using TensorFlow v. 0.8.0
Step4: Configuration of Neural Network
The configuration of the Convolutional Neural Network is defined here for convenience, so you can easily find and change these numbers and re-run the Notebook.
Step5: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
Step6: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step7: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
Step8: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
Step9: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
Step10: Plot a few images to see if data is correct
Step11: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below
Step12: Helper-function for creating a new Convolutional Layer
This function creates a new convolutional layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 4-dim tensor with the following dimensions
Step13: Helper-function for flattening a layer
A convolutional layer produces an output tensor with 4 dimensions. We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim which can be used as input to the fully-connected layer.
Step14: Helper-function for creating a new Fully-Connected Layer
This function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 2-dim tensor of shape [num_images, num_inputs]. The output is a 2-dim tensor of shape [num_images, num_outputs].
Step15: Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
Step16: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is
Step17: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step18: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
Step19: Convolutional Layer 1
Create the first convolutional layer. It takes x_image as input and creates num_filters1 different filters, each having width and height equal to filter_size1. Finally we wish to down-sample the image so it is half the size by using 2x2 max-pooling.
Step20: Check the shape of the tensor that will be output by the convolutional layer. It is (?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?), each image is 14 pixels wide and 14 pixels high, and there are 16 different channels, one channel for each of the filters.
Step21: Convolutional Layer 2
Create the second convolutional layer, which takes as input the output from the first convolutional layer. The number of input channels corresponds to the number of filters in the first convolutional layer.
Step22: Check the shape of the tensor that will be output from this convolutional layer. The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images, with each image having width and height of 7 pixels, and there are 36 channels, one for each filter.
Step23: Flatten Layer
The convolutional layers output 4-dim tensors. We now wish to use these as input in a fully-connected network, which requires for the tensors to be reshaped or flattened to 2-dim tensors.
Step24: Check that the tensors now have shape (?, 1764) which means there's an arbitrary number of images which have been flattened to vectors of length 1764 each. Note that 1764 = 7 x 7 x 36.
Step25: Fully-Connected Layer 1
Add a fully-connected layer to the network. The input is the flattened layer from the previous convolution. The number of neurons or nodes in the fully-connected layer is fc_size. ReLU is used so we can learn non-linear relations.
Step26: Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128.
Step27: Fully-Connected Layer 2
Add another fully-connected layer that outputs vectors of length 10 for determining which of the 10 classes the input image belongs to. Note that ReLU is not used in this layer.
Step28: Predicted Class
The second fully-connected layer estimates how likely it is that the input image belongs to each of the 10 classes. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and the 10 elements sum to one. This is calculated using the so-called softmax function and the result is stored in y_pred.
Step29: The class-number is the index of the largest element.
Step30: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for all the network layers. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the network layers.
TensorFlow has a built-in function for calculating the cross-entropy. Note that the function calculates the softmax internally so we must use the output of layer_fc2 directly rather than y_pred which has already had the softmax applied.
Step31: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step32: Optimization Method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the AdamOptimizer which is an advanced form of Gradient Descent.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
Step33: Performance Measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
Step34: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
Step35: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
Step36: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
Step37: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
Step38: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
Step39: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
Step40: Helper-function to plot confusion matrix
Step41: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
Step42: Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
Step43: Performance after 1 optimization iteration
The classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
Step44: Performance after 100 optimization iterations
After 100 optimization iterations, the model has significantly improved its classification accuracy.
Step45: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
Step46: Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
Step47: Visualization of Weights and Layers
In trying to understand why the convolutional neural network can recognize handwritten digits, we will now visualize the weights of the convolutional filters and the resulting output images.
Helper-function for plotting convolutional weights
Step48: Helper-function for plotting the output of a convolutional layer
Step49: Input Images
Helper-function for plotting an image.
Step50: Plot an image from the test-set which will be used as an example below.
Step51: Plot another example image from the test-set.
Step52: Convolution Layer 1
Now plot the filter-weights for the first convolutional layer.
Note that positive weights are red and negative weights are blue.
Step53: Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to 14 x 14 pixels which is half the resolution of the original input image.
Step54: The following images are the results of applying the convolutional filters to the second image.
Step55: It is difficult to see from these images what the purpose of the convolutional filters might be. It appears that they have merely created several variations of the input image, as if light was shining from different angles and casting shadows in the image.
Convolution Layer 2
Now plot the filter-weights for the second convolutional layer.
There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel.
Note again that positive weights are red and negative weights are blue.
Step56: There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
Step57: It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.
Applying these convolutional filters to the images that were output from the first conv-layer gives the following images.
Note that these are down-sampled yet again to 7 x 7 pixels which is half the resolution of the images from the first conv-layer.
Step58: And these are the results of applying the filter-weights to the second image.
Step59: From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images.
These images are then flattened and input to the fully-connected layer, but that is not shown here.
Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources. | Python Code:
from IPython.display import Image
Image('images/02_network_flowchart.png')
Explanation: TensorFlow Tutorial #02
Convolutional Neural Network
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
The previous tutorial showed that a simple linear model had about 91% classification accuracy for recognizing hand-written digits in the MNIST data-set.
In this tutorial we will implement a simple Convolutional Neural Network in TensorFlow which has a classification accuracy of about 99%, or more if you make some of the suggested exercises.
Convolutional Networks work by moving small filters across the input image. This means the filters are re-used for recognizing patterns throughout the entire input image. This makes the Convolutional Networks much more powerful than Fully-Connected networks with the same number of variables. This in turn makes the Convolutional Networks faster to train.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. Beginners to TensorFlow may also want to study the first tutorial before proceeding to this one.
Flowchart
The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below.
End of explanation
Image('images/02_convolution.png')
Explanation: The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.
These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels.
The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.
The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.
These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.
Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow.
Convolutional Layer
The following chart shows the basic idea of processing an image in the first convolutional layer. The input image depicts the number 7 and four copies of the image are shown here, so we can see more clearly how the filter is being moved to different positions of the image. For each position of the filter, the dot-product is being calculated between the filter and the image pixels under the filter, which results in a single pixel in the output image. So moving the filter across the entire input image results in a new image being generated.
The red filter-weights mean that the filter has a positive reaction to black pixels in the input image, while the blue filter-weights mean that it has a negative reaction to black pixels.
In this case it appears that the filter recognizes the horizontal line of the 7-digit, as can be seen from its stronger reaction to that line in the output image.
End of explanation
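The dimensions quoted above are easy to verify with a line or two of arithmetic; this is only a sanity check and not part of the network definition:
# Sanity-check the numbers quoted in the flowchart description.
print(16 * 36)     # 576 filters in the second convolutional layer
print(7 * 7 * 36)  # 1764 = length of the flattened vector fed to the fully-connected layer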
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
Explanation: The step-size for moving the filter across the input is called the stride. There is a stride for moving the filter horizontally (x-axis) and another stride for moving vertically (y-axis).
In the source-code below, the stride is set to 1 in both directions, which means the filter starts in the upper left corner of the input image and is being moved 1 pixel to the right in each step. When the filter reaches the end of the image to the right, then the filter is moved back to the left side and 1 pixel down the image. This continues until the filter has reached the lower right corner of the input image and the entire output image has been generated.
When the filter reaches the end of the right-side as well as the bottom of the input image, then it can be padded with zeroes (white pixels). This causes the output image to be of the exact same dimension as the input image.
Furthermore, the output of the convolution may be passed through a so-called Rectified Linear Unit (ReLU), which merely ensures that the output is positive because negative values are set to zero. The output may also be down-sampled by so-called max-pooling, which considers small windows of 2x2 pixels and only keeps the largest of those pixels. This halves the resolution of the input image e.g. from 28x28 to 14x14 pixels.
Note that the second convolutional layer is more complicated because it takes 16 input channels. We want a separate filter for each input channel, so we need 16 filters instead of just one. Furthermore, we want 36 output channels from the second convolutional layer, so in total we need 16 x 36 = 576 filters for the second convolutional layer. It can be a bit challenging to understand how this works.
Imports
End of explanation
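To make the ReLU and 2x2 max-pooling steps described above concrete, here is a tiny NumPy sketch on a toy 4x4 'image'; it is only an illustration and is not used by the network below:
# Toy illustration: ReLU followed by 2x2 max-pooling halves a 4x4 array to 2x2.
toy = np.array([[-1.,  2.,  0.,  3.],
                [ 4., -5.,  1.,  1.],
                [ 0.,  1.,  2., -2.],
                [ 3.,  0.,  1.,  5.]])
relu_out = np.maximum(toy, 0.0)                         # negative values become zero
pooled = relu_out.reshape(2, 2, 2, 2).max(axis=(1, 3))  # keep the largest pixel in each 2x2 window
print(pooled)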
tf.__version__
Explanation: This was developed using TensorFlow v. 0.8.0
End of explanation
# Convolutional Layer 1.
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 16 # There are 16 of these filters.
# Convolutional Layer 2.
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 36 # There are 36 of these filters.
# Fully-connected layer.
fc_size = 128 # Number of neurons in fully-connected layer.
Explanation: Configuration of Neural Network
The configuration of the Convolutional Neural Network is defined here for convenience, so you can easily find and change these numbers and re-run the Notebook.
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
End of explanation
data.test.cls = np.argmax(data.test.labels, axis=1)
Explanation: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
End of explanation
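To make the One-Hot encoding concrete, here is a tiny sketch of a single label for the digit 7 and how argmax recovers the class-number, which is exactly what the line above does for the whole test-set:
# A One-Hot encoded label for the digit 7; argmax returns the index of the single 1.
one_hot_label = np.array([0., 0., 0., 0., 0., 0., 0., 1., 0., 0.])
print(np.argmax(one_hot_label))   # 7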
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
Explanation: Data Dimensions
The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
End of explanation
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
Explanation: Plot a few images to see if data is correct
End of explanation
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used for inputting data to the graph.
Variables that are going to be optimized so as to make the convolutional network perform better.
The mathematical formulas for the convolutional network.
A cost measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
Helper-functions for creating new variables
Functions for creating new TensorFlow variables in the given shape and initializing them with random values. Note that the initialization is not actually done at this point, it is merely being defined in the TensorFlow graph.
End of explanation
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
Explanation: Helper-function for creating a new Convolutional Layer
This function creates a new convolutional layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 4-dim tensor with the following dimensions:
Image number.
Y-axis of each image.
X-axis of each image.
Channels of each image.
Note that the input channels may either be colour-channels, or it may be filter-channels if the input is produced from a previous convolutional layer.
The output is another 4-dim tensor with the following dimensions:
Image number, same as input.
Y-axis of each image. If 2x2 pooling is used, then the height and width of the input images is divided by 2.
X-axis of each image. Ditto.
Channels produced by the convolutional filters.
End of explanation
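As a rough shape check before building the graph: with 'SAME' padding and stride 1 the convolution preserves the image size, and each 2x2 max-pooling halves it, so the spatial sizes can be predicted by hand:
# Expected spatial sizes after each conv + 2x2 max-pooling stage.
size_after_conv1 = img_size // 2          # 28 -> 14
size_after_conv2 = size_after_conv1 // 2  # 14 -> 7
print(size_after_conv1, size_after_conv2)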
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
Explanation: Helper-function for flattening a layer
A convolutional layer produces an output tensor with 4 dimensions. We will add fully-connected layers after the convolution layers, so we need to reduce the 4-dim tensor to 2-dim which can be used as input to the fully-connected layer.
End of explanation
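A small NumPy analogue of the flattening described above, using a single dummy image of shape (1, 7, 7, 36); it just shows that reshaping to (-1, num_features) keeps the total number of values unchanged:
# NumPy analogue of flatten_layer() for one dummy image of shape (1, 7, 7, 36).
dummy = np.zeros((1, 7, 7, 36))
print(dummy.reshape(-1, 7 * 7 * 36).shape)   # (1, 1764)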
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
Explanation: Helper-function for creating a new Fully-Connected Layer
This function creates a new fully-connected layer in the computational graph for TensorFlow. Nothing is actually calculated here, we are just adding the mathematical formulas to the TensorFlow graph.
It is assumed that the input is a 2-dim tensor of shape [num_images, num_inputs]. The output is a 2-dim tensor of shape [num_images, num_outputs].
End of explanation
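The shapes in the fully-connected layer follow ordinary matrix multiplication; a short NumPy sketch with made-up sizes:
# Shape check: [num_images, num_inputs] x [num_inputs, num_outputs] -> [num_images, num_outputs].
inputs_demo = np.zeros((3, 1764))      # 3 images with 1764 features each
weights_demo = np.zeros((1764, 128))   # num_inputs x num_outputs
print(inputs_demo.dot(weights_demo).shape)   # (3, 128)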
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
Explanation: Placeholder variables
Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
Explanation: The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
End of explanation
y_true = tf.placeholder(tf.float32, shape=[None, 10], name='y_true')
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
End of explanation
y_true_cls = tf.argmax(y_true, dimension=1)
Explanation: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
End of explanation
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
use_pooling=True)
Explanation: Convolutional Layer 1
Create the first convolutional layer. It takes x_image as input and creates num_filters1 different filters, each having width and height equal to filter_size1. Finally we wish to down-sample the image so it is half the size by using 2x2 max-pooling.
End of explanation
layer_conv1
Explanation: Check the shape of the tensor that will be output by the convolutional layer. It is (?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?), each image is 14 pixels wide and 14 pixels high, and there are 16 different channels, one channel for each of the filters.
End of explanation
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
use_pooling=True)
Explanation: Convolutional Layer 2
Create the second convolutional layer, which takes as input the output from the first convolutional layer. The number of input channels corresponds to the number of filters in the first convolutional layer.
End of explanation
layer_conv2
Explanation: Check the shape of the tensor that will be output from this convolutional layer. The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images, with each image having width and height of 7 pixels, and there are 36 channels, one for each filter.
End of explanation
layer_flat, num_features = flatten_layer(layer_conv2)
Explanation: Flatten Layer
The convolutional layers output 4-dim tensors. We now wish to use these as input in a fully-connected network, which requires for the tensors to be reshaped or flattened to 2-dim tensors.
End of explanation
layer_flat
num_features
Explanation: Check that the tensors now have shape (?, 1764) which means there's an arbitrary number of images which have been flattened to vectors of length 1764 each. Note that 1764 = 7 x 7 x 36.
End of explanation
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc_size,
use_relu=True)
Explanation: Fully-Connected Layer 1
Add a fully-connected layer to the network. The input is the flattened layer from the previous convolution. The number of neurons or nodes in the fully-connected layer is fc_size. ReLU is used so we can learn non-linear relations.
End of explanation
layer_fc1
Explanation: Check that the output of the fully-connected layer is a tensor with shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128.
End of explanation
layer_fc2 = new_fc_layer(input=layer_fc1,
num_inputs=fc_size,
num_outputs=num_classes,
use_relu=False)
layer_fc2
Explanation: Fully-Connected Layer 2
Add another fully-connected layer that outputs vectors of length 10 for determining which of the 10 classes the input image belongs to. Note that ReLU is not used in this layer.
End of explanation
y_pred = tf.nn.softmax(layer_fc2)
Explanation: Predicted Class
The second fully-connected layer estimates how likely it is that the input image belongs to each of the 10 classes. However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each element is limited between zero and one and the 10 elements sum to one. This is calculated using the so-called softmax function and the result is stored in y_pred.
End of explanation
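A NumPy sketch of what the softmax function does to one vector of raw scores; tf.nn.softmax above applies the same normalization to every row of layer_fc2:
# Softmax squashes arbitrary scores into values in (0, 1) that sum to one.
logits_demo = np.array([2.0, 1.0, 0.1])
exp_shifted = np.exp(logits_demo - np.max(logits_demo))   # subtract the max for numerical stability
softmax_demo = exp_shifted / np.sum(exp_shifted)
print(softmax_demo, softmax_demo.sum())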
y_pred_cls = tf.argmax(y_pred, dimension=1)
Explanation: The class-number is the index of the largest element.
End of explanation
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
labels=y_true)
Explanation: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for all the network layers. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the network layers.
TensorFlow has a built-in function for calculating the cross-entropy. Note that the function calculates the softmax internally so we must use the output of layer_fc2 directly rather than y_pred which has already had the softmax applied.
End of explanation
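For a One-Hot label the cross-entropy reduces to the negative log of the probability assigned to the true class; a small NumPy sketch with made-up numbers:
# Cross-entropy for a One-Hot label: -log of the predicted probability of the true class.
y_pred_demo = np.array([0.7, 0.2, 0.1])   # hypothetical softmax output
y_true_demo = np.array([1.0, 0.0, 0.0])   # One-Hot label for class 0
print(-np.sum(y_true_demo * np.log(y_pred_demo)))   # about 0.357; zero only for a perfect prediction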
cost = tf.reduce_mean(cross_entropy)
Explanation: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
End of explanation
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
Explanation: Optimization Method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the AdamOptimizer which is an advanced form of Gradient Descent.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
Explanation: Performance Measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
End of explanation
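The same calculation in NumPy, just to show what the type-cast does:
# Casting booleans to floats turns True/False into 1/0, so the mean is the accuracy.
correct_demo = np.array([True, False, True, True])
print(correct_demo.astype(np.float32).mean())   # 0.75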
session = tf.Session()
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
session.run(tf.initialize_all_variables())
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
End of explanation
train_batch_size = 64
Explanation: Helper-function to perform optimization iterations
There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.
End of explanation
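As an optional peek at what one training batch looks like (note that this consumes one batch from the training iterator):
# Inspect the shape of a single training batch: 64 flattened images and 64 One-Hot labels.
x_batch_demo, y_true_batch_demo = data.train.next_batch(train_batch_size)
print(x_batch_demo.shape, y_true_batch_demo.shape)   # (64, 784) and (64, 10)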
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
Explanation: Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
End of explanation
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
Explanation: Helper-function to plot example errors
Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function to plot confusion matrix
End of explanation
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
Explanation: Helper-function for showing the performance
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.
Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
End of explanation
print_test_accuracy()
Explanation: Performance before any optimization
The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.
End of explanation
optimize(num_iterations=1)
print_test_accuracy()
Explanation: Performance after 1 optimization iteration
The classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
End of explanation
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
Explanation: Performance after 100 optimization iterations
After 100 optimization iterations, the model has significantly improved its classification accuracy.
End of explanation
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
Explanation: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
End of explanation
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Explanation: Performance after 10,000 optimization iterations
After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
End of explanation
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Visualization of Weights and Layers
In trying to understand why the convolutional neural network can recognize handwritten digits, we will now visualize the weights of the convolutional filters and the resulting output images.
Helper-function for plotting convolutional weights
End of explanation
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image from applying the i'th filter.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper-function for plotting the output of a convolutional layer
End of explanation
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
Explanation: Input Images
Helper-function for plotting an image.
End of explanation
image1 = data.test.images[0]
plot_image(image1)
Explanation: Plot an image from the test-set which will be used as an example below.
End of explanation
image2 = data.test.images[13]
plot_image(image2)
Explanation: Plot another example image from the test-set.
End of explanation
plot_conv_weights(weights=weights_conv1)
Explanation: Convolution Layer 1
Now plot the filter-weights for the first convolutional layer.
Note that positive weights are red and negative weights are blue.
End of explanation
plot_conv_layer(layer=layer_conv1, image=image1)
Explanation: Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer. Note that these images are down-sampled to 14 x 14 pixels which is half the resolution of the original input image.
End of explanation
plot_conv_layer(layer=layer_conv1, image=image2)
Explanation: The following images are the results of applying the convolutional filters to the second image.
End of explanation
plot_conv_weights(weights=weights_conv2, input_channel=0)
Explanation: It is difficult to see from these images what the purpose of the convolutional filters might be. It appears that they have merely created several variations of the input image, as if light was shining from different angles and casting shadows in the image.
Convolution Layer 2
Now plot the filter-weights for the second convolutional layer.
There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel.
Note again that positive weights are red and negative weights are blue.
End of explanation
plot_conv_weights(weights=weights_conv2, input_channel=1)
Explanation: There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
End of explanation
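If you do want to inspect every input channel rather than just the first two, a simple loop over the 16 channels reuses the helper defined above:
# Plot the second-layer filter-weights for every input channel.
for channel in range(16):
    plot_conv_weights(weights=weights_conv2, input_channel=channel)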
plot_conv_layer(layer=layer_conv2, image=image1)
Explanation: It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.
Applying these convolutional filters to the images that were output from the first conv-layer gives the following images.
Note that these are down-sampled yet again to 7 x 7 pixels which is half the resolution of the images from the first conv-layer.
End of explanation
plot_conv_layer(layer=layer_conv2, image=image2)
Explanation: And these are the results of applying the filter-weights to the second image.
End of explanation
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Explanation: From these images, it looks like the second convolutional layer might detect lines and patterns in the input images, which are less sensitive to local variations in the original input images.
These images are then flattened and input to the fully-connected layer, but that is not shown here.
Close TensorFlow Session
We are now done using TensorFlow, so we close the session to release its resources.
End of explanation |
9,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.
Importing and preparing your data
Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
Step1: What was the most popular type of complaint, and how many times was it filed?
Step2: Make a horizontal bar graph of the top 5 most frequent complaint types.
Step3: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
Step4: According to your selection of data, how many cases were filed in March? How about May?
Step5: I'd like to see all of the 311 complaints called in on April 1st.
Surprise! We couldn't do this in class, but it was just a limitation of our data set
Step6: What was the most popular type of complaint on April 1st?
What were the most popular three types of complaint on April 1st
Step7: What month has the most reports filed? How many? Graph it.
Step8: What week of the year has the most reports filed? How many? Graph the weekly complaints.
Step9: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make a chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
Step10: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
Step11: What hour of the day are the most complaints? Graph a day of complaints.
Step12: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
Step13: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
Step14: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
Step15: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
Step16: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer. | Python Code:
import pandas as pd
import dateutil.parser
%matplotlib inline
import matplotlib.pyplot as plt
#df = pd.read_csv('311-2010-2016.csv')
# We select a list of columns for a better efficiency
columns_list = ['Unique Key', 'Created Date', 'Closed Date', 'Agency', 'Agency Name',
'Complaint Type', 'Descriptor', 'Borough']
df = pd.read_csv('311-2015.csv', nrows=200000, usecols= columns_list)
df['Created Date'].describe()
def parse_date(str_date):
return dateutil.parser.parse(str_date)
df['created_datetime'] = df['Created Date'].apply(parse_date)
df.index = df['created_datetime']
df.head(2)
Explanation: First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.
Importing and preparing your data
Import your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.
End of explanation
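As a side note, if the row-by-row dateutil parsing above is slow on the full file, pandas' vectorised parser is usually much faster and should produce the same datetime index (assuming the Created Date strings are in a format pandas can infer):
# Faster alternative to applying dateutil.parser.parse row by row.
df.index = pd.to_datetime(df['Created Date'])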
top_five_complaints = []
complaints_count = df['Complaint Type'].value_counts().head(5)
for complaint_type, count in complaints_count.items():
top_five_complaints.append({"type": complaint_type, "count": count})
print("The most popular type of complaint is", top_five_complaints[0]['type'], "and it was filed", top_five_complaints[0]['count'], "times.")
Explanation: What was the most popular type of complaint, and how many times was it filed?
End of explanation
top5_df = pd.DataFrame(top_five_complaints)
top5_df.plot(kind='barh', x='type', y='count', label='Top 5 most frequent complaint types').invert_yaxis()
fig = plt.gcf()
fig.set_size_inches(18, 7, forward=True)
plt.xlabel("Count")
plt.ylabel("Type")
Explanation: Make a horizontal bar graph of the top 5 most frequent complaint types.
End of explanation
boroughs_dict_count = df.groupby(by='Borough')['Unique Key'].agg(['count']).to_dict()['count']
boroughs_dict_pop = {'BRONX': 1438159,
'BROOKLYN': 2621793,
'MANHATTAN': 1636268,
'QUEENS': 2321580,
'STATEN ISLAND': 47327}
complaints_per_capita = {}
for borough, count in boroughs_dict_count.items():
if borough != 'Unspecified':
complaints_per_capita[borough] = count/boroughs_dict_pop[borough]
# print(borough.title(), "has", count/boroughs_dict_pop[borough], "complaints per capita.")
most_complaints_per_capita = max(complaints_per_capita, key=lambda i: complaints_per_capita[i])
answer = "{} has the most complaints per capita, with {:.3f} complaints per capita.".format(most_complaints_per_capita.title(), complaints_per_capita[most_complaints_per_capita])
print(answer)
Explanation: Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.
End of explanation
print(len(df['2015-03']), "cases were filed in March.")
print(len(df['2015-05']), "cases were filed in May.")
Explanation: According to your selection of data, how many cases were filed in March? How about May?
End of explanation
april_1 = df['2015-04-01']
april_1
Explanation: I'd like to see all of the 311 complaints called in on April 1st.
Surprise! We couldn't do this in class, but it was just a limitation of our data set
End of explanation
print("The most popular three types of complaint on April 1st were:")
c = 1
for value, count in april_1['Complaint Type'].value_counts().iteritems(): # zip type
if c > 3:
break
print("{}) {} ({} complaints)".format(c, value, count))
c = c + 1
Explanation: What was the most popular type of complaint on April 1st?
What were the most popular three types of complaint on April 1st
End of explanation
#mdata = df.resample('M').size()
mdata = df.groupby([lambda x: x.month]).size() # this one also works if we want to merge multiple years
months_list = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
print(months_list[mdata.idxmax()-1], "has the most reports filed, with", mdata.max(), "reports.")
ax = mdata.plot(kind='bar')
plt.xticks(list(range(0,12)), months_list) # or: ax.set_xticklabels(months_list)
print('Reports filed per month:')
Explanation: What month has the most reports filed? How many? Graph it.
End of explanation
wdata = df.groupby([lambda x: x.week]).size()
wdata.plot(kind='bar')
fig = plt.gcf()
fig.set_size_inches(18, 7, forward=True)
plt.rcParams.update({'font.size': 14})
plt.rc('ytick', labelsize=12)
plt.ylabel('Complaints')
plt.xlabel('Weeks of the year')
Explanation: What week of the year has the most reports filed? How many? Graph the weekly complaints.
End of explanation
noise_df = df[df['Complaint Type'].str.contains('Noise')]
data_annual = noise_df.resample('M').size()
data_annual.plot(figsize=(12, 5), title="Noise complaints - Annual graph")
plt.xlabel("")
print("")
data_cyclic = noise_df.groupby([lambda x: x.hour]).size()
data_cyclic.plot(grid=True, title="Noise complaints - Cyclic graph")
fig = plt.gcf()
fig.set_size_inches(12, 5)
hours = list(range(0, 24))
hours_str = [str(i) + "h" for i in hours]
plt.xticks(hours, hours_str)
print('')
Explanation: Noise complaints are a big deal. Use .str.contains to select noise complaints, and make a chart of when they show up annually. Then make a chart about when they show up every day (cyclic).
End of explanation
ddata = df.groupby([lambda x: x.month, lambda x: x.day]).size()
days_top5 = ddata.sort_values(ascending=False).head(5)
days_top5_list = []
for day, count in days_top5.items():
day_str = months_list[day[0]-1] + " " + str(day[1])
print("-", day_str, "with", count, "complaints;")
days_top5_list.append({"Day": day_str, "Count": count})
top5_df = pd.DataFrame(days_top5_list)
top5_df.plot(kind='bar', x='Day', y='Count', legend=False)
Explanation: Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.
End of explanation
hdata = df.groupby([lambda x: x.hour]).size()
print(hdata.idxmax(), "hour has the most reports filed, with", hdata.max(), "reports filed.")
hdata.plot(title="A day of complaints", figsize=(12,5))
plt.xticks(hours, hours_str)
print('')
Explanation: What hour of the day are the most complaints? Graph a day of complaints.
End of explanation
# The odd number is midnight (very high peak)
type_per_h = pd.DataFrame(df.groupby([lambda x: x.hour])['Complaint Type'].value_counts())
print("0h, Top Counts of", type_per_h['Complaint Type'][0].head(3))
print("\n23h, Top Counts of", type_per_h['Complaint Type'][23].head(3))
print("\n01h, Top Counts of", type_per_h['Complaint Type'][1].head(3))
Explanation: One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?
End of explanation
df_midnight = df[df.index.hour==0]
df_midnight = df_midnight.groupby(by=df_midnight.index.minute)
df_midnight['Unique Key'].count().plot()
Explanation: So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.
End of explanation
common_agencies = list(df['Agency Name'].value_counts().head(5).keys())
for agency in common_agencies:
df_agency = df[df['Agency Name'] == agency]
hdata = df_agency.groupby([lambda x: x.hour]).size()
hdata.plot(legend=True, figsize=(12,5))
plt.xticks(hours, hours_str)
plt.legend(common_agencies)
print('')
Explanation: Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).
End of explanation
for agency in common_agencies:
df_agency = df[df['Agency Name'] == agency]
wdata = df_agency.groupby([lambda x: x.week]).size()
wdata.plot(legend=True, figsize=(15,8))
plt.legend(common_agencies)
print('')
Explanation: Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?
End of explanation
nypd = common_agencies[0]
hpd = common_agencies[1]
def complaints_for_agency(agency):
return df[df['Agency Name'] == agency]
print("NYPD July-August:\n" + str(complaints_for_agency(nypd)['2015-07':'2015-08']['Complaint Type'].value_counts().head(10)))
print("\nNYPD May:\n" + str(complaints_for_agency(nypd)['2015-05']['Complaint Type'].value_counts().head(10)))
print("\nHPD Winter:\n" + str(complaints_for_agency(hpd)['2015-06-21':'2015-09-20']['Complaint Type'].value_counts().head(10)))
print("\nHPD Summer:\n" + str(complaints_for_agency(hpd)['2015-01-01':'2015-03-21']['Complaint Type'].value_counts().head(10)))
Explanation: Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.
End of explanation |
9,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Monte Carlo Methods
Step1: Point to note
Step2: Efficiency
Note that the sum over all particles scales as $n^2$ where $n$ is the number of particles. As the number of steps the algorithm will need to take will also scale as $n$, this makes the number of calculations at least as bad as $n^3$. This is expensive; if you try the naive approach then you'll have difficulty using more than 50 particles in a moderate time.
Instead we can note that, at each stage, the algorithm will move only one particle. Therefore, if we store not just the locations of the particles but also their pairwise separations, at each step we will only have to modify a small number of the separations. So we can store $r^2_{ij} = \vec{r}_{ij} \cdot \vec{r}_{ij}$ only, for $j > i$, and when perturbing particle $k$ we only need to update the separations $r^2_{ik}$ for $i<k$ and $r^2_{kj}$ for $k<j$.
This should significantly reduce the number of calculations done in each step.
In addition, note that for reasonable behaviour the acceptance rate should be $\sim 40\%$. This depends on the fractional perturbation distance $\Delta$; values $\sim 0.4$ are reasonable when $\rho \sim 0.1$, but values $\sim 0.02$ are reasonable when $\rho \sim 0.9$.
Results
Check that the energy has converged to a "constant" state.
Plot a histogram of the energies to show that they follow the Boltzmann distribution. | Python Code:
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)
Explanation: Monte Carlo Methods: Lab 2
End of explanation
p_JZG_T2 = [0.1776, 0.329, 0.489, 0.7, 1.071, 1.75, 3.028, 5.285, 9.12]
Explanation: Point to note: an electronic copy of the Frenkel and Smit book is available through the library. This lab is based on case study 1 in chapter 3.4 of that book.
Lennard-Jones fluids
When computing the interactions between lots of bodies (atoms, molecules, planets, etc) we can either use the true potential or force between them, or we can approximate it with some potential (or force) that is easier (and usually cheaper) to calculate. The parameters of the potential can then be set to approximate the "real" features we're interested in.
In computational chemistry, one such approximation is the Lennard-Jones potential. Given two bodies separated by a distance $r$, the potential generated by those two bodies is
\begin{equation}
U(r) = 4 \varepsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right].
\end{equation}
Here $\varepsilon$ and $\sigma$ are parameters. When there are more than two bodies the total potential is the sum over all pairwise potentials.
In principle this generates a potential between particles that are separated by huge distances. Instead it is typical to truncate the potential: to pick a cut-off distance so that any particles separated by more than that distance do not contribute, and to correct for those small contributions.
Here we use a Lennard-Jones potential inside a box size $[0,L]^3$ with a cut-off $r_c = L/2$, with parameters set so that
\begin{equation}
U = \begin{cases} 4 \left[ \frac{1}{r^{12}} - \frac{1}{r^6} \right] & r < r_c \\ 0 & r > r_c. \end{cases}
\end{equation}
Include tail corrections (that is, additional energy and pressure terms resulting from the particles outside the cutoff radius) as
\begin{align}
U^{\text{tail}} & = \frac{8 \pi \rho}{3} \left[ \frac{1}{3} \frac{1}{r_c^9} - \frac{1}{r_c^3} \right] \\
p^{\text{tail}} & = \frac{16 \pi \rho^2}{3} \left[ \frac{2}{3} \frac{1}{r_c^9} - \frac{1}{r_c^3} \right].
\end{align}
For each configuration we need to compute the pressure using
$$
\begin{equation}
p = \frac{\rho}{\beta} + \frac{\text{Virial}}{V}
\end{equation}
$$
where
$$
\begin{equation}
\text{Virial} = \sum_i \sum_{j > i} \vec{f}( \vec{r}_{ij} ) \cdot \vec{r}_{ij}
\end{equation}
$$
where, as usual, $\vec{r}_{ij}$ is the separation between the atoms, $\vec{r}_{ij} = \vec{r}_i - \vec{r}_j$, and the intermolecular force $\vec{f}$ is given by
$$
\begin{align}
\vec{f}(\vec{r}_{ij}) &= - \nabla U \\
& = \begin{cases} 24 \left[ 2 \frac{1}{r^{14}} - \frac{1}{r^8} \right] \vec{r}_{ij} & r < r_c \\ \vec{0} & r > r_c \end{cases}
\end{align}
$$
Note that in the reduced coordinates $\beta = T^{-1}$.
Monte Carlo code
We will be using an $NTV$ approach, keeping the number of particles fixed ($N = 100$), the temperature fixed ($T=2$) and the volume fixed (indirectly, via the density $\rho = N / V = N L^{-3}$; use $\rho = a/10$ for $a = 1, \dots, 9$, but start by just considering the $a=1, 2$ cases). You will need to take at least $10,000$ steps for the larger values of $a$; $20,000$ is better, but in all cases you should test with a smaller number of particles and steps ($1,000$ may be sufficient for small values of $a$).
For reference we note the solutions, taken from Johnson, Zollweg and Gubbins for the pressures at $T=2$ are:
End of explanation
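Before writing the Monte Carlo loop it may help to see the truncated potential, the virial contribution and the tail corrections written out in code. The sketch below only illustrates the formulas above under the usual periodic minimum-image convention; the function and variable names are mine, not part of the lab, and positions is assumed to be an (n, 3) numpy array.
import numpy as np

def lj_pair(r2, r_cut):
    # Energy and virial contribution (f(r).r) of one pair with squared
    # separation r2, for the truncated Lennard-Jones potential in reduced units.
    if r2 > r_cut * r_cut:
        return 0.0, 0.0
    inv_r6 = 1.0 / r2**3
    inv_r12 = inv_r6 * inv_r6
    u = 4.0 * (inv_r12 - inv_r6)           # 4 [ 1/r^12 - 1/r^6 ]
    w = 24.0 * (2.0 * inv_r12 - inv_r6)    # f(r).r = 24 [ 2/r^12 - 1/r^6 ]
    return u, w

def tail_corrections(rho, r_cut):
    # Energy and pressure tail corrections quoted above.
    rc3 = r_cut**3
    rc9 = r_cut**9
    u_tail = (8.0 * np.pi * rho / 3.0) * (1.0 / (3.0 * rc9) - 1.0 / rc3)
    p_tail = (16.0 * np.pi * rho**2 / 3.0) * (2.0 / (3.0 * rc9) - 1.0 / rc3)
    return u_tail, p_tail

def energy_and_pressure(positions, L, T):
    # Naive O(n^2) sum over all pairs in a periodic box [0, L]^3 (an assumption
    # on the boundary conditions), with cut-off r_c = L/2.
    n = len(positions)
    rho = n / L**3
    r_cut = 0.5 * L
    U, virial = 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dr = positions[i] - positions[j]
            dr -= L * np.round(dr / L)     # minimum-image separation
            u, w = lj_pair(np.dot(dr, dr), r_cut)
            U += u
            virial += w
    u_tail, p_tail = tail_corrections(rho, r_cut)
    p = rho * T + virial / L**3 + p_tail   # p = rho/beta + Virial/V, beta = 1/T
    return U, u_tail, p                    # u_tail treated as a per-particle correction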
%matplotlib inline
import numpy
from scipy import constants
from matplotlib import pyplot
from mpl_toolkits.mplot3d.axes3d import Axes3D
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
Explanation: Efficiency
Note that the sum over all particles scales as $n^2$ where $n$ is the number of particles. As the number of steps the algorithm will need to take will also scale as $n$, this makes the number of calculations at least as bad as $n^3$. This is expensive; if you try the naive approach then you'll have difficulty using more than 50 particles in a moderate time.
Instead we can note that, at each stage, the algorithm will move only one particle. Therefore, if we store not just the locations of the particles but also their pairwise separations, at each step we will only have to modify a small number of the separations. So we can store $r^2_{ij} = \vec{r}_{ij} \cdot \vec{r}_{ij}$ only, for $j > i$, and when perturbing particle $k$ we only need to update the separations $r^2_{ik}$ for $i<k$ and $r^2_{kj}$ for $k<j$.
This should significantly reduce the number of calculations done in each step.
In addition, note that for reasonable behaviour the acceptance rate should be $\sim 40\%$. This depends on the fractional perturbation distance $\Delta$; values $\sim 0.4$ are reasonable when $\rho \sim 0.1$, but values $\sim 0.02$ are reasonable when $\rho \sim 0.9$.
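To make that bookkeeping concrete, here is a rough sketch of a single-particle trial move that reuses the stored squared separations and applies the Metropolis acceptance rule. It reuses numpy as np and the lj_pair helper from the earlier sketch, and rng is assumed to be a np.random.Generator; again, names are mine, not the lab's.
def trial_move(positions, r2_table, k, delta, L, T, rng):
    # Perturb particle k, recompute only the separations involving k,
    # and accept/reject with the Metropolis criterion at temperature T.
    n = len(positions)
    r_cut = 0.5 * L
    old_row = {j: r2_table[(min(k, j), max(k, j))] for j in range(n) if j != k}
    old_u = sum(lj_pair(r2, r_cut)[0] for r2 in old_row.values())
    # Fractional perturbation of size delta, wrapped back into the box.
    new_pos = (positions[k] + delta * (rng.random(3) - 0.5) * L) % L
    new_row, new_u = {}, 0.0
    for j in range(n):
        if j == k:
            continue
        dr = new_pos - positions[j]
        dr -= L * np.round(dr / L)
        r2 = np.dot(dr, dr)
        new_row[j] = r2
        new_u += lj_pair(r2, r_cut)[0]
    if rng.random() < np.exp(-(new_u - old_u) / T):  # Metropolis, beta = 1/T
        positions[k] = new_pos
        for j, r2 in new_row.items():
            r2_table[(min(k, j), max(k, j))] = r2
        return True    # accepted
    return False       # rejected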
Results
Check that the energy has converged to a "constant" state.
Plot a histogram of the energies to show that they follow the Boltzmann distribution.
End of explanation |
9,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hierarchical Clustering Lab
In this notebook, we will use sklearn to perform hierarchical clustering on the Iris dataset. The dataset contains 4 dimensions/attributes and 150 samples, and each sample is labeled as one of three iris species.
In this exercise, we will ignore the labels, cluster based on the attributes, and compare the results of different hierarchical clustering techniques with the actual labels to see which technique works best in this scenario. We will then visualize the resulting cluster hierarchy.
1. Importing the Iris dataset
Step1: Look at the first 10 samples in the dataset
Step2: 2. Clustering
Now let's perform hierarchical clustering using sklearn's AgglomerativeClustering
Step3: Let's also try complete and average linkage
Exercise:
Perform hierarchical clustering with complete linkage and store the predicted labels in the variable complete_pred
Perform hierarchical clustering with average linkage and store the predicted labels in the variable avg_pred
Step4: To judge which clustering result better matches the original labels of the samples, we can use adjusted_rand_score, an external cluster validation index with scores between -1 and 1, where 1 means the two clusterings group the samples in the dataset in exactly the same way (regardless of the label assigned to each cluster).
Cluster validation indices are covered later in the course.
Step5: Exercise:
Calculate the adjusted Rand scores for the clusterings obtained with complete linkage and average linkage
Step6: Which algorithm has the higher adjusted Rand score?
Step7: 3. The effect of normalization on clustering
Can we improve on this clustering result?
Let's take another look at the dataset
Step8: Looking at the dataset, we can see that the values in the fourth column are smaller than those in the other columns, so its variance counts for less in the clustering process (since clustering is based on distance). Let's normalize the dataset so that every dimension lies between 0 and 1 and therefore carries equal weight in the clustering process.
This is done by subtracting the minimum from each column and then dividing by the range.
sklearn provides a utility called preprocessing.normalize() that can help us do this
Step9: Now all the columns are in the range between 0 and 1. Does clustering the dataset after this transformation produce better clusters (ones that match the original labels of the samples more closely)?
Step10: 4. Dendrogram visualization with scipy
Let's visualize the clustering result with the highest score.
To do that, we need to cluster again using scipy's linkage function so we can obtain the linkage matrix we will later use to visualize the hierarchy
Step11: Plot it using scipy's dendrogram function
Step12: 5. Visualization with Seaborn's clustermap
The seaborn plotting library for python can plot a clustermap, a dendrogram-based visualization that shows the dataset in more detail. It also conducts the clustering itself, so we only need to pass in the dataset and the linkage type we want, and it will use scipy to do the clustering in the background | Python Code:
from sklearn import datasets
iris = datasets.load_iris()
Explanation: Hierarchical Clustering Lab
In this notebook, we will use sklearn to perform hierarchical clustering on the Iris dataset. The dataset contains 4 dimensions/attributes and 150 samples, and each sample is labeled as one of three iris species.
In this exercise, we will ignore the labels, cluster based on the attributes, and compare the results of different hierarchical clustering techniques with the actual labels to see which technique works best in this scenario. We will then visualize the resulting cluster hierarchy.
1. Importing the Iris dataset
End of explanation
iris.data[:10]
iris.target
Explanation: Look at the first 10 samples in the dataset
End of explanation
from sklearn.cluster import AgglomerativeClustering
# Hierarchical clustering
# Ward is the default linkage algorithm, so we'll start with that
ward = AgglomerativeClustering(n_clusters=3)
ward_pred = ward.fit_predict(iris.data)
Explanation: 2. Clustering
Now let's perform hierarchical clustering using sklearn's AgglomerativeClustering
End of explanation
# Hierarchical clustering using complete linkage
# TODO: Create an instance of AgglomerativeClustering with the appropriate parameters
complete = AgglomerativeClustering(n_clusters=3, linkage="complete")
# Fit & predict
# TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels
complete_pred = complete.fit_predict(iris.data)
# Hierarchical clustering using average linkage
# TODO: Create an instance of AgglomerativeClustering with the appropriate parameters
avg = AgglomerativeClustering(n_clusters=3, linkage="average")
# Fit & predict
# TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels
avg_pred = avg.fit_predict(iris.data)
Explanation: Let's also try complete and average linkage
Exercise:
Perform hierarchical clustering with complete linkage and store the predicted labels in the variable complete_pred
Perform hierarchical clustering with average linkage and store the predicted labels in the variable avg_pred
End of explanation
from sklearn.metrics import adjusted_rand_score
ward_ar_score = adjusted_rand_score(iris.target, ward_pred)
Explanation: To judge which clustering result better matches the original labels of the samples, we can use adjusted_rand_score, an external cluster validation index with scores between -1 and 1, where 1 means the two clusterings group the samples in the dataset in exactly the same way (regardless of the label assigned to each cluster).
Cluster validation indices are covered later in the course.
End of explanation
# TODO: Calculated the adjusted Rand score for the complete linkage clustering labels
complete_ar_score = adjusted_rand_score(iris.target, complete_pred)
# TODO: Calculated the adjusted Rand score for the average linkage clustering labels
avg_ar_score = adjusted_rand_score(iris.target, avg_pred)
Explanation: Exercise:
Calculate the adjusted Rand scores for the clusterings obtained with complete linkage and average linkage
End of explanation
print( "Scores: \nWard:", ward_ar_score,"\nComplete: ", complete_ar_score, "\nAverage: ", avg_ar_score)
Explanation: Which algorithm has the higher adjusted Rand score?
End of explanation
iris.data[:15]
Explanation: 3. The effect of normalization on clustering
Can we improve on this clustering result?
Let's take another look at the dataset
End of explanation
from sklearn import preprocessing
normalized_X = preprocessing.normalize(iris.data)
normalized_X[:10]
Explanation: Looking at the dataset, we can see that the values in the fourth column are smaller than those in the other columns, so its variance counts for less in the clustering process (since clustering is based on distance). Let's normalize the dataset so that every dimension lies between 0 and 1 and therefore carries equal weight in the clustering process.
This is done by subtracting the minimum from each column and then dividing by the range.
sklearn provides a utility called preprocessing.normalize() that can help us do this
End of explanation
ward = AgglomerativeClustering(n_clusters=3)
ward_pred = ward.fit_predict(normalized_X)
complete = AgglomerativeClustering(n_clusters=3, linkage="complete")
complete_pred = complete.fit_predict(normalized_X)
avg = AgglomerativeClustering(n_clusters=3, linkage="average")
avg_pred = avg.fit_predict(normalized_X)
ward_ar_score = adjusted_rand_score(iris.target, ward_pred)
complete_ar_score = adjusted_rand_score(iris.target, complete_pred)
avg_ar_score = adjusted_rand_score(iris.target, avg_pred)
print( "Scores: \nWard:", ward_ar_score,"\nComplete: ", complete_ar_score, "\nAverage: ", avg_ar_score)
Explanation: Now all the columns are in the range between 0 and 1. Does clustering the dataset after this transformation produce better clusters (ones that match the original labels of the samples more closely)?
End of explanation
# Import scipy's linkage function to conduct the clustering
from scipy.cluster.hierarchy import linkage
# Specify the linkage type. Scipy accepts 'ward', 'complete', 'average', as well as other values
# Pick the one that resulted in the highest Adjusted Rand Score
linkage_type = 'ward'
linkage_matrix = linkage(normalized_X, linkage_type)
Explanation: 4. Dendrogram visualization with scipy
Let's visualize the clustering result with the highest score.
To do that, we need to cluster again using scipy's linkage function so we can obtain the linkage matrix we will later use to visualize the hierarchy
End of explanation
from scipy.cluster.hierarchy import dendrogram
import matplotlib.pyplot as plt
plt.figure(figsize=(22,18))
# plot using 'dendrogram()'
dendrogram(linkage_matrix)
plt.show()
Explanation: Plot it using scipy's dendrogram function
End of explanation
import seaborn as sns
sns.clustermap(normalized_X, figsize=(12,18), method=linkage_type, cmap='viridis')
# Expand figsize to a value like (18, 50) if you want the sample labels to be readable
# Draw back is that you'll need more scrolling to observe the dendrogram
plt.show()
Explanation: 5. Visualization with Seaborn's clustermap
The seaborn plotting library for python can plot a clustermap, a dendrogram-based visualization that shows the dataset in more detail. It also conducts the clustering itself, so we only need to pass in the dataset and the linkage type we want, and it will use scipy to do the clustering in the background
End of explanation |
9,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Trees
An introductory example of decision trees using data from this interactive visualization. This is an over-simplified example that doesn't use normalization as a pre-processing step, or cross validation as a mechanism for tuning the model.
Set up
Step1: Data Exploration
Some basic exploratory analysis before creating a decision tree
Step2: Build a decision tree using all variables
Step3: Assess Model Fit
Step4: Show the tree
A little bit of a pain, though there are some alternatives to the documentation presented here. You may have to do the following
Step5: Comparison to KNN
Purely out of curiosity, how well does this model fit with KNN (for K=3) | Python Code:
# Load packages
import pandas as pd
from sklearn import tree
from __future__ import division
from sklearn.cross_validation import train_test_split
from sklearn.neighbors import KNeighborsClassifier
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Read data
df = pd.read_csv('./data/housing-data.csv')
Explanation: Decision Trees
An introductory example of decision trees using data from this interactive visualization. This is an over-simplified example that doesn't use normalization as a pre-processing step, or cross validation as a mechanism for tuning the model.
Set up
End of explanation
# What is the shape of our data?
# What variables are present in the dataset?
# What is the distribution of our outcome variable `in_sf`?
# How does elevation vary for houses in/not-in sf (I suggest an overlapping histogram)
Explanation: Data Exploration
Some basic exploratory analysis before creating a decision tree
End of explanation
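One possible way to fill in the exploration cell above, assuming the dataset uses the in_sf and elevation columns referenced in the comments (the SF/NY labels below are an assumption about what in_sf encodes):
# Shape and columns of the data.
print(df.shape)
print(df.columns)
# Distribution of the outcome variable `in_sf`.
print(df['in_sf'].value_counts())
# Overlapping histograms of elevation for the two groups.
df[df['in_sf'] == 1]['elevation'].hist(alpha=0.5, label='SF')
df[df['in_sf'] == 0]['elevation'].hist(alpha=0.5, label='NY')
plt.legend()
plt.show()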
# Create variables to hold features and outcomes separately
# Split data into testing and training sets
# Create a classifier and fit your features to your outcome
Explanation: Build a decision tree using all variables
End of explanation
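A sketch of one way to complete this step, treating in_sf as the outcome and all remaining columns as features (column names are assumptions about the dataset; the split proportion and random_state are arbitrary):
# Separate features and outcome.
features = df.drop('in_sf', axis=1)
outcome = df['in_sf']
# Split data into training and testing sets.
train_features, test_features, train_outcome, test_outcome = train_test_split(
    features, outcome, test_size=0.3, random_state=11)
# Create a classifier and fit the features to the outcome.
clf = tree.DecisionTreeClassifier()
clf.fit(train_features, train_outcome)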
# Generate a set of predictions for your test data
# Calculate accuracy for our test set (percentage of the time that prediction == truth)
# By comparison, how well do we predict in our training data?
Explanation: Assess Model Fit
End of explanation
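And a matching sketch for assessing fit, reusing the split and classifier from the previous sketch:
# Predictions on the test set.
test_preds = clf.predict(test_features)
# Accuracy: percentage of the time that prediction == truth.
test_acc = (test_preds == test_outcome).mean()
print("Test accuracy:", test_acc)
# By comparison, accuracy on the training data (expect this to be higher).
train_acc = (clf.predict(train_features) == train_outcome).mean()
print("Train accuracy:", train_acc)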
# Create tree diagram
Explanation: Show the tree
A little bit of a pain, though there are some alternatives to the documentation presented here. You may have to do the following:
```
Install graphviz in your terminal
conda install graphviz
```
I then suggest the following solution:
tree.export_graphviz(clf, out_file="mytree.dot")
with open("mytree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
End of explanation
# Create a knn classifier
# Fit our classifier to our training data
# Predict on our test data and assess accuracy
Explanation: Comparison to KNN
Purely out of curiosity, how well does this model fit with KNN (for K=3)
End of explanation |
9,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
imports
Step1: importing datasets
Step2: pretty clean datasets (majority numeric columns, so don't forget to use gplearn (Genetic Programming Module) plus different feats on the basis of +,-,*,/)
Step3: Initial Processing
Step4: gplearn
Step5: Baseline RF
Step6: todo define r^2
Wow, an r^2 of 0.9699 - that's great, right? Well, perhaps not...
Possibly the most important idea in machine learning is that of having separate training & validation data sets
Step7: It's Pathetic as We are Clearly Overfitting...
Have a look at the RM(L)SE Scores and the Accuracy...
They are way too off...
Step8: Single Tree
Step9: Bagging
Step10: The shape of this curve suggests that adding more trees isn't going to help us much
Step11: OOF's
Step12: RMSLE FOR VALID IS TOO HIGH, we need to change the randomness i guess | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import time
import xgboost as xgb
import lightgbm as lgb
# import category_encoders as cat_ed
# import gc, mlcrate, glob
# from gplearn.genetic import SymbolicTransformer, SymbolicRegressor
from fastai.imports import *
from fastai.structured import *
from pandas_summary import DataFrameSummary
# from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, RandomForestRegressor
from IPython.display import display
# from catboost import CatBoostClassifier
# from scipy.cluster import hierarchy as hc
# from collections import Counter
# from sklearn import metrics
# from sklearn.linear_model import LogisticRegression
# from sklearn.model_selection import train_test_split
# from sklearn.metrics import mean_squared_error
# from sklearn.metrics import roc_auc_score, log_loss
# from sklearn.model_selection import KFold, StratifiedKFold
# from sklearn.model_selection import GridSearchCV
# from sklearn.decomposition import PCA, TruncatedSVD, FastICA, FactorAnalysis
# from sklearn.random_projection import GaussianRandomProjection, SparseRandomProjection
# from sklearn.cluster import KMeans
# from sklearn.metrics import accuracy_score, log_loss
# from sklearn.neighbors import KNeighborsClassifier
# from sklearn.tree import DecisionTreeClassifier
# from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
# from sklearn.naive_bayes import GaussianNB
# from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
# from sklearn.neural_network import MLPClassifier
# from sklearn.gaussian_process import GaussianProcessClassifier
# from sklearn.gaussian_process.kernels import RBF
# will ignore all warning from sklearn, seaborn etc..
def ignore_warn(*args, **kwargs):
pass
warnings.warn = ignore_warn
pd.option_context("display.max_rows", 1000);
pd.option_context("display.max_columns", 1000);
PATH = os.getcwd()
PATH
!dir {PATH}
Explanation: imports
End of explanation
df_raw = pd.read_csv(f'{PATH}\\train_new_agg_feats.csv', low_memory=False,dtype='float32')
df_test = pd.read_csv(f'{PATH}\\test_new_agg_feats.csv', low_memory=False, dtype='float32')
def display_all(df):
with pd.option_context("display.max_rows", 100):
with pd.option_context("display.max_columns", 100):
display(df)
def make_submission(probs):
sample = pd.read_csv(f'{PATH}\\sample_submission.csv')
submit = sample.copy()
submit['Upvotes'] = probs
return submit
df_raw.shape,
df_raw.get_ftype_counts()
Explanation: importing datasets
End of explanation
display_all(df_raw.isnull().sum().sort_index()/len(df_raw))
# df_raw['target'] = np.exp(target) - 1
# df_raw.to_csv(f'{PATH}\\train_new_agg_feats.csv', index=False)
# df_test.to_csv(f'{PATH}\\test_new_agg_feats.csv', index=False)
Explanation: pretty clean datasets (majority numeric columns, so don't forget to use gplearn (Genetic Programming Module) plus different feats on the basis of +,-,*,/)
End of explanation
man_train_list = df_raw.Username.unique()
man_test_list = df_test.Username.unique()
man_not_in_test = set(man_train_list) - set(man_test_list)
man_not_in_train = set(man_test_list) - set(man_train_list)
df_raw.drop(index = df_raw.loc[list(man_not_in_test)].index, inplace=True)
target = df_raw.target.values - 1
df_raw.drop('target', axis=1, inplace=True)
Explanation: Initial Processing
End of explanation
function_set = ['add','sub','mul','div','sqrt','log','abs','neg','inv','min','max']
gp = SymbolicTransformer(generations=20,population_size=3000,n_jobs=-1,hall_of_fame=100,n_components=10,verbose=1,\
function_set=function_set,parsimony_coefficient=0.005,max_samples=0.9,random_state=123)
gp.fit(df_raw, target)
gp_feat_eng_train = gp.transform(df_raw)
gp_feat_eng_test = gp.transform(df_test)
ext_train = np.hstack((df_raw, gp_feat_eng_train))
ext_test = np.hstack((df_test, gp_feat_eng_test))
my_xgb = xgb.XGBRegressor(8,0.01,n_jobs=-1,colsample_bytree=0.9,gamma=0.5,silent=False)
my_xgb.fit(ext_train, target)
xgb_preds = my_xgb.predict(ext_test)
xgb_preds
submit = make_submission(xgb_preds)
submit.to_csv(f'{PATH}\\xgb_v1.csv', index=None)
min(xgb_preds), max(xgb_preds)
sns.distplot(np.log(target + 1))
sns.distplot(np.log(xgb_preds + 1))
min(np.percentile(target,[90,91,92,93,94,95,96,97,98,99])), max(np.percentile(target,[90,91,92,93,94,95,96,97,98,99]))
np.percentile(xgb_preds,[90,91,92,93,94,95,96,97,98,99])
np.where(xgb_preds>3313,3313,xgb_preds)
min(np.where(xgb_preds>3313,3313,xgb_preds)), max(np.where(xgb_preds>3313,3313,xgb_preds))
xgb_preds_threshold = np.where(xgb_preds>3313,3313,xgb_preds)
submit = make_submission(xgb_preds_threshold)
submit.to_csv(f'{PATH}\\xgb_v2_thresholding_at_3133.csv', index=None)
# temp1 = df_raw.groupby('Username').count().iloc[:,-1]
# temp2 = df_test.groupby('Username').count().iloc[:,-1]
# df_man = pd.concat([temp1,temp2], axis = 1, join = 'outer')
# df_man.columns = ['train_count','test_count']
# df_man.head(2)
# man_list = df_man['train_count'].sort_values(ascending = False).index
# ixes = df_raw.Username.isin(man_list)
# df10000 = df_raw[ixes][['Username','Tag']]
# tags_dummies = pd.get_dummies(df10000.Tag)
# df10000 = pd.concat([df10000,tags_dummies[['a', 'c', 'h', 'i', 'j', 'o', 'p', 'r', 's', 'x']]], axis = 1).drop('Tag', axis = 1)
# # print("The contributors account for {} entries\n".format(len(df10000)))
# # print(df10000.head(10))
# df10000.groupby('Username').count().sort_values(by = 'a', ascending = False).head()
xyz = pd.concat([df_raw.groupby('Username').mean(),df_raw.groupby('Username').count()], axis = 1).iloc[:,:-5]
xyz.columns = ['ID', 'Reputation', 'Answers', 'Views', 'Upvotes', 'count']
############################################################################################# Mean Aggs
unames = xyz.sort_values(by = 'count', ascending = False).reset_index()['Username'].values.astype('int64')
count = xyz.sort_values(by = 'count', ascending = False).reset_index()['count'].values.astype('int64')
answers = xyz.sort_values(by = 'count', ascending = False).reset_index()['Answers'].values.astype('int64')
views = xyz.sort_values(by = 'count', ascending = False).reset_index()['Views'].values.astype('int64')
repo = xyz.sort_values(by = 'count', ascending = False).reset_index()['Reputation'].values.astype('int64')
d = {}
for idx,k in enumerate(unames):
d[k] = count[idx]
df_raw['agg_count'] = df_raw['Username'].map(d)
d = {}
for idx,k in enumerate(unames):
d[k] = answers[idx]
df_raw['agg_answers'] = df_raw['Username'].map(d)
d = {}
for idx,k in enumerate(unames):
d[k] = views[idx]
df_raw['agg_views'] = df_raw['Username'].map(d)
d = {}
for idx,k in enumerate(unames):
d[k] = repo[idx]
df_raw['agg_repo'] = df_raw['Username'].map(d)
xyz = pd.concat([df_test.groupby('Username').mean(),df_test.groupby('Username').count()], axis = 1).iloc[:,:-4]
xyz.columns = ['ID', 'Reputation', 'Answers', 'Views', 'count']
########################################################################################## Mean Aggregates
unames = xyz.sort_values(by = 'count', ascending = False).reset_index()['Username'].values.astype('int64')
count = xyz.sort_values(by = 'count', ascending = False).reset_index()['count'].values.astype('int64')
answers = xyz.sort_values(by = 'count', ascending = False).reset_index()['Answers'].values.astype('int64')
views = xyz.sort_values(by = 'count', ascending = False).reset_index()['Views'].values.astype('int64')
repo = xyz.sort_values(by = 'count', ascending = False).reset_index()['Reputation'].values.astype('int64')
d = {}
for idx,k in enumerate(unames):
d[k] = count[idx]
df_test['agg_count'] = df_test['Username'].map(d)
d = {}
for idx,k in enumerate(unames):
d[k] = answers[idx]
df_test['agg_answers'] = df_test['Username'].map(d)
d = {}
for idx,k in enumerate(unames):
d[k] = views[idx]
df_test['agg_views'] = df_test['Username'].map(d)
d = {}
for idx,k in enumerate(unames):
d[k] = repo[idx]
df_test['agg_repo'] = df_test['Username'].map(d)
df_test.head(3)
add_trans = ['Reputation', 'Answers', 'Username', 'Views', 'agg_count', 'agg_answers', 'agg_views', 'agg_repo']
for col in add_trans:
df_raw[f'log_trans_{col}'.format(col)] = np.log(df_raw[col] + 1) #avoid log 0's if any
df_test[f'log_trans_{col}'.format(col)] = np.log(df_test[col] + 1) #avoid log 0's if any
df_raw['repo_per_Answers'] = df_raw['Reputation'] / (df_raw['Answers']+1)
df_raw['repo_per_Views'] = df_raw['Reputation'] / df_raw['Views']
df_test['repo_per_Answers'] = df_test['Reputation'] / (df_test['Answers'] +1)
df_test['repo_per_Views'] = df_test['Reputation'] / df_test['Views']
df_raw.shape, df_test.shape
# gby = pd.concat([df10000.groupby('Username').mean(),df10000.groupby('Username').count()], axis = 1).iloc[:,:-9]
# gby.columns = ['a', 'c', 'h', 'i', 'j', 'o', 'p', 'r', 's', 'x', 'count']
# gby.sort_values(by = 'count', ascending = False).head(3)[['a', 'c', 'h', 'i', 'j', 'o', 'p', 'r', 's', 'x', 'count']]
# gby.sort_values(by = 'count', ascending = False).drop('count', axis = 1).plot(kind = 'bar', stacked = True, figsize = (15,6))
# plt.figure()
# gby.sort_values(by = 'count', ascending = False)['count'].plot(kind = 'bar', figsize = (15,6));
# pd.concat([df_raw['Tag'].value_counts().sort_values(ascending=False),df_test['Tag'].value_counts().sort_values(ascending=False)],sort=False, axis =1,\
# keys=['Train_Stats', 'Test_Stats'])
# gby.shape
# gby['skill'] = gby['r']*1 + gby['o']*2 + gby['h']*3 + gby['s']*4 + gby['a']*5 + gby['i']*6 + gby['p']*7 + gby['j']*8 \
# + gby['c']*9
Explanation: gplearn
End of explanation
## logging: remember to apply np.exp again when producing predictions
df_raw.Upvotes = np.log(df_raw.Upvotes + 2)
target = df_raw.Upvotes.values
drop_cols = ['ID']
df_raw.drop(drop_cols+['Upvotes'],inplace=True,axis=1)
df_test.drop(drop_cols,inplace=True,axis=1)
sns.distplot(target)
df_raw.Tag = df_raw.Tag.astype('category')
train_cats(df_raw);
apply_cats(df_test, df_raw);
df_raw.Tag = df_raw.Tag.cat.codes
df_test.Tag = df_test.Tag.cat.codes
df_raw.fillna(0, inplace=True)
df_test.fillna(0, inplace=True)
m = RandomForestRegressor(n_jobs=-1)
m.fit(df_raw, target)
# print('Before -->>', df_raw.shape)
# df_raw.drop(index = df_raw.loc[list(man_not_in_test)].index, inplace=True)
# print('After -->>', df_raw.shape)
m = RandomForestRegressor(n_jobs=-1)
m.fit(df_raw, target)
m.score(df_raw,target)
Explanation: Baseline RF
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(df_raw, target, test_size=0.2, random_state=42)
def split_vals(a,n): return a[:n].copy(), a[n:].copy()
n_valid = 30000
n_trn = len(df_raw)-n_valid
raw_train, raw_valid = split_vals(df_raw, n_trn)
X_train, X_valid = split_vals(df_raw, n_trn)
y_train, y_valid = split_vals(target, n_trn)
X_train.shape, y_train.shape, X_valid.shape
def rmse(x,y): return math.sqrt(((x-y)**2).mean())
def print_score(m):
res = ['RMSLE X_train', rmse(m.predict(X_train), y_train), '\n RMSLE X_valid', rmse(m.predict(X_valid), y_valid),
'\n R**2 Train',m.score(X_train, y_train), '\n R**2 Valid', m.score(X_valid, y_valid)]
if hasattr(m, 'oob_score_'): res.append(['\n OOB_Score', m.oob_score_])
print(res)
m = RandomForestRegressor(n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
Explanation: todo define r^2
Wow, an r^2 of 0.9699 - that's great, right? Well, perhaps not...
Possibly the most important idea in machine learning is that of having separate training & validation data sets
End of explanation
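The description leaves "define r^2" as a todo, so a short note: R^2 compares the model's squared error against that of simply predicting the mean, R^2 = 1 - SS_res / SS_tot. A tiny helper of my own (equivalent to what m.score reports for a regressor) could be:
def r_squared(y_true, y_pred):
    # 1 - (residual sum of squares) / (total sum of squares)
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot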
m.fit(df,y)
preds = np.exp(m.predict(df_test)).astype('int32') - 1;
preds
submit = make_submission(preds)
submit.to_csv(f'{PATH}\\Adi_rf_08_58_31-07-2018.csv', index=False)
submit.head(2)
Explanation: It's Pathetic as We are Clearly Overfitting...
Have a look at the RM(L)SE Scores and the Accuracy...
They are way too off...
End of explanation
m = RandomForestRegressor(n_estimators=1, max_depth=3, bootstrap=False, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
draw_tree(m.estimators_[0], df_raw, precision=3)
m = RandomForestRegressor(n_estimators=1, bootstrap=False, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
Explanation: Single Tree
End of explanation
m = RandomForestRegressor(n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
preds = np.stack([t.predict(X_valid) for t in m.estimators_])
preds[:,0], np.mean(preds[:,0]), y_valid[0]
preds.shape
plt.plot([metrics.r2_score(y_valid, np.mean(preds[:i+1], axis=0)) for i in range(10)]);
Explanation: Bagging
End of explanation
m = RandomForestRegressor(n_estimators=20, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=40, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=80, n_jobs=-1)
m.fit(X_train, y_train)
print_score(m)
Explanation: The shape of this curve suggests that adding more trees isn't going to help us much
End of explanation
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
X_valid.shape, X_train.shape
df_trn, y_trn, nas = proc_df(df_raw, 'Upvotes', max_n_cat=20)
X_train, X_valid = split_vals(df_trn, n_trn)
y_train, y_valid = split_vals(y_trn, n_trn)
set_rf_samples(50000)
m = RandomForestRegressor(n_jobs=-1, oob_score=True)
%time m.fit(X_train, y_train)
print_score(m)
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
reset_rf_samples()
m = RandomForestRegressor(n_estimators=40, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
X_train.shape
m = RandomForestRegressor(n_estimators=40, min_samples_leaf=3, n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
Explanation: OOF's
End of explanation
fi = rf_feat_importance(m, df_trn); fi[:10]
fi.plot('cols', 'imp', figsize=(10,6), legend=False);
def plot_fi(fi): return fi.plot('cols', 'imp', 'barh', figsize=(12,7), legend=False)
plot_fi(fi[:]);
to_keep = fi[fi.imp>0.005].cols; len(to_keep)
df_keep = df_raw[to_keep].copy()
X_train, X_valid = split_vals(df_keep, n_trn)
from scipy.cluster import hierarchy as hc
corr = np.round(scipy.stats.spearmanr(df_keep).correlation, 4)
corr_condensed = hc.distance.squareform(1-corr)
z = hc.linkage(corr_condensed, method='average')
fig = plt.figure(figsize=(16,10))
dendrogram = hc.dendrogram(z, labels=df_keep.columns, orientation='left', leaf_font_size=16)
m = RandomForestRegressor(n_estimators=100, min_samples_leaf=3, max_features=0.5,
n_jobs=-1, oob_score=True)
m.fit(X_train, y_train)
print_score(m)
fi = rf_feat_importance(m, df_keep)
plot_fi(fi);
def get_oob(df):
m = RandomForestRegressor(n_estimators=100, min_samples_leaf=5, max_features=0.6, n_jobs=-1, oob_score=True)
x, _ = split_vals(df, n_trn)
m.fit(x, y_train)
return m.oob_score_
get_oob(df_keep)
m
preds = np.exp(m.predict(df_test[to_keep])) - 1;
preds
submit = make_submission(preds)
submit.to_csv(f'{PATH}\\Adi_rf_08_58_31-07-2018.csv', index=False)
submit.head(2)
Explanation: RMSLE FOR VALID IS TOO HIGH, we need to change the randomness i guess
End of explanation |
9,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tuning query parameters for the MSMARCO Document dataset
The following shows a principled, data-driven approach to tuning parameters of a basic query, such as field boosts, using the MSMARCO Document dataset.
Step1: Baseline evaluation
Let's look at a basic query that might be used to search documents that have multiple fields. This query uses the multi_match query of type cross_fields to search for query terms across the url, title, and body fields. Here's what the search template looks like that we'll be using
Step2: Now we have a baseline score that we'll iterate towards improving.
Getting started with tuning
Step3: The first thing we notice is that there's really not much difference between the variants of minimum_should_match. There is however a very big difference between OR and AND operator. It's pretty clear that with the kinds of queries we get in this dataset (long-form, natural language questions), that OR is always better than AND. Based on this we're going to just assume that OR is always the better option and we'll continue to look for a good minimum_should_match. Let's do that in combination with tie_breaker now since those two parameters can have an impact on each other. We'll start simple again with a grid search over a limited number of parameter values for each, all of which are discrete values. With two dimensions and five parameter values each, we have a parameter space of size 25 and can test every possible value in a reasonable amount of time.
Step4: Well that looks pretty good and we see some improvements on the training set. Let's evaluate on the development dataset now using the best parameters we've found so far. This will show us where we are relative to the baseline query.
Step5: Definitely a good improvement and all we've done is optimize a few basic query parameters!
Advanced tuning
Step6: Great! It looks like we made an improvement over both the baseline and the previous best parameters found. This example shows how important it is to not tune field boosts manually, as there is no intuitive relationship between the boost values of fields.
Note that due to some randomness in executions of this process, re-running the optimization process may provide slightly different optimal boosts. Most importantly for field boost tuning in general, the relative value between the fields should be about the same.
Exploring a parameter space
Now that we see the results of a parameter tuning process we can actually look at the details to understand a little bit about the field boost parameter space in particular. That is, for every combination of the three boost parameters that we tried, we get a 3-dimensional space and can look at what kind of relationships there are between the various dimensions in the parameter space. Here's a plot showing the three parameters and all the combinations that were attempted. Note that we invert the MRR score (multiply by -1) since the library we are relying on (scikit-optimize) wants to minimize a score, while MRR should be maximized.
Step7: Experiment
Step8: Ok, so not a big difference to the step-wise method we used above, but maybe it was a bit simpler to just throw in a huge parameter space.
More iterations, smaller parameter space using hints from prior grid search
Let's see if we can do even better by throwing more iterations into it, and by using a smaller search space for parameters that we already have a good range for minimum_should_match and tie_breaker, from the above grid search. This is kind of a hint and maybe not a fair comparison, but let's see if it makes any difference. We're not against using any prior knowledge to our advantage!
Step9: Looks like we did about the same as the other methods in terms of MRR@100. In terms of simplicity though, this approach definitely wins as we can throw all the parameters in at once and not have to think too much about order and parameter dependencies.
Random search
Something we haven't tried yet is a fully random search. When initializing Bayesian optimization, we're doing a uniform random sample from the parameter space, then using those points to seed the process. A common approach is actually to just do all your search iterations with random parameters. Let's use the same parameter space and try out a fully random search with a lot of iterations and see what happens. | Python Code:
%load_ext autoreload
%autoreload 2
import importlib
import os
import sys
from elasticsearch import Elasticsearch
from skopt.plots import plot_objective
# project library
sys.path.insert(0, os.path.abspath('..'))
import qopt
importlib.reload(qopt)
from qopt.notebooks import evaluate_mrr100_dev, optimize_query_mrr100
from qopt.optimize import Config
# use a local Elasticsearch or Cloud instance (https://cloud.elastic.co/)
# es = Elasticsearch('http://localhost:9200')
es = Elasticsearch('http://35.234.93.126:9200')
# set the parallelization parameter `max_concurrent_searches` for the Rank Evaluation API calls
# max_concurrent_searches = 10
max_concurrent_searches = 30
index = 'msmarco-document'
template_id = 'cross_fields'
Explanation: Tuning query parameters for the MSMARCO Document dataset
The following shows a principled, data-driven approach to tuning parameters of a basic query, such as field boosts, using the MSMARCO Document dataset.
End of explanation
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id,
params={
'operator': 'OR',
'minimum_should_match': 50, # in percent/%
'tie_breaker': 0.0,
'url|boost': 1.0,
'title|boost': 1.0,
'body|boost': 1.0,
})
Explanation: Baseline evaluation
Let's look at a basic query that might be used to search documents that have multiple fields. This query uses the multi_match query of type cross_fields to search for query terms across the url, title, and body fields. Here's what the search template looks like that we'll be using:
json
{
"query": {
"multi_match": {
"type": "cross_fields",
"query": "{{query_string}}",
"operator": "{{operator}}",
"minimum_should_match": "{{minimum_should_match}}%",
"tie_breaker": "{{tie_breaker}}",
"fields": [
"url^{{url|boost}}",
"title^{{title|boost}}",
"body^{{body|boost}}"
]
}
}
}
First we'll run an evaluation on the "development" (or dev in MSMARCO terms) dataset using the standardized metric for MSMARCO, MRR@100. This will show us what our baseline is that we want to improve against. We'll use default values for all parameters in the template above.
End of explanation
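Since MRR@100 is the metric everything below optimizes, it may help to see how it is computed. A minimal, self-contained sketch of the definition (not the qopt implementation) might look like:
def mean_reciprocal_rank(ranked_doc_ids, relevant_doc_ids, k=100):
    # ranked_doc_ids: one list of result ids per query, best-first.
    # relevant_doc_ids: one set of relevant ids per query.
    total = 0.0
    for results, relevant in zip(ranked_doc_ids, relevant_doc_ids):
        for rank, doc_id in enumerate(results[:k], start=1):
            if doc_id in relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_doc_ids)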
%%time
_ = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'grid',
'space': {
'operator': ['OR', 'AND'],
'minimum_should_match': [30, 40, 50, 60, 70],
},
'default': {
'tie_breaker': 0.0,
'url|boost': 1.0,
'title|boost': 1.0,
'body|boost': 1.0,
}
}))
Explanation: Now we have a baseline score that we'll iterate towards improving.
Getting started with tuning: low dimensionality and discrete parameter values
There are a lot of parameters that we need to choose for this query. There are three boost parameters (one for each field) and then the operator, minimum_should_match, and tie_breaker parameters. We're going to break this up into steps and optimize using basic approaches first. Breaking things up like this allows us to tackle the problem in steps and avoids introducing a large amount of complexity and what is called a large parameter space. A parameter space is the total scope of all parameters combined, and all possible values of those parameters. If we have 3 parameters, we have a 3-dimensional space and each possible combination of parameters is what makes up the coordinates of that space.
Let's start with just using all the default values but looking at the difference between the operator values ['OR', 'AND'] in combination with a few options for minimum_should_match, since these parameters sometimes have an effect together. Since we have just two dimensions and a few possible values each, the parameter space is very small. In this case it's operator:2 * minimum_should_match:5 = 10. That's pretty small and so we'll use a simple grid search that will test every possible combination of those parameters, 10 tests in all.
When we do this search over the parameters space, we'll use a different dataset to avoid overfitting (coming up with parameter values that work only on one dataset) and then do another evaluation on the dev dataset (as above) after finding what we think are the optimal parameter values in order to check our results. To make this process a bit faster (although less robust), we'll continue to use MRR@100 but with a training query dataset of just 1,000 queries compared to the over 3,000 in the development dataset.
End of explanation
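To make the "parameter space of size 10" concrete, a grid search is just an exhaustive enumeration of the Cartesian product of the candidate value lists. For example (a sketch of the idea, not how the qopt library builds its grid internally):
from itertools import product

operators = ['OR', 'AND']
msm_values = [30, 40, 50, 60, 70]
grid = [{'operator': o, 'minimum_should_match': m}
        for o, m in product(operators, msm_values)]
print(len(grid))  # 2 * 5 = 10 parameter combinations to evaluate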
%%time
_, _, final_params_msmtb, _ = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'grid',
'space': {
'minimum_should_match': [30, 40, 50, 60, 70],
'tie_breaker': [0.0, 0.25, 0.5, 0.75, 1.0],
},
'default': {
'operator': 'OR',
'url|boost': 1.0,
'title|boost': 1.0,
'body|boost': 1.0,
}
}))
Explanation: The first thing we notice is that there's really not much difference between the variants of minimum_should_match. There is however a very big difference between the OR and AND operators. It's pretty clear that with the kinds of queries we get in this dataset (long-form, natural language questions), OR is always better than AND. Based on this we're going to just assume that OR is always the better option and we'll continue to look for a good minimum_should_match. Let's do that in combination with tie_breaker now since those two parameters can have an impact on each other. We'll start simple again with a grid search over a limited number of parameter values for each, all of which are discrete values. With two dimensions and five parameter values each, we have a parameter space of size 25 and can test every possible combination in a reasonable amount of time.
End of explanation
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_msmtb)
Explanation: Well that looks pretty good and we see some improvements on the training set. Let's evaluate on the development dataset now using the best parameters we've found so far. This will show us where we are relative to the baseline query.
End of explanation
final_params_msmtb
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'bayesian',
'num_iterations': 50,
'num_initial_points': 25,
'space': {
'url|boost': { 'low': 0.0, 'high': 10.0 },
'title|boost': { 'low': 0.0, 'high': 10.0 },
'body|boost': { 'low': 0.0, 'high': 10.0 },
},
'default': final_params_msmtb,
}))
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
Explanation: Definitely a good improvement and all we've done is optimize a few basic query parameters!
Advanced tuning: high dimensionality and continuous parameter values
So the question now is, can we improve this query further by tuning each of the field boosts? One of the difficulties with picking field boosts intuitively is that they are not necessarily interpretable in relation to each other. That is, a weight of 2.0 on the title field does not mean that it is two times more important than the body field with a boost of 1.0. In order to find the best field boosts, we'll need to search over a parameter space that includes a continuous range from 0.0 to 10.0.
We can't just use a grid search as we did above, testing each possible combination of the three field boosts, as an exhaustive search would require 1,000 evaluations if we test just at steps of 1 and no finer (10 steps per parameter, 3 parameters, makes 10 * 10 * 10 evaluations). Since the evaluation method is time consuming and a grid search over all combinations would be prohibitive, we'll use Bayesian optimization to pick the combinations of boosts. We're going to use a lot of iterations and probably more than necessary, but it'll be useful in a later step to plot the parameter space. Note that there are two iteration parameters that we need to set. First, num_iterations controls the total number of iterations in the process. Second, num_initial_points controls the number of iterations that will randomly select parameter values from the parameter space, which are used to seed the Bayesian optimization process. That means the number of iterations that actually use Bayesian optimization (prior results used to select the next parameter values to try) is in fact num_iterations - num_initial_points. It's important not to set num_initial_points too low or your Bayesian process may find the best parameters much more slowly (i.e. need more total num_iterations).
Since we already found the optimal operator, minimum_should_match and tie_breaker parameters in the steps above, we'll bring those forward as default parameters that don't need to be optimized any further. The current best parameters are as follows.
End of explanation
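To make the mechanics of Bayesian optimization a bit more concrete, here is a tiny standalone sketch using scikit-optimize directly, with a toy objective function standing in for the rank evaluation. The real optimize_query_mrr100 helper evaluates MRR@100 against Elasticsearch at each iteration instead; this is purely illustrative.
# Toy Bayesian optimization over three boost values with scikit-optimize
from skopt import gp_minimize
from skopt.space import Real

def toy_objective(boosts):
    url_boost, title_boost, body_boost = boosts
    # stand-in for "negative MRR@100 when querying with these boosts"
    return -(0.2 * url_boost + 0.5 * title_boost + 0.3 * body_boost) / (1.0 + sum(boosts))

result = gp_minimize(
    toy_objective,
    dimensions=[Real(0.0, 10.0), Real(0.0, 10.0), Real(0.0, 10.0)],
    n_calls=50,            # total iterations
    n_random_starts=25,    # random seed points before the surrogate model takes over
    random_state=42,
)
print(result.x, -result.fun)  # best boosts found and the corresponding (un-negated) score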
_ = plot_objective(metadata_boosts, sample_source='result')
Explanation: Great! It looks like we made an improvement over both the baseline and the previous best parameters found. This example shows how important it is to not tune field boosts manually, as there is no intuitive relationship between the boost values of fields.
Note that due to some randomness in executions of this process, re-running the optimization process may provide slightly different optimal boosts. Most importantly for field boost tuning in general, the relative value between the fields should be about the same.
Exploring a parameter space
Now that we see the results of a parameter tuning process we can actually look at the details to understand a little bit about the field boost parameter space in particlar. That is, for every combination of the three boost parameters that we tried, we can get a 3-dimensional space and look at what kind of relationships there are between the various dimensions in the parameter space. Here's a plot showing the three parameters and all the combinations that were attempted. Note that we invert the MRR score (multiple by -1) since the library we are relying (scikit-optimize) on wants to minimize a score, while MRR should be maximized.
End of explanation
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'bayesian',
'num_iterations': 75,
'num_initial_points': 30,
'space': {
'minimum_should_match': { 'low': 30, 'high': 70 },
'tie_breaker': { 'low': 0.0, 'high': 1.0 },
'url|boost': { 'low': 0.0, 'high': 10.0 },
'title|boost': { 'low': 0.0, 'high': 10.0 },
'body|boost': { 'low': 0.0, 'high': 10.0 },
},
'default': {
'operator': 'OR',
}
}),
verbose=False)
_ = plot_objective(metadata_boosts, sample_source='result')
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
Explanation: Experiment: all-in-one tuning
You might be wondering why we bother doing the tuning step-wise, first searching through a few parameters, then moving onto the next set. When parameters are dependent on others, it might make sense to put all the parameters into a single, giant parameter space and try to optimize them all at once. This is a more difficult optimization problem as the space is huge, but let's give it a shot and see what happens.
Same number of iterations, same parameter space
First we'll use the same total number of iterations and the same combined parameter space as in the above steps. This is more of an apples-to-apples comparison, then. However since we can easily search over continuous parameter spaces using Bayesian optimization techniques, we'll make it slightly harder but more complete by allowing for any value in our range of minimum_should_match and tie_breaker, instead of providing just the limited, discrete values as we did above (that was more for a grid search example than anything else).
Note that in the following examples we suppress progress output since we are using so many iterations.
End of explanation
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'bayesian',
'num_iterations': 100,
'num_initial_points': 40,
'space': {
'minimum_should_match': { 'low': 40, 'high': 60 }, # 50 +/- 10
'tie_breaker': { 'low': 0.1, 'high': 0.4 }, # 0.25 +/- 0.15
'url|boost': { 'low': 0.0, 'high': 10.0 },
'title|boost': { 'low': 0.0, 'high': 10.0 },
'body|boost': { 'low': 0.0, 'high': 10.0 },
},
'default': {
'operator': 'OR',
}
}),
verbose=False)
_ = plot_objective(metadata_boosts, sample_source='result')
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
Explanation: Ok, so not a big difference to the step-wise method we used above, but maybe it was a bit simpler to just throw in a huge parameter space.
More iterations, smaller parameter space using hints from prior grid search
Let's see if we can do even better by throwing more iterations into it, and by using a smaller search space for the parameters we already have a good range for (minimum_should_match and tie_breaker) from the above grid search. This is kind of a hint and maybe not a fair comparison, but let's see if it makes any difference. We're not against using prior knowledge to our advantage!
End of explanation
%%time
_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,
config_space=Config.parse({
'method': 'random',
'num_iterations': 75,
'space': {
'minimum_should_match': { 'low': 30, 'high': 70 },
'tie_breaker': { 'low': 0.0, 'high': 1.0 },
'url|boost': { 'low': 0.0, 'high': 10.0 },
'title|boost': { 'low': 0.0, 'high': 10.0 },
'body|boost': { 'low': 0.0, 'high': 10.0 },
},
'default': {
'operator': 'OR',
}
}),
verbose=False)
_ = plot_objective(metadata_boosts, sample_source='result')
%%time
_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)
Explanation: Looks like we did about the same as the other methods in terms of MRR@100. In terms of simplicity though, this approach definitely wins as we can throw all the parameters in at once and not have to think too much about order and parameter dependencies.
Random search
Something we haven't tried yet is a fully random search. When initializing Bayesian optimization, we're doing a uniform random sample from the parameter space, then using those points to seed the process. A common approach is actually to just do all your search iterations with random parameters. Let's use the same parameter space and try out a fully random search with a lot of iterations and see what happens.
End of explanation |
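Conceptually, each random-search iteration just draws every parameter independently and uniformly from its range, along these lines (illustrative only):
# Sketch of how a single random-search iteration could sample the parameter space
import random

def sample_random_params():
    return {
        'operator': 'OR',
        'minimum_should_match': random.uniform(30, 70),
        'tie_breaker': random.uniform(0.0, 1.0),
        'url|boost': random.uniform(0.0, 10.0),
        'title|boost': random.uniform(0.0, 10.0),
        'body|boost': random.uniform(0.0, 10.0),
    }

print(sample_random_params())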
9,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reverse engineering a dynamic web page
Initialization
Import modules needed below. It is assumed a downloader.py module is in the working directory.
Step1: First attempt
Step2: Clearly nothing happened. That is because the data is not resident in the downloaded web page. The actual data is obtained in response to a GET request generated by JavaScript code in the web page and sent to the server. This is called AJAX for Asynchronous JavaScript and XML, a technology for client-server data transfer without reloading the whole HTML page.
We can emulate the JS by sending the server a URL just like the one the JS script would have sent, like this
Step3: As expected we got the desired response. Notice that AJAX returns data in JSON format.
The question is, how do we find the right JS-like URL?
<img src="https
Step4: Hm, nothing. As expected, results are generated server-side by a JS request. Let's try again, this time using WebKit.
To do that we must assemble a bunch of objects that act web browser-like
Step5: Next create a container for web documents
Step6: A local event loop is created next
Step7: The loadFinished callback of the QWebView object is connected to the quit method of QEventLoop so that when the web page finishes loading the event loop will be stopped. The url to load must be wrapped in a QUrl object, which is passed to QWebView.
Step8: The QWebView load method is asynchronous so that execution will immediately pass to the next line while the web page is still being loaded. However we want to wait until the web page is fully loaded, so loop.exec_() is called to start the event loop.
We are almost done, a new web browser almost ready to rumble!
Step9: After the web page has been completely loaded the event loop will exit and execution moves to the next line, where the resulting HTML can be extracted as usual
Step10: This code was packed in the script webkit_render.py
Cool, heh?
<img src="https
Step11: When this command is run an empty browser window will pop up.
This is handy because with each command this window can be checked to see if Selenium worked as expected.
So next we load a web page in the browser using the get() method
Step12: Next we input country names in the Name box. To select it we use its ID, 'search_term'. Once it is found, use the send_keys() command to input data simulating actual key input. To select all countries use the '.' metacharacter.
Step13: We want to get all countries in a single search. To achieve that we set the page size in the browser to 1000, which is accomplished using JavaScript
Step14: Next we select the Search button. To select it we use its ID, 'search'. Once it is found, we simulate clicking it with the method click().
Step15: We need to wait for AJAX to complete the request before using the results. That is done with a timeout
Step16: Next we search for the desired element like before
Step17: Then we create a list of countries by extracting the text from each link that was found
Step18: Now we got that response. Before we leave, close the browser window we created. | Python Code:
import os, json
import lxml.html
import cssselect
import pprint
from PyQt4.QtGui import *
from PyQt4.QtCore import *
from PyQt4.QtWebKit import *
# go to working dir, where a module downloader.py should exist
os.chdir(r'C:\Users\ps\Desktop\python\work\web scraping')
from downloader import Downloader # script to download html page
base = r'http://example.webscraping.com/'
surl = r'http://example.webscraping.com/search' # url for example site's search page
durl = r'http://example.webscraping.com/dynamic' # url for JavaScript example
Explanation: Reverse engineering a dynamic web page
Initialization
Import modules needed below. It is assumed a downloader.py module is in the working directory.
End of explanation
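Since downloader.py is assumed rather than shown, here is a guess at a minimal stand-in based purely on how it is used below (D = Downloader(); html = D(url) returning raw bytes). The real module may add caching, throttling and retries.
# Hypothetical minimal Downloader, inferred from usage; not the actual downloader.py
import urllib.request, urllib.error

class Downloader:
    def __init__(self, user_agent='wswp'):
        self.user_agent = user_agent

    def __call__(self, url):
        request = urllib.request.Request(url, headers={'User-Agent': self.user_agent})
        try:
            return urllib.request.urlopen(request).read()  # raw bytes; caller decodes
        except urllib.error.URLError as e:
            print('Download error:', e.reason)
            return b''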
D = Downloader()
html = D(surl)
tree = lxml.html.fromstring(html.decode('utf-8')) # must decode else errors happen
tree.cssselect('div#results a')
Explanation: First attempt:
Just simple web scraping, downloading html page and searching for div results a
End of explanation
url = base + 'ajax/search.json?page=0&page_size=10&search_term=a'
html = D(url)
res = json.loads(html.decode('utf-8'))
pprint.pprint(res)
Explanation: Clearly nothing happened. That is because the data is not resident in the downloaded web page. The actual data is obtained in response to a GET request generated by JavaScript code in the web page and sent to the server. This is called AJAX for Asynchronous JavaScript and XML, a technology for client-server data transfer without reloading the whole HTML page.
We can emulate the JS by sending the server a URL just like the one the JS script would have sent, like this:
End of explanation
html = D(durl)
tree = lxml.html.fromstring(html.decode('utf-8'))
tree.cssselect('#result')[0].text_content()
Explanation: As expected we got the desired response. Notice that AJAX returns data in JSON format.
The question is, how do we find the right JS-like URL?
<img src="https://s-media-cache-ak0.pinimg.com/236x/ab/80/e9/ab80e9fc1f771001c8b48bddf74f92d2.jpg">
Using WebKit
This is a web-rendering engine that executes JavaScript. We use it in order to simulate a legitimate web browser's JS request to the server and receive the correct response.
First let's try to receive data from this new URL in the traditional manner:
End of explanation
app = QApplication([])
Explanation: Hm, nothing. As expected, results are generated server-side by a JS request. Lets try again, this time using WebKit.
To do that we must assemble a bunch of objects that act web browser-like:
The Qt framework requires creation of a QApplication first, to initialize stuff.
End of explanation
webview = QWebView()
Explanation: Next create a container for web documents:
End of explanation
loop = QEventLoop()
Explanation: A local event loop is created next:
End of explanation
webview.loadFinished.connect(loop.quit)
webview.load(QUrl(durl))
Explanation: The loadFinished callback of the QWebView object is connected to the quit method of QEventLoop so that when the web page finishes loading the event loop will be stopped. The url to load must be wrapped in a QUrl object, which is passed to QWebView.
End of explanation
loop.exec_()
Explanation: The QWebView load method is asynchronous so that execution will immediately pass to the next line while the web page is still being loaded. However we want to wait until the web page is fully loaded, so loop.exec_() is called to start the event loop.
We are almost done, a new web browser almost ready to rumble!
End of explanation
html = webview.page().mainFrame().toHtml()
tree = lxml.html.fromstring(html)
tree.cssselect('#result')[0].text_content()
Explanation: After the web page has been completely loaded the event loop will exit and execution moves to the next line, where the resulting HTML can be extracted as usual:
End of explanation
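The WebKit steps above fit naturally into a small reusable helper. Here is a sketch of what such a function might look like; this is an approximation built from the objects already imported above, not a verbatim copy of any script.
# Sketch: wrap the WebKit rendering steps into a reusable function
def webkit_download(url):
    app = QApplication.instance() or QApplication([])  # reuse the QApplication if one already exists
    webview = QWebView()
    loop = QEventLoop()
    webview.loadFinished.connect(loop.quit)   # stop the event loop once the page has loaded
    webview.load(QUrl(url))
    loop.exec_()                              # block here until loadFinished fires
    return webview.page().mainFrame().toHtml()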
from selenium import webdriver
driver = webdriver.Firefox()
Explanation: This code was packed in the script webkit_render.py
Cool, heh?
<img src="https://media3.giphy.com/media/MF1kR4YmC2Z20/200_s.gif">
Using Selenium
The advantage of using WebKit is full control to customize the browser renderer to behave as we need it to. If such flexibility is not needed then Selenium is an alternative.
Let's redo the previous example using Selenium and its API. The first step is to connect it to a web browser (Firefox in this example).
End of explanation
driver.get(surl)
Explanation: When this command is run an empty browser window will pop up.
This is handy because with each command this window can be checked to see if Selenium worked as expected.
So next we load a web page in the browser using the get() method:
End of explanation
driver.find_element_by_id('search_term').send_keys('.')
Explanation: Next we input country names in the Name box. To select it we use its ID, 'search_term'. Once it is found, use the send_keys() command to input data simulating actual key input. To select all countries use the '.' metacharacter.
End of explanation
js = "document.getElementById('page_size').options[1].text = '1000'"
driver.execute_script(js)
Explanation: We want to get all countries in a single search. To achieve that we set the page size in the browser to 1000, which is accomplished using JavaScript:
End of explanation
driver.find_element_by_id('search').click()
Explanation: Next we select the Search button. To select it we use its ID, 'search'. Once it is found, we simulate clicking it with the method click().
End of explanation
driver.implicitly_wait(30) # 30 seconds
Explanation: We need to wait for AJAX to complete the request before using the results. That is done with a timeout:
End of explanation
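An implicit wait applies a blanket timeout to every element lookup. If you would rather wait only until the AJAX results actually appear, Selenium also supports explicit waits; a sketch using the same selector as the next cell is shown below.
# Alternative sketch: explicitly wait for the results links instead of relying on an implicit wait
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, '#results a'))
)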
links = driver.find_elements_by_css_selector('#results a')
Explanation: Next we search for the desired element like before:
End of explanation
countries = [link.text for link in links]
pprint.pprint(countries)
Explanation: Then we create a list of countries by extracting the text from each link that was found:
End of explanation
driver.close()
Explanation: Now we got that response. Before we leave, close the browser window we created.
End of explanation |
9,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Feature Engineering </h1>
In this notebook, you will learn how to incorporate feature engineering into your pipeline.
<ul>
<li> Working with feature columns </li>
<li> Adding feature crosses in TensorFlow </li>
<li> Reading data from BigQuery </li>
<li> Creating datasets using Dataflow </li>
<li> Using a wide-and-deep model </li>
</ul>
Step1: After doing a pip install, restart your kernel by selecting kernel from the menu and clicking Restart Kernel before proceeding further
Step2: <h2> 1. Environment variables for project and bucket </h2>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads
Step4: <h2> 2. Specifying query to pull the data </h2>
Let's pull out a few extra columns from the timestamp.
Step5: Try the query above in https
Step6: First, let's define a function for preprocessing the data
Step7: Now, let's run the pipeline locally. This takes up to <b>5 minutes</b>. You will see a message "Done" when it is done.
Step8: 4. Run Beam pipeline on Cloud Dataflow
Run pipeline on cloud on a larger sample size.
Step9: The following step will take <b>15-20 minutes.</b> Monitor job progress on the Cloud Console, in the Dataflow section
Step10: Once the job completes, observe the files created in Google Cloud Storage
Step11: 5. Develop model with new inputs
Download the first shard of the preprocessed data to enable local development.
Step12: We have two new inputs in the INPUT_COLUMNS, three engineered features, and the estimator involves bucketization and feature crosses.
Step13: Try out the new model on the local sample (this takes <b>5 minutes</b>) to make sure it works fine.
Step14: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
Step15: 5. Train on cloud
This will take <b> 10-15 minutes </b> even though the prompt immediately returns after the job is submitted. Monitor job progress on the Cloud Console, in the AI Platform section and wait for the training job to complete.
Step16: The RMSE is now 8.33249, an improvement over the 9.3 that we were getting ... of course, we won't know until we train/validate on a larger dataset. Still, this is promising. But before we do that, let's do hyper-parameter tuning.
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
Step17: Optional
Step18: <h2> 6. Hyper-parameter tune </h2>
Look at <a href="hyperparam.ipynb">hyper-parameter tuning notebook</a> to decide what parameters to use for model. Based on that run, I ended up choosing
Step19: The RMSE after training on the 2-million-row dataset is \$3.03. This graph shows the improvements so far ... | Python Code:
%%bash
sudo pip install httplib2==0.12.0 apache-beam[gcp]==2.16.0
Explanation: <h1> Feature Engineering </h1>
In this notebook, you will learn how to incorporate feature engineering into your pipeline.
<ul>
<li> Working with feature columns </li>
<li> Adding feature crosses in TensorFlow </li>
<li> Reading data from BigQuery </li>
<li> Creating datasets using Dataflow </li>
<li> Using a wide-and-deep model </li>
</ul>
End of explanation
import tensorflow as tf
import apache_beam as beam
import shutil
print(tf.__version__)
Explanation: After doing a pip install, restart your kernel by selecting kernel from the menu and clicking Restart Kernel before proceeding further
End of explanation
import os
PROJECT = 'cloud-training-demos' # CHANGE THIS
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME. Use a regional bucket in the region you selected.
REGION = 'us-central1' # Choose an available region for Cloud MLE from https://cloud.google.com/ml-engine/docs/regions.
# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
## ensure we're using python3 env
os.environ['CLOUDSDK_PYTHON'] = 'python3'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
Explanation: <h2> 1. Environment variables for project and bucket </h2>
<li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li>
<li> Cloud training often involves saving and restoring model files. Therefore, we should <b>create a single-region bucket</b>. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available) </li>
</ol>
<b>Change the cell below</b> to reflect your Project ID and bucket name.
End of explanation
def create_query(phase, EVERY_N):
if EVERY_N == None:
EVERY_N = 4 #use full dataset
#select and pre-process fields
base_query =
SELECT
(tolls_amount + fare_amount) AS fare_amount,
DAYOFWEEK(pickup_datetime) AS dayofweek,
HOUR(pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
#add subsampling criteria by modding with hashkey
if phase == 'train':
query = "{} AND ABS(HASH(pickup_datetime)) % {} < 2".format(base_query,EVERY_N)
elif phase == 'valid':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 2".format(base_query,EVERY_N)
elif phase == 'test':
query = "{} AND ABS(HASH(pickup_datetime)) % {} == 3".format(base_query,EVERY_N)
return query
print(create_query('valid', 100)) #example query using 1% of data
Explanation: <h2> 2. Specifying query to pull the data </h2>
Let's pull out a few extra columns from the timestamp.
End of explanation
%%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi
Explanation: Try the query above in https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips if you want to see what it does (ADD LIMIT 10 to the query!)
<h2> 3. Preprocessing Dataflow job from BigQuery </h2>
This code reads from BigQuery and saves the data as-is on Google Cloud Storage. We can do additional preprocessing and cleanup inside Dataflow, but then we'll have to remember to repeat that preprocessing during inference. It is better to use tf.transform which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at this in future notebooks. For now, we are simply moving data from BigQuery to CSV using Dataflow.
While we could read from BQ directly from TensorFlow (See: https://www.tensorflow.org/api_docs/python/tf/contrib/cloud/BigQueryReader), it is quite convenient to export to CSV and do the training off CSV. Let's use Dataflow to do this at scale.
Because we are running this on the Cloud, you should go to the GCP Console (https://console.cloud.google.com/dataflow) to look at the status of the job. It will take several minutes for the preprocessing job to launch.
End of explanation
import datetime
####
# Arguments:
# -rowdict: Dictionary. The beam bigquery reader returns a PCollection in
# which each row is represented as a python dictionary
# Returns:
# -rowstring: a comma separated string representation of the record with dayofweek
# converted from int to string (e.g. 3 --> Tue)
####
def to_csv(rowdict):
days = ['null', 'Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
CSV_COLUMNS = 'fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat,passengers,key'.split(',')
rowdict['dayofweek'] = days[rowdict['dayofweek']]
rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
return rowstring
####
# Arguments:
# -EVERY_N: Integer. Sample one out of every N rows from the full dataset.
# Larger values will yield smaller sample
# -RUNNER: 'DirectRunner' or 'DataflowRunner'. Specfy to run the pipeline
# locally or on Google Cloud respectively.
# Side-effects:
# -Creates and executes dataflow pipeline.
# See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
####
def preprocess(EVERY_N, RUNNER):
job_name = 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/taxifare/ch4/taxi_preproc/'.format(BUCKET)
#dictionary of pipeline options
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': 'preprocess-taxifeatures' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S'),
'project': PROJECT,
'runner': RUNNER
}
#instantiate PipelineOptions object using options dictionary
opts = beam.pipeline.PipelineOptions(flags=[], **options)
#instantantiate Pipeline object using PipelineOptions
with beam.Pipeline(options=opts) as p:
for phase in ['train', 'valid']:
query = create_query(phase, EVERY_N)
outfile = os.path.join(OUTPUT_DIR, '{}.csv'.format(phase))
(
p | 'read_{}'.format(phase) >> beam.io.Read(beam.io.BigQuerySource(query=query))
| 'tocsv_{}'.format(phase) >> beam.Map(to_csv)
| 'write_{}'.format(phase) >> beam.io.Write(beam.io.WriteToText(outfile))
)
print("Done")
Explanation: First, let's define a function for preprocessing the data
End of explanation
preprocess(50*10000, 'DirectRunner')
%%bash
gsutil ls gs://$BUCKET/taxifare/ch4/taxi_preproc/
Explanation: Now, let's run the pipeline locally. This takes up to <b>5 minutes</b>. You will see a message "Done" when it is done.
End of explanation
%%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi
Explanation: 4. Run Beam pipeline on Cloud Dataflow
Run pipeline on cloud on a larger sample size.
End of explanation
preprocess(50*100, 'DataflowRunner')
#change first arg to None to preprocess full dataset
Explanation: The following step will take <b>15-20 minutes.</b> Monitor job progress on the Cloud Console, in the Dataflow section
End of explanation
%%bash
gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/
%%bash
#print first 10 lines of first shard of train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" | head
Explanation: Once the job completes, observe the files created in Google Cloud Storage
End of explanation
%%bash
if [ -d sample ]; then
rm -rf sample
fi
mkdir sample
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/train.csv-00000-of-*" > sample/train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/valid.csv-00000-of-*" > sample/valid.csv
Explanation: 5. Develop model with new inputs
Download the first shard of the preprocessed data to enable local development.
End of explanation
%%bash
grep -A 20 "INPUT_COLUMNS =" taxifare/trainer/model.py
%%bash
grep -A 50 "build_estimator" taxifare/trainer/model.py
%%bash
grep -A 15 "add_engineered(" taxifare/trainer/model.py
Explanation: We have two new inputs in the INPUT_COLUMNS, three engineered features, and the estimator involves bucketization and feature crosses.
End of explanation
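Since the grep output is not reproduced here, the sketch below illustrates, in TensorFlow 1.x feature-column terms, what bucketizing the pickup coordinates and crossing the resulting buckets might look like. The column names mirror the CSV header, but the bucket boundaries and the exact code in taxifare/trainer/model.py are assumptions.
# Illustrative sketch of bucketized and crossed feature columns (not copied from model.py)
import numpy as np
NBUCKETS = 16
latbuckets = np.linspace(38.0, 42.0, NBUCKETS).tolist()   # assumed NYC-ish latitude range
lonbuckets = np.linspace(-76.0, -72.0, NBUCKETS).tolist()  # assumed NYC-ish longitude range
plat = tf.feature_column.numeric_column('pickuplat')
plon = tf.feature_column.numeric_column('pickuplon')
b_plat = tf.feature_column.bucketized_column(plat, latbuckets)
b_plon = tf.feature_column.bucketized_column(plon, lonbuckets)
# cross latitude and longitude buckets into a single "pickup cell" feature for the wide part
ploc = tf.feature_column.crossed_column([b_plat, b_plon], NBUCKETS * NBUCKETS)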
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m trainer.task \
--train_data_paths=${PWD}/sample/train.csv \
--eval_data_paths=${PWD}/sample/valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=10 \
--job-dir=/tmp
%%bash
ls taxi_trained/export/exporter/
Explanation: Try out the new model on the local sample (this takes <b>5 minutes</b>) to make sure it works fine.
End of explanation
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all
%%writefile /tmp/test.json
{"dayofweek": "Sun", "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403, "passengers": 2}
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
gcloud ml-engine local predict \
--model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
Explanation: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
End of explanation
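The engineered features themselves are just coordinate differences plus a Euclidean distance. A sketch along those lines is below; the actual add_engineered in model.py may differ in details, so treat this as illustrative.
# Sketch of the engineered features: latdiff, londiff and the euclidean distance between them
def add_engineered_sketch(features):
    latdiff = features['pickuplat'] - features['dropofflat']
    londiff = features['pickuplon'] - features['dropofflon']
    features['latdiff'] = latdiff
    features['londiff'] = londiff
    features['euclidean'] = tf.sqrt(latdiff * latdiff + londiff * londiff)
    return features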
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://$BUCKET/taxifare/ch4/taxi_preproc/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/ch4/taxi_preproc/valid*" \
--train_steps=5000 \
--output_dir=$OUTDIR
Explanation: 5. Train on cloud
This will take <b> 10-15 minutes </b> even though the prompt immediately returns after the job is submitted. Monitor job progress on the Cloud Console, in the AI Platform section and wait for the training job to complete.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${model_dir} --all
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
gcloud ml-engine local predict \
--model-dir=${model_dir} \
--json-instances=/tmp/test.json
Explanation: The RMSE is now 8.33249, an improvement over the 9.3 that we were getting ... of course, we won't know until we train/validate on a larger dataset. Still, this is promising. But before we do that, let's do hyper-parameter tuning.
<b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b>
End of explanation
%%bash
MODEL_NAME="feateng"
MODEL_VERSION="v1"
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
echo "Run these commands one-by-one (the very first time, you'll create a model and then create a version)"
#gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ai-platform delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version $TFVERSION
%%bash
gcloud ai-platform predict --model=feateng --version=v1 --json-instances=/tmp/test.json
Explanation: Optional: deploy model to cloud
End of explanation
%%bash
WARNING -- this uses significant resources and is optional. Remove this line to run the block.
OUTDIR=gs://${BUCKET}/taxifare/feateng2m
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
TIER=STANDARD_1
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/taxifare/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=$TIER \
--runtime-version=$TFVERSION \
-- \
--train_data_paths="gs://${BUCKET}/taxifare/train*" \
--eval_data_paths="gs://${BUCKET}/taxifare/valid*" \
--output_dir=$OUTDIR \
--train_steps=418168 \
--train_batch_size=512 --nbuckets=16 --hidden_units="64 64 64 8"
Explanation: <h2> 6. Hyper-parameter tune </h2>
Look at <a href="hyperparam.ipynb">hyper-parameter tuning notebook</a> to decide what parameters to use for model. Based on that run, I ended up choosing:
<ol>
<li> train_batch_size: 512 </li>
<li> nbuckets: 16 </li>
<li> hidden_units: "64 64 64 8" </li>
</ol>
This gives an RMSE of 5, a considerable improvement from the 8.3 we were getting earlier ... Let's try this over a larger dataset.
Optional: Run Cloud training on 2 million row dataset
This run uses as input 2 million rows and takes ~20 minutes with 10 workers (STANDARD_1 pricing tier). The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). Because the Dataflow preprocessing takes about 15 minutes, we train here using CSV files in a public bucket.
When doing distributed training, use train_steps instead of num_epochs. The distributed workers don't know how many rows there are, but we can calculate train_steps = num_rows * num_epochs / train_batch_size. In this case, we have 2141023 * 100 / 512 = 418168 train steps.
End of explanation
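That train-step arithmetic can be sanity-checked in a couple of lines:
# Sanity check of the train_steps calculation quoted above
num_rows, num_epochs, train_batch_size = 2141023, 100, 512
print(num_rows * num_epochs // train_batch_size)  # -> 418168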
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame({'Lab' : pd.Series(['1a', '2-3', '4a', '4b', '4c']),
'Method' : pd.Series(['Heuristic Benchmark', 'tf.learn', '+Feature Eng.', '+ Hyperparam', '+ 2m rows']),
'RMSE': pd.Series([8.026, 9.4, 8.3, 5.0, 3.03]) })
ax = sns.barplot(data = df, x = 'Method', y = 'RMSE')
ax.set_ylabel('RMSE (dollars)')
ax.set_xlabel('Labs/Methods')
plt.plot(np.linspace(-20, 120, 1000), [5] * 1000, 'b');
%%bash
gsutil -m mv gs://${BUCKET}/taxifare/ch4/ gs://${BUCKET}/taxifare/ch4_1m/
Explanation: The RMSE after training on the 2-million-row dataset is \$3.03. This graph shows the improvements so far ...
End of explanation |
9,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'bcc-esm1', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: BCC
Source ID: BCC-ESM1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
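For a multi-valued property such as this one (Cardinality 1.N, signalled by the plural 'PROPERTY VALUE(S)' header), one plausible way to record several of the listed choices in the cell above is to call DOC.set_value once per entry. The repetition is an assumption based on that header, and the selected values are purely illustrative:
DOC.set_value("perfect connectivity")
DOC.set_value("Darcian flow")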
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
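As a purely illustrative example, a model using an explicit diffusion scheme would complete the corresponding cell above with one of the listed choices:
DOC.set_value("Explicit diffusion")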
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify where river water can be re-evaporated (e.g. flood plains, irrigation)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
9,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boundary Layer Solver
This notebook will develop a numerical method for solving the boundary layer momentum integral equation using Pohlhausen velocity profiles.
Momentum integral equation
In the boundary layer portion of the course we derived the governing equations for a boundary layer using the concept of a velocity profile
$$u = u_e(x) f(\eta), \quad \eta=\frac y{\delta(x)}$$
where $u_e$ is the local free stream velocity and $\delta$ is the boundary layer thickness. Note that $x$ is the distance along the wall from the leading edge and $y$ is the distance from the wall.
This enables the development of the momentum integral equation
$$ \frac 12 c_f = \frac{u_e'}{u_e}(\delta_1+2\delta_2)+\delta_2' $$
which balances the local wall friction with the change in the boundary layer profile. The tick mark indicates a derivative, ie $u_e'=\frac{du_e}{dx}$.
The goal is to use the momentum equation to determine how the boundary layer develops, predicting the friction drag and the point of separation.
The velocity $u_e$ (and $u_e'$) is considered to be prescribed by the potential flow solution, but there are still too many unknowns. We need to choose a profile to develop this further...
Pohlhausen profile
The Pohlhausen profile is used to describe a laminar velocity profile exposed to external pressure gradients. The profile is defined as
$$ \frac u {u_e} = f(\eta) = P_F(\eta)+\lambda P_G(\eta) $$
where $\lambda$ is the shape factor, given by
$$ \lambda = \frac {\delta^2}\nu u_e'$$
and the profile shapes are defined by
$ P_F = 2\eta-2\eta^3+\eta^4 $ is the flat plate profile
$ P_G = \frac\eta 6 (1-\eta)^3 $ is the modification for pressure gradients
These can be easily defined using a set of Python functions
Step1: Change $\lambda$ below to see its effect on the profile shape.
Step2: Quiz 1
What value of $\lambda$ denotes separated flow?
$\lambda$<-12
$\lambda$=0
$\lambda$>12
Using the Pohlhausen profile, the various factors in the momentum integral equation are defined as
$\frac{\delta_1}\delta = \int_0^1 (1-f) d\eta = \frac3{10}-\lambda\frac1{120}$
$\frac{\delta_2}\delta = \int_0^1 f(1-f) d\eta = \frac{37}{315}-\lambda\frac1{945}-\lambda^2\frac1{9072}$
$\frac 12 c_f Re_\delta =f'(0)= 2+\lambda\frac1{6}$
where $Re_\delta = \frac{u_e\delta}\nu$ is the local boundary layer Reynolds number.
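As a quick sanity check on these polynomials, setting $\lambda=0$ recovers the familiar flat-plate (zero pressure gradient) values:
$$\frac{\delta_1}\delta = \frac3{10}, \qquad \frac{\delta_2}\delta = \frac{37}{315}\approx 0.117, \qquad \frac 12 c_f Re_\delta = 2$$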
Step3: Note that these are all polynomial functions of $\lambda$. Since $u_e$ is given by potential flow and $\lambda = \frac {\delta^2}\nu u_e'$, the only unknown in the momentum equation is now $\delta(x)$!
Stagnation point condition
Now we need to write the momentum equation in terms of $\delta$ (and $\lambda$) and solve. This equation needs to be valid from the leading edge all the way to the point of separation.
For any body with finite thickness the boundary layer will begin at the stagnation point at the front of the body. However, describing the boundary layer at a stagnation point is somewhat tricky.
Quiz 2
Which relationships are true at a stagnation point?
$u_e = 0$
$u_e' = 0$
$\delta/x << 1$
$c_f$ is singular
That's no good - the momentum equation will be singular at the leading edge. We can avoid this problem by multiplying the whole equation by $Re_\delta$, leading to
Step4: Using this definition, the momentum equation is
$$ g_1(\lambda) = Re_\delta \delta_2'$$
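Here $g_1$ collects every term of the scaled momentum equation that does not involve $\delta_2'$. Written out with the Pohlhausen ratios above, it is
$$ g_1(\lambda) = f'(0) - \lambda\left(\frac{\delta_1}{\delta} + 2\frac{\delta_2}{\delta}\right) $$
which is a polynomial in $\lambda$ alone.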
Quiz 3
The equation above further simplifies at the stagnation point. Which is correct?
$g_1 = 0$
$g_1 = Re_\delta$
$ \frac 12 c_f = 0$
Solving this equation will determine our initial condition $\lambda_0$. Using my vast Google skills I found the bisect function in scipy.optimize which will solve for the root.
Step5: With the value of $\lambda_0$ determined, the initial condition $\delta_0$ is simply
$$ \delta_0 = \sqrt{\frac{\nu \lambda_0}{u_e'(x_0)}} $$
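A minimal sketch of this initialization, assuming the g_1 function from the accompanying code cells, and that nu and the stagnation-point edge-velocity gradient (named du_e0 here purely for illustration) are available:
import numpy
from scipy.optimize import bisect
lam0 = bisect(g_1, -12, 12)           # bracket the root of g_1 over the physical range of lambda
delta0 = numpy.sqrt(nu*lam0/du_e0)    # du_e0 = u_e'(x_0) at the stagnation point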
Pohlhausen momentum equation
The only thing left to do is write $\delta_2'$ in terms of $\delta'$. Using $F=\frac{\delta_2}\delta$ we have
$$ \delta_2' = \frac{d}{dx}(F\delta) $$
From the line plot above, we see that $F$ is nearly unchanged across the whole range of $\lambda$, so we will treat it as a constant. Therefore the complete Pohlhausen momentum equation is
$$ g_1 = Re_\delta F \delta'$$
Isolating the derivative, we have
$$ \delta'= \frac{g_1(\lambda)}{Re_\delta F(\lambda)} $$
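As a sketch only, not the notebook's own implementation, this right-hand side can be coded directly from the expression above, reusing the g_1 and mom_ratio functions from the accompanying code cells and treating u_e and du_e as callables of x (an assumption made purely for illustration):
def ddx_delta(delta, x, u_e, du_e, nu):
    lam = delta**2*du_e(x)/nu              # shape factor lambda
    Re_d = u_e(x)*delta/nu                 # local Reynolds number Re_delta
    return g_1(lam)/(Re_d*mom_ratio(lam))  # delta' from the Pohlhausen momentum equation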
Step6: Let's plot the functions of $\lambda$ to get a feel for how the boundary layer will develop.
Step7: Quiz 4
What will happen if $\lambda>\lambda_0$?
Flat plate boundary layer flow.
The boundary layer will shrink.
The Pohlausen equation will be singular.
Ordinary differential equations
The momentum equation above is an ordinary differential equation (ODE), having the form
$$ \psi' = g(\psi(x),x) $$
where the derivative is only a function of the variable $\psi$ and one independent variable $x$. All ODEs have an important feature in common: the derivative at any point depends only on the solution up to that point, so the equation can be integrated numerically by marching along $x$ from an initial condition.
Step8: In this code we've made the integrand g a function of psi and the index i. Note that g_i_1=$g_{i+1}$ and we've passed $i+1$ as the index. We've also left the option for additional arguments to be passed to g as *args which is required for the boundary layer ODE.
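A minimal sketch of such a predictor-corrector (Heun) integrator, assuming the g(psi, i, *args) signature described above; details may differ from the notebook's own heun:
import numpy
def heun(g, psi0, x, *args):
    psi = numpy.full_like(x, psi0, dtype=float)  # solution array seeded with the initial condition
    for i in range(len(x)-1):
        h = x[i+1]-x[i]
        g_i = g(psi[i], i, *args)                # slope at the current point
        guess = psi[i]+h*g_i                     # Euler predictor
        g_i_1 = g(guess, i+1, *args)             # slope at the predicted point
        psi[i+1] = psi[i]+0.5*h*(g_i+g_i_1)      # corrector: average the two slopes
    return psi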
Before we get to that, let's test heun using $\psi'=\psi$ with $\psi_0=1$, since we know the solution is $\psi = e^x$
Step9: Looks good, only 1% error.
Bonus
Step10: where u_e, du_e, and nu are the extra arguments, needed to compute $Re_\delta$ and $\lambda$.
Then we use this function and heun to march from the initial condition $\lambda_0,\delta_0$ along the boundary layer until we reach the point of separation at $\lambda<-12$
Step11: and we're done!
Let's test it on the flow around a circle.
In this case the boundary layer will march around the circle from $s=0,\ldots,R\pi$. Let's set the parameters $R=1$, $U_\infty=1$ and $Re_R=10^5$, such that $\nu=10^{-5}$. The tangential velocity around a circular cylinder using potential flow is simply
$$u_e = 2\sin(s)$$
Now that we've defined march we can set up and solve for the boundary layer in just a few lines of code
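A sketch of that set-up with illustrative names; the call to march is commented out because its exact signature is defined in the notebook's own cell and is not shown here:
import numpy
N = 32                              # number of points along the surface (illustrative choice)
s = numpy.linspace(0, numpy.pi, N)  # arc length around the half-circle, R = 1
u_e = 2.*numpy.sin(s)               # potential-flow edge velocity u_e = 2 sin(s)
du_e = 2.*numpy.cos(s)              # its derivative du_e/ds
nu = 1e-5                           # so that Re_R = U R/nu = 1e5
# delta, lam, iSep = march(s, u_e, du_e, nu)   # hypothetical call; check the notebook's definition of march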
Step12: Let's plot the boundary layer thickness on the circle compared to the exact solution for a laminar flat plate boundary layer from Blasius.
Step13: The circle solution is completely different due to the external pressure gradients.
The boundary layer growth is stunted on the front body
$\delta$ increases rapidly as the flow approaches the midbody
The flow separates around $1.87$ radians $\approx 107^\circ$
This is in good agreement with Hoerner's Fluid-Dynamic Drag, which states that theoretical laminar separation occurs at ~$110^\circ$ for this case.
Quiz 6
How does the separation point depend on $\nu$?
Increasing $\nu$ delays separation
Decreasing $\nu$ delays separation
Changing $\nu$ has no effect on separation
We know analytically that $\delta$ scales as $\sqrt\nu$ (which you can double check using the code above), and therefore $\lambda=\frac{\delta^2}\nu u_e'$ doesn't depend on $\nu$ at all. Since the separation point is determined by $\lambda$, it is also independent of $\nu$.
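The same conclusion in one line: since $\delta\propto\sqrt\nu$,
$$\lambda = \frac{\delta^2}{\nu}u_e' \propto \frac{\nu}{\nu}u_e' = u_e'$$
so the shape factor, and with it the separation point, carries no dependence on $\nu$.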
Fluids fundamental
Step14: Please ignore the cell below. It just loads our style for the notebooks. | Python Code:
import numpy
def pohlF(eta): return 2*eta-2*eta**3+eta**4
def pohlG(eta): return eta/6*(1-eta)**3
from matplotlib import pyplot
%matplotlib inline
def pohlPlot(lam):
pyplot.xlabel(r'$u/u_e$', fontsize=16)
pyplot.axis([-0.1,1.1,0,1])
pyplot.ylabel(r'$y/\delta$', fontsize=16)
eta = numpy.linspace(0.0,1.0,100)
pyplot.plot(pohlF(eta),eta, lw=1, c='k', label=r'$P_F$')
pyplot.plot(pohlF(eta)+lam*pohlG(eta),eta, lw=2, c='g', label=r'$P_F+\lambda P_G$')
pyplot.legend(loc='upper left')
Explanation: Boundary Layer Solver
This notebook will develop a numerical method for solving the boundary layer momentum integral equation using Pohlhausen velocity profiles.
Momentum integral equation
In the boundary layer portion of the course we derived the governing equations for a boundary layer using the concept of a velocity profile
$$u = u_e(x) f(\eta), \quad \eta=\frac y{\delta(x)}$$
where $u_e$ is the local free stream velocity and $\delta$ is the boundary layer thickness. Note that $x$ is the distance along the wall from the leading edge and $y$ is the distance from the wall.
This enables the development of the momentum integral equation
$$ \frac 12 c_f = \frac{u_e'}{u_e}(\delta_1+2\delta_2)+\delta_2' $$
which balances the local wall friction with the change in the boundary layer profile. The tick mark indicates a derivative, ie $u_e'=\frac{du_e}{dx}$.
The goal is to use the momentum equation to determine how the boundary layer develops, predicting the friction drag and the point of separation.
The velocity $u_e$ (and $u_e'$) is considered to be prescribed by the potential flow solution, but there are still too many unknowns. We need to choose a profile to develop this further...
Pohlhausen profile
The Pohlhausen profile is used to describe a laminar velocity profile exposed to external pressure gradients. The profile is defined as
$$ \frac u {u_e} = f(\eta) = P_F(\eta)+\lambda P_G(\eta) $$
where $\lambda$ is the shape factor, given by
$$ \lambda = \frac {\delta^2}\nu u_e'$$
and the profile shapes are defined by
$ P_F = 2\eta-2\eta^3+\eta^4 $ is the flat plate profile
$ P_G = \frac\eta 6 (1-\eta)^3 $ is the modification for pressure gradients
These can be easly defined using a set of python functions
End of explanation
pohlPlot(lam=7)
Explanation: Change $\lambda$ below to see its effect on the profile shape.
End of explanation
def disp_ratio(lam): return 3./10.-lam/120.
def mom_ratio(lam): return 37./315.-lam/945.-lam**2/9072.
def df_0(lam): return 2+lam/6.
pyplot.xlabel(r'$\lambda$', fontsize=16)
lam = numpy.linspace(-12,12,100)
pyplot.plot(lam,disp_ratio(lam), lw=2, label=r'$\delta_1/\delta$')
pyplot.plot(lam,mom_ratio(lam), lw=2, label=r'$\delta_2/\delta$')
pyplot.plot(lam,df_0(lam)/10., lw=2, label=r'$c_f Re_\delta/20$')
pyplot.legend(loc='upper right')
Explanation: Quiz 1
What value of $\lambda$ denotes separated flow?
$\lambda$<-12
$\lambda$=0
$\lambda$>12
Using the Pohlhausen profile, the various factors in the momentum integral equation are defined as
$\frac{\delta_1}\delta = \int_0^1 (1-f) d\eta = \frac3{10}-\lambda\frac1{120}$
$\frac{\delta_2}\delta = \int_0^1 f(1-f) d\eta = \frac{37}{315}-\lambda\frac1{945}-\lambda^2\frac1{9072}$
$\frac 12 c_f Re_\delta =f'(0)= 2+\lambda\frac1{6}$
where $Re_\delta = \frac{u_e\delta}\nu$ is the local boundary layer Reynolds number.
End of explanation
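Before moving on, a quick sanity check is possible with the functions just defined: at $\lambda=0$ the Pohlhausen profile should behave like a zero-pressure-gradient profile, and its shape factor $H=\delta_1/\delta_2$ can be compared with the Blasius flat-plate value of about $2.59$. The two lines below are only a sketch of that check.
H_flat = disp_ratio(0.)/mom_ratio(0.)  # Pohlhausen shape factor at lambda = 0, roughly 2.55
H_flat                                 # close to the Blasius value of about 2.59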
def g_1(lam): return df_0(lam)-lam*(disp_ratio(lam)+2*mom_ratio(lam))
Explanation: Note that these are all polynomial functions of $\lambda$. Since $u_e$ is given by potential flow and $\lambda = \frac {\delta^2}\nu u_e'$, the only unknown in the momentum equation is now $\delta(x)$!
Stagnation point condition
Now we need to write the momentum equation in terms of $\delta$ (and $\lambda$) and solve. This equation needs to be valid from the leading edge all the way to the point of separation.
For any body with finite thickness the boundary layer will begin at the stagnation point at the front of the body. However, describing the boundary layer at a stagnation point is somewhat tricky.
Quiz 2
Which relationships are true at a stagnation point?
$u_e = 0$
$u_e' = 0$
$\delta/x << 1$
$c_f$ is singular
That's no good - the momentum equation will be singular at the leading edge. We can avoid this problem by multiplying the whole equation by $Re_\delta$, leading to:
$$ \frac 12 c_f Re_\delta = \frac\delta\nu u_e' [\delta_1+2\delta_2]+Re_\delta \delta_2'$$
The first term on the RHS can be simplified by dividing the brackets by $\delta$ and multiplying by $\delta$ outside to produce the definition of $\lambda$. This lets us group the terms that depend only on $\lambda$ together to define
$$ g_1(\lambda) = \frac 12 c_f Re_\delta - \lambda \left[\frac{\delta_1}{\delta}+2\frac{\delta_2}\delta\right]$$
End of explanation
from scipy.optimize import bisect
lam0 = bisect(g_1,-12,12) # use bisect method to find root between -12...12
print 'lambda_0 = ',lam0
Explanation: Using this definition, the momentum equation is
$$ g_1(\lambda) = Re_\delta \delta_2'$$
Quiz 3
The equation above further simplifies at the stagnation point. Which is correct?
$g_1 = 0$
$g_1 = Re_\delta$
$ \frac 12 c_f = 0$
Solving this equation will determine our initial condition $\lambda_0$. Using my vast Google skills, I found the bisect function in scipy.optimize, which will solve for the root.
End of explanation
def ddx_delta(Re_d,lam):
if Re_d==0: return 0 # Stagnation point condition
return g_1(lam)/mom_ratio(lam)/Re_d # delta'
Explanation: With the value of $\lambda_0$ determined, the initial condition $\delta_0$ is simply
$$ \delta_0 = \sqrt{\frac{\nu \lambda_0}{u_e'(x_0)}} $$
Pohlhausen momentum equation
The only thing left to do is write $\delta_2'$ in terms of $\delta'$. Using $F=\frac{\delta_2}\delta$ we have
$$ \delta_2' = \frac{d}{dx}(F\delta) $$
From the line plot above, we see that $F$ is nearly unchanged across the whole range of $\lambda$, so we will treat it as a constant. Therefore the complete Pohlhausen momentum equation is
$$ g_1 = Re_\delta F \delta'$$
Isolating the derivative, we have
$$ \delta'= \frac{g_1(\lambda)}{Re_\delta F(\lambda)} $$
End of explanation
pyplot.xlabel(r'$\lambda$', fontsize=16)
pyplot.ylabel(r'$g_1/F$', fontsize=16)
pyplot.plot(lam,ddx_delta(1,lam), lw=2)
pyplot.scatter(lam0,0, s=100, c='r')
pyplot.text(lam0,3, r'$\lambda_0$',fontsize=15)
Explanation: Let's plot the functions of $\lambda$ to get a feel for how the boundary layer will develop.
End of explanation
def heun(g,psi_i,i,dx,*args):
g_i = g(psi_i,i,*args) # integrand at i
tilde_psi = psi_i+g_i*dx # predicted estimate at i+1
g_i_1 = g(tilde_psi,i+1,*args) # integrand at i+1
return psi_i+0.5*(g_i+g_i_1)*dx # corrected estimate
Explanation: Quiz 4
What will happen if $\lambda>\lambda_0$?
Flat plate boundary layer flow.
The boundary layer will shrink.
The Pohlhausen equation will be singular.
Ordinary differential equations
The momentum equation above is an ordinary differential equation (ODE), having the form
$$ \psi' = g(\psi(x),x) $$
where the derivative is only a function of the variable $\psi$ and one independent variable $x$. All ODEs have an important feature in common:
Mathematics fundamental: ODEs
Systems whose evolution depends only on their current state
This makes them easier to solve. If we integrate the ODE from $x_0$ to $x_1$ we have
$$ \psi(x_1) = \psi_1= \psi_0+\int_{x_0}^{x_1} g(\psi(x),x) dx $$
which means all we need to solve for $\psi_1$ is the initial condition $\psi_0$ and an estimate the RHS integral. And once we have $\psi_1$, we can get $\psi_2$, etc. In general we have
$$ \psi_{i+1}= \psi_i+\int_{x_i}^{x_{i+1}} g(\psi(x),x) dx \quad i=0,\ldots, N-1$$
This means the ODE can be solved by marching from $x=0$ to $x=L$. Compare this to the vortex panel method and its linear system of equations that needed to be solved simultaneously using matrices... This is easy.
Numerical integration
You've seen numerical ways to determine the area under a curve before, like the trapezoidal rule
$$ \int_{x_i}^{x_{i+1}} f(x) dx \approx \frac12[f(x_i)+f(x_{i+1})] \Delta x$$
where $\Delta x=x_{i+1}-x_i$
Quiz 5
What is the important difference between the integral above and the ODE integral?
$\psi_{i+1}$ is unknown
$g$ is unknown
$g$ is nonlinear
This means we have to split the numerical method into two steps. First we estimate the integral as $g(\psi_i,x_i)\Delta x$. This lets us predict an estimate of $\psi_{i+1}$
$$ \tilde\psi_{i+1}= \psi_i+ g(\psi_i,x_i)\Delta x $$
However, this one-sided estimate of the integral is very rough. In the next step we correct the prediction using the trapezoidal rule
$$ \psi_{i+1}= \psi_i+ \frac12[g(\psi_i,x_i)+g(\tilde\psi_{i+1},x_{i+1})]\Delta x$$
This is often called the predictor/corrector method, or Heun's method.
Let's code it up:
End of explanation
N = 20 # number of steps
x = numpy.linspace(0,numpy.pi,N) # set up x array from 0..pi
psi = numpy.full_like(x,1.) # psi array with phi0=1
def g_test(psi,i): return psi # define derivative function
for i in range(N-1): # march!
psi[i+1] = heun(g_test,psi[i],i,(x[i+1]-x[i]))
pyplot.plot(x,psi)
pyplot.plot(x,numpy.exp(x))
print 'exp(pi) ~ ', psi[N-1],', error = ',1-psi[N-1]/numpy.exp(numpy.pi)
Explanation: In this code we've made the integrand g a function of psi and the index i. Note that g_i_1=$g_{i+1}$ and we've passed $i+1$ as the index. We've also left the option for additional arguments to be passed to g as *args which is required for the boundary layer ODE.
Before we get to that, let's test heun using $\psi'=\psi$ with $\psi_0=1$, since we know the solution is $\psi = e^x$
End of explanation
def g_pohl(delta_i,i,u_e,du_e,nu):
Re_d = delta_i*u_e[i]/nu # compute local Reynolds number
lam = delta_i**2*du_e[i]/nu # compute local lambda
return ddx_delta(Re_d,lam) # get derivative
Explanation: Looks good, only 1% error.
Bonus: What is the error if we don't do the correction step?
Boundary layer on a circle
Returning to the boundary layer ODE, we first define a function which can be integrated by heun
End of explanation
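As a rough answer to the bonus question above, the sketch below repeats the $\psi'=\psi$ test without the corrector step, i.e. plain forward Euler, reusing x, N and g_test from the test cell. With the same step size the relative error grows to roughly 20%, compared with about 1% for heun; the exact numbers are only indicative.
psi_e = numpy.full_like(x,1.)       # predictor-only (Euler) version of the test above
for i in range(N-1):
    psi_e[i+1] = psi_e[i]+g_test(psi_e[i],i)*(x[i+1]-x[i])  # no corrector step
1-psi_e[N-1]/numpy.exp(numpy.pi)    # relative error, roughly 20%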
def march(x,u_e,du_e,nu):
delta0 = numpy.sqrt(lam0*nu/du_e[0]) # set delta0
delta = numpy.full_like(x,delta0) # delta array
lam = numpy.full_like(x,lam0) # lambda array
for i in range(len(x)-1): # march!
delta[i+1] = heun(g_pohl,delta[i],i,x[i+1]-x[i], # integrate BL using...
u_e,du_e,nu) # additional arguments
lam[i+1] = delta[i+1]**2*du_e[i+1]/nu # compute lambda
if abs(lam[i+1])>12: break # check stop condition
return delta,lam,i # return with separation index
Explanation: where u_e, du_e, and nu are the extra arguments, needed to compute $Re_\delta$ and $\lambda$.
Then we use this function and heun to march from the initial condition $\lambda_0,\delta_0$ along the boundary layer until we reach the point of separation at $\lambda<-12$
End of explanation
nu = 1e-5                               # viscosity, giving Re_R = 1e5 as stated above
N = 32 # number of steps
s = numpy.linspace(0,numpy.pi,N) # distance goes from 0..pi
u_e = 2.*numpy.sin(s) # velocity
du_e = 2.*numpy.cos(s) # gradient
delta,lam,iSep = march(s,u_e,du_e,nu) # solve!
Explanation: and we're done!
Let's test it on the flow around a circle.
In this case the boundary layer will march around the circle from $s=0,\ldots,R\pi$. Let's set the parameters $R=1$, $U_\infty=1$ and $Re_R=10^5$, such that $\nu=10^{-5}$. The tangential velocity around a circular cylinder using potential flow is simply
$$u_e = 2\sin(s)$$
Now that we've defined march we can set-up and solve for the boundary layer in just a few lines of code:
End of explanation
pyplot.ylabel(r'$\delta/R$', fontsize=16)
pyplot.xlabel(r'$s/R$', fontsize=16)
pyplot.plot(s[:iSep+1],delta[:iSep+1],lw=2,label='Circle')
pyplot.plot(s,s*5/numpy.sqrt(s/nu),lw=2,label='Flat plate')
pyplot.legend(loc='upper left')
pyplot.scatter(s[iSep],delta[iSep], s=100, c='r')
pyplot.text(s[iSep]+0.1,delta[iSep],'separation between\n'
+'%.2f' % s[iSep]+'<s<'+'%.2f' % s[iSep+1],fontsize=12)
Explanation: Let's plot the boundary layer thickness on the circle compared to the exact solution for a laminar flat plate boundary layer from Blasius.
End of explanation
# your code here
Explanation: The circle solution is completely different due to the external pressure gradients.
The boundary layer growth is stunted on the front body
$\delta$ increases rapidly as the flow approaches the midbody
The flow separates around $1.87$ radians $\approx 107^o$
This is in good agreement with Hoerner's Fluid-Dynamic Drag, which states that theoretical laminar separation occurs at ~$110^o$ for this case.
Quiz 6
How does the separation point depend on $\nu$?
Increasing $\nu$ delays separation
Decreasing $\nu$ delays separation
Changing $\nu$ has no effect on separation
We know analytically that $\delta$ scales as $\sqrt\nu$ (which you can double check using the code above), and therefore $\lambda=\frac{\delta^2}\nu u_e'$ doesn't depend on $\nu$ at all. Since the separation point is determined by $\lambda$, this is also independent of $\nu$.
Fluids fundamental: Separation Point
The point of laminar separation is independent of $Re$
This is not true of a turbulent boundary layer.
Quiz 7
How can you compute the total friction drag coefficient $C_F$ on the circle?
Use the flat plate estimate, I'm sure that will be fine...
Compute $\tau_w=\frac 12 c_f \rho u_e^2 $ and integrate numerically
Hint: numpy.trapz
Your turn
Determine $C_F=\frac {2F_F}{\rho U_\infty^2 S}$ , where $F_F = \int \tau_w s_x ds$ is the 2D friction drag and $S$ is the 2D surface area, and compare it to the flat plate solution: $1.33 Re^{-1/2}$.
End of explanation
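One possible way to attack this exercise is sketched below; it is not necessarily the intended solution. It assumes $\rho=1$, takes the drag-direction component of the wall shear on the unit circle as $\sin(s)$, integrates only up to the separation point (doubling for the two halves of the cylinder), and uses $S=2\pi R$ for the 2D surface area. The wall shear follows from $\frac12 c_f Re_\delta=f'(0)$, which gives $\tau_w=\rho\nu u_e f'(0)/\delta$.
rho = 1.                                                   # assumed density
i1 = iSep+1                                                # integrate only up to separation
tau_w = rho*nu*u_e[:i1]*df_0(lam[:i1])/delta[:i1]          # wall shear from the Pohlhausen profile
F_F = 2.*numpy.trapz(tau_w*numpy.sin(s[:i1]), s[:i1])      # x-component, both halves of the cylinder
C_F = 2.*F_F/(rho*1.**2*2.*numpy.pi)                       # U_infty = 1, S = 2*pi*R (assumption)
C_F, 1.33/numpy.sqrt(2./nu)                                # flat plate value using L = 2R (one possible choice)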
from IPython.core.display import HTML
def css_styling():
styles = open('../styles/custom.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: Please ignore the cell below. It just loads our style for the notebooks.
End of explanation |
9,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing the NYC Subway Dataset
Questions
Overview
This project consists of two parts. In Part 1 of the project, you should have completed the questions in Problem Sets 2, 3, and 4 in the Introduction to Data Science course.
This document addresses part 2 of the project. Please use this document as a template and answer the following questions to explain your reasoning and conclusion behind your work in the problem sets. You will attach a document with your answers to these questions as part of your final project submission.
Section 0. References
Please include a list of references you have used for this project. Please be specific - for example, instead of including a general website such as stackoverflow.com, try to include a specific topic from Stackoverflow that you have found useful.
http
Step1: Section 1. Statistical Test
1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the null hypothesis? What is your p-critical value?
1.2 Why is this statistical test applicable to the dataset? In particular, consider the assumptions that the test is making about the distribution of ridership in the two samples.
1.3 What results did you get from this statistical test? These should include the following numerical values
Step2: A.2 Exploratory Data Analysis for Features
a) Number of Riders over the Day Dependent on Rain
Step3: b) Number of Riders over the Day Dependent on Weekday
Step4: c) Number of Riders over the Day Dependent on Fog
Step5: A.3 Test of Normal Distribution
Step6: Based on the plot, the sample of entries does not seem normally distributed. Hence, the Mann-Whitney-Wilcoxon RankSum test (no assumptions about any underlying distributions) is conducted to test if the two samples of the number of entries in the NYC subway on rainy and non rainy days come from the same population | Python Code:
import pandas as pd
import pandasql as pdsql
import datetime as dt
import numpy as np
import scipy as sc
import scipy.stats
import statsmodels.api as sm
from sklearn.linear_model import SGDRegressor
from ggplot import *
%matplotlib inline
Explanation: Analyzing the NYC Subway Dataset
Questions
Overview
This project consists of two parts. In Part 1 of the project, you should have completed the questions in Problem Sets 2, 3, and 4 in the Introduction to Data Science course.
This document addresses part 2 of the project. Please use this document as a template and answer the following questions to explain your reasoning and conclusion behind your work in the problem sets. You will attach a document with your answers to these questions as part of your final project submission.
Section 0. References
Please include a list of references you have used for this project. Please be specific - for example, instead of including a general website such as stackoverflow.com, try to include a specific topic from Stackoverflow that you have found useful.
http://blog.yhathq.com/posts/facebook-ggplot-tutorial.html
http://ggplot.yhathq.com/docs/index.html
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mannwhitneyu.html
End of explanation
weather_data = pd.read_csv("turnstile_weather_v2.csv")
weather_data["hour"] = weather_data["hour"].astype('category')
weather_data["rain"] = (weather_data["rain"]+1).astype('category')
weather_data["fog"] = (weather_data["fog"]+1).astype('category')
weather_data.head(3)
Explanation: Section 1. Statistical Test
1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the null hypothesis? What is your p-critical value?
1.2 Why is this statistical test applicable to the dataset? In particular, consider the assumptions that the test is making about the distribution of ridership in the two samples.
1.3 What results did you get from this statistical test? These should include the following numerical values: p-values, as well as the means for each of the two samples under test.
1.4 What is the significance and interpretation of these results?
Section 2. Linear Regression
2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model:
OLS using Statsmodels or Scikit Learn
Gradient descent using Scikit Learn
Or something different?
2.2 What features (input variables) did you use in your model? Did you use any dummy variables as part of your features?
2.3 Why did you select these features in your model? We are looking for specific reasons that lead you to believe that the selected features will contribute to the predictive power of your model.
Your reasons might be based on intuition. For example, response for fog might be: “I decided to use fog because I thought that when it is very foggy outside people might decide to use the subway more often.”
Your reasons might also be based on data exploration and experimentation, for example: “I used feature X because as soon as I included it in my model, it drastically improved my R2 value.”
2.4 What are the parameters (also known as "coefficients" or "weights") of the non-dummy features in your linear regression model?
2.5 What is your model’s R2 (coefficients of determination) value?
2.6 What does this R2 value mean for the goodness of fit for your regression model? Do you think this linear model to predict ridership is appropriate for this dataset, given this R2 value?
Section 3. Visualization
Please include two visualizations that show the relationships between two or more variables in the NYC subway data.
Remember to add appropriate titles and axes labels to your plots. Also, please add a short description below each figure commenting on the key insights depicted in the figure.
3.1 One visualization should contain two histograms: one of ENTRIESn_hourly for rainy days and one of ENTRIESn_hourly for non-rainy days.
You can combine the two histograms in a single plot or you can use two separate plots.
If you decide to use to two separate plots for the two histograms, please ensure that the x-axis limits for both of the plots are identical. It is much easier to compare the two in that case.
For the histograms, you should have intervals representing the volume of ridership (value of ENTRIESn_hourly) on the x-axis and the frequency of occurrence on the y-axis. For example, each interval (along the x-axis), the height of the bar for this interval will represent the number of records (rows in our data) that have ENTRIESn_hourly that falls in this interval.
Remember to increase the number of bins in the histogram (by having larger number of bars). The default bin width is not sufficient to capture the variability in the two samples.
3.2 One visualization can be more freeform. You should feel free to implement something that we discussed in class (e.g., scatter plots, line plots) or attempt to implement something more advanced if you'd like. Some suggestions are:
Ridership by time-of-day
Ridership by day-of-week
Section 4. Conclusion
Please address the following questions in detail. Your answers should be 1-2 paragraphs long.
4.1 From your analysis and interpretation of the data, do more people ride the NYC subway when it is raining or when it is not raining?
4.2 What analyses lead you to this conclusion? You should use results from both your statistical tests and your linear regression to support your analysis.
Section 5. Reflection
Please address the following questions in detail. Your answers should be 1-2 paragraphs long.
5.1 Please discuss potential shortcomings of the methods of your analysis, including:
Dataset,
Analysis, such as the linear regression model or statistical test
5.2 (Optional) Do you have any other insight about the dataset that you would like to share with us?
Data Analysis and Source Code
A.1 Import NYC Subway Data
End of explanation
p = ggplot(aes(x = 'rain', y='ENTRIESn_hourly', color = "meantempi"), data=weather_data)
p + geom_point(position = "jitter", alpha = 0.7) + scale_y_continuous(limits = [0,45000]) + \
facet_wrap('hour') + ggtitle("Number of Riders over the Day Dependent on Rain") + theme_bw()
Explanation: A.2 Exploratory Data Analysis for Features
a) Number of Riders over the Day Dependent on Rain
End of explanation
p = ggplot(aes(x = 'rain', y='ENTRIESn_hourly', color = "meantempi"), data=weather_data)
p + geom_point(position = "jitter", alpha = 0.7) + scale_y_continuous(limits = [0,45000]) + theme_bw() + \
facet_wrap('day_week', nrow = 4) + ggtitle("Number of Riders over the Week Dependent on Rain")
Explanation: b) Number of Riders over the Day Dependent on Weekday
End of explanation
p = ggplot(aes(x = 'fog', y='ENTRIESn_hourly', color = "meantempi"), data=weather_data)
p + geom_point(position = "jitter", alpha = 0.7) + scale_y_continuous(limits = [0,45000]) + \
facet_wrap('hour') + ggtitle("Number of Riders over the Day Dependent on Fog") + theme_bw()
Explanation: c) Number of Riders over the Day Dependent on Fog
End of explanation
p = ggplot(aes(x = 'ENTRIESn_hourly', color = 'rain'), data=weather_data)
p + geom_density(size = 3, alpha = 0.25) + theme_bw() + \
scale_x_continuous(limits = [-1000,5000]) + ggtitle("Number of Riders Dependent on Rain")
Explanation: A.3 Test of Normal Distribution
End of explanation
no_rain = weather_data["ENTRIESn_hourly"][weather_data["rain"]==1].dropna()
with_rain = weather_data["ENTRIESn_hourly"][weather_data["rain"]==2].dropna()
print no_rain.head()
print with_rain.head()
without_rain_mean = np.mean(no_rain)
with_rain_mean = np.mean(with_rain)
print without_rain_mean
print with_rain_mean
U, p = sc.stats.mannwhitneyu(no_rain, with_rain)
z, pval = sc.stats.ranksums(no_rain, with_rain)
print U, p
print z, pval
Explanation: Based on the plot, the sample of entries does not seem normally distributed. Hence, the Mann-Whitney-Wilcoxon RankSum test (no assumptions about any underlying distributions) is conducted to test if the two samples of the number of entries in the NYC subway on rainy and non rainy days come from the same population:
H0: The distribution of number of entries on rainy days $F_{rain}(x)$ is identical with the distribution on non rainy days $F_{no-rain}(x-a)$, hence a = 0
H1: The distributions are not the same, a $\neq$ 0
End of explanation |
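The regression questions in Section 2 are not worked out in the cells above. Purely to illustrate the statsmodels interface imported earlier, a minimal OLS sketch follows; the feature choice (mean temperature plus a rain dummy) and the variable names are hypothetical and not a recommendation for the final model.
X = weather_data[['meantempi']].copy()
X['is_rain'] = (weather_data['rain'] == 2).astype(int)  # 'rain' was recoded to categories 1/2 above
X = sm.add_constant(X)
y = weather_data['ENTRIESn_hourly']
ols_fit = sm.OLS(y, X).fit()
ols_fit.rsquared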
9,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
title
Step1: In order to call one of the functions belonging to a particular module, you can use the . syntax. For example, numpy has a mean() function which will compute the arithmetic mean across an axis. If we wanted to call that function, we would simply write
Step2: For those coming from R, this is the equivalent of something like dplyr
Step3: Lists can be arbitrarily long and can store hold multiple types of data, although this isn't usually a good idea
Step4: Similar to lists, the numpy module provides the ability to work with n-dimensional arrays for numerical data only. We can initialize an array full of zeros using the np.zeros() function
Step5: If we want to work with numerical data that has two dimensions, we can create a matrix in a very similar way
Step6: We won't be working much with numpy arrays directly today, but you should know that they are often a better option than lists when you are working with numerical data.
Today, we will primarily be working with pandas dataframes. This object provides functionality that is very similar to dataframes in R. Let's start by converting our empty matrix into a dataframe. We can also give each of our columns more informative names
Step7: Another way we can create a dataframe is by first creating a dictionary and then converting this to a dataframe. A dictionary is another type of data structure used by Python. Dictionaries consist of an unordered collection of key-value pairs. Keys are used to index the dictionary in order to access the values associated with that key. Let's start by making a simple dictionary with one key and one value
Step8: If we index this dictionary using the name key, it will return the its value, which is a list of names
Step9: We can also add new key-value pairs to this dictionary
Step10: Similar to how we made a dataframe from our numpy array above, we can easily make a dataframe from this dictionary
Step11: Reading Data into Python
It's easy to introduce errors if you are entering data manually like above, and with a lot of data it would get tedious. Most of the time, you'll be reading data from an external file (.txt or .csv), or opening up an existing dataset in Python. Once you find the location of your files, what you do next will depend on the file format.
Reminder about the os module
This module provides a way to interface with the operating system we are running Python on (Windows, Mac, or Linux). Let's start by first loading this module
Step12: It's always important to check where our working directory is when trying to read data into Python.
Step13: You can access a list of everything (all files and directories) within your working directory using the os.listdir() function...
Step14: ...as well as in the "Files" tab on the left-hand side of the JupyterLab window.
What kind of file do you have?
For .txt, .csv, or any kind of delimited (such as tab-delimited) file, you can use the pandas function read_table()
Step15: If you know you have a .csv file, another common option is read_csv(), which has a default comma separator.
Remember, all of these commands can have arguments that will help Python make sense of your data. To find out what arguments are possible, you can use the help() function like we did above to look at what read_table() does.
To do this, just put whatever command you would like to learn about inside of help() (e.g. help(pd.read_table)). Remember that for functions associated with a particular module you will need to tell Python which module they come from using the . syntax.
You can always also Google a function to quickly find this information.
Inspecting your data
Now that you have data, it's time to get some results! But wait! Are you sure this data is OK? Doing some basic steps to inspect your data now can save you lots of headaches later, and Python makes it really easy.
Start by checking that you have the expected number of rows and columns in your data frame. You can do this by by asking Python
Step16: Rename a variable
Now that we've loaded our data into Python and have made sure it makes sense, we can start manipulating and cleaning it.
Look back at your dataframe. What is the fifth variable? What does that even mean? Luckily, this is your study and you know that it's a personality questionnaire measuring neuroticism. Let's fix that name and make it more intuitive
Step17: We can also rename multiple variables at once
Step18: Adding a new column
Often we'll want to add some new data into a dataframe.
Step19: For those coming from R, the Python syntax for referencing columns as df["columnName"] is roughly equivalent to using R's $ operator.
Removing Columns
We can remove columns with the .drop() function
Step20: Indexing a Dataframe
Sometimes you might want to look at only a subset of the columns in a dataframe (for example, when there are many variables). Doing this with a pandas dataframe is relatively straightforward
Step21: Using .loc and .iloc to index DataFrames
If we want to pull out or manipulate specific pieces of dataframes, we can use the .loc[] and .iloc[] functions.
With both functions, the data referenced is always formatted as [selection of rows, selection of columns].
.loc[] takes selections of rows from named columns.
So, here we're asking for elements 0
Step22: We can also use conditional logic to select rows. Here, we ask for all elements in the Age column that are above 24
Step23: .iloc[] takes selections of rows and columns using numeric indices.
Step24: Check for missing data
One problem you may have is missing data. Sometimes this is something you already know about, but you should check your data frame anyway to make sure nothing got missed in a data entry error. For small datasets, you can do this visually, but for larger ones you can ask Python.
Step25: In this case, the missing value is the Age value in row 38. You know you have this info somewhere on a paper form, so you go dig it up and want to replace it.
Step26: Check for correct values
Let's take a look at the Sex variable
Step27: It looks like there are two categories here, but let's double check. We can use the unique() function to list all of the unique values in a column
Step28: Here we see another data entry problem. At least one of the rows has a third category label that should really be another case of "Female". Let's replace this label using the replace() function
Step29: Now let's look at some of the continuous variables. You can also look at these by indexing them individually, but sometimes it's easier to visualize. The hist() function, which creates histograms, is good here.
Step30: Looks like we have a potential outlier on the Neuroticism score. This could be an entry error, but it could also be a real value that just happens to be really low. This is why data inspection is so important for later analysis — now that you know that the value is there, it's up to you to decide how to deal with it.
Filtering data
Let's say we have decided a priori to exclude outliers 3 SD above or below the mean. We will first define these boundaries
Step31: We can now use conditional indexing to exclude all rows with a Neuroticism score above or below these values
Step32: The line above says
Step33: Getting Ready for Analysis
Now that we've gone through and cleaned up the problems, you can think ahead to how you'll want to use this data.
Recoding variables
Sometimes we want to treat categorical variables as factors, but sometimes we want to pretend they're numeric (as in a regression, when binary variables can be coded as 0 and 1). Right now, Condition is coded as a binary numeric variable, but that's not very informative, so you'd rather have the values be descriptive. Here, the function replace() is again useful
Step34: Calculating new variables
You may also want to recalculate or rescale some variables. For example, we can turn Neuroticism into a z-score, or calculate an average response across the four time points.
To compute a z-score, we can use the zscore() function from the scipy.stats module
Step35: To calculate the means across each day, we can use the mean() function from pandas on a dataframe that has been indexed to include only data from the four days
Step36: Combining data from multiple sources
Sometimes, data might be spread across multiple files, and you'll want to combine those for your analysis. For example, maybe this study had a follow-up survey on Day 30. Scores from that survey were entered into another spreadsheet, which only has the subject ID and that score. We want to include that score into our data.
Step37: We can use the function merge() to combine the two dataframes. To make sure the data matches up, we use the on argument to specify that IDs should match. That way even if the data is in a different order, scores will match together correctly.
Step38: Shaping data
Finally, you may want to change the layout of your data. Right now, our dataframe is in "wide" format, which means that each row is a subject, and each observation gets its own column. For some analyses, you'll need to use "long" format, where each row is an observation, and columns specify things like Time and ID to differentiate the observations. For this, we can use the melt() function in pandas
Step39: Wide → Long
Step40: Long → Wide
We can go back in the other direction by using the pivot_table() function in pandas
Step41: Saving Your Work
Once you've created a data cleaning script like this one, you'll have a record of all the edits you've made on the raw data, and you can recreate your cleaned data just by running the script again. However, it's often easier to save your cleaned data as its own file (never overwrite the raw data), so when you come back to do analysis you don't have to bother with all the cleaning steps.
You can always save data frames as a .csv for easy sharing and viewing outside of Python. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("poster")
sns.set(style="ticks",font="Arial",font_scale=2)
Explanation: title: "Data Cleaning in Python"
subtitle: "CU Psych Scientific Computing Workshop"
weight: 1201
tags: ["core", "python"]
Goals of this Lesson
Students will learn:
How to open various data types in Python
How to check for missing or problematic data and address issues
How to filter, rearrange and shape data in preparation for analysis
Links to Files
The files for all tutorials can be downloaded from the Columbia Psychology Scientific Computing GitHub page using these instructions. This particular file is located here: /content/tutorials/python/2-datacleaning/index.ipynb.
A Quick Introduction to Python Scientific Computing Modules
As a programming languge, Python can do quite a lot. For example, it is an extremely popular choice for GUI and web-based application development (Reddit, Google, Facebook), databases (Spotify, Netflix), and scientific computing (NASA, for example, but also us!).
One reason that Python is so widely used is due to its extensive library of third-party modules. Let's start by briefly covering the most important modules for scientific computing, some (but not all) of which we'll be using today.
Data Analysis
NumPy: The fundamental package for scientific computing in Python. NumPy provides Python with most of the functionality of MATLAB.
SciPy: Provides many user-friendly and efficient numerical routines such as routines for numerical integration, interpolation, optimization, linear algebra and statistics.
Pandas: Provides high-performance, easy-to-use data structures and data analysis tools. Pandas provides Python with most of the functionality of R.
Data Visualization
Matplotlib: Python 2D plotting library which produces publication quality figures. The pyplot module provides a MATLAB-like interface and is what most people use.
Seaborn: A Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
We'll now import a few of these modules using their standard abbreviations.
End of explanation
np.mean([2,4,6])
Explanation: In order to call one of the functions belonging to a particular module, you can use the . syntax. For example, numpy has a mean() function which will compute the arithmetic mean across an axis. If we wanted to call that function, we would simply write:
End of explanation
mylist = [1,2,3]
mylist
Explanation: For those coming from R, this is the equivalent of something like dplyr::filter(). Python is stricter than R about making sure you specify which library the function you are using comes from.
Now that you're familiar with the basics of modules in Python, let's go ahead and move on to some data cleaning.
Python Data Structures
There are a few ways that data can be stored and manipulated in Python, some of which you've already covered.
To review, the first and most basic is a list:
End of explanation
mylist2 = [1,"2",3.0, [4,5]]
mylist2
Explanation: Lists can be arbitrarily long and can store hold multiple types of data, although this isn't usually a good idea:
End of explanation
myarray = np.zeros((10))
myarray
Explanation: Similar to lists, the numpy module provides the ability to work with n-dimensional arrays for numerical data only. We can initialize an array full of zeros using the np.zeros() function:
End of explanation
mymatrix = np.zeros((10,2))
mymatrix
Explanation: If we want to work with numerical data that has two dimensions, we can create a matrix in a very similar way:
End of explanation
mydataframe = pd.DataFrame(mymatrix,columns=["Height","Weight"])
mydataframe
Explanation: We won't be working much with numpy arrays directly today, but you should know that they are often a better option than lists when you are working with numerical data.
Today, we will primarily be working with pandas dataframes. This object provides functionality that is very similar to dataframes in R. Let's start by converting our empty matrix into a dataframe. We can also give each of our columns more informative names:
End of explanation
data = {'name':["Monica","Michelle","Paul","Ellen"]}
data
Explanation: Another way we can create a dataframe is by first creating a dictionary and then converting this to a dataframe. A dictionary is another type of data structure used by Python. Dictionaries consist of an unordered collection of key-value pairs. Keys are used to index the dictionary in order to access the values associated with that key. Let's start by making a simple dictionary with one key and one value:
End of explanation
data['name']
Explanation: If we index this dictionary using the name key, it will return the its value, which is a list of names:
End of explanation
data['score'] = [16,20,19,35]
data['year'] = [2, 5, 2, 1]
data
Explanation: We can also add new key-value pairs to this dictionary:
End of explanation
dataframe = pd.DataFrame(data)
dataframe
Explanation: Similar to how we made a dataframe from our numpy array above, we can easily make a dataframe from this dictionary:
End of explanation
import os
Explanation: Reading Data into Python
It's easy to introduce errors if you are entering data manually like above, and with a lot of data it would get tedious. Most of the time, you'll be reading data from an external file (.txt or .csv), or opening up an existing dataset in Python. Once you find the location of your files, what you do next will depend on the file format.
Reminder about the os module
This module provides a way to interface with the operating system we are running Python on (Windows, Mac, or Linux). Let's start by first loading this module:
End of explanation
os.getcwd()
Explanation: It's always important to check where our working directory is when trying to read data into Python.
End of explanation
os.listdir()
Explanation: You can access a list of everything (all files and directories) within your working directory using the os.listdir() function...
End of explanation
#help(pd.read_table)
mydata = pd.read_table("Study1.csv", sep=",")
Explanation: ...as well as in the "Files" tab on the left-hand side of the JupyterLab window.
What kind of file do you have?
For .txt, .csv, or any kind of delimited (such as tab-delimited) file, you can use the pandas function read_table():
End of explanation
# get the number of rows and columns
mydata.shape
# get the names of columns
mydata.columns
# take a peek at the first few rows
mydata.head()
Explanation: If you know you have a .csv file, another common option is read_csv(), which has a default comma separator.
Remember, all of these commands can have arguments that will help Python make sense of your data. To find out what arguments are possible, you can use the help() function like we did above to look at what read_table() does.
To do this, just put whatever command you would like to learn about inside of help() (e.g. help(pd.read_table)). Remember that for functions associated with a particular module you will need to tell Python which module they come from using the . syntax.
You can always also Google a function to quickly find this information.
Inspecting your data
Now that you have data, it's time to get some results! But wait! Are you sure this data is OK? Doing some basic steps to inspect your data now can save you lots of headaches later, and Python makes it really easy.
Start by checking that you have the expected number of rows and columns in your data frame. You can do this by asking Python:
End of explanation
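Since this file is comma-separated, pd.read_csv() (mentioned above) works just as well; a quick equivalent call is shown below, assuming the same Study1.csv file.
mydata_csv = pd.read_csv("Study1.csv")  # read_csv defaults to a comma separator
mydata_csv.head()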
mydata = mydata.rename({'Personality':'Neuroticism'}, axis="columns")
mydata.head()
Explanation: Rename a variable
Now that we've loaded our data into Python and have made sure it makes sense, we can start manipulating and cleaning it.
Look back at your dataframe. What is the fifth variable? What does that even mean? Luckily, this is your study and you know that it's a personality questionnaire measuring neuroticism. Let's fix that name and make it more intuitive:
End of explanation
mydata = mydata.rename({'T1':'Day1',
'T2':'Day2',
'T3':'Day3',
'T4':'Day4'}, axis="columns")
mydata.head()
Explanation: We can also rename multiple variables at once:
End of explanation
# here we add a column where are the values are the same string
mydata['studyName'] = 'study1'
# here we add a column 'random' of 50 unique random numbers
mydata['random'] = np.random.random(50)
mydata.head()
Explanation: Adding a new column
Often we'll want to add some new data into a dataframe.
End of explanation
mydata = mydata.drop(['random', 'studyName'], axis = 1)
Explanation: For those coming from R, the Python syntax for referencing columns as df["columnName"] is roughly equivalent to using R's $ operator.
Removing Columns
We can remove columns with the .drop() function
End of explanation
# indexing a single column
ids = mydata[['ID']]
ids.head()
# indexing multiple columns
mydata_subset = mydata[['ID','Age','Neuroticism']]
mydata_subset.head()
Explanation: Indexing a Dataframe
Sometimes you might want to look at only a subset of the columns in a dataframe (for example, when there are many variables). Doing this with a pandas dataframe is relatively straightforward:
End of explanation
mydata.loc[0:2, ['Age']]
Explanation: Using .loc and .iloc to index DataFrames
If we want to pull out or manipulate specific pieces of dataframes, we can use the .loc[] and .iloc[] functions.
With both functions, the data referenced is always formatted as [selection of rows, selection of columns].
.loc[] takes selections of rows from named columns.
So, here we're asking for elements 0:2 from the Age column:
End of explanation
mydata.loc[mydata['Age'] > 24, ['Age']]
Explanation: We can also use conditional logic to select rows. Here, we ask for all elements in the Age column that are above 24:
End of explanation
mydata.iloc[3:7, 1:4]
Explanation: .iloc[] takes selections of rows and columns using numeric indices.
End of explanation
mydata.isnull()
#mydata.isnull().values.any()
Explanation: Check for missing data
One problem you may have is missing data. Sometimes this is something you already know about, but you should check your data frame anyway to make sure nothing got missed in a data entry error. For small datasets, you can do this visually, but for larger ones you can ask Python.
End of explanation
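For a larger dataframe, scanning the full Boolean table is impractical; a per-column count of missing values, plus a view of the offending rows, is usually easier to read. A small sketch:
mydata.isnull().sum()                   # number of missing values in each column
mydata[mydata.isnull().any(axis=1)]     # show only the rows that contain a missing value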
# Verify that this row contains the missing data
mydata.loc[mydata["ID"]==39]
# Replace row, column with the value 30
mydata.loc[mydata["ID"]==39, "Age"] = 30
# Verify that the replacement worked
mydata.loc[mydata["ID"]==39]
Explanation: In this case, the missing value is the Age value in row 38. You know you have this info somewhere on a paper form, so you go dig it up and want to replace it.
End of explanation
mydata['Sex'].head()
Explanation: Check for correct values
Let's take a look at the Sex variable:
End of explanation
mydata["Sex"].unique()
Explanation: It looks like there are two categories here, but let's double check. We can use the unique() function to list all of the unique values in a column:
End of explanation
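A related check is value_counts(), which also shows how many rows fall under each label, so a lone misspelled category stands out immediately:
mydata["Sex"].value_counts()            # count how many rows carry each label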
mydata["Sex"] = mydata["Sex"].replace('Femle', 'Female')
# Verify that the replacement worked
mydata["Sex"].unique()
Explanation: Here we see another data entry problem. At least one of the rows has a third category label that should really be another case of "Female". Let's replace this label using the replace() function:
End of explanation
mydata["Age"].hist();
mydata["Neuroticism"].hist();
Explanation: Now let's look at some of the continuous variables. You can also look at these by indexing them individually, but sometimes it's easier to visualize. The hist() function, which creates histograms, is good here.
End of explanation
upper = np.mean(mydata["Neuroticism"]) + 3*np.std(mydata["Neuroticism"])
lower = np.mean(mydata["Neuroticism"]) - 3*np.std(mydata["Neuroticism"])
Explanation: Looks like we have a potential outlier on the Neuroticism score. This could be an entry error, but it could also be a real value that just happens to be really low. This is why data inspection is so important for later analysis — now that you know that the value is there, it's up to you to decide how to deal with it.
Filtering data
Let's say we have decided a priori to exclude outliers 3 SD above or below the mean. We will first define these boundaries:
End of explanation
mydata = mydata[(mydata["Neuroticism"] > lower) & (mydata["Neuroticism"] < upper)]
Explanation: We can now use conditional indexing to exclude all rows with a Neuroticism score above or below these values:
End of explanation
# Verify that we excluded 1 outlier
mydata.shape
mydata["Neuroticism"].hist();
Explanation: The line above says: return only the Neuroticism values greater than the lower boundary and less than the upper boundary and then save it in the mydata variable.
End of explanation
mydata['ConditionF'] = mydata['Condition'].replace([0,1], ['Control','Treatment'])
# Verify that your variable is now recoded as you'd like
mydata[['Condition','ConditionF']].head()
Explanation: Getting Ready for Analysis
Now that we've gone through and cleaned up the problems, you can think ahead to how you'll want to use this data.
Recoding variables
Sometimes we want to treat categorical variables as factors, but sometimes we want to pretend they're numeric (as in a regression, when binary variables can be coded as 0 and 1). Right now, Condition is coded as a binary numeric variable, but that's not very informative, so you'd rather have the values be descriptive. Here, the function replace() is again useful:
End of explanation
from scipy.stats import zscore
mydata['NeuroticismZ'] = zscore(mydata['Neuroticism'])
mydata['NeuroticismZ'].hist();
Explanation: Calculating new variables
You may also want to recalculate or rescale some variables. For example, we can turn Neuroticism into a z-score, or calculate an average response across the four time points.
To compute a z-score, we can use the zscore() function from the scipy.stats module:
End of explanation
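If you prefer to stay within pandas, the same standardization can be written by hand. Note that scipy's zscore uses the population standard deviation (ddof=0) while pandas' .std() defaults to ddof=1, so pass ddof=0 if you want the two versions to match exactly; this is just a cross-check.
neuro = mydata['Neuroticism']
((neuro - neuro.mean()) / neuro.std(ddof=0)).head()  # should match the zscore() values above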
mydata['DayMean'] = mydata[['Day1','Day2','Day3','Day4']].mean(axis="columns")
mydata['DayMean'].hist();
Explanation: To calculate the means across each day, we can use the mean() function from pandas on a dataframe that has been indexed to include only data from the four days:
End of explanation
# first load the followup dataset
mydata2 = pd.read_csv("Study1_Followup.csv")
Explanation: Combining data from multiple sources
Sometimes, data might be spread across multiple files, and you'll want to combine those for your analysis. For example, maybe this study had a follow-up survey on Day 30. Scores from that survey were entered into another spreadsheet, which only has the subject ID and that score. We want to include that score into our data.
End of explanation
mydata = mydata.merge(mydata2,on="ID")
mydata.head()
Explanation: We can use the function merge() to combine the two dataframes. To make sure the data matches up, we use the on argument to specify that IDs should match. That way even if the data is in a different order, scores will match together correctly.
End of explanation
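Keep in mind that merge() performs an inner join by default, so any ID present in only one of the two files is silently dropped. Passing how='left' keeps every participant and leaves the follow-up score missing where it is unavailable; the sketch below is illustrative only (it joins on the ID column alone, since the follow-up scores were already merged above).
merged_left = mydata[['ID']].merge(mydata2, on="ID", how="left")  # left join keeps all IDs from mydata
merged_left.head()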
value_cols = ["Day1","Day2","Day3","Day4"] # columns we would like to convert to a single "long" column
id_cols = list(mydata.columns) # columns we would like to stay in the same "wide" format
for i in value_cols:
id_cols.remove(i)
Explanation: Shaping data
Finally, you may want to change the layout of your data. Right now, our dataframe is in "wide" format, which means that each row is a subject, and each observation gets its own column. For some analyses, you'll need to use "long" format, where each row is an observation, and columns specify things like Time and ID to differentiate the observations. For this, we can use the melt() function in pandas:
End of explanation
mydata_Long = pd.melt(mydata,id_vars=id_cols,var_name="Time",value_vars=value_cols,value_name="Score")
mydata_Long.head()
Explanation: Wide → Long
End of explanation
mydata_Wide = mydata_Long.pivot_table(values="Score", index=id_cols, columns='Time').reset_index()
mydata_Wide.columns.name = None
mydata_Wide.head()
Explanation: Long → Wide
We can go back in the other direction by using the pivot_table() function in pandas:
End of explanation
# write data to a .csv
mydata.to_csv("Study1_clean.csv",index = False)
Explanation: Saving Your Work
Once you've created a data cleaning script like this one, you'll have a record of all the edits you've made on the raw data, and you can recreate your cleaned data just by running the script again. However, it's often easier to save your cleaned data as its own file (never overwrite the raw data), so when you come back to do analysis you don't have to bother with all the cleaning steps.
You can always save data frames as a .csv for easy sharing and viewing outside of Python.
End of explanation |
9,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute
Now that we have datasets added to our Bundle, our next step is to run the forward model and compute a synthetic model for each of these datasets.
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: And we'll attach some dummy datasets. See Datasets for more details.
Step3: Default Compute Options
Any default Bundle already has a set of default compute options to run the backend for PHOEBE 2. In most cases, you can just edit the options in this default set of compute options.
Step4: Adding Compute Options
In other cases, we may want to manually add additional sets of compute options.
This syntax should look very familiar by now, it takes a function (or the name of a recognized function in phoebe.parameters.compute) and then any
kwargs to set in that ParameterSet.
Let's say that we want to create two sets of compute options - in this example, we'll create one called 'preview' which will cut some corners to quickly get us a model, and one called 'detailed' which will get a much more precise model but likely take longer. As with other tags, the string you provide for the compute tag is up to you (so long as it doesn't raise an error because it conflicts with other tags).
Step5: Editing Compute Options
Backend-Specific Compute Options
Most of the parameters in the compute options are specific to the backend being used. Here, of course, we're using the PHOEBE 2.0 backend - but for details on other backends see the Alternate Backends Tutorial.
The PHOEBE 2.0 compute options are described in the tutorial on their relevant dataset types
Step6: as you can see, there is a copy for both of our compute options ('preview' and 'detailed').
If we know which set of compute options we'll be using, or only want to enable/disable for a given set, then we can do that
Step7: or to enable/disable a dataset for all sets of compute options, we can use the set_value_all method
Step8: If the enabled parameter is missing for a set of compute options - it is likely that that particular backend does not support that dataset type.
Running Compute
Simplest Case
run_compute takes arguments for the compute tag as well as the model tag for the resulting synthetic model(s).
You do not need to provide the compute tag if only 0 or 1 set of compute options exist in the Bundle. If there are no compute options, the default PHOEBE 2.0 options will be added on your behalf and used. If there is a single set of compute options, those will be assumed. In our case, we have two compute options in the Bundle (with tags 'preview' and 'detailed') so we must provide an argument for compute.
If you do not provide a tag for the model, one will be created for you called 'latest'. Note that any existing model with the same tag will immediately be overwritten once you call run_compute, so if you want to maintain the results from previous calls to run_compute, you must provide a NEW model tag.
Step9: Storing Models
Now let's compute models for three different 'versions' of parameters. By providing a model tag, we can keep the synthetics for each of these different runs in the bundle - which will be handy later on for plotting and comparing models.
Step10: We will now have three new sets of synthetics which can be compared, plotted, or removed.
Step11: Running Compute with Multiple Sets of Options
So far we've seen how setting up different sets of compute options can be handy - 'preview' vs 'detailed', for example. But there could also be situations where you want to use different sets of options per dataset. Perhaps you have a high-precision follow-up light curve of an eclipse along with a lower-precision light curve over a longer time baseline. So here you'd want to run 'detailed' on the high-precision light curve, but 'preview' on the lower-precision light curve.
You could of course call run_compute twice and create two separate models - but that isn't always convenient and will be a problem in the future when we want to fit data.
Instead we can send a list of compute options to run_compute.
A given dataset can only be enabled in up to 1 of the compute options we're sending to run_compute. So let's take care of that first (if we don't, we'd get an error when trying to call run_compute)
Step12: We probably have the same problem with 'lc01', but just didn't get far enough to raise the error. So let's fix that as well
Step13: So in this case, 'lc01' will be computed using the options in 'detailed' while 'orb01' will use the options in 'preview'.
Step14: Accessing Synthetics from Models
The synthetics can be accessed by their dataset and model tags.
Step15: or of course through method access | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Compute
Now that we have datasets added to our Bundle, our next step is to run the forward model and compute a synthetic model for each of these datasets.
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset(phoebe.dataset.orb, times=np.linspace(0,10,10), dataset='orb01', component=['primary', 'secondary'])
times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)
# test.lc.in has 1000 datapoints... let's use every 10 just for brevity
times, fluxes, sigmas = times[:10], fluxes[:10], sigmas[:10]
b.add_dataset(phoebe.dataset.lc, times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')
Explanation: And we'll attach some dummy datasets. See Datasets for more details.
End of explanation
print b.computes
print b.filter(context='compute')
b.set_value('irrad_method', 'none')
Explanation: Default Compute Options
Any default Bundle already has a set of default compute options to run the backend for PHOEBE 2. In most cases, you can just edit the options in this default set of compute options.
End of explanation
b.add_compute(phoebe.compute.phoebe, compute='preview', irrad_method='none')
print b['preview@compute']
b.add_compute('phoebe', compute='detailed', irrad_method='wilson')
print b.get_compute('detailed')
Explanation: Adding Compute Options
In other cases, we may want to manually add additional sets of compute options.
This syntax should look very familiar by now: it takes a function (or the name of a recognized function in phoebe.parameters.compute) and then any kwargs to set in that ParameterSet.
Let's say that we want to create two sets of compute options - in this example, we'll create one called 'preview' which will cut some corners to quickly get us a model, and one called 'detailed' which will get a much more precise model but likely take longer. As with other tags, the string you provide for the compute tag is up to you (so long as it doesn't raise an error because it conflicts with other tags).
End of explanation
print b['enabled@lc01']
Explanation: Editing Compute Options
Backend-Specific Compute Options
Most of the parameters in the compute options are specific to the backend being used. Here, of course, we're using the PHOEBE 2.0 backend - but for details on other backends see the Alternate Backends Tutorial.
The PHOEBE 2.0 compute options are described in the tutorial on their relevant dataset types:
Orbits (orb)
Meshes (mesh)
Light Curves/Fluxes (lc)
Radial Velocities (rv)
Enabling/Disabling Datasets
By default, synthetic models will be created for all datasets in the Bundle when run_compute is called. But you can disable a dataset to have run_compute ignore that dataset. This is handled by a BoolParameter with the qualifier 'enabled' - and has a copy that lives in each set of compute options
Let's say we wanted to compute the orbit but not light curve - so we want to see enabled@lc01:
End of explanation
b['enabled@lc01@preview'] = False
print b['enabled@lc01']
Explanation: as you can see, there is a copy for both of our compute options ('preview' and 'detailed').
If we know which set of compute options we'll be using, or only want to enable/disable for a given set, then we can do that:
End of explanation
b.set_value_all('enabled@lc01', False)
print b['enabled@lc01']
Explanation: or to enable/disable a dataset for all sets of compute options, we can use the set_value_all method:
End of explanation
b.run_compute(compute='preview')
b.models
Explanation: If the enabled parameter is missing for a set of compute options - it is likely that that particular backend does not support that dataset type.
Running Compute
Simplest Case
run_compute takes arguments for the compute tag as well as the model tag for the resulting synthetic model(s).
You do not need to provide the compute tag if only 0 or 1 set of compute options exist in the Bundle. If there are no compute options, the default PHOEBE 2.0 options will be added on your behalf and used. If there is a single set of compute options, those will be assumed. In our case, we have two compute options in the Bundle (with tags 'preview' and 'detailed') so we must provide an argument for compute.
If you do not provide a tag for the model, one will be created for you called 'latest'. Note that any existing model with the same tag will immediately be overwritten once you call run_compute, so if you want to maintain the results from previous calls to run_compute, you must provide a NEW model tag.
End of explanation
b.set_value('incl@orbit', 90)
b.run_compute(compute='preview', model='run_with_incl_90')
b.set_value('incl@orbit', 85)
b.run_compute(compute='preview', model='run_with_incl_85')
b.set_value('incl@orbit', 80)
b.run_compute(compute='preview', model='run_with_incl_80')
Explanation: Storing Models
Now let's compute models for three different 'versions' of parameters. By providing a model tag, we can keep the synthetics for each of these different runs in the bundle - which will be handy later on for plotting and comparing models.
End of explanation
b.models
Explanation: We will now have three new sets of synthetics which can be compared, plotted, or removed.
End of explanation
print b['enabled@orb01']
b.set_value_all('enabled@orb01@detailed', False)
b.set_value_all('enabled@orb01@preview', True)
print b['enabled@orb01']
Explanation: Running Compute with Multiple Sets of Options
So far we've seen how setting up different sets of compute options can be handy - 'preview' vs 'detailed', for example. But there could also be situations where you want to use different sets of options per dataset. Perhaps you have a high-precision follow-up light curve of an eclipse along with a lower-precision light curve over a longer time baseline. So here you'd want to run 'detailed' on the high-precision light curve, but 'preview' on the lower-precision light curve.
You could of course call run_compute twice and create two separate models - but that isn't always convenient and will be a problem in the future when we want to fit data.
Instead we can send a list of compute options to run_compute.
A given dataset can only be enabled in up to 1 of the compute options we're sending to run_compute. So let's take care of that first (if we don't, we'd get an error when trying to call run_compute):
End of explanation
print b['enabled@lc01']
b.set_value_all('enabled@lc01@detailed', True)
b.set_value_all('enabled@lc01@preview', False)
print b['enabled@lc01']
Explanation: We probably have the same problem with 'lc01', but just didn't get far enough to raise the error. So let's fix that as well
End of explanation
b.run_compute(compute=['detailed', 'preview'], model='multiplecompute')
b.models
Explanation: So in this case, 'lc01' will be computed using the options in 'detailed' while 'orb01' will use the options in 'preview'.
End of explanation
b['run_with_incl_90']
b['primary@run_with_incl_90']
b['x@primary@run_with_incl_90']
Explanation: Accessing Synthetics from Models
The synthetics can be accessed by their dataset and model tags.
End of explanation
print b.get_value(qualifier='xs', dataset='orb01', component='primary', model='run_with_incl_90')[:10]
Explanation: or of course through method access:
End of explanation |
9,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Draw sample MNIST images from dataset
Demonstrates how to sample and plot MNIST digits using tf.keras API.
Using tf.keras.datasets, loading the MNIST data is just 1-line of code. After loading, the dataset is already grouped into train and test splits.
Step1: Bar graph of train data
We can see that the labels are almost uniformly distributed.
Note that this is ideal. In some datasets, the distribution may be highly un-balanced. In such cases, the training is more challenging.
Step2: The test data is also almost uniformly distributed.
Step3: Random sample of data from train split
Let us get and show 25 random samples from the train split. | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
# load dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
Explanation: Draw sample MNIST images from dataset
Demonstrates how to sample and plot MNIST digits using tf.keras API.
Using tf.keras.datasets, loading the MNIST data is just 1-line of code. After loading, the dataset is already grouped into train and test splits.
End of explanation
# count the number of unique train labels
unique, counts = np.unique(y_train, return_counts=True)
print("Train labels: ", dict(zip(unique, counts)))
plt.bar(unique, counts)
plt.xticks(unique, unique)
plt.show()
Explanation: Bar graph of train data
We can see that the labels are almost uniformly distributed.
Note that this is ideal. In some datasets, the distribution may be highly un-balanced. In such cases, the training is more challenging.
End of explanation
# count the number of unique test labels
unique, counts = np.unique(y_test, return_counts=True)
print("Test labels: ", dict(zip(unique, counts)))
plt.bar(unique, counts, color='orange')
plt.xticks(unique, unique)
plt.show()
Explanation: The test data is also almost uniformly distributed.
End of explanation
# sample 25 mnist digits from train dataset
indexes = np.random.randint(0, x_train.shape[0], size=25)
images = x_train[indexes]
labels = y_train[indexes]
# plot the 25 mnist digits
plt.figure(figsize=(5,5))
for i in range(len(indexes)):
plt.subplot(5, 5, i + 1)
image = images[i]
plt.imshow(image, cmap='gray')
plt.axis('off')
# plt.savefig("mnist-samples.png")
plt.show()
Explanation: Random sample of data from train split
Let us get and show 25 random samples from the train split.
End of explanation |
9,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP Example
Step1: Plotting Support
Step2: Jaynes-Cummings model, with the cavity as a non-Markovian bath
As a simple example, we consider the Jaynes-Cummings mode, and the non-Markovian dynamics of the qubit when the cavity is traced out. In this example, the dynamical maps $\mathcal{E}_k$ are the reduced time-propagators for the qubit, after evolving and tracing out the cavity, i.e.
$$
\mathcal{E}k \rho = {\rm tr}{\rm cav} \left[ {\rm e}^{\mathcal{L} t_k} \rho \otimes \rho_{0,{\rm cav}} \right],
$$
where $\mathcal{L}$ is the Lindbladian for the dissipative JC model (defined below) and $\rho_{0,{\rm cav}}$ is the initial state of the cavity.
Problem setup
Step3: Exact timepropagators to learn from
The function dynmap generates an exact timepropagator for the qubit $\mathcal{E}_{k}$ for a time $t_k$. <br>
Step4: Exact time evolution using standard mesolve method
Step5: Approximate solution using the Transfer Tensor Method for different learning times
Step6: Visualize results
Step7: Discussion
The figure above illustrates how the transfer tensor method needs a sufficiently long set of learning times to get good results. The green dots show results for learning times $t_k=0,0.1,\dots,0.5$, which is clearly not sufficient. The red dots show results for $t_k=0,0.1,\dots,2.0$, which gives results that are in very good agreement with the exact solution.
Epilouge | Python Code:
import numpy as np
import qutip as qt
from qutip.ipynbtools import version_table
import qutip.nonmarkov.transfertensor as ttm
Explanation: QuTiP Example: The Transfer Tensor Method for Non-Markovian Open Quantum Systems
Arne L. Grimsmo <br>
Université de Sherbrooke <br>
[email protected]
$\newcommand{\ket}[1]{\left|#1\right\rangle}$
$\newcommand{\bra}[1]{\left\langle#1\right|}$
Introduction
The "Transfer Tensor Method" was introduced by Cerrillo and Cao in Phys. Rev. Lett 112, 110401 (2014) (arXiv link), as a general method for evolving non-Markovian open quantum systems.
The method takes as input a set of dynamical maps $\mathcal{E}_k$, such that
$$
\rho(t_k) = \mathcal{E}_k \rho(0)
$$
for an initial set of times $t_k$. This set of dynamical maps could be the result of experimental process tomography, or they could be precomputed through some other (typically costly) method. The idea is that, based on knowledge of these maps, one can try to extrapolate the, in general non-Markovian, time-evolution to larger times, $t_n > t_k$. The method assumes that there is no explicit time-dependence in the total system-bath Hamiltonian.
Preamble
Imports
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Plotting Support
End of explanation
kappa = 1.0 # cavity decay rate
wc = 0.0*kappa # cavity frequency
wa = 0.0*kappa # qubit frequency
g = 10.0*kappa # coupling strength
N = 3 # size of cavity basis
# intial state
psi0c = qt.basis(N,0)
rho0c = qt.ket2dm(psi0c)
rho0a = qt.ket2dm(qt.basis(2,0))
rho0 = qt.tensor(rho0a,rho0c)
rho0avec = qt.operator_to_vector(rho0a)
# identity superoperator
Id = qt.tensor(qt.qeye(2),qt.qeye(N))
E0 = qt.sprepost(Id,Id)
# partial trace over the cavity, reprsented as a superoperator
ptracesuper = qt.tensor_contract(E0,(1,3))
# intial state of the cavity, represented as a superoperator
superrho0cav = qt.sprepost(qt.tensor(qt.qeye(2),psi0c),
qt.tensor(qt.qeye(2),psi0c.dag()))
# operators
a = qt.tensor(qt.qeye(2), qt.destroy(N))
sm = qt.tensor(qt.sigmam(), qt.qeye(N))
sz = qt.tensor(qt.sigmaz(), qt.qeye(N))
# Hamiltonian
H = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag())
c_ops = [np.sqrt(kappa)*a]
Explanation: Jaynes-Cummings model, with the cavity as a non-Markovian bath
As a simple example, we consider the Jaynes-Cummings model, and the non-Markovian dynamics of the qubit when the cavity is traced out. In this example, the dynamical maps $\mathcal{E}_k$ are the reduced time-propagators for the qubit, after evolving and tracing out the cavity, i.e.
$$
\mathcal{E}_k \rho = {\rm tr}_{\rm cav} \left[ {\rm e}^{\mathcal{L} t_k} \rho \otimes \rho_{0,{\rm cav}} \right],
$$
where $\mathcal{L}$ is the Lindbladian for the dissipative JC model (defined below) and $\rho_{0,{\rm cav}}$ is the initial state of the cavity.
Problem setup
End of explanation
def dynmap(t):
# reduced dynamical map for the qubit at time t
Et = qt.mesolve(H, E0, [0.,t], c_ops, []).states[-1]
return ptracesuper*(Et*superrho0cav)
Explanation: Exact timepropagators to learn from
The function dynmap generates an exact timepropagator for the qubit $\mathcal{E}_{k}$ for a time $t_k$. <br>
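For instance (a quick illustration, not part of the original notebook), applying one of these maps to the vectorized initial qubit state gives the reduced qubit state at that time:
rho_t = qt.vector_to_operator(dynmap(1.0) * qt.operator_to_vector(rho0a))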
End of explanation
exacttimes = np.arange(0,5,0.01)
exactsol = qt.mesolve(H, rho0, exacttimes, c_ops, [])
Explanation: Exact time evolution using standard mesolve method
End of explanation
times = np.arange(0,5,0.1) # total extrapolation time
ttmsols = []
maxlearningtimes = [0.5, 2.0] # maximal learning times
for T in maxlearningtimes:
learningtimes = np.arange(0,T,0.1)
learningmaps = [dynmap(t) for t in learningtimes] # generate exact dynamical maps to learn from
ttmsols.append(ttm.ttmsolve(learningmaps, rho0a, times)) # extrapolate using TTM
Explanation: Approximate solution using the Transfer Tensor Method for different learning times
End of explanation
fig, ax = plt.subplots(figsize=(10,7))
ax.plot(exactsol.times, qt.expect(sz, exactsol.states),'-b',linewidth=3.0)
style = ['og','or']
for i,ttmsol in enumerate(ttmsols):
ax.plot(ttmsol.times, qt.expect(qt.sigmaz(), ttmsol.states),style[i],linewidth=1.5,)
ax.legend(['exact',str(maxlearningtimes[0]),str(maxlearningtimes[1])])
ax.set_xlabel(r'$\kappa t$', fontsize=20)
ax.set_ylabel(r'$\sigma_z$', fontsize=20)
Explanation: Visualize results
End of explanation
version_table()
Explanation: Discussion
The figure above illustrates how the transfer tensor method needs a sufficiently long set of learning times to get good results. The green dots show results for learning times $t_k=0,0.1,\dots,0.5$, which is clearly not sufficient. The red dots show results for $t_k=0,0.1,\dots,2.0$, which gives results that are in very good agreement with the exact solution.
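To put a number on that agreement, one option (a quick check added here, not part of the original example) is to evolve the exact solver on the same time grid used for the TTM extrapolation and compare the qubit expectation values:
exact_on_grid = qt.mesolve(H, rho0, times, c_ops, [])
# maximum deviation of the longest-learning-time TTM solution from the exact result
err = np.max(np.abs(qt.expect(qt.sigmaz(), ttmsols[1].states) - qt.expect(sz, exact_on_grid.states)))
print(err)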
Epilogue
End of explanation |
9,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Profiling and Optimizing
By C Hummels (Caltech)
Step1: It can be hard to guess which code is going to operate faster just by looking at it because the interactions between software and computers can be extremely complex. The best way to optimize code is through using profilers to identify bottlenecks in your code and then attempt to address these problems through optimization. Let's give it a whirl.
Problem 1) Using timeit
We will begin our experience with profilers by using the time and timeit commands. time can be run on any size of program, but it returns coarse level time information on how long something took to run overall.
There are a lot of small optimizations that can add up to a lot of time in real-world software. Let's look at a few of the non-obvious ones.
Problem 1a
What is the best way to join a bunch of strings into a larger string? There are several ways of doing this, but some are clearly superior to others. Let's use timeit to test things out.
Below, in each of the cells after the string_list is defined, put a new code snippet using the following three methods for building a string
Step2: Interesting! So it appears that the join method was the fastest by a factor of four or so. Good to keep that in mind for future use of strings!
Problem 1b
What about building big lists or list-like structures (like numpy arrays)? We now know how to construct lists in a variety of ways, so let's see which is fastest. Make a list of ascending perfect squares (i.e. 1, 4, 9, ...) for the first 1 million integers. Use these methods
Step6: Whoa! We were able to see a >100x efficiency increase by just switching these methods slightly! Numpy arrays are awesome, but I'm sort of surprised that the lambda function won compared to native numpy.
Problem 2) Deeper profiling with cProfile and line_profiler
Problem 2a
OK, so what about larger program? Here is a sorting algorithm that I wrote, which may possess some inefficiencies. But it is hard to know which bugs are causing the biggest problems (some actually aren't that big of a deal in the long term). Let's see if we can speed it up. First, take this code and copy it into a file called sort.py. Read through the code to make sure you understand it. Then, run it with the time command, and write down the total time it took to run.
Step17: Problem 2b
OK, now try running the cProfile module with it in order to produce some profiling statistics. You can do this by running | Python Code:
import random
import numpy as np
from matplotlib import pyplot as plt
Explanation: Profiling and Optimizing
By C Hummels (Caltech)
End of explanation
string_list = ['the ', 'quick ', 'brown ', 'fox ', 'jumped ', 'over ', 'the ', 'lazy ', 'dog']
%%timeit
output = ""
# complete
%%timeit
# complete
%%timeit
output = ""
# complete
Explanation: It can be hard to guess which code is going to operate faster just by looking at it because the interactions between software and computers can be extremely complex. The best way to optimize code is through using profilers to identify bottlenecks in your code and then attempt to address these problems through optimization. Let's give it a whirl.
Problem 1) Using timeit
We will begin our experience with profilers by using the time and timeit commands. time can be run on any size of program, but it returns coarse level time information on how long something took to run overall.
There are a lot of small optimizations that can add up to a lot of time in real-world software. Let's look at a few of the non-obvious ones.
Problem 1a
What is the best way to join a bunch of strings into a larger string? There are several ways of doing this, but some are clearly superior to others. Let's use timeit to test things out.
Below, in each of the cells after the string_list is defined, put a new code snippet using the following three methods for building a string:
--Use the builtin + operator to add strings together in an iterative way
--Use the join method, as in "".join(list).
--Iteratively add the strings from the list together using "%s %s" string composition.
Guess which method you think will be fastest? Now test it out and see if you're right!
End of explanation
%%timeit
output = []
# complete
%%timeit
# complete
%%timeit
# complete
%%timeit
# complete
%%timeit
map(lambda x:# complete
Explanation: Interesting! So it appears that the join method was the fastest by a factor of four or so. Good to keep that in mind for future use of strings!
Problem 1b
What about building big lists or list-like structures (like numpy arrays)? We now know how to construct lists in a variety of ways, so let's see which is fastest. Make a list of ascending perfect squares (i.e. 1, 4, 9, ...) for the first 1 million integers. Use these methods:
--Iteratively appending x**2 values on to an empty list
--A for loop with the built in python range command
--A for loop with the numpy arange command
--Use the numpy arange command directly, and then take the square of it
--Use map to map a lambda squaring function to a numpy array constructed with numpy arange
Guess which method you think will be fastest? Now test it out and see if you're right!
End of explanation
# Sort version1
import random
def create_random_list(n_elements):
Create a list made up of random elements in random order
random_list = []
for i in range(n_elements):
random_list.append(random.random())
return random_list
def find_minimum_index(random_list):
Find the index of the minimum value in the list
# current minimum
min_value = 1
i = 0
# Find minimum in list
for element in random_list:
if element < min_value:
min_value = element
# Find value that matches minimum
for element in random_list:
if element == min_value:
return i
i += 1
def sort_list(random_list):
Sort a list into ascending order
output_list = []
for _ in range(len(random_list)):
i = find_minimum_index(random_list)
minimum = random_list[i]
output_list.append(minimum)
del random_list[i]
return output_list
if __name__ == '__main__':
l = create_random_list(10000)
o = sort_list(l)
Explanation: Whoa! We were able to see a >100x efficiency increase by just switching these methods slightly! Numpy arrays are awesome, but I'm sort of surprised that the lambda function won compared to native numpy.
Problem 2) Deeper profiling with cProfile and line_profiler
Problem 2a
OK, so what about a larger program? Here is a sorting algorithm that I wrote, which may possess some inefficiencies. But it is hard to know which bugs are causing the biggest problems (some actually aren't that big of a deal in the long term). Let's see if we can speed it up. First, take this code and copy it into a file called sort.py. Read through the code to make sure you understand it. Then, run it with the time command, and write down the total time it took to run.
End of explanation
from matplotlib import pyplot as plt
import numpy as np
import random
class Galaxy():
Galaxy class for simply representing a galaxy.
def __init__(self, total_mass, cold_gas_mass, stellar_mass, age=0):
self.total_mass = total_mass
self.cold_gas_mass = cold_gas_mass
self.stellar_mass = stellar_mass
self.age = age
self.SFR = 0
self.color = 'red'
def __repr__(self):
return "Galaxy (m_total = %.1g; m_cold = %.1g; m_stars = %.1g; age = %.1g; SFR = %0.2f)" % \
(self.total_mass, self.cold_gas_mass, self.stellar_mass, self.age, self.SFR)
class EvolvingGalaxy(Galaxy):
Galaxy class for representing a galaxy that can evolve over time.
def current_state(self):
Return a tuple of the galaxy's total_mass, cold_gas_mass, stellar_mass, age, and SFR
return (self.total_mass, self.cold_gas_mass, self.stellar_mass, self.age, self.SFR)
def calculate_star_formation_rate(self):
Calculate the star formation rate by taking a random number between 0 and 1
normalized by the galaxy total mass / 1e12;
Also updates the galaxy's color to blue if SFR > 0.01, otherwise color = red
self.SFR = random.random() * (self.total_mass / 1e12)
if self.SFR > 0.01:
self.color = 'blue'
else:
self.color = 'red'
def accrete_gas_from_IGM(self, time):
Allow the galaxy to accrete cold gas from the IGM at a variable rate normalized to
the galaxy's mass
cold_gas_accreted = random.random() * 0.1 * time * (self.total_mass / 1e12)
self.cold_gas_mass += cold_gas_accreted
self.total_mass += cold_gas_accreted
def form_stars(self, time):
Form stars according to the current star formation rate and time available
If unable cold gas, then shut off star formation
if self.cold_gas_mass > self.SFR * time:
self.cold_gas_mass -= self.SFR * time
self.stellar_mass += self.SFR * time
else:
self.SFR = 0
self.color = 'red'
def evolve(self, time):
Evolve this galaxy forward for a period time
if random.random() < 0.01:
self.calculate_star_formation_rate()
self.accrete_gas_from_IGM(time)
self.form_stars(time)
self.age += time
class MovingGalaxy(EvolvingGalaxy):
This galaxy can move over time in the x,y plane
def __init__(self, total_mass, cold_gas_mass, stellar_mass, x_position, y_position, x_velocity, y_velocity, idnum, age=0):
# Replace self with super to activate the superclass's methods
super().__init__(total_mass, cold_gas_mass, stellar_mass)
self.x_position = x_position
self.y_position = y_position
self.x_velocity = x_velocity
self.y_velocity = y_velocity
self.idnum = idnum
def __repr__(self):
return "Galaxy %i (x = %.0f; y = %.0f)" % (self.idnum, self.x_position, self.y_position)
def move(self, time):
self.x_position += self.x_velocity * time
self.y_position += self.y_velocity * time
def calculate_momentum(self):
return (self.total_mass * self.x_velocity, self.total_mass * self.y_velocity)
def evolve(self, time):
self.move(time)
super().evolve(time)
def distance(galaxy1, galaxy2):
x_diff = galaxy1.x_position - galaxy2.x_position
y_diff = galaxy1.y_position - galaxy2.y_position
return (x_diff**2 + y_diff**2)**0.5
class Universe():
def __init__(self):
self.xrange = (0,100)
self.yrange = (0,100)
self.galaxies = []
self.added_galaxies = []
self.removed_galaxies = []
self.time = 0
pass
def __repr__(self):
out = 'Universe: t=%.2g\n' % self.time
for galaxy in self.galaxies:
out = "%s%s\n" % (out, galaxy)
return out
def add_galaxy(self, galaxy=None):
if galaxy is None:
stellar_mass = 10**(4*random.random()) * 1e6
cold_gas_mass = 10**(4*random.random()) * 1e6
total_mass = (cold_gas_mass + stellar_mass)*1e2
galaxy = MovingGalaxy(total_mass,
cold_gas_mass,
stellar_mass,
x_position=random.random()*100,
y_position=random.random()*100,
x_velocity=random.uniform(-1,1)*1e-7,
y_velocity=random.uniform(-1,1)*1e-7,
idnum=len(self.galaxies))
self.galaxies.append(galaxy)
def remove_galaxy(self, galaxy):
if galaxy in self.galaxies:
del self.galaxies[self.galaxies.index(galaxy)]
def evolve(self, time):
for galaxy in self.galaxies:
galaxy.evolve(time)
galaxy.x_position %= 100
galaxy.y_position %= 100
self.check_for_mergers()
for galaxy in self.removed_galaxies:
self.remove_galaxy(galaxy)
for galaxy in self.added_galaxies:
self.add_galaxy(galaxy)
self.removed_galaxies = []
self.added_galaxies = []
self.time += time
def merge_galaxies(self, galaxy1, galaxy2):
print('Merging:\n%s\n%s' % (galaxy1, galaxy2))
x_mom1, y_mom1 = galaxy1.calculate_momentum()
x_mom2, y_mom2 = galaxy2.calculate_momentum()
new_total_mass = galaxy1.total_mass + galaxy2.total_mass
new_galaxy = MovingGalaxy(total_mass = new_total_mass,
cold_gas_mass = galaxy1.cold_gas_mass + galaxy2.cold_gas_mass,
stellar_mass = galaxy1.stellar_mass + galaxy2.stellar_mass,
x_position = galaxy1.x_position,
y_position = galaxy1.y_position,
x_velocity = (x_mom1 + x_mom2) / new_total_mass,
y_velocity = (y_mom1 + y_mom2) / new_total_mass,
idnum = galaxy1.idnum)
self.added_galaxies.append(new_galaxy)
self.removed_galaxies.append(galaxy1)
self.removed_galaxies.append(galaxy2)
def check_for_mergers(self):
for i, galaxy1 in enumerate(self.galaxies):
for j, galaxy2 in enumerate(self.galaxies[i+1:]):
if distance(galaxy1, galaxy2) <= 2:
self.merge_galaxies(galaxy1, galaxy2)
def plot_state(self, frame_id):
plt.clf()
x = [galaxy.x_position for galaxy in self.galaxies]
y = [galaxy.y_position for galaxy in self.galaxies]
color = [galaxy.color for galaxy in self.galaxies]
size = [galaxy.total_mass / 1e9 for galaxy in self.galaxies]
plt.scatter(x,y, color=color, s=size)
plt.xlim(uni.xrange)
plt.ylim(uni.yrange)
plt.savefig('frame%04i.png' % frame_id)
if __name__ == '__main__':
uni = Universe()
n_timesteps = 2e2
n_galaxies = 25
for i in range(n_galaxies):
uni.add_galaxy()
for i in range(int(n_timesteps)):
uni.evolve(2e9/n_timesteps)
uni.plot_state(i)
Explanation: Problem 2b
OK, now try running the cProfile module with it in order to produce some profiling statistics. You can do this by running:
python -m cProfile -o sort.prof sort.py
This will produce an output profile file called sort.prof. You can do a variety of things with sort.prof, but you'll need a few programs to do this. First, install pyprof2html with: pip install pyprof2html. Then, try:
pyprof2html sort.prof
This will produce a html directory, and you can just open up the enclosed index.html file to bring it to your browser. You can see function by function, what is taking the most time! You can click on column headers to change which sorting occurs.
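If you would rather inspect the same file without leaving Python, the standard-library pstats module can read it directly; a minimal sketch (assuming sort.prof was produced by the cProfile command above):
import pstats
stats = pstats.Stats('sort.prof')
# sort by cumulative time and show the ten most expensive entries
stats.sort_stats('cumulative').print_stats(10)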
Problem 2c
But there are graphical ways of representing these data effectively. Download snakeviz, another means of viewing your profile data. You can do this with pip install snakeviz. And then open up the same file with snakeviz:
snakeviz sort.prof
This should bring up another graphical interface for analyzing the profile data. Switch to icicle mode, and explore the information a bit. Try to figure out where the "hot" sections of the code are. Namely, what is the most expensive function that is running in terms of time?
Problem 2d
OK, so if that's the most expensive, we better speed it up. We can investigate line-by-line how slow/fast things are, but we need another package for that called line_profiler. Go ahead and install this with pip install line_profiler.
Go back to the source code file, and add a @profile line directly above the slow function. line_profiler automatically installed a file called kernprof to your $PYTHONPATH, which is used with the following format at the command line:
kernprof.py -v -l your_script your_script_args
Start up kernprof and we'll look at the slow function in our sort program! See if you can find where the slowdown is, based on the amount of time spent on a particular line. Can you fix this line to not be so inefficient?
Hint: Remember the ways we discussed for optimizing code: In-place operations, string concatenations, vectorizing loops, list comprehensions, range vs arange, lambda functions, etc.
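For reference, one possible end point (not the only valid answer) is to let builtins do the scanning instead of the hand-written double loop; a hedged sketch of what find_minimum_index could collapse to:
def find_minimum_index(random_list):
    # min() makes a single pass over the list; index() then locates that value
    return random_list.index(min(random_list))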
Problem 2e
Great! Now repeat these steps to improve your code:
1) Run code with cProfile
2) Record total time it took to run in this iteration.
3) Load up the profiler information in snakeviz or pyprof2html
4) Look for "hot" functions
5) Run kernprof with line_profiler to identify individual lines that may be slow
6) Make a modification to the code trying to address the problem
7) Go back to (1) until you're satisfied.
You should be able to iterate on this until not one of the native functions is in the top 20 list of hot functions, the others being associated with loading numpy and such. If this is the case, there is more overhead being spent on loading the data than on your actual code--try increasing the number of elements in the sorting array.
Problem 2f
Here is a good test. Make a new code file where you swap out all of the sorting information and just run python's native list.sort() function. Profile this, look at total time spent running, and see how it compares with our version. Note any differences? What about if you use the np.sort() function?
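If you want a quick numeric comparison without juggling separate files, timeit can drive both sorts on the same data; a sketch, assuming sort.py (and its create_random_list) is importable from the working directory:
import timeit
setup = 'from sort import create_random_list; data = create_random_list(10000)'
print(timeit.timeit('sorted_copy = list(data); sorted_copy.sort()', setup=setup, number=10))
print(timeit.timeit('np.sort(data)', setup='import numpy as np; ' + setup, number=10))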
Problem 2g
Look at the memory consumption of your optimized code versus the real list.sort() and numpy.sort() functions. You can do this by using the memory_profiler. You'll need to download it first with:
pip install memory_profiler
Now, you can look at the line by line memory consumption of a function, just like you did with line_profiler and kernprof. Again, you have to put the @profile decorator just before the function you want to profile, but then you run:
python -m memory_profiler program.py
Run this on your optimized code, and then on the true python list.sort() and the numpy.sort() and see who takes up the most memory. Why do you think that is?
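A possible shape for that comparison file (the function names here are just for illustration):
from memory_profiler import profile
import numpy as np
from sort import create_random_list
@profile
def builtin_sort(data):
    copy = list(data)
    copy.sort()
    return copy
@profile
def numpy_sort(data):
    return np.sort(data)
if __name__ == '__main__':
    data = create_random_list(10000)
    builtin_sort(data)
    numpy_sort(data)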
Problem 3) Profiling real code
Problem 3a
Below I have included the Moving Galaxy and Universe code, the challenge problem from Tuesday's session. First, glance over it, to make sure you know what it's doing for the most part. Then profile it and optimize it using the algorithm described above. What is the slowest general part of the runtime?
Hint: If you comment that out, do things speed up?
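One quantitative way to answer this, rather than commenting pieces out, is to wrap the main loop in a profiler; a sketch using the names from the script below (uni, n_timesteps):
import cProfile
pr = cProfile.Profile()
pr.enable()
for i in range(int(n_timesteps)):
    uni.evolve(2e9/n_timesteps)
    uni.plot_state(i)
pr.disable()
pr.print_stats(sort='cumulative')  # see which of the script's methods dominate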
End of explanation |
9,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build your own NER Tagger
Named Entity Recognition (NER) , also known as entity chunking/extraction , is a popular technique used in information extraction to identify and segment the named entities and classify or categorize them under various predefined classes.
There are various off the shelf solutions which offer capabilites to perform named entity extraction (some of which we discussed in the previous units). Yet there are times when the requirements are beyond the capabilities of off-the-shelf classifiers.
In this notebook, we will go through an exercise to build our own NER using Conditional Random Fields.
We would be utilizing sklearn_crfsuite to develop our NER.
Load Dataset
Named Entity Recognition is a sequence modeling problem at it's core. It is more related to classification class of problems where in we need a labeled dataset to train a classifier.
There are various labeled datasets for NER class of problems. We would be utilizing a pre-processed version of GMB(Groningen Meaning Bank) corpus for this notebook. The preprocessed version is availble at the following link
Step1: We have 47959 sentences that contain 35178 unique words.
These sentences have a total of 42 unique POS tags and 17 unique NER tags in total.
Tag Distribution
The GMB dataset utilizes IOB tagging or Inside, Outside Beginning. IOB is a common tagging format for tagging tokens which we have discussed earlier. To refresh your memory
Step2: Conditional Random Fields
As mentioned above, NER belongs to sequence modeling class of problems. There are different algorithms to tackle sequence modeling, CRF or Conditional Random Fields are one such example. CRFs are proven to perform extremely well on NER and related domains. In this notebook, we will attempt at developing our own NER based on CRFs.
Question
Step3: Prepare Train and Test Datasets
Step4: Building Models with sklearn-crfsuite
sklearn-crfsuite is a thin CRFsuite (python-crfsuite) wrapper which provides scikit-learn-compatible sklearn_crfsuite.CRF estimator
Step5: Train the model!
Train the model using the default configurations mentioned in the sklearn-crfsuite API docs
algorithm
Step6: Use the following to load our pre-trained model if training above takes a lot of time
Step7: Model Evaluation
Let's evaluate our model performance for NER Tagging on the test data now!
Try playing around with the following cells and observe the overall model performance.
We use standard classification metrics like precision, recall and f1-score
Step9: We have intentially left out the Others tag to understand the performance of model on the remaining tags. The above evaluation statistics showcase a model which seems to have learnt the transitions quite well giving us an overall F1-score of 85%!
We can achieve even better results by fine tuning the feature engineering step along with hyper-parameter tuning.
End-to-End NER Tagger with trained NER Model
There is no fun (or value!) if we cannot use our model to tag new sentences in the future assuming we would want to put this model in production. Let's try and build an end-to-end workflow to perform NER Tagging on our sample document. First we perform NER tagging with SpaCy to remind you how it looks like.
Prepare Sample Document
Step10: NER Tagging with SpaCy
Step11: Pipeline Step 1
Tokenize Text
POS Tagging
Step12: Pipeline Step 2
Extract Features from the POS tagged text document
Hint
Step13: Pipeline Step 3
Use the CRF Model crf to predict on the features
Step14: Pipeline Step 4
Combine text tokens with NER Tags
Retrieve relevant named entities from NER Tags | Python Code:
import pandas as pd
df = pd.read_csv('ner_dataset.csv.gz', compression='gzip', encoding='ISO-8859-1')
df.info()
df.T
df = df.fillna(method='ffill')
df.info()
df.T
df['Sentence #'].nunique(), df.Word.nunique(), df.POS.nunique(), df.Tag.nunique()
Explanation: Build your own NER Tagger
Named Entity Recognition (NER) , also known as entity chunking/extraction , is a popular technique used in information extraction to identify and segment the named entities and classify or categorize them under various predefined classes.
There are various off-the-shelf solutions which offer capabilities to perform named entity extraction (some of which we discussed in the previous units). Yet there are times when the requirements are beyond the capabilities of off-the-shelf classifiers.
In this notebook, we will go through an exercise to build our own NER using Conditional Random Fields.
We would be utilizing sklearn_crfsuite to develop our NER.
Load Dataset
Named Entity Recognition is a sequence modeling problem at it's core. It is more related to classification class of problems where in we need a labeled dataset to train a classifier.
There are various labeled datasets for NER class of problems. We would be utilizing a pre-processed version of GMB(Groningen Meaning Bank) corpus for this notebook. The preprocessed version is availble at the following link : kaggle/ner
We have provided the dataset in the code repository itself using some intelligent compression and you can access it directly from pandas as follows.
End of explanation
df.Tag.value_counts()
Explanation: We have 47959 sentences that contain 35178 unique words.
These sentences have a total of 42 unique POS tags and 17 unique NER tags in total.
Tag Distribution
The GMB dataset utilizes IOB tagging or Inside, Outside Beginning. IOB is a common tagging format for tagging tokens which we have discussed earlier. To refresh your memory:
I- prefix before a tag indicates that the tag is inside a chunk.
B- prefix before a tag indicates that the tag is the beginning of a chunk.
O- tag indicates that a token belongs to no chunk (outside).
The tags in this dataset are explained as follows:
geo = Geographical Entity
org = Organization
per = Person
gpe = Geopolitical Entity
tim = Time indicator
art = Artifact
eve = Event
nat = Natural Phenomenon
Anything outside these classes is termed as other, denoted as O.
The following output shows the unbalanced distribution of different tags in the dataset
End of explanation
def word2features(sent, i):
word = sent[i][0]
postag = sent[i][1]
features = {
'bias': 1.0,
'word.lower()': word.lower(),
'word[-3:]': word[-3:],
'word[-2:]': word[-2:],
'word.isupper()': word.isupper(),
'word.istitle()': word.istitle(),
'word.isdigit()': word.isdigit(),
'postag': postag,
'postag[:2]': postag[:2],
}
if i > 0:
word1 = sent[i-1][0]
postag1 = sent[i-1][1]
features.update({
'-1:word.lower()': word1.lower(),
'-1:word.istitle()': word1.istitle(),
'-1:word.isupper()': word1.isupper(),
'-1:postag': postag1,
'-1:postag[:2]': postag1[:2],
})
else:
features['BOS'] = True
if i < len(sent)-1:
word1 = sent[i+1][0]
postag1 = sent[i+1][1]
features.update({
'+1:word.lower()': word1.lower(),
'+1:word.istitle()': word1.istitle(),
'+1:word.isupper()': word1.isupper(),
'+1:postag': postag1,
'+1:postag[:2]': postag1[:2],
})
else:
features['EOS'] = True
return features
def sent2features(sent):
return [word2features(sent, i) for i in range(len(sent))]
def sent2labels(sent):
return [label for token, postag, label in sent]
agg_func = lambda s: [(w, p, t) for w, p, t in zip(s['Word'].values.tolist(),
s['POS'].values.tolist(),
s['Tag'].values.tolist())]
grouped_df = df.groupby('Sentence #').apply(agg_func)
print(grouped_df[grouped_df.index == 'Sentence: 1'].values)
grouped_df.shape
sentences = [s for s in grouped_df]
sentences[0]
sent2features(sentences[0][5:7])
sent2labels(sentences[0][5:7])
Explanation: Conditional Random Fields
As mentioned above, NER belongs to sequence modeling class of problems. There are different algorithms to tackle sequence modeling, CRF or Conditional Random Fields are one such example. CRFs are proven to perform extremely well on NER and related domains. In this notebook, we will attempt at developing our own NER based on CRFs.
Question: What is a CRF and how does it work?
Wikipedia : CRF is an undirected graphical model whose nodes can be divided into exactly two disjoint sets $X$ and $Y$, the observed and output variables, respectively; the conditional distribution $p(Y|X)$ is then modeled.
For more details, checkout the paper Conditional Random Fields: Probabilistic Models
for Segmenting and Labeling Sequence Data
Prepare Data
CRF trains upon sequence of input data to learn transitions from one state (label) to another.
To enable such an algorithm, we need to define features which take into account different transitions.
In the function word2features() below, we transform each word into a feature dictionary depicting the following attributes or features:
lower case of word
suffix containing last 3 characters
suffix containing last 2 characters
flags to determine upper-case, title-case, numeric data and POS tag
We also attach attributes related to previous and next words or tags to determine beginning of sentence (BOS) or end of sentence (EOS)
End of explanation
from sklearn.model_selection import train_test_split
import numpy as np
X = np.array([sent2features(s) for s in sentences])
y = np.array([sent2labels(s) for s in sentences])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
X_train.shape, X_test.shape
Explanation: Prepare Train and Test Datasets
End of explanation
!pip install sklearn-crfsuite
Explanation: Building Models with sklearn-crfsuite
sklearn-crfsuite is a thin CRFsuite (python-crfsuite) wrapper which provides scikit-learn-compatible sklearn_crfsuite.CRF estimator: you can use e.g. scikit-learn model selection utilities (cross-validation, hyperparameter optimization) with it, or save/load CRF models using joblib.
End of explanation
import sklearn_crfsuite
crf = sklearn_crfsuite.CRF(algorithm='lbfgs',
c1=0.1,
c2=0.1,
max_iterations=100,
all_possible_transitions=True,
verbose=True)
crf.fit(X_train, y_train)
Explanation: Train the model!
Train the model using the default configurations mentioned in the sklearn-crfsuite API docs
algorithm: the training algorithm. We use L-BFGS for gradient descent for optimization and getting model parameters
c1: Coefficient for Lasso (L1) regularization
c2: Coefficient for Ridge (L2) regularization
all_possible_transitions: Specify whether CRFsuite generates transition features that do not even occur in the training data
Note: If the model is taking too long to train, you can load up the pre-trained model using the code after the training cells and use that for predictions.
End of explanation
from sklearn.externals import joblib
#joblib.dump(crf, 'ner_model.pkl')
crf = joblib.load('ner_model.pkl')
Explanation: Use the following to load our pre-trained model if training above takes a lot of time
End of explanation
y_pred = crf.predict(X_test)
print(y_pred[0])
print(y_test[0])
from sklearn_crfsuite import metrics as crf_metrics
labels = list(crf.classes_)
labels.remove('O')
print(crf_metrics.flat_classification_report(y_test, y_pred, labels=labels))
Explanation: Model Evaluation
Let's evaluate our model performance for NER Tagging on the test data now!
Try playing around with the following cells and observe the overall model performance.
We use standard classification metrics like precision, recall and f1-score
End of explanation
import re
text = Three more countries have joined an “international grand committee” of parliaments, adding to calls for
Facebook’s boss, Mark Zuckerberg, to give evidence on misinformation to the coalition. Brazil, Latvia and Singapore
bring the total to eight different parliaments across the world, with plans to send representatives to London on 27
November with the intention of hearing from Zuckerberg. Since the Cambridge Analytica scandal broke, the Facebook chief
has only appeared in front of two legislatures: the American Senate and House of Representatives, and the European parliament.
Facebook has consistently rebuffed attempts from others, including the UK and Canadian parliaments, to hear from Zuckerberg.
He added that an article in the New York Times on Thursday, in which the paper alleged a pattern of behaviour from Facebook
to “delay, deny and deflect” negative news stories, “raises further questions about how recent data breaches were allegedly
dealt with within Facebook.”
text = re.sub(r'\n', '', text)
text
Explanation: We have intentionally left out the Others tag to understand the performance of the model on the remaining tags. The above evaluation statistics showcase a model which seems to have learnt the transitions quite well, giving us an overall F1-score of 85%!
We can achieve even better results by fine tuning the feature engineering step along with hyper-parameter tuning.
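As a sketch of the hyper-parameter side (everything below reuses objects defined above; the search ranges are illustrative, not from the original notebook), c1 and c2 can be tuned with a randomized search:
from sklearn.metrics import make_scorer
from sklearn.model_selection import RandomizedSearchCV
import scipy.stats
f1_scorer = make_scorer(crf_metrics.flat_f1_score, average='weighted', labels=labels)
param_space = {'c1': scipy.stats.expon(scale=0.5), 'c2': scipy.stats.expon(scale=0.05)}
search = RandomizedSearchCV(sklearn_crfsuite.CRF(algorithm='lbfgs', max_iterations=50, all_possible_transitions=True),
                            param_space, cv=3, n_iter=10, scoring=f1_scorer)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)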
End-to-End NER Tagger with trained NER Model
There is no fun (or value!) if we cannot use our model to tag new sentences in the future assuming we would want to put this model in production. Let's try and build an end-to-end workflow to perform NER Tagging on our sample document. First we perform NER tagging with SpaCy to remind you how it looks like.
Prepare Sample Document
End of explanation
import spacy
from spacy import displacy
nlp = spacy.load('en')
text_nlp = nlp(text)
displacy.render(text_nlp, style='ent', jupyter=True)
Explanation: NER Tagging with SpaCy
End of explanation
import nltk
text_tokens = nltk.word_tokenize(text)
text_pos = nltk.pos_tag(text_tokens)
text_pos[:10]
Explanation: Pipeline Step 1
Tokenize Text
POS Tagging
End of explanation
features = [sent2features(text_pos)]
features[0][0]
Explanation: Pipeline Step 2
Extract Features from the POS tagged text document
Hint: Use sent2features
End of explanation
labels = crf.predict(features)
doc_labels = labels[0]
doc_labels[10:20]
Explanation: Pipeline Step 3
Use the CRF Model crf to predict on the features
End of explanation
text_ner = [(token, tag) for token, tag in zip(text_tokens, doc_labels)]
print(text_ner)
named_entities = []
temp_entity_name = ''
temp_named_entity = None
for term, tag in text_ner:
if tag != 'O':
temp_entity_name = ' '.join([temp_entity_name, term]).strip()
temp_named_entity = (temp_entity_name, tag)
else:
if temp_named_entity:
named_entities.append(temp_named_entity)
temp_entity_name = ''
temp_named_entity = None
import pandas as pd
pd.DataFrame(named_entities, columns=['Entity', 'Tag'])
Explanation: Pipeline Step 4
Combine text tokens with NER Tags
Retrieve relevant named entities from NER Tags
End of explanation |
9,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-2', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
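For illustration only, a completed author cell might look like the sketch below; the name and e-mail are invented placeholders, and the call assumes the Document Setup cell above has already been run so that DOC exists.
# Hypothetical placeholder details -- substitute the real document author(s):
DOC.set_author("Jane Doe", "[email protected]")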
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
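As a usage note, the same call is what eventually switches the document to "publish" once the description is complete; the value below only illustrates the 0/1 toggle described in the cell above.
# Example of the 'publish' setting -- keep 0 while the document is still a draft:
DOC.set_publication_status(1)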
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
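A sketch of answering a free-text STRING property: the description is simply passed to DOC.set_value. The wording below is a placeholder for illustration, not the actual HAMMOZ-consortium model overview.
# Placeholder text only -- replace with the real model overview:
DOC.set_value("Example: a tiled land surface scheme with coupled soil, snow and vegetation components.")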
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
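A sketch of recording several selections for a 0.N ENUM property, assuming each DOC.set_value call appends one entry from the Valid Choices list above; the particular fluxes shown are illustrative, not a statement about this model.
# Illustration only -- keep just the fluxes this model actually exchanges:
DOC.set_value("water")
DOC.set_value("energy")
DOC.set_value("carbon")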
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
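For a BOOLEAN property the value is passed unquoted, as in the sketch below; True is chosen purely as an example, not as the answer for this configuration.
# Example only -- use the value that applies to the model being documented:
DOC.set_value(True)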
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
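INTEGER properties take a plain number, as sketched below; the depth shown is an arbitrary placeholder rather than the documented value for this model.
# Placeholder depth in metres -- replace with the model's actual total soil depth:
DOC.set_value(10)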
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
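For a 1.1 ENUM property exactly one entry from the Valid Choices list above is recorded, as in the sketch below; the choice shown is purely illustrative.
# Illustration only -- select the single choice that matches the model:
DOC.set_value("Explicit diffusion")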
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
9,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q6
In this question, we'll dive more deeply into some of the review questions from the last flipped session.
A
In one of the review questions, we discussed creating nested for-loops in order to list out all possible combinations of (x, y) values for a given range of numbers. Somebody pointed out that this would indeed generate some duplicates--e.g., (0, 9) and (9, 0) could be considered exactly the same.
In this question, you'll replicate the double-for loop for a range of numbers, but you'll only generate a range of unique pairwise combinations. This is exactly what itertools.combinations(list, 2) does, but you'll implement it yourself. For example, the list [1, 2, 3] should return [ (1, 2), (1, 3), (2, 3) ] (the ordering of the internal tuple pairs doesn't matter).
Put your answer in the list_of_pairs variable.
Step1: B
This question is a straight-up rip-off from one of the lecture review questions. In the code below, you'll be given a matrix in list-of-lists format. You'll need to iterate through the matrix and replace each row list with a tuple, where the first element of the tuple is the row index, and the second element is the original row list.
For example, if the input matrix list-of-lists is
x = [ [1, 2, 3], [4, 5, 6] ]
then the output should be
y = [ (0, [1, 2, 3]), (1, [4, 5, 6]) ]
In other words, y[0] would give the tuple (0, [1, 2, 3]), and y[1] would give (1, [4, 5, 6]).
Put your answer in the tuple_matrix variable. | Python Code:
def my_pairs(x):
    list_of_pairs = []
    ### BEGIN SOLUTION
    # Nested loops over index pairs i < j, so each unordered pair appears exactly once.
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            list_of_pairs.append((x[i], x[j]))
    ### END SOLUTION
    return list_of_pairs
try:
    combinations
    itertools.combinations
except:
    assert True
else:
    assert False
from itertools import combinations as c
i1 = [1, 2, 3]
a1 = set(list(c(i1, 2)))
assert a1 == set(my_pairs(i1))
i2 = [8934, 123, 23, 1, 6, 8, 553, 8, 98.345, 354, 876796.5, 34]
a2 = set(list(c(i2, 2)))
assert a2 == set(my_pairs(i2))
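# A quick usage check of the sketch above (assuming the nested-loop solution in the stub):
print(my_pairs([1, 2, 3]))   # expected: [(1, 2), (1, 3), (2, 3)]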
Explanation: Q6
In this question, we'll dive more deeply into some of the review questions from the last flipped session.
A
In one of the review questions, we discussed creating nested for-loops in order to list out all possible combinations of (x, y) values for a given range of numbers. Somebody pointed out that this would indeed generate some duplicates--e.g., (0, 9) and (9, 0) could be considered exactly the same.
In this question, you'll replicate the double-for loop for a range of numbers, but you'll only generate a range of unique pairwise combinations. This is exactly what itertools.combinations(list, 2) does, but you'll implement it yourself. For example, the list [1, 2, 3] should return [ (1, 2), (1, 3), (2, 3) ] (the ordering of the internal tuple pairs doesn't matter).
Put your answer in the list_of_pairs variable.
End of explanation
def add_row_ids(matrix):
    tuple_matrix = []
    ### BEGIN SOLUTION
    # Pair each row with its index, keeping the row list itself unchanged.
    for row_index, row in enumerate(matrix):
        tuple_matrix.append((row_index, row))
    ### END SOLUTION
    return tuple_matrix
i1 = [ [1, 2, 3], [4, 5, 6] ]
a1 = set((1, (4, 5, 6)))
i, t = add_row_ids(i1)[1]
assert a1 == set((i, tuple(t)))
i2 = [ [1, 2], [2, 3], [4, 5], [6, 7], [8, 9] ]
a2 = set((4, (8, 9)))
i, t = add_row_ids(i2)[4]
assert a2 == set((i, tuple(t)))
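# A quick usage check of the sketch above (assuming the enumerate-based solution in the stub):
print(add_row_ids([[1, 2, 3], [4, 5, 6]]))   # expected: [(0, [1, 2, 3]), (1, [4, 5, 6])]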
Explanation: B
This question is a straight-up rip-off from one of the lecture review questions. In the code below, you'll be given a matrix in list-of-lists format. You'll need to iterate through the matrix and replace each row list with a tuple, where the first element of the tuple is the row index, and the second element is the original row list.
For example, if the input matrix list-of-lists is
x = [ [1, 2, 3], [4, 5, 6] ]
then the output should be
y = [ (0, [1, 2, 3]), (1, [4, 5, 6]) ]
In other words, y[0] would give the tuple (0, [1, 2, 3]), and y[1] would give (1, [4, 5, 6]).
Put your answer in the tuple_matrix variable.
End of explanation |
9,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Piecewise Exact Integration
The Dynamical System
We want to study a damped SDOF system, so characterized
Step1: The excitation is given by a force such that the static displacement is 5 mm, modulated by a sine in resonance with the dynamical system, i.e., $\omega=\omega_n$.
Step2: For such a system, we know exactly the response. The particular integral is
$$\xi(t)=-\frac{\cos\omega t}{2\zeta}$$
(why?) and imposing initial rest conditions the system response is
$$x(t) = \frac{\Delta_{st}}{2\zeta} ((\frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_Dt + \cos\omega_Dt)\exp(-\zeta\omega t) - \cos\omega t),\qquad \omega=\omega_n.
$$
Step3: Numerical integration
We define a function that, given the initial conditions and the load, returns the displacement and the velocity at the end of the step.
Step4: With those pieces in place, we can define a function that, for a given number of steps per period, computes the response on the interval $0 \le t \le 2.0$.
Step5: Let's compute the responses for different numbers of steps, and store them away too...
Step6: Eventually we can plot the numerical responses along with the exact response
Step7: But... there are only two numerical curves and I've plotted three of them.
Let's plot the difference between the exact response and the response computed at 16 samples per period...
Step8: As you can see, the max difference is about 0.3 mm, to be compared with a max response of almost 25 mm, hence an error in the order of 1.2% that in the previous plot led to the apparent disappearance of the NSTEP=16 curve.
Just for fun, how could you compute a smooth curve that interpolates the results of the numerical analysis? Easy if you know the answer... smooth16 is, technically speaking, a class instance (it has methods and data) but it is also a callable (a function of sorts)...
T=1.0 # Natural period of the oscillator
w=2*pi # circular frequency of the oscillator
m=1000.0 # oscillator's mass, in kg
k=m*w*w # oscillator stifness, in N/m
z=0.05 # damping ratio over critical
c=2*z*m*w # damping
wd=w*sqrt(1-z*z) # damped circular frequency
ratio=sqrt(1-z*z) # ratio damped/undamped frequencies
Explanation: Piecewise Exact Integration
The Dynamical System
We want to study a damped SDOF system, so characterized
End of explanation
D=0.005 # static displacement, 5mm
P=D*k # force amplitude
Explanation: The excitation is given by a force such that the static displacement is 5 mm, modulated by a sine in resonance with the dynamical system, i.e., $\omega=\omega_n$.
End of explanation
def exact(t):
    return D*((z*sin(wd*t)/ratio+cos(wd*t))*exp(-z*w*t)-cos(w*t))/(2*z)
t = np.linspace(0.0, 2.0, 1001)
plt.plot(t, exact(t)); plt.grid()
Explanation: For such a system, we know exactly the response. The particular integral is
$$\xi(t)=-\frac{\cos\omega t}{2\zeta}$$
(why?) and imposing initial rest conditions the system response is
$$x(t) = \frac{\Delta_{st}}{2\zeta} ((\frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_Dt + \cos\omega_Dt)\exp(-\zeta\omega t) - \cos\omega t),\qquad \omega=\omega_n.
$$
End of explanation
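A short justification of the particular integral quoted above (a sketch, using the definitions of this notebook): at resonance, $\omega=\omega_n$, try $x_p(t)=X\cos\omega t$ in $m\ddot x + c\dot x + kx = P\sin\omega t$. Since $k=m\omega^2$, the inertia and stiffness terms cancel and only the damping term survives,
$$-c\,\omega X\sin\omega t = P\sin\omega t \quad\Rightarrow\quad X=-\frac{P}{c\,\omega}=-\frac{P}{2\zeta k}=-\frac{\Delta_{st}}{2\zeta},$$
so in the adimensional form $\xi = x_p/\Delta_{st}$ one gets $\xi(t)=-\cos\omega t/(2\zeta)$.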
def step(x0, v0, p0, p1, h, cdh, sdh):
    # Exact response over one step, assuming the load varies linearly from p0 to p1.
    dst = p0/k           # static displacement at the start of the step
    ddst = (p1-p0)/k     # increment of static displacement over the step
    # A, B: constants of the homogeneous (damped free-vibration) part,
    # fixed by the initial conditions x0, v0 of the step.
    B = x0 - dst + ((2*z)/w)*(ddst/h)
    A = (v0 + z*w*B - ddst/h)/wd
    # State at the end of the step; cdh, sdh are exp(-z*w*h)*cos(wd*h) and exp(-z*w*h)*sin(wd*h).
    x1 = A*sdh + B*cdh + dst + ddst - ddst/h * 2*z/w
    v1 = A*(wd*cdh-z*w*sdh) - B*(z*w*cdh+wd*sdh) + ddst/h
    return x1, v1
Explanation: Numerical integration
We define a function that, given the initial conditions and the load, returns the displacement and the velocity at the end of the step.
End of explanation
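As a quick sanity check of the function above (a sketch, reusing the quantities and math functions already defined in this notebook; the _chk names are just illustrative), a single step of length h starting from rest with the load ramping from 0 to P*sin(w*h) can be evaluated directly:
h_chk = 1.0/16.0
x1_chk, v1_chk = step(0.0, 0.0, 0.0, P*sin(w*h_chk), h_chk,
                      cos(wd*h_chk)*exp(-z*w*h_chk), sin(wd*h_chk)*exp(-z*w*h_chk))
print(x1_chk, v1_chk)   # x1_chk should be close to exact(h_chk)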
def resp(nstep):
    T = np.linspace(0.0, 2.0, 2*nstep + 1)
    X = np.zeros(2*nstep + 1)
    h = 1./float(nstep)
    cdh = cos(wd*h)*exp(-z*w*h)
    sdh = sin(wd*h)*exp(-z*w*h)
    x1 = 0. ; v1 = 0. ; p1 = 0
    for i, t in enumerate(T):
        X[i] = x1
        x0 = x1 ; v0 = v1 ; p0 = p1 ; p1 = P*sin(w*(t+h))
        x1, v1 = step(x0, v0, p0, p1, h, cdh, sdh)
    return T, X
Explanation: With those pieces in place, we can define a function that, for a given number of steps per period, computes the response on the interval $0 \le t \le 2.0$.
End of explanation
t_x = {n:resp(n) for n in (4, 8, 16)}
Explanation: Let's compute the responses for different numbers of steps, and store them away too...
End of explanation
plt.plot(t, exact(t), label='Analytical Response', lw=1.3)
for n in sorted(t_x.keys()):   # loop variable renamed from "np" to avoid shadowing numpy
    plt.plot(*t_x[n], label='Npoints/period = %2d' % n)
plt.grid()
plt.legend(loc=3)
plt.xlabel('Time t/s')
plt.ylabel('Displacement x/m');
Explanation: Eventually we can plot the numerical responses along with the exact response
End of explanation
t16, x16 = t_x[16]
plt.plot(t16, exact(t16)-x16)
Explanation: But... there are only two numerical curves and I've plotted three of them.
Let's plot the difference between the exact response and the response computed at 16 samples per period...
End of explanation
from scipy.interpolate import InterpolatedUnivariateSpline as spline
smooth16 = spline(*t_x[16])
plt.plot(t, exact(t), label='Analytical')
plt.plot(t, smooth16(t), label='Numerical, 16 ppc, smoothed')
plt.legend(loc='best')
plt.grid()
Explanation: As you can see, the max difference is about 0.3 mm, to be compared with a max response of almost 25 mm, hence an error in the order of 1.2% that in the previous plot led to the apparent disappearance of the NSTEP=16 curve.
Just for fun, how could you compute a smooth curve that interpolates the results of the numerical analysis? Easy if you know the answer... smooth16 is, technically speaking, a class instance (it has methods and data) but it is also a callable (a function of sorts)...
End of explanation |
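Because smooth16 is callable, it can be evaluated anywhere in the interval, and the spline object also exposes methods such as derivative(); a small illustrative sketch (the names below are just examples):
print(smooth16(0.5), exact(0.5))   # spline value vs. analytical response at t = 0.5 s
vel16 = smooth16.derivative()      # a new spline approximating the velocity
print(vel16(0.5))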
9,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(DGFLUJOREDES)=
4.2 Definiciones generales de flujo en redes
```{admonition} Notas para contenedor de docker
Step1: ```{admonition} Comentarios
Los siguientes nombres son utilizados para referirse a algunas componentes de un arco
Step2: Entonces el nodo $0$ tiene un grado de $2$ y el nodo $6$ de $3$.
Redes no dirigidas
```{admonition} Definición
Se define una red no dirigida al igual que el caso dirigido considerando que los arcos son pares no ordenados de nodos distintos. En una red no dirigida podemos referirnos al arco que une al par de nodos $i$ y $j$ como $(i,j)$ o $(j,i)$ indistintamente.
```
```{admonition} Observación
Step3: La red anterior tiene arcos con valores numéricos asociados que ayudan a representar típicamente costos, capacidades y/o suministro o demanda.
```{admonition} Definición
En la terminología de flujo en redes, la capacidad de un arco es la cantidad máxima de flujo que puede circular en el mismo.
```
```{admonition} Comentarios
Los arcos en una red no dirigida en ocasiones se les nombra ligaduras.
Una red no dirigida puede convertirse en una dirigida sustituyendo sus ligaduras por dos arcos en direcciones opuestas entre el par de nodos involucrados. La interpretación en este caso puede ser un "flujo neto" entre ambos nodos. Por tanto, las definiciones también son aplicables a redes no dirigidas y las definiciones que se dan en esta nota asumen que se tiene una red dirigida. Ver por ejemplo to_directed para una función en Python que convierte a una red no dirigida en una dirigida añadiendo un par de arcos entre los nodos.
```
Adyacencia
```{admonition} Definición
Si existe un arco $(i,j) \in \mathcal{A}$ entonces el nodo $j$ es adyacente al nodo $i$.
```
Representación de redes
Step4: ```{margin}
El resultado de la función adjacency_matrix es equivalente al de to_scipy_sparse_matrix
```
Step5: Y se puede usar to_numpy_array para obtener una matriz de tamaño $7 \times 7$
Step6: Podemos leer la matriz de adyacencia por renglón o columna. En el renglón $i$ si nos encontramos con una entrada distinta de cero nos indica que en la red existe un arco dirigido que sale de $i$, esto es $(i,j) \in \mathcal{A}$. En la columna $j$ si nos encontramos con una entrada distinta de cero nos indica que en la red existe un arco dirigido que entra al nodo $j$, esto es $(i,j) \in \mathcal{A}$.
Ejemplo
Para la red del ejemplo 2 anterior la matriz $\mathcal{M}_A$ es
Step7: ```{margin}
Obsérvese que se utiliza la función todense pues se almacena el NumPy array en formato sparse.
```
Step8: ```{admonition} Observación
Step9: Una representación de la matriz de adyacencia usando diccionarios en Python es con la función to_dict_of_dicts
Step10: (MATINCIDNODOARCO)=
Representación de redes
Step11: ```{admonition} Comentarios
Si consideramos a los renglones de la matriz de incidencia como vectores y hacemos la suma de los mismos entonces el resultado es cero, esto es equivalente a escribir que la matriz de incidencia no tiene rank igual a $n$. Puede verificarse que tiene rank igual a $n-1$.
Sólo $2m$ de las $nm$ entradas son distintas de cero. Aún más, el número de $1$'s en un renglón equivale al outdegree del correspondiente nodo y el número de $-1$'s por renglón corresponde al indegree del nodo.
```
Step12: Subgrafo, subgrafo inducido, spanning subgraph
```{admonition} Definición
Una red $\mathcal{G}' = (\mathcal{N}', \mathcal{A}')$ es un subgrafo de $\mathcal{G}$ si $\mathcal{N}' \subseteq \mathcal{N}$ y $\mathcal{A}' \subseteq \mathcal{A}$.
```
Ejemplo
Para la red del ejemplo 1 anterior un subgrafo es
Step13: ```{admonition} Definición
$\mathcal{G}' = (\mathcal{N}', \mathcal{A}')$ es un subgrafo de $\mathcal{G}$ inducido por $\mathcal{N}'$ si $\mathcal{A}'$ contiene todos los arcos en $\mathcal{A}$ (ambos endpoints deben estar en $\mathcal{N}'$).
```
Ejemplo
Para la red del ejemplo 1 anterior un subgrafo inducido por el conjunto de nodos $\mathcal{N} = {0, 1, 2, 3, 4}$ es
Step14: ```{admonition} Definición
Una red $\mathcal{G}' = (\mathcal{N}', \mathcal{A}')$ es una spanning subgraph de $\mathcal{G}$ si $\mathcal{N}' = \mathcal{N}$ y $\mathcal{A}' \subseteq \mathcal{A}$. Esto es, spans todos los nodos de $\mathcal{G}$.
```
Ejemplo
Para la red del ejemplo 1 anterior una spanning subgraph es
Step15: Caminata, walk
```{admonition} Definición
Una caminata es una secuencia de nodos (o también arcos) conectados de una red no importando la dirección de sus arcos que los conectan. La secuencia indica que se parte desde un nodo y llega hasta otro.
```
Ejemplo
Para la red del ejemplo 1 anterior una caminata conformada por la secuencia de nodos $0-2-5-6-4-2$ es
Step16: ```{admonition} Observación
Step17: Ruta o camino, path
```{admonition} Definición
Un camino es una caminata en la que no se repiten los nodos y parte desde un nodo y llega hasta otro.
```
Ejemplo
Para la red del ejemplo 1 anterior un camino conformado por la secuencia de nodos $1-4-6-5$ es
Step18: ```{admonition} Observación
Step19: Y tenemos unas funciones para determinar si son directed paths o no
Step20: ```{admonition} Observación
Step21: Otro ciclo conformado por la secuencia de nodos $4-2-5-6$ es
Step22: Ciclo dirigido
```{admonition} Definición
Un ciclo dirigido contiene el arco $(\text{nodo fin},\text{nodo inicio})$ con $\text{nodo fin}$ el último nodo del directed path y $\text{nodo inicio}$ el primero del directed path.
```
Ejemplo
Para la red del ejemplo 1 anterior podemos volver a usar la función de find_cycle con el argumento original para encontrar un ciclo dirigido
Step23: Red conectada
```{admonition} Definición
Dos nodos están conectados si existe al menos un camino entre éstos. Una red es conectada si cada par de nodos son conectados.
```
Ejemplo
La red del ejemplo 1 anterior es conectada
Step24: Un ejemplo de un subgrafo no conectado que resulta de eliminar algunos arcos de la red anterior
Step25: ```{margin}
Ver to_undirected
```
Step26: Red fuertemente conectada
Una red es fuertemente conectada si existe al menos un directed path entre cada par de nodos.
Ejemplo
La red del ejemplo 1 anterior no es fuertemente conectada
Step27: Pero cambiando la dirección de dos arcos obtenemos una red que sí es fuertemente conectada
Step28: Red acíclica, árbol y bosque
```{margin}
Ver tree para tales definiciones en el contexto del paquete networkx.
```
```{margin}
Recuérdese que se consideran redes dirigidas en las definiciones. Para el caso de una red no dirigida el adjetivo acíclico se refiere a no tener ciclos.
```
```{admonition} Definiciones
Una red acíclica es aquella que no contiene ciclos dirigidos.
Una red conectada que no tiene ciclos se le nombra árbol.
Una red que no tiene ciclos se le nombra bosque.
```
Ejemplo
La red del ejemplo 1 no es acíclica
Step29: Y eliminando el arco $(1, 3)$ obtenemos un subgrafo que sí es acíclico
Step30: Ejemplo
Un ejemplo de árbol de la red del ejemplo 1 es
Step31: ```{margin}
Ver is_tree
```
Step32: Otro ejemplo de árbol en el caso de una red no dirigida
Step33: Un ejemplo de árbol que no inicia de el nodo '0'
Step34: Ejemplo
Un ejemplo de bosque para la red del ejemplo 1 es
Step35: ```{margin}
Ver is_forest
```
Step36: Obsérvese que si se elimina el arco $(0, 2)$ continúa siendo un bosque
Step37: ```{admonition} Comentarios
Un subgrafo de un árbol conectado es un subtree.
Un árbol de $n$ nodos tiene exactamente $n-1$ arcos.
Todo árbol con $n \geq 2$ nodos tiene al menos dos nodos hoja. Un nodo hoja es aquél que tiene un grado de uno.
En un árbol dos nodos están conectados por un único camino.
```
(ARBORYSPANTREE)=
Arborescencia y spanning tree
```{admonition} Definición
Una arborescencia es un árbol en el que se tiene un nodo designado como "raíz" y existe un directed path que parte de la raíz a cada nodo.
Dada una red $\mathcal{G} = (\mathcal{N}, \mathcal{A})$, un spanning tree es un árbol que incluye todos los nodos en $\mathcal{N}$ (es una spanning subgraph que además es un árbol).
```
Ejemplo
Para la red del ejemplo 1 se tiene una arborescencia y un spanning tree siguiente
Step38: ```{admonition} Comentarios
Aquellos arcos que pertenecen al spanning tree se les nombra tree arcs y los que no pertenecen nontree arcs.
En una arborescencia se tiene un indegree máximo de uno para todos los nodos (la raíz tiene un indegree de cero).
Una metodología para crear un spanning tree es la siguiente
Step39: Los nontree arcs son
Step40: Si eliminamos el arco $(1, 2)$ obtenemos un spanning tree
Step41: ```{admonition} Observación
Step42: Los tree arcs son
Step43: Los arcos de la red del ejemplo 1 en el corte fundamental son
Step44: ```{admonition} Observación
Step45: Cada valor de los arcos representan las distancias entre cada nodo.
```{margin}
Ver shortest_path
```
Step46: Red solución
Step47: ```{admonition} Ejercicio
Step48: Una solución factible es enviar $7$ tranvías al día
Step49: Red solución (se omiten los arcos con flujo igual a cero)
Step50: Entonces desde el nodo 'O' salen 14 tranvías
Step51: En la red anterior el arco $(D, E)$ tiene costo igual a $3$ y el arco $(E, D)$ tiene costo igual a $2$.
El objetivo es satisfacer la "demanda" de todos los nodos de acuerdo a las capacidades de cada uno de ellos al menor costo posible.
```{margin}
Ver min_cost_flow
```
Step52: Red solución (únicamente se muestran los arcos con flujo distinto de cero)
Step53: Con costo igual a | Python Code:
import matplotlib.pyplot as plt
import networkx as nx
nodes_pos_ex_1 = [[0.09090909090909091, 0.4545454545454546],
[0.36363636363636365, 0.7272727272727273],
[0.36363636363636365, 0.18181818181818182],
[0.6363636363636364, 0.7272727272727273],
[0.6363636363636364, 0.4545454545454546],
[0.6363636363636364, 0.18181818181818182],
[0.9090909090909092, 0.4545454545454546]]
nodes = range(len(nodes_pos_ex_1))
G_ex_1 = nx.DiGraph()
G_ex_1.add_nodes_from(nodes)
G_ex_1.add_edge(0,1)
G_ex_1.add_edge(0,2)
G_ex_1.add_edge(1,2)
G_ex_1.add_edge(1,3)
G_ex_1.add_edge(2,5)
G_ex_1.add_edge(3,4)
G_ex_1.add_edge(3,6)
G_ex_1.add_edge(4,1)
G_ex_1.add_edge(4,2)
G_ex_1.add_edge(4,6)
G_ex_1.add_edge(5,6)
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: (DGFLUJOREDES)=
4.2 Definiciones generales de flujo en redes
```{admonition} Notas para contenedor de docker:
Comando de docker para ejecución de la nota de forma local:
nota: cambiar <ruta a mi directorio> por la ruta de directorio que se desea mapear a /datos dentro del contenedor de docker y <versión imagen de docker> por la versión más actualizada que se presenta en la documentación.
docker run --rm -v <ruta a mi directorio>:/datos --name jupyterlab_optimizacion_2 -p 8888:8888 -d palmoreck/jupyterlab_optimizacion_2:<versión imagen de docker>
password para jupyterlab: qwerty
Detener el contenedor de docker:
docker stop jupyterlab_optimizacion_2
Documentación de la imagen de docker palmoreck/jupyterlab_optimizacion_2:<versión imagen de docker> en liga.
```
```{admonition} Al final de esta nota la comunidad lectora:
:class: tip
Tendrá una lista de definiciones de flujo en redes que servirán de referencia para las notas del capítulo.
Aprenderá cuáles son los problemas/modelos de flujo en redes estándar.
Tendrá una lista de aplicaciones en el área de investigación de operaciones.
```
```{sidebar} Un poco de historia ...
El área de investigación de operaciones (IDO) tuvo un gran desarrollo entre los años 40's y 50's principalmente para resolver la asignación de recursos disponibles en actividades militares (de hecho el nombre hace referencia a operaciones militares). Métodos como el símplex de Dantzig fueron desarrollados en esta época y establecieron triunfos importantes del lado de Estados Unidos y Gran Bretaña en batallas militares. Posterior a la segunda guerra mundial la complejidad de la división del trabajo y organización en empresas plantearon problemas en esencia iguales que los que se debían resolver en las guerras.
Esta área resuelve problemáticas como son las relacionadas con la conducción y coordinación de actividades en una organización. Ha sido aplicada de manera extensa en manufactura, transporte, construcción, telecomunicaciones, planeación financiera, cuidado de la salud, fuerzas armadas y servicios públicos, entre otros. Un nombre que también le han dado a la IDO es el de management science o ciencia de la administración.
```
Muchas aplicaciones en el área de investigación de operaciones ayudan a modelar y resolver situaciones en forma de una red de nodos conectados como las siguientes:
Diseño de una tubería en una zona para conectar las locaciones de suministro de cierto producto con puntos de descarga del mismo con el objetivo de minimizar el costo de construír tal tubería.
Diseño de un cableado de telefonía subterránea para establecer la comunicación entre cualquier par de domicilios de personas con el objetivo de minimizar la cantidad de kilómetros que se usarán de cable.
Determinar la ruta más corta entre dos ciudades en una red de transporte que involucra más de dos ciudades.
Determinar la capacidad máxima de cierta sustancia que puede soportar una tubería que conecta dos o más plantas de suministro.
Determinar la asignación de personas en tranvías para llegar a destinos mediante varias rutas en un parque de diversiones de modo que se maximice el número total de viajes que se pueden hacer al día por diferentes rutas, las cuales tienen cierto límite de viajes en cada ruta.
Determinar la agenda y planeación de actividades incluyendo fechas de inicio y término de un proyecto.
Determinar la distribución de flujo con costo mínimo de campos de petróleo hacia refinerías a través de una red de tuberías.
```{admonition} Observación
:class: tip
Aunque el nombre de red y el de grafo se distinguen en que en la red se definen capacidades y costos mientras que en el grafo no, en este capítulo se utilizan ambos nombres como sinónimos.
```
Problemas/Modelos de flujo en redes, network flow problems/models, estándar
Un buen número de problemas de optimización de redes son en realidad tipos especiales de problemas de programación lineal, por ejemplo el {ref}problema de transporte <EJPROBTRANSPORTE> en el que se resuelve cómo determinar la manera óptima de transportar bienes. Otro problema es el definido en un problema de asignación que incluye aplicaciones como la asignación de personas a tareas. Aunque los métodos clásicos del símplex o puntos interiores (ver {ref}introducción a los métodos de puntos interiores <INTMETPIN>) podrían utilizarse para resolver tales problemas, existen métodos especializados (como network simplex o el símplex dual) aplicados a redes que modelan tales problemas y tienen un mejor desempeño que los clásicos.
Es típico encontrar en la literatura el estudio de tres preguntas básicas relacionadas con problemas específicos:
Problema del camino o ruta más corta, shortest path. ¿Cuál es la mejor manera para recorrer una red y llegar de un punto a otro de modo que sea lo más barato posible?
Problema de flujo máximo, maximum flow. Si una red tiene capacidades en sus arcos ¿cómo podemos enviar la mayor cantidad de flujo posible entre dos puntos en la red manteniendo los límites de capacidades en sus arcos?
Problema del flujo con costo mínimo, minimum cost flow. Si se incurre en un costo por unidad de flujo en una red con capacidades en sus arcos y necesitamos enviar unidades de un bien que residen en uno o más puntos en la red hacia uno o más puntos distintos en la misma ¿cómo podemos enviar tales unidades al mínimo costo posible?
A los problemas anteriores los nombraremos como problemas/modelos de flujo en redes o network flow problems/models estándar.
```{admonition} Comentarios
La solución de los problemas estándar puede realizarse enumerando las posibles alternativas para cada problema (piénsese por ejemplo en el problema de encontrar la ruta más corta entre dos ciudades en una red de transporte), esto si bien resuelve las preguntas planteadas, no es práctico por la cantidad enorme de alternativas que resultarían. Por esto se requieren algoritmos cuyo tiempo de cómputo sea pequeño o al menos razonable.
Los problemas estándar han sido estudiados y descritos ampliamente en la literatura de IDO principalmente por ser modelos abstractos que han permitido el desarrollo de algoritmos para resolver problemas en aplicaciones que surgen en la práctica más complejos y que comparten similitudes con los estándar.
Algunos ejemplos de asociación de la terminología de flujo en redes con sus componentes más importantes está dada por la siguiente tabla:
| Nodos | Arcos | Flujo |
|:---:|:---:|:---:|
|Puertos | Caminos | Vehículos|
|Aeropuertos| Líneas aéreas | Aviones |
|Puntos de conmutación | Cables, canales | Mensajes |
|Estaciones de bombeo | Tuberías | Líquidos |
```
Definiciones generales para flujo en redes
```{margin}
Otros modelos en los que se usan las redes son las redes Bayesianas y las redes neuronales artificiales
```
La representación con redes las encontramos en muchas áreas como producción, distribución, planeación de proyectos, localización de instalaciones, administración de recursos y planeación financiera. Además, provee una visualización conceptual poderosa para mostrar las relaciones entre las componentes de sistemas científicos, sociales y económicos por mencionar algunos.
A continuación se presentan algunas definiciones utilizadas en la literatura sobre flujo en redes.
Redes dirigidas
```{admonition} Definición
Una red dirigida, digraph, $\mathcal{G} = (\mathcal{N}, \mathcal{A})$ consiste de un conjunto $\mathcal{N}$ de nodos (vértices, puntos) y un conjunto $\mathcal{A}$ de arcos (aristas, ramas, líneas), edges, cuyos elementos son pares ordenados para nodos distintos.
```
```{admonition} Comentario
Consideramos una red como aquella con más de un nodo.
```
Ejemplo
La siguiente red dirigida tiene como nodos $\mathcal{N} = {0, 1, 2, 3, 4, 5, 6}$ y arcos $\mathcal{A} = {(0,1), (0,2), (1,2), (1,3), (2,5), (3,4), (3,6), (4,1), (4,2), (4,6), (5,6) }$.
End of explanation
print(nx.degree(G_ex_1))
Explanation: ```{admonition} Comentarios
Los siguientes nombres son utilizados para referirse a algunas componentes de un arco:
El arco $(i,j)$ tiene dos endpoints: $i$ y $j$.
El arco $(i,j)$ tiene tail $i$ y head $j$.
El arco $(i,j)$ es incidente a los nodos $i,j$, sale (emana) del nodo $i$ y termina (entra) en el nodo $j$.
```
Grado de un nodo
```{admonition} Definición
El indegree de un nodo es el número arcos que entran al nodo y su outdegree es el número de arcos que salen del mismo. El grado de un nodo es la suma de su indegree y outdegree.
```
Para el grafo anterior se tiene:
End of explanation
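Como complemento, un esbozo mínimo (usando la API de networkx para redes dirigidas) que muestra por separado el indegree y el outdegree de cada nodo; la suma de ambos coincide con el grado reportado arriba:
print(G_ex_1.in_degree())    # número de arcos que entran a cada nodo
print(G_ex_1.out_degree())   # número de arcos que salen de cada nodo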
nodes_pos_ex_2 = [[0.09090909090909091, 0.5454545454545454],
[0.2727272727272727, 0.7272727272727273],
[0.2727272727272727, 0.2727272727272727],
[0.5454545454545454, 0.7272727272727273],
[0.5454545454545454, 0.2727272727272727],
[0.8181818181818182, 0.5454545454545454]]
nodes = range(len(nodes_pos_ex_2))
G_ex_2 = nx.Graph()
G_ex_2.add_nodes_from(nodes)
edge_labels = {(0,1): 4.7,
(0,2): 1.5,
(1,3): -1.3,
(1,4): 3.2,
(2,3): -0.8,
(2,4): 9.4,
(3,5): 1.2,
(4,5): -0.4
}
G_ex_2.add_edges_from(edge_labels)
for e in G_ex_2.edges():
    try:
        G_ex_2[e[0]][e[1]]["weight"] = edge_labels[e]
    except:
        G_ex_2[e[1]][e[0]]["weight"] = edge_labels[(e[1], e[0])]
plt.figure(figsize=(9,7))
nx.draw_networkx_edges(G_ex_2, pos=nodes_pos_ex_2,
alpha=0.3, min_source_margin=8,
min_target_margin=8)
nx.draw_networkx_edge_labels(G_ex_2, pos=nodes_pos_ex_2,
edge_labels = edge_labels,
font_size=15)
nx.draw_networkx_labels(G_ex_2, pos=nodes_pos_ex_2)
nx.draw_networkx_nodes(G_ex_2, pos=nodes_pos_ex_2, node_size=1000, alpha=0.6)
plt.axis("off")
plt.show()
Explanation: Entonces el nodo $0$ tiene un grado de $2$ y el nodo $6$ de $3$.
Redes no dirigidas
```{admonition} Definición
Se define una red no dirigida al igual que el caso dirigido considerando que los arcos son pares no ordenados de nodos distintos. En una red no dirigida podemos referirnos al arco que une al par de nodos $i$ y $j$ como $(i,j)$ o $(j,i)$ indistintamente.
```
```{admonition} Observación
:class: tip
En problemas de redes en los que se tiene un flujo:
La interpretación de un arco $(i,j)$ no dirigido se interpreta indicando que el flujo se permite en ambas direcciones, de $i$ a $j$ o $j$ a $i$. En el dirigido, el flujo sólo se permite en una dirección.
Si se permite que el flujo a través de un arco no dirigido ocurra en cualquier dirección, se supone que el flujo será sólo en una dirección, en la seleccionada, y no se tendrán flujos simultáneos en direcciones opuestas. Este último caso requiere usar un par de arcos dirigidos en direcciones opuestas y también el flujo real será el flujo neto, esto es, la diferencia de los flujos asignados en las dos direcciones. Por ejemplo, si se asigna un flujo de $10$ en una dirección y después un flujo de $4$ en la dirección opuesta, el efecto real es la cancelación de $4$ unidades de la asignación original, lo que reduce el flujo en la dirección original de $10$ a $6$.
```
Ejemplo
La siguiente red no dirigida tiene como nodos $\mathcal{N} = {0, 1, 2, 3, 4, 5}$ y arcos $\mathcal{A} = {(1,0), (0,2), (1,3), (4,1), (2, 3), (2, 4), (3, 5), (4, 5) }$:
```{margin}
La interpretación de la red presentada en este ejemplo en un problema con flujo, es que se permite el flujo en los arcos $(i,j)$ y $(j, i)$.
```
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: La red anterior tiene arcos con valores numéricos asociados que ayudan a representar típicamente costos, capacidades y/o suministro o demanda.
```{admonition} Definición
En la terminología de flujo en redes, la capacidad de un arco es la cantidad máxima de flujo que puede circular en el mismo.
```
```{admonition} Comentarios
Los arcos en una red no dirigida en ocasiones se les nombra ligaduras.
Una red no dirigida puede convertirse en una dirigida sustituyendo sus ligaduras por dos arcos en direcciones opuestas entre el par de nodos involucrados. La interpretación en este caso puede ser un "flujo neto" entre ambos nodos. Por tanto, las definiciones también son aplicables a redes no dirigidas y las definiciones que se dan en esta nota asumen que se tiene una red dirigida. Ver por ejemplo to_directed para una función en Python que convierte a una red no dirigida en una dirigida añadiendo un par de arcos entre los nodos.
```
Adyacencia
```{admonition} Definición
Si existe un arco $(i,j) \in \mathcal{A}$ entonces el nodo $j$ es adyacente al nodo $i$.
```
Representación de redes: matriz de adyacencia
Nos ayuda a representar las adyacencias entre nodos. Almacena la red como una matriz $\mathcal{M}_{A}$ de tamaño $n \times n$ con $n$ el número de nodos, tiene un renglón y columna por cada nodo. Su entrada $i,j$ es igual a $1$ si $(i,j) \in \mathcal{A}$ y $0$ en otro caso.
```{admonition} Comentario
Si tenemos $m$ arcos entonces $m$ elementos de $\mathcal{M}_{A}$ son distintos de cero. Si los arcos tienen costos o capacidades éstos se pueden almacenar en matrices del mismo tamaño siguiendo la misma representación anterior en la que se sustituye el $1$ por el costo o capacidad.
```
Ejemplo
Para la red del ejemplo 1 anterior se tiene la función adjacency_matrix para una representación sparse de $\mathcal{M_A}$:
End of explanation
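Por ejemplo, un esbozo breve con la API de networkx para listar los nodos adyacentes al nodo $1$ (arcos que salen de $1$) y los nodos desde los cuales se llega al nodo $2$:
print(list(G_ex_1.successors(1)))     # nodos j tales que (1, j) está en el conjunto de arcos
print(list(G_ex_1.predecessors(2)))   # nodos i tales que (i, 2) está en el conjunto de arcos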
print(nx.to_scipy_sparse_matrix(G_ex_1))
print(nx.adjacency_matrix(G_ex_1))
Explanation: ```{margin}
El resultado de la función adjacency_matrix es equivalente al de to_scipy_sparse_matrix
```
End of explanation
print(nx.to_numpy_array(G_ex_1))
Explanation: Y se puede usar to_numpy_array para obtener una matriz de tamaño $7 \times 7$:
End of explanation
plt.figure(figsize=(9,7))
nx.draw_networkx_edges(G_ex_2, pos = nodes_pos_ex_2, alpha = 0.3, min_source_margin=8, min_target_margin=8)
nx.draw_networkx_edge_labels(G_ex_2, pos = nodes_pos_ex_2, edge_labels = edge_labels,
font_size=15)
nx.draw_networkx_labels(G_ex_2, pos = nodes_pos_ex_2)
nx.draw_networkx_nodes(G_ex_2, pos = nodes_pos_ex_2, node_size=1000, alpha=0.6)
plt.axis("off")
plt.show()
Explanation: Podemos leer la matriz de adyacencia por renglón o columna. En el renglón $i$ si nos encontramos con una entrada distinta de cero nos indica que en la red existe un arco dirigido que sale de $i$, esto es $(i,j) \in \mathcal{A}$. En la columna $j$ si nos encontramos con una entrada distinta de cero nos indica que en la red existe un arco dirigido que entra al nodo $j$, esto es $(i,j) \in \mathcal{A}$.
Ejemplo
Para la red del ejemplo 2 anterior la matriz $\mathcal{M}_A$ es:
End of explanation
print(nx.adjacency_matrix(G_ex_2).todense())
Explanation: ```{margin}
Obsérvese que se utiliza la función todense pues se almacena el NumPy array en formato sparse.
```
End of explanation
print(nx.to_pandas_adjacency(G_ex_2))
Explanation: ```{admonition} Observación
:class: tip
Obsérvese que para una red no dirigida su matriz de adyacencia es simétrica.
```
Ejemplo: otras representaciones en Python
Ver converting-to-and-from-other-data-formats para otros formatos. Por ejemplo un DataFrame de Pandas:
End of explanation
import pprint
pprint.pprint(nx.to_dict_of_dicts(G_ex_2))
Explanation: Una representación de la matriz de adyacencia usando diccionarios en Python es con la función to_dict_of_dicts:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
print(-1*nx.incidence_matrix(G_ex_1, oriented=True).todense())
Explanation: (MATINCIDNODOARCO)=
Representación de redes: matriz de incidencia nodo-arco
Nos ayuda a representar los arcos de una red como una matriz $\mathcal{M}_{I}$ de tamaño $n \times m$ con $n$ el número de nodos y $m$ el número de arcos que contiene un renglón por cada nodo de la red y una columna por cada arco. La columna correspondiente al arco $(i,j)$ tiene el número $1$ en el renglón correspondiente al nodo $i$ y un $-1$ en el renglón correspondiente al nodo $j$.
```{margin}
Multiplicamos por $-1$ pues el resultado de la función incidence_matrix está volteado respecto a la definición de la matriz $\mathcal{M}_I$.
```
```{margin}
Obsérvese que se utiliza la función todense pues se almacena el NumPy array en formato sparse.
```
Ejemplo
Para la red del ejemplo 1 anterior la matriz $\mathcal{M}_I$ es:
End of explanation
import numpy as np
print(np.linalg.matrix_rank(nx.incidence_matrix(G_ex_1, oriented=True).todense()))
Explanation: ```{admonition} Comentarios
Si consideramos a los renglones de la matriz de incidencia como vectores y hacemos la suma de los mismos entonces el resultado es cero, esto es equivalente a escribir que la matriz de incidencia no tiene rank igual a $n$. Puede verificarse que tiene rank igual a $n-1$.
Sólo $2m$ de las $nm$ entradas son distintas de cero. Aún más, el número de $1$'s en un renglón equivale al outdegree del correspondiente nodo y el número de $-1$'s por renglón corresponde al indegree del nodo.
```
End of explanation
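Para verificar numéricamente los comentarios anteriores, un esbozo mínimo: la suma de los renglones de $\mathcal{M}_I$ es el vector cero y el número de $1$'s y $-1$'s por renglón coincide con el outdegree e indegree de cada nodo:
M_I = -1*nx.incidence_matrix(G_ex_1, oriented=True).toarray()
print(M_I.sum(axis=0))           # suma de los renglones: vector de ceros
print((M_I == 1).sum(axis=1))    # número de 1's por renglón (outdegree)
print((M_I == -1).sum(axis=1))   # número de -1's por renglón (indegree)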
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_subgraph_ex_1 = nx.subgraph(G_ex_1, [0, 1, 2, 3, 4]).copy()
G_subgraph_ex_1.remove_edges_from([(1,2), (4,1)])
nx.draw(G_subgraph_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: Subgrafo, subgrafo inducido, spanning subgraph
```{admonition} Definición
Una red $\mathcal{G}' = (\mathcal{N}', \mathcal{A}')$ es un subgrafo de $\mathcal{G}$ si $\mathcal{N}' \subseteq \mathcal{N}$ y $\mathcal{A}' \subseteq \mathcal{A}$.
```
Ejemplo
Para la red del ejemplo 1 anterior un subgrafo es:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_induced_subgraph_ex_1 = nx.subgraph(G_ex_1, [0, 1, 2, 3, 4])
nx.draw(G_induced_subgraph_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: ```{admonition} Definición
$\mathcal{G}' = (\mathcal{N}', \mathcal{A}')$ es un subgrafo de $\mathcal{G}$ inducido por $\mathcal{N}'$ si $\mathcal{A}'$ contiene todos los arcos en $\mathcal{A}$ (ambos endpoints deben estar en $\mathcal{N}'$).
```
Ejemplo
Para la red del ejemplo 1 anterior un subgrafo inducido por el conjunto de nodos $\mathcal{N} = {0, 1, 2, 3, 4}$ es:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_spanning_subgraph_ex_1 = G_ex_1.copy()
G_spanning_subgraph_ex_1.remove_edges_from([(1,2), (4,1)])
nx.draw(G_spanning_subgraph_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: ```{admonition} Definición
Una red $\mathcal{G}' = (\mathcal{N}', \mathcal{A}')$ es una spanning subgraph de $\mathcal{G}$ si $\mathcal{N}' = \mathcal{N}$ y $\mathcal{A}' \subseteq \mathcal{A}$. Esto es, spans todos los nodos de $\mathcal{G}$.
```
Ejemplo
Para la red del ejemplo 1 anterior una spanning subgraph es:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_walk_ex_1 =nx.subgraph(G_ex_1, [0, 2, 5, 6, 4]).copy()
nx.draw(G_walk_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: Caminata, walk
```{admonition} Definición
Una caminata es una secuencia de nodos (o también arcos) conectados de una red no importando la dirección de sus arcos que los conectan. La secuencia indica que se parte desde un nodo y llega hasta otro.
```
Ejemplo
Para la red del ejemplo 1 anterior una caminata conformada por la secuencia de nodos $0-2-5-6-4-2$ es:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_directed_walk_ex_1 =nx.subgraph(G_ex_1, [0, 1, 3, 4, 1, 2]).copy()
G_directed_walk_ex_1.remove_edges_from([(0,2), (4,2)])
nx.draw(G_directed_walk_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: ```{admonition} Observación
:class: tip
En una caminata se pueden repetir nodos.
```
Caminata dirigida, directed walk
```{admonition} Definición
Una caminata dirigida es una secuencia de nodos (o también arcos) de una red en la que sí importa la dirección de los arcos. La secuencia indica que se parte desde un nodo y llega hasta otro.
```
Ejemplo
Para la red del ejemplo 1 anterior una caminata dirigida compuesta por la secuencia de nodos $0-1-3-4-1-2$ es:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_path_ex_1 =nx.subgraph(G_ex_1, [1, 4, 6, 5]).copy()
nx.draw(G_path_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: Ruta o camino, path
```{admonition} Definición
Un camino es una caminata en la que no se repiten los nodos y parte desde un nodo y llega hasta otro.
```
Ejemplo
Para la red del ejemplo 1 anterior un camino conformado por la secuencia de nodos $1-4-6-5$ es:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_directed_path_ex_1 =nx.subgraph(G_ex_1, [0, 1, 2, 5, 6]).copy()
G_directed_path_ex_1.remove_edge(0, 2)
nx.draw(G_directed_path_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: ```{admonition} Observación
:class: tip
En los caminos se pueden distinguir aquellos arcos como forward arcs o backward arcs. Para el camino anterior el arco $(4, 6)$ es un forward arc y $(1, 4), (5, 6)$ son backward arcs.
```
Ruta dirigida, directed path
```{admonition} Definición
Una ruta dirigida es un camino dirigido que parte desde un nodo y llega hasta otro.
```
Ejemplo
Para la red del ejemplo 1 anterior un directed path de la secuencia de nodos $0-1-2-5-6$ es:
End of explanation
print(nx.classes.function.is_path(G_ex_1, [0, 1, 2, 5, 6]))
print(nx.classes.function.is_path(G_ex_1, [0, 1, 4]))
Explanation: Y tenemos unas funciones para determinar si son directed paths o no:
End of explanation
G_cycle_ex_1 = nx.find_cycle(G_ex_1, source=0, orientation="ignore")
G_cycle_ex_1
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_plot_cycle_ex_1 = nx.subgraph(G_ex_1, [1, 2, 3, 4, 5, 6]).copy()
G_plot_cycle_ex_1.remove_edges_from([(1, 3), (4, 6), (4, 2)])
nx.draw(G_plot_cycle_ex_1, pos=nodes_pos_ex_1,
with_labels= True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: ```{admonition} Observación
:class: tip
En una ruta dirigida todos los arcos son forward.
```
Ciclo
```{admonition} Definición
Un ciclo es un camino cerrado (comienza y termina en el mismo nodo).
```
Ejemplo
Para la red del ejemplo 1 anterior podemos usar la función find_cycle con el argumento ignore para encontrar uno de los ciclos:
End of explanation
G_cycle_2_ex_1 = nx.subgraph(G_ex_1, [4, 2, 5, 6]).copy()
nx.draw(G_cycle_2_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: Otro ciclo conformado por la secuencia de nodos $4-2-5-6$ es:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_directed_cycle_ex_1 = nx.find_cycle(G_ex_1, source=0, orientation="original")
G_directed_cycle_ex_1
G_plot_directed_cycle_ex_1 = nx.subgraph(G_ex_1, [1, 3, 4]).copy()
nx.draw(G_plot_directed_cycle_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: Ciclo dirigido
```{admonition} Definición
Un ciclo dirigido contiene el arco $(\text{nodo fin},\text{nodo inicio})$ con $\text{nodo fin}$ el último nodo del directed path y $\text{nodo inicio}$ el primero del directed path.
```
Ejemplo
Para la red del ejemplo 1 anterior podemos volver a usar la función de find_cycle con el argumento original para encontrar un ciclo dirigido:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
print(nx.is_connected(G_ex_1.to_undirected()))
Explanation: Red conectada
```{admonition} Definición
Dos nodos están conectados si existe al menos un camino entre éstos. Una red es conectada si cada par de nodos son conectados.
```
Ejemplo
La red del ejemplo 1 anterior es conectada:
End of explanation
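Para la versión dirigida, networkx ofrece una verificación equivalente sin convertir la red (conectividad débil, que ignora la dirección de los arcos); un esbozo breve:
print(nx.is_weakly_connected(G_ex_1))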
G_not_connected_ex_1 = G_ex_1.copy()
G_not_connected_ex_1.remove_edges_from([(0, 1), (1, 2),
(4, 2), (2,5)])
nx.draw(G_not_connected_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
Explanation: Un ejemplo de un subgrafo no conectado que resulta de eliminar algunos arcos de la red anterior:
End of explanation
print(nx.is_connected(G_not_connected_ex_1.to_undirected()))
Explanation: ```{margin}
Ver to_undirected
```
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
print(nx.is_strongly_connected(G_ex_1))
Explanation: Red fuertemente conectada
Una red es fuertemente conectada si existe al menos un directed path entre cada par de nodos.
Ejemplo
La red del ejemplo 1 anterior no es fuertemente conectada:
End of explanation
G_strongly_connected = G_ex_1.copy()
G_strongly_connected.remove_edges_from([(0, 2), (4, 6)])
G_strongly_connected.add_edges_from([(2, 0), (6, 4)])
nx.draw(G_strongly_connected, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
print(nx.is_strongly_connected(G_strongly_connected))
Explanation: Pero cambiando la dirección de dos arcos obtenemos una red que sí es fuertemente conectada:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
print(nx.is_directed_acyclic_graph(G_ex_1))
Explanation: Red acíclica, árbol y bosque
```{margin}
Ver tree para tales definiciones en el contexto del paquete networkx.
```
```{margin}
Recuérdese que se consideran redes dirigidas en las definiciones. Para el caso de una red no dirigida el adjetivo acíclico se refiere a no tener ciclos.
```
```{admonition} Definiciones
Una red acíclica es aquella que no contiene ciclos dirigidos.
Una red conectada que no tiene ciclos se le nombra árbol.
Una red que no tiene ciclos se le nombra bosque.
```
Ejemplo
La red del ejemplo 1 no es acíclica:
End of explanation
G_acyclic_ex_1 = G_ex_1.copy()
G_acyclic_ex_1.remove_edges_from([(1, 3)])
nx.draw(G_acyclic_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
print(nx.is_directed_acyclic_graph(G_acyclic_ex_1))
Explanation: Y eliminando el arco $(1, 3)$ obtenemos un subgrafo que sí es acíclico:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_tree_ex_1 = G_ex_1.copy()
G_tree_ex_1.remove_edges_from([(1, 3), (3, 6), (0, 2), (4, 2),
(2, 5)])
nx.draw(G_tree_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
Explanation: Ejemplo
Un ejemplo de árbol de la red del ejemplo 1 es:
End of explanation
print(nx.is_tree(G_tree_ex_1))
Explanation: ```{margin}
Ver is_tree
```
End of explanation
G_tree_2_ex_1 = nx.minimum_spanning_tree(G_ex_1.to_undirected())
nx.draw(G_tree_2_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
print(nx.is_tree(G_tree_2_ex_1))
Explanation: Otro ejemplo de árbol en el caso de una red no dirigida:
End of explanation
plt.figure(figsize=(5,5))
G_tree_3_ex_1 = G_tree_2_ex_1.subgraph([1, 2, 3, 4, 5, 6]).copy()
G_tree_3_ex_1.add_edge(5, 6)
nx.draw(G_tree_3_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
print(nx.is_tree(G_tree_3_ex_1))
Explanation: Un ejemplo de árbol que no inicia de el nodo '0':
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_forest_ex_1 = G_ex_1.copy()
G_forest_ex_1.remove_edges_from([(0, 1), (1, 2),
(4, 2), (2,5),
(1, 3), (3, 6)])
nx.draw(G_forest_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
Explanation: Ejemplo
Un ejemplo de bosque para la red del ejemplo 1 es:
End of explanation
print(nx.is_forest(G_forest_ex_1))
Explanation: ```{margin}
Ver is_forest
```
End of explanation
G_forest_2_ex_1 = G_ex_1.copy()
G_forest_2_ex_1.remove_edges_from([(0, 1), (1, 2),
(4, 2), (2,5),
(1, 3), (3, 6),
(0,2)])
nx.draw(G_forest_2_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
print(nx.is_forest(G_forest_2_ex_1))
Explanation: Obsérvese que si se elimina el arco $(0, 2)$ continúa siendo un bosque:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
G_arborescence_ex_1 = nx.algorithms.minimum_spanning_arborescence(G_ex_1)
nx.draw(G_arborescence_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
Explanation: ```{admonition} Comentarios
Un subgrafo de un árbol conectado es un subtree.
Un árbol de $n$ nodos tiene exactamente $n-1$ arcos.
Todo árbol con $n \geq 2$ nodos tiene al menos dos nodos hoja. Un nodo hoja es aquél que tiene un grado de uno.
En un árbol dos nodos están conectados por un único camino.
```
(ARBORYSPANTREE)=
Arborescencia y spanning tree
```{admonition} Definición
Una arborescencia es un árbol en el que se tiene un nodo designado como "raíz" y existe un directed path que parte de la raíz a cada nodo.
Dada una red $\mathcal{G} = (\mathcal{N}, \mathcal{A})$, un spanning tree es un árbol que incluye todos los nodos en $\mathcal{N}$ (es una spanning subgraph que además es un árbol).
```
Ejemplo
Para la red del ejemplo 1 se tiene una arborescencia y un spanning tree siguiente:
End of explanation
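Un esbozo mínimo de verificación con networkx: la red construida arriba es una arborescencia y, por ser un spanning tree, tiene exactamente $n-1$ arcos:
print(nx.is_arborescence(G_arborescence_ex_1))
print(G_arborescence_ex_1.number_of_edges() == G_arborescence_ex_1.number_of_nodes() - 1)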
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
nx.draw(G_tree_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
Explanation: ```{admonition} Comentarios
Aquellos arcos que pertenecen al spanning tree se les nombra tree arcs y los que no pertenecen nontree arcs.
En una arborescencia se tiene un indegree máximo de uno para todos los nodos (la raíz tiene un indegree de cero).
Una metodología para crear un spanning tree es la siguiente: considerar una red sin arcos y con $n$ nodos. Añadir arcos de la siguiente manera: "el primer arco puede ir en cualquier lugar de modo que conecte algún par de nodos. De ahí en adelante, cada arco nuevo debe agregarse entre un nodo que ya haya sido conectado a otros nodos y a un nuevo nodo no conectado. Si se agregan arcos de esta manera, se evita que se forme un ciclo y además se asegura que el número de nodos conectados sea uno más que el número de arcos. Cada nuevo arco crea un árbol más grande que no contiene ciclos. Una vez agregado el (n – 1)-ésimo arco, el proceso se detiene porque el árbol resultante se expande, spans, (conecta) hacia todos los n nodos".
```
Algunos resultados útiles de árboles
(CICLOFUND)=
Ciclo fundamental
```{admonition} Definición
Si tenemos un spanning tree de una red $\mathcal{G}$ y se le añade algún nontree arc entonces se crea exactamente un ciclo al que se le nombra ciclo fundamental.
```
Sobre eliminar ciclos en un spanning tree
Supóngase que se ha creado un ciclo fundamental en un spanning tree. Si se borra tal ciclo fundamental del spanning tree se obtiene nuevamente un spanning tree.
Ejemplo
La siguiente red es un spanning tree del ejemplo 1:
End of explanation
G_tree_example_fundamental_cycle = G_tree_ex_1.copy()
G_tree_example_fundamental_cycle.add_edge(0, 2)
nx.draw(G_tree_example_fundamental_cycle, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: Los nontree arcs son: $(0, 2), (4, 2), (2, 5), (1, 3), (3, 6)$. Añadimos $(0, 2)$ al spanning tree y obtenemos un ciclo fundamental:
End of explanation
G_tree_example_fundamental_cycle.remove_edge(1, 2)
nx.draw(G_tree_example_fundamental_cycle, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
Explanation: Si eliminamos el arco $(1, 2)$ obtenemos un spanning tree:
End of explanation
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
nx.draw(G_tree_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
Explanation: ```{admonition} Observación
:class: tip
Obsérvese que si una red tiene $n$ nodos y $m$ arcos entonces existen $m - n + 1$ ciclos fundamentales (mismo número que nontree arcs).
```
Sobre cortes fundamentales en un spanning tree
Si tenemos un spanning tree de una red $\mathcal{G}$ al que se le elimina algún tree arc entonces se genera una red no conectada en la que se tienen dos subtrees. Aquellos arcos cuyos endpoints pertenecen a diferentes subtrees constituyen lo que se conoce como cortes fundamentales de la red $\mathcal{G}$ respecto al spanning tree. Si se añade algún arco perteneciente a los cortes fundamentales se obtiene un spanning tree.
Ejemplo
La siguiente red es un spanning tree del ejemplo 1:
End of explanation
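Un esbozo breve que verifica la observación anterior para la red del ejemplo 1: el número de ciclos fundamentales, $m-n+1$, coincide con el número de nontree arcs listados antes (cinco):
m_arcos = G_ex_1.number_of_edges()
n_nodos = G_ex_1.number_of_nodes()
print(m_arcos - n_nodos + 1)   # 11 - 7 + 1 = 5 ciclos fundamentales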
G_tree_example_fundamental_cut = G_tree_ex_1.copy()
G_tree_example_fundamental_cut.remove_edge(4, 1)
nx.draw(G_tree_example_fundamental_cut, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
nx.draw(G_ex_1, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000, alpha=0.5)
plt.show()
Explanation: Los tree arcs son: $(0, 1), (1, 2), (4, 1), (3, 4), (4, 6), (5, 6)$. Si eliminamos el arco $(4, 1)$ del spanning tree anterior se obtienen dos árboles:
End of explanation
G_tree_example_fundamental_cut.add_edge(2, 5)
nx.draw(G_tree_example_fundamental_cut, pos=nodes_pos_ex_1,
with_labels=True,
node_color='r', node_size=1000,alpha=0.5)
plt.show()
Explanation: Los arcos de la red del ejemplo 1 en el corte fundamental son: $(4, 1), (1, 3), (4, 2), (2, 5)$. Si añadimos el arco $(2, 5)$ obtenemos un spanning tree:
End of explanation
nodes_pos_ex_std_model = [[0.0, 0.5454545454545454],
[0.09090909090909091, 0.7272727272727273],
[0.2727272727272727, 0.5454545454545454],
[0.18181818181818182, 0.36363636363636365],
[0.6363636363636364, 0.5454545454545454],
[0.5454545454545454, 0.36363636363636365],
[0.8181818181818182, 0.6363636363636364]]
nodes_std_model = ['O', 'A', 'B', 'C', 'D', 'E', 'T' ]
nodes_and_pos_std_model = dict(zip(nodes_std_model, nodes_pos_ex_std_model))
G_shortest_path = nx.Graph()
G_shortest_path.add_nodes_from(nodes_std_model)
edge_labels_shortest_path = {('O', 'A'): 2,
('O', 'B'): 5,
('O', 'C'): 4,
('A', 'B'): 2,
('A', 'D'): 7,
('B', 'D'): 4,
('B', 'E'): 3,
('C', 'B'): 1,
('C', 'E'): 4,
('D', 'E'): 1,
('D', 'T'): 5,
('E', 'T'): 7
}
G_shortest_path.add_edges_from(edge_labels_shortest_path)
for e in G_shortest_path.edges():
    try:
        G_shortest_path[e[0]][e[1]]["weight"] = edge_labels_shortest_path[e]
    except:
        G_shortest_path[e[1]][e[0]]["weight"] = edge_labels_shortest_path[(e[1], e[0])]
plt.figure(figsize=(7, 7))
nx.draw_networkx_edges(G_shortest_path, pos=nodes_and_pos_std_model,
alpha=0.3, min_source_margin=8,
min_target_margin=8)
nx.draw_networkx_edge_labels(G_shortest_path, pos=nodes_and_pos_std_model,
edge_labels=edge_labels_shortest_path,
font_size=15)
nx.draw_networkx_labels(G_shortest_path, pos=nodes_and_pos_std_model)
nx.draw_networkx_nodes(G_shortest_path, pos=nodes_and_pos_std_model, node_size=1000, alpha=0.6)
plt.axis("off")
plt.show()
Explanation: ```{admonition} Observation
:class: tip
A spanning tree has $n-1$ fundamental cuts with respect to any network.
```
```{admonition} Exercise
:class: tip
To reinforce the definitions and results above, work through network examples of each definition and result.
```
Examples of standard network-flow problems/models
Shortest path
Given the following network, we want to find the shortest path between nodes 'O' and 'T':
End of explanation
sh_path = nx.shortest_path(G_shortest_path,source='O',
target='T', weight="weight")
print(sh_path)
edges_sh_path = [(sh_path[k-1], sh_path[k]) for k in range(1, len(sh_path))]
edge_att = nx.get_edge_attributes(G_shortest_path, "weight")
distances_sh_path = {k: edge_att[k] for k in edges_sh_path}
print(distances_sh_path)
print(nx.shortest_path_length(G_shortest_path, source='O',
target='T', weight="weight"))
Explanation: Each arc value represents the distance between the corresponding pair of nodes.
```{margin}
See shortest_path
```
End of explanation
nodes_pos_sh_path_sol = [nodes_and_pos_std_model[k] for k in sh_path]
nodes_sh_path_sol = sh_path
nodes_and_pos_sh_path_sol = dict(zip(nodes_sh_path_sol, nodes_pos_sh_path_sol))
G_shortest_path_sol = nx.Graph()
G_shortest_path_sol.add_nodes_from(nodes_sh_path_sol)
G_shortest_path_sol.add_edges_from(distances_sh_path)
plt.figure(figsize=(7, 7))
nx.draw_networkx_edges(G_shortest_path_sol, pos=nodes_and_pos_sh_path_sol,
alpha=0.3, min_source_margin=8,
min_target_margin=8)
nx.draw_networkx_edge_labels(G_shortest_path_sol, pos=nodes_and_pos_sh_path_sol,
edge_labels=distances_sh_path,
font_size=15)
nx.draw_networkx_labels(G_shortest_path_sol, pos=nodes_and_pos_sh_path_sol)
nx.draw_networkx_nodes(G_shortest_path_sol, pos=nodes_and_pos_sh_path_sol, node_size=1000, alpha=0.6)
plt.axis("off")
plt.show()
Explanation: Solution network:
End of explanation
G_maximum_flow = nx.DiGraph()
G_maximum_flow.add_nodes_from(nodes_std_model)
edge_capacities = {('O', 'A'): 5,
('O', 'B'): 7,
('O', 'C'): 4,
('A', 'B'): 1,
('A', 'D'): 3,
('B', 'D'): 4,
('B', 'E'): 5,
('B', 'C'): 2,
('C', 'E'): 4,
('D', 'T'): 9,
('E', 'D'): 1,
('E', 'T'): 6
}
G_maximum_flow.add_edges_from(edge_capacities)
for e in G_maximum_flow.edges():
G_maximum_flow[e[0]][e[1]]["capacity"] = edge_capacities[e]
plt.figure(figsize=(9, 7))
nx.draw_networkx_edges(G_maximum_flow, pos=nodes_and_pos_std_model,
alpha=0.3,
min_target_margin=20)
nx.draw_networkx_edge_labels(G_maximum_flow, pos=nodes_and_pos_std_model,
edge_labels=edge_capacities,
font_size=15)
nx.draw_networkx_labels(G_maximum_flow, pos=nodes_and_pos_std_model)
nx.draw_networkx_nodes(G_maximum_flow, pos=nodes_and_pos_std_model, node_size=1000, alpha=0.6)
plt.axis("off")
plt.show()
Explanation: ```{admonition} Exercise
:class: tip
Find the shortest path between 'O' and 'T' for the directed network $\mathcal{G}$ with the following arcs: {(O, B): 5, (O, C): 4, (A, O): 2, (B, A): 2, (A, D): 7, (B, D): 4, (B, E): 3, (C, B): 1, (C, E): 4, (D, E): 1, (D, T): 5, (E, T): 7}.
Visualize the network, compute the distance of the solution, and visualize the solution network with its distances.
```
```{admonition} Comment
The shortest-path problem can be rewritten in terms of network flows if the flow is interpreted as a decision variable taking values in $\{0,1\}$. A flow of $1$ on an arc $(i,j)$ indicates that the arc must be included in the route, and a value of $0$ indicates that it does not have to be included. That is:
$$x_{i,j} = \begin{cases}
0 & \text{if } (i,j) \text{ is not included in the route} \\
1 & \text{if } (i,j) \text{ is included in the route}
\end{cases}
$$
The net flow generated at a node is the outgoing flow minus the incoming flow, where a node carries one unit of flow if it lies on the route and no flow otherwise. Thus the origin has a net generated flow of $1$, the destination a net flow of $-1$, and every other node $0$.
```
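A minimal sketch of this reformulation with NetworkX, reusing the distances of G_shortest_path defined above (each undirected edge becomes two opposite arcs, with one unit of supply at 'O' and one unit of demand at 'T'):

```python
import networkx as nx

D = nx.DiGraph()
for u, v, w in G_shortest_path.edges(data="weight"):
    D.add_edge(u, v, weight=w)
    D.add_edge(v, u, weight=w)
D.nodes['O']["demand"] = -1    # one unit leaves the origin
D.nodes['T']["demand"] = 1     # one unit must reach the destination
flow_01 = nx.min_cost_flow(D)
route_arcs = [(u, v) for u in flow_01 for v, f in flow_01[u].items() if f == 1]
print(route_arcs)              # arcs with flow 1 trace the shortest O-T route
```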
Maximum flow
Suppose the network from the shortest-path example above represents a park with several trails along which a number of trams run. We want to determine the routes of some tram trips from node 'O' to node 'T' so that the number of trips is maximized. (Each tram must return along the same route it took on the way out, so the analysis considers only the outbound trips.) To avoid unnecessary disturbance to the ecology and wildlife, upper limits were imposed on the number of outbound trips allowed toward node 'T' on each individual trail. This gives the following directed network, in which the number on each arc is the upper limit on outbound trips:
End of explanation
flow_value, flow_dict = nx.maximum_flow(G_maximum_flow, 'O', 'T')
print(flow_value)
pprint.pprint(flow_dict)
Explanation: A feasible solution is to send $7$ trams per day: $5$ along the route O-B-E-T, $1$ along the route $O-B-C-E-T$ and $1$ along the route $O-B-C-E-D-T$. Note that this solution blocks the use of any route starting with the arc $(O,C)$, because the capacities of the arcs $(E,T)$ and $(E,D)$ are saturated.
The problem above is known as the maximum flow problem.
```{margin}
See maximum_flow
```
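A quick consistency check on the result above, reusing flow_value and flow_dict (a sketch, not part of the original analysis):

```python
# The flow leaving the source must equal the flow entering the sink, and both
# must equal flow_value (no arcs enter 'O' or leave 'T' in this network).
out_of_O = sum(flow_dict['O'].values())
into_T = sum(d.get('T', 0) for d in flow_dict.values())
print(out_of_O, into_T, flow_value)
```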
End of explanation
G_maximum_flow_solution = nx.DiGraph()
G_maximum_flow_solution.add_nodes_from(nodes_std_model)
edge_flows = {('O', 'A'): 4,
('O', 'B'): 6,
('O', 'C'): 4,
('A', 'B'): 1,
('A', 'D'): 3,
('B', 'D'): 4,
('B', 'E'): 3,
('C', 'E'): 4,
('D', 'T'): 8,
('E', 'D'): 1,
('E', 'T'): 6
}
G_maximum_flow_solution.add_edges_from(edge_flows)
plt.figure(figsize=(9, 7))
nx.draw_networkx_edges(G_maximum_flow_solution, pos=nodes_and_pos_std_model,
alpha=0.3,
min_target_margin=15)
nx.draw_networkx_edge_labels(G_maximum_flow_solution, pos=nodes_and_pos_std_model,
edge_labels=edge_flows,
font_size=15)
nx.draw_networkx_labels(G_maximum_flow_solution, pos=nodes_and_pos_std_model)
nx.draw_networkx_nodes(G_maximum_flow_solution, pos=nodes_and_pos_std_model, node_size=1000, alpha=0.6)
plt.axis("off")
plt.show()
Explanation: Solution network (arcs with zero flow are omitted):
End of explanation
nodes_pos = [[0.18181818181818182, 0.7272727272727273],
[0.18181818181818182, 0.2727272727272727],
[0.5454545454545454, 0.2727272727272727],
[0.5454545454545454, 0.7272727272727273],
[0.36363636363636365, 0.5454545454545454]]
nodes = ['A', 'B', 'E', 'D', 'C']
nodes_and_pos = dict(zip(nodes, nodes_pos))
G_min_cost_flow = nx.DiGraph()
G_min_cost_flow.add_node('A', demand = -50, node_and_demand="A [-50]")
G_min_cost_flow.add_node('B', demand = -40, node_and_demand="B [-40]")
G_min_cost_flow.add_node('C', demand = 0, node_and_demand="C [0]")
G_min_cost_flow.add_node('D', demand = 30, node_and_demand="D [30]")
G_min_cost_flow.add_node('E', demand = 60, node_and_demand="E [60]")
edge_labels_min_cost_flow = {('A', 'B'): {"capacity": 10,
"weight": 2},
('A', 'C'): {"weight": 4},
('A', 'D'): {"weight": 9},
('B', 'C'): {"weight": 3},
('C', 'E'): {"capacity": 80,
"weight": 1},
('E', 'D'): {"weight": 2},
('D', 'E'): {"weight": 3}
}
G_min_cost_flow.add_edges_from(edge_labels_min_cost_flow)
for e in G_min_cost_flow.edges():
if e == ('A', 'B') or e == ('C', 'E'):
G_min_cost_flow[e[0]][e[1]]["capacity"] = edge_labels_min_cost_flow[e]["capacity"]
G_min_cost_flow[e[0]][e[1]]["weight"] = edge_labels_min_cost_flow[e]["weight"]
else:
G_min_cost_flow[e[0]][e[1]]["weight"] = edge_labels_min_cost_flow[e]["weight"]
plt.figure(figsize=(9, 9))
nx.draw_networkx_edges(G_min_cost_flow, pos=nodes_and_pos,
alpha=0.3,
min_target_margin=25, connectionstyle="arc3, rad = 0.1")
nx.draw_networkx_edge_labels(G_min_cost_flow, pos=nodes_and_pos,
edge_labels=edge_labels_min_cost_flow, label_pos=0.4,
font_size=10)
nodes_pos_modified = {}
y_off = 0.03
nodes_and_pos_modified = nodes_and_pos.copy()
for node in G_min_cost_flow.nodes():
if node == 'B' or node == 'E':
nodes_and_pos_modified[node] = [nodes_and_pos_modified[node][0],
nodes_and_pos_modified[node][1] - y_off]
else:
nodes_and_pos_modified[node] = [nodes_and_pos_modified[node][0],
nodes_and_pos_modified[node][1] + y_off]
labels = nx.get_node_attributes(G_min_cost_flow, "node_and_demand")
nx.draw_networkx_labels(G_min_cost_flow, pos=nodes_and_pos_modified,
labels=labels)
nx.draw_networkx_nodes(G_min_cost_flow, pos=nodes_and_pos,
node_size=1000, alpha=0.6)
plt.axis("off")
plt.show()
Explanation: So 14 trams leave node 'O': $4$ toward node 'A', $6$ toward node 'B' and $4$ toward node 'C'.
(EJREDFLUJOCOSTOMIN)=
Minimum cost flow
Like the maximum flow problem, the minimum cost flow problem considers a flow in a network with limited arc capacities. Like the shortest-path problem, it considers a cost (or distance) for the flow across an arc. Like the {ref}`transportation problem <EJPROBTRANSPORTE>` or the assignment problem, it can handle several origins (source nodes) and several destinations (demand nodes) of the flow, again with associated costs. In fact, these four problems are all special cases of the minimum cost flow problem.
The minimum cost flow problem has the following characteristics:
At least one of the nodes is a source node.
At least one of the nodes is a demand node.
The remaining nodes are transshipment nodes.
The cost of the flow across an arc is proportional to the amount of that flow; the cost is per unit.
The objective is to minimize the total cost of sending the available supply through the network to satisfy the given demand. (An alternative objective can be to maximize the total profit of the shipment.)
Below is an example network for this problem with nodes labeled "A, B, C, D" and "E". Next to each node, the "demands" are shown in brackets. Origin nodes have negative demand (they provide or supply); in this network they are nodes "A" and "B" (for example, factories). Destination nodes have positive demand (they store or receive); in this network they are nodes "D" and "E" (for example, customers). The only transshipment node is node "C", whose demand equals zero (for example, a distribution center). The cost values are shown on the arcs. Only the arc $(A, B)$, with capacity $10$, and the arc $(C, E)$, with capacity $80$, have capacities:
End of explanation
flowDict = nx.min_cost_flow(G_min_cost_flow)
pprint.pprint(flowDict)
Explanation: In the network above, the arc $(D, E)$ has cost $3$ and the arc $(E, D)$ has cost $2$.
The objective is to satisfy the "demand" of every node, respecting the capacities of each one, at the lowest possible cost.
```{margin}
See min_cost_flow
```
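A small cross-check of the solver output above is sketched here; it only re-reads flowDict and the attributes already stored on G_min_cost_flow:

```python
# Recompute the total cost from the flow dictionary and verify that each
# node's net inflow equals its demand (NetworkX's sign convention).
total_cost = sum(G_min_cost_flow[u][v]["weight"] * f
                 for u, flows in flowDict.items() for v, f in flows.items())
print(total_cost)   # should coincide with nx.min_cost_flow_cost below
for n in G_min_cost_flow.nodes():
    inflow = sum(flowDict[u].get(n, 0) for u in flowDict)
    outflow = sum(flowDict[n].values())
    assert inflow - outflow == G_min_cost_flow.nodes[n]["demand"]
```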
End of explanation
G_min_cost_flow_solution = nx.DiGraph()
G_min_cost_flow_solution.add_nodes_from(nodes)
edge_flows = {('A', 'C'): 40,
('A', 'D'): 10,
('B', 'C'): 40,
('C', 'E'): 80,
('E', 'D'): 20
}
G_min_cost_flow_solution.add_edges_from(edge_flows)
plt.figure(figsize=(9, 9))
nx.draw_networkx_edges(G_min_cost_flow_solution, pos=nodes_and_pos,
alpha=0.3,
min_target_margin=25, connectionstyle="arc3, rad = 0.1")
nx.draw_networkx_edge_labels(G_min_cost_flow_solution, pos=nodes_and_pos,
edge_labels=edge_flows, label_pos=0.4,
font_size=12)
nx.draw_networkx_labels(G_min_cost_flow_solution, pos=nodes_and_pos)
nx.draw_networkx_nodes(G_min_cost_flow_solution, pos=nodes_and_pos,
node_size=1000, alpha=0.6)
plt.axis("off")
plt.show()
Explanation: Solution network (only arcs with nonzero flow are shown):
End of explanation
print(nx.min_cost_flow_cost(G_min_cost_flow))
Explanation: With total cost equal to:
End of explanation |
9,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Auto MPG Data
Step1: 2D Binning
Default behavior for 2d binning is to bin the dimensions provided, then count the rows that fall into each bin. This is visualizing how the source data represents all possible combinations of mpg and displacement.
Step2: Binning and Aggregating Values
For each x and y bin, the stat is used on values. After the binning operation, the aggregated values are then binned a second time so that a discrete color can be mapped to the aggregated value.
Step3: Specifying the Number of Bins
Step4: Mixing binning and non-binned data
Step5: The Size of the Glyph Can be Altered
Step6: Applying a Custom Palette
The number of discrete colors in the palette determines the number of bins used for applying colors.
Example with 6 Colors
Step7: Example with 9 Colors
Step8: Viewing Color Bins in Legend
Step9: Build Fruit Data
Step10: Without Dimension Binning
Each x and y value are treated as the coordinates of a bin. The values column is then binned and assigned to the color attribute, so discrete colors can be assigned.
Step11: Unemployment Data
Prepare the Data
Current data is in a pivoted form, which is difficult to use when defining plots by specifying columns. We will use the pandas melt function to de-pivot the columns with numerical data we want to use, while keeping the categorical data.
Step12: De-pivoted Data
We now have the 3 dimensions that we want to map to attributes of the plot. | Python Code:
df.head()
Explanation: Auto MPG Data
End of explanation
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'))
show(hm)
Explanation: 2D Binning
Default behavior for 2d binning is to bin the dimensions provided, then count the rows that fall into each bin. This visualizes how the source data covers all possible combinations of mpg and displacement.
End of explanation
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'), values='cyl', stat='mean')
show(hm)
Explanation: Binning and Aggregating Values
For each x and y bin, the stat is used on values. After the binning operation, the aggregated values are then binned a second time so that a discrete color can be mapped to the aggregated value.
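Roughly what the chart computes under the hood can be reproduced with pandas; this is only a sketch, it assumes pd is imported, and the 10-bin count is arbitrary rather than the chart's actual default:

```python
binned = df.assign(mpg_bin=pd.cut(df.mpg, 10), displ_bin=pd.cut(df.displ, 10))
mean_cyl = binned.groupby(['mpg_bin', 'displ_bin']).cyl.mean()
print(mean_cyl.dropna().head())   # one aggregated value per occupied 2D bin
```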
End of explanation
hm = HeatMap(df, x=bins('mpg'), y=bins('displ', bin_count=15),
values='cyl', stat='mean')
show(hm)
Explanation: Specifying the Number of Bins
End of explanation
hm = HeatMap(df, x=bins('mpg'), y='cyl', values='displ', stat='mean')
show(hm)
Explanation: Mixing binning and non-binned data
End of explanation
hm = HeatMap(df, y=bins('displ'), x=bins('mpg'), values='cyl', stat='mean',
spacing_ratio=0.9)
show(hm)
Explanation: The Size of the Glyph Can be Altered
End of explanation
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'), stat='mean', values='cyl',
palette=RdYlGn6)
show(hm)
Explanation: Applying a Custom Palette
The number of discrete colors in the palette determines the number of bins used for applying colors.
Example with 6 Colors
End of explanation
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'), stat='mean', values='cyl',
palette=RdYlGn9)
show(hm)
Explanation: Example with 9 Colors
End of explanation
hm = HeatMap(df, x=bins('mpg'), y=bins('displ'), values='cyl',
stat='mean', legend='top_right')
show(hm)
Explanation: Viewing Color Bins in Legend
End of explanation
fruits = {'fruit': ['apples', 'apples', 'apples', 'apples', 'apples',
'pears', 'pears', 'pears', 'pears', 'pears',
'bananas', 'bananas', 'bananas', 'bananas', 'bananas'],
'fruit_count': [4, 5, 8, 12, 4, 6, 5, 4, 8, 7, 1, 2, 4, 8, 12],
'year': [2009, 2010, 2011, 2012, 2013, 2009, 2010, 2011, 2012, 2013, 2009, 2010,
2011, 2012, 2013]}
fruits['year'] = [str(yr) for yr in fruits['year']]
fruits_df = pd.DataFrame(fruits)
fruits_df.head()
Explanation: Build Fruit Data
End of explanation
hm = HeatMap(fruits, y='year', x='fruit', values='fruit_count', stat=None)
show(hm)
Explanation: Without Dimension Binning
Each x and y value are treated as the coordinates of a bin. The values column is then binned and assigned to the color attribute, so discrete colors can be assigned.
End of explanation
unempl_data = data.copy()
unempl_data.head()
# Remove the annual column if we don't want to show the total
del unempl_data['Annual']
# Convert numerical year to strings
unempl_data['Year'] = unempl_data['Year'].astype(str)
# de-pivot all columns, except for Year, into two columns.
# One column will have the values and the second will have the labels
unempl = pd.melt(unempl_data, var_name='Month', value_name='Unemployment', id_vars=['Year'])
Explanation: Unemployment Data
Prepare the Data
Current data is in a pivoted form, which is difficult to use when defining plots by specifying columns. We will use the pandas melt function to de-pivot the columns with numerical data we want to use, while keeping the categorical data.
End of explanation
unempl.head()
hm = HeatMap(unempl, x='Year', y='Month', values='Unemployment', stat=None,
sort_dim={'x': False}, width=1000)
show(hm)
Explanation: De-pivoted Data
We now have the 3 dimensions that we want to map to attributes of the plot.
End of explanation |
9,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Maps in IPython
This notebook demonstrates the basics of mapping data in IPython. All you need is a simple dataset, containing coordinate values.
Step1: And now let's test if the Basemap package is loaded and the graphics displayed correctly.
Step2: Now to the cool part! | Python Code:
%pylab inline
from pylab import *
pylab.rcParams['figure.figsize'] = (8.0, 6.4)
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
Explanation: Simple Maps in IPython
This notebook demonstrates the basics of mapping data in IPython. All you need is a simple dataset, containing coordinate values.
End of explanation
map = Basemap(projection='ortho', lat_0=50, lon_0=-100,
resolution='l', area_thresh=1000.0)
map.drawcoastlines()
plt.show()
Explanation: And now let's test if the Basemap package is loaded and the graphics displayed correctly.
End of explanation
import csv
# Open the cities population data file.
filename = 'city_longlat.csv'
# Create empty lists for the latitudes, longitudes and population.
lats, lons, population = [], [], []
# Read through the entire file, skip the first line,
# and pull out the data we need.
with open(filename) as f:
# Create a csv reader object.
reader = csv.reader(f)
# Ignore the header row.
next(reader)
# Store the latitudes, longitudes and populations in the appropriate lists.
for row in reader:
lats.append(float(row[1]))
lons.append(float(row[2]))
population.append(float(row[3]))
# --- Build Map ---
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
def get_marker_color(population):
if population < 2000000:
return ('ro')
elif population < 7000000:
return ('yo')
else:
return ('go')
map = Basemap(projection='merc', resolution = 'h', area_thresh = 1000.0,
lat_0=0, lon_0=-130,
llcrnrlon=-18.968978, llcrnrlat=33.679432,
urcrnrlon=41.968945, urcrnrlat=58.940191)
map.drawcoastlines()
map.drawcountries()
map.bluemarble()
map.drawmapboundary()
map.drawmeridians(np.arange(0, 360, 30))
map.drawparallels(np.arange(-90, 90, 30))
for lons, lats, population in zip(lons, lats, population):
x,y = map(lats, lons)
marker_string = get_marker_color(population)
map.plot(x, y, marker_string, markersize=population/150000)
title_string = "Most Populous Cities in Europe"
figsize(18, 12)
plt.title(title_string)
plt.show()
Explanation: Now to the cool part!
End of explanation |
9,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Security-Constrained Optimisation
In this example, the dispatch of generators is optimised using the security-constrained linear OPF, to guaranteed that no branches are overloaded by certain branch outages.
Step1: There are some infeasibilities without line extensions.
Step2: Performing security-constrained linear OPF
Step3: For the PF, set the P to the optimised P.
Step4: Check no lines are overloaded with the linear contingency analysis
Step5: Check loading as per unit of s_nom in each contingency | Python Code:
import pypsa, os
import numpy as np
network = pypsa.examples.scigrid_de(from_master=True)
Explanation: Security-Constrained Optimisation
In this example, the dispatch of generators is optimised using the security-constrained linear OPF, to guarantee that no branches are overloaded under certain branch outages.
End of explanation
for line_name in ["316", "527", "602"]:
network.lines.loc[line_name, "s_nom"] = 1200
now = network.snapshots[0]
Explanation: There are some infeasibilities without line extensions.
End of explanation
branch_outages = network.lines.index[:15]
network.sclopf(now, branch_outages=branch_outages, solver_name="cbc")
Explanation: Performing security-constrained linear OPF
End of explanation
network.generators_t.p_set = network.generators_t.p_set.reindex(
columns=network.generators.index
)
network.generators_t.p_set.loc[now] = network.generators_t.p.loc[now]
network.storage_units_t.p_set = network.storage_units_t.p_set.reindex(
columns=network.storage_units.index
)
network.storage_units_t.p_set.loc[now] = network.storage_units_t.p.loc[now]
Explanation: For the PF, set the P to the optimised P.
End of explanation
p0_test = network.lpf_contingency(now, branch_outages=branch_outages)
p0_test
Explanation: Check no lines are overloaded with the linear contingency analysis
End of explanation
max_loading = (
abs(p0_test.divide(network.passive_branches().s_nom, axis=0)).describe().loc["max"]
)
max_loading
np.allclose(max_loading, np.ones((len(max_loading))))
Explanation: Check loading as per unit of s_nom in each contingency
End of explanation |
9,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data
We have the CSV file output of a git blame result.
Step1: Main Contributors
The blame file incorporates every single line of code with the author that changed that line at last.
Step2: No-Go Areas
We want to find the components, where knowledge is probably outdated.
Step3: These are the oldest 10 components
Step4: For all components, we create an overview with a bar chart. | Python Code:
import pandas as pd
blame_log = pd.read_csv("../demos/dataset/linux_blame_log.csv")
blame_log.head()
blame_log.info()
Explanation: Data
We have the CSV file output of a git blame result.
End of explanation
top10 = blame_log.author.value_counts().head(10)
top10
%matplotlib inline
top10.plot.pie();
Explanation: Main Contributors
The blame log lists every single line of code together with the author who last changed that line.
End of explanation
blame_log.timestamp = pd.to_datetime(blame_log.timestamp)
blame_log.head()
blame_log['age'] = pd.Timestamp('today') - blame_log.timestamp
blame_log.head()
blame_log['component'] = blame_log.path.str.split("/").str[:2].str.join(":")
blame_log.head()
age_per_component = blame_log.groupby('component') \
.age.min().sort_values()
age_per_component.head()
Explanation: No-Go Areas
We want to find the components where knowledge is probably outdated.
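If the raw Timedelta values turn out to be awkward to read or plot, they can be converted to days first (a small optional sketch using the age_per_component series computed above):

```python
age_in_days = age_per_component.dt.days
print(age_in_days.tail())   # least recently touched components, age in days
```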
End of explanation
age_per_component.tail(10)
Explanation: These are the oldest 10 components
End of explanation
age_per_component.plot.bar(figsize=[15,5])
Explanation: For all components, we create an overview with a bar chart.
End of explanation |
9,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_stats_cluster_source_rANOVA
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
Step5: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 source space
with vertices 0
Step6: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape
Step7: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
Step8: Finally we will pick the interaction effect by passing 'A
Step9: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions
Step10: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal).
Step11: Visualize the clusters
Step12: Finally, let's investigate interaction effect by reconstructing the time
courses | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Eric Larson <[email protected]>
# Denis Engemannn <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
grade_to_tris)
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
Explanation: .. _tut_stats_cluster_source_rANOVA:
Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test if the differences in evoked responses between
stimulation modality (visual vs auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
# we'll load all four conditions that make up the 'two ways' of our ANOVA
event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
epochs.equalize_event_counts(event_id, copy=False)
Explanation: Read epochs for all channels, removing a bad one
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
# we'll only use one hemisphere to speed up this example
# instead of a second vertex array we'll pass an empty array
sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]
# Let's average and compute inverse, then resample to speed things up
conditions = []
for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important
evoked = epochs[cond].average()
evoked.resample(50, npad='auto')
condition = apply_inverse(evoked, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition.crop(0, None)
conditions.append(condition)
tmin = conditions[0].tmin
tstep = conditions[0].tstep
Explanation: Transform to source space
End of explanation
n_vertices_sample, n_times = conditions[0].lh_data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10
for ii, condition in enumerate(conditions):
X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
End of explanation
fsave_vertices = [np.arange(10242), np.array([], int)] # right hemi is empty
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 4)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately (and you might want to use morph_data
instead), but here since all estimates are on 'sample' we can use one
morph matrix for all the heavy lifting.
End of explanation
X = np.transpose(X, [2, 1, 0, 3])  # now: subjects x time x space x conditions
X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]
Explanation: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape: samples
(subjects) x time x space.
First we permute dimensions, then split the array into a list of conditions
and discard the empty dimension resulting from the split using numpy squeeze.
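A quick sanity check on the reshaped data (optional; it only prints the shapes of the arrays built above):

```python
for condition_data in X:
    print(condition_data.shape)   # expected: (n_subjects, n_times, n_vertices_fsave)
```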
End of explanation
factor_levels = [2, 2]
Explanation: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
End of explanation
effects = 'A:B'
# Tell the ANOVA not to compute p-values which we don't need for clustering
return_pvals = False
# a few more convenient bindings
n_times = X[0].shape[1]
n_conditions = 4
Explanation: Finally we will pick the interaction effect by passing 'A:B'.
(this notation is borrowed from the R formula language). Without this also
the main effects will be returned.
End of explanation
def stat_fun(*args):
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=return_pvals)[0]
# get f-values only.
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and the
second dimension, and finally calls ANOVA.
Note: for further details on this ANOVA function consider the
corresponding
:ref:`time frequency tutorial <tut_stats_cluster_sensor_rANOVA_tfr>`.
End of explanation
source_space = grade_to_tris(5)
# as we only have one hemisphere we need only need half the connectivity
lh_source_space = source_space[source_space[:, 0] < 10242]
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(lh_source_space)
# Now let's actually do the clustering. Please relax, on a small
# notebook and one single thread only this will take a couple of minutes ...
pthresh = 0.0005
f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)
# To speed things up a bit we will ...
n_permutations = 128 # ... run fewer permutations (reduces sensitivity)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_thresh, stat_fun=stat_fun,
n_permutations=n_permutations,
buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal).
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# The brighter the color, the stronger the interaction between
# stimulus modality and stimulus location
brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, colormap='mne',
time_label='Duration significant (ms)')
brain.set_data_time_index(0)
brain.show_view('lateral')
brain.save_image('cluster-lh.png')
brain.show_view('medial')
Explanation: Visualize the clusters
End of explanation
inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in
enumerate(good_cluster_inds)][0] # first cluster
times = np.arange(X[0].shape[1]) * tstep * 1e3
plt.figure()
colors = ['y', 'b', 'g', 'purple']
event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']
for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):
# extract time course at cluster vertices
condition = condition[:, :, inds_v]
# normally we would normalize values across subjects but
# here we use data from the same subject so we're good to just
# create average time series across subjects and vertices.
mean_tc = condition.mean(axis=2).mean(axis=0)
std_tc = condition.std(axis=2).std(axis=0)
plt.plot(times, mean_tc.T, color=color, label=eve_id)
plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',
alpha=0.5, label='')
ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5
plt.xlabel('Time (ms)')
plt.ylabel('Activation (F-values)')
plt.xlim(times[[0, -1]])
plt.ylim(ymin, ymax)
plt.fill_betweenx((ymin, ymax), times[inds_t[0]],
times[inds_t[-1]], color='orange', alpha=0.3)
plt.legend()
plt.title('Interaction between stimulus-modality and location.')
plt.show()
Explanation: Finally, let's investigate interaction effect by reconstructing the time
courses
End of explanation |
9,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: 映画レビューを使ったテキスト分類
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: IMDB データセットをダウンロードする
IMDB データセットは、TensorFlow データセットで提供されています。次のコードを使って、IMDB データセットをマシン(または Colab ランタイム)にダウンロードしてください。
Step3: データの観察
データの形式を確認してみましょう。各サンプルは、映画レビューを表す文章と対応するラベルです。文章はまったく事前処理されていません。ラベルは 0 または 1 の整数値で、0 は否定的なレビューで 1 は肯定的なレビューを示します。
Step4: 最初の 10 個のサンプルを出力しましょう。
Step5: 最初の 10 個のラベルも出力しましょう。
Step6: モデルを構築する
ニューラルネットワークは、レイヤーのスタックによって作成されています。これには、次の 3 つのアーキテクチャ上の決定が必要です。
どのようにテキストを表現するか。
モデルにはいくつのレイヤーを使用するか。
各レイヤーにはいくつの非表示ユニットを使用するか。
この例では、入力データは文章で構成されています。予測するラベルは、0 または 1 です。
テキストの表現方法としては、文章を埋め込みベクトルに変換する方法があります。トレーニング済みのテキスト埋め込みを最初のレイヤーとして使用することで、次のような 2 つのメリットを得ることができます。
テキストの事前処理を心配する必要がない。
転移学習を利用できる。
この例では、TensorFlow Hub のモデルである「google/nnlm-en-dim50/2」を使います。
このチュートリアルのためにテストできるモデルが、ほかに 2 つあります。
google/nnlm-en-dim50-with-normalization/2 - google/nnlm-en-dim50/2 と同じものですが、句読点を削除するためのテキスト正規化が含まれています。このため、入力テキストのトークンに使用する語彙内埋め込みのカバレッジを改善することができます。
google/nnlm-en-dim128-with-normalization/2 - 50 次元未満でなく、128 の埋め込み次元を備えたより大規模なモデルです。
では始めに、TensorFlow Hub モデル を使用して文章を埋め込む Keras レイヤーを作成し、いくつかの入力サンプルで試してみましょう。生成される埋め込みの出力形状は、(num_examples, embedding_dimension) であるところに注意してください。
Step7: 今度は、完全なモデルを構築しましょう。
Step8: これらのレイヤーは、分類器を構成するため一列に積み重ねられます。
最初のレイヤーは、TensorFlow Hub レイヤーです。このレイヤーは文章から埋め込みベクトルにマッピングする事前トレーニング済みの SavedModel を使用します。使用しているモデル (google/nnlm-en-dim50/2) は文章とトークンに分割し、各トークンを埋め込んで、埋め込みを組み合わせます。その結果、次元は (num_examples, embedding_dimension) となります。
この固定長の出力ベクトルは、16 個の非表示ユニットを持つ全結合(Dense)レイヤーに受け渡されます。
最後のレイヤーは単一の出力ノードで密に接続されます。これは、ロジットを出力します。モデルに応じた、真のクラスの対数オッズです。
非表示ユニット
上記のモデルには、入力と出力の間に 2 つの中間または「非表示」レイヤーがあります。出力数(ユニット数、ノード数、またはニューロン数)はレイヤーの表現空間の次元で、言い換えると、内部表現を学習する際にネットワークが許可された自由の量と言えます。
モデルの非表示ユニット数やレイヤー数が増えるほど(より高次元の表現空間)、ネットワークはより複雑な表現を学習できますが、ネットワークの計算がより高価となり、トレーニングデータでのパフォーマンスを改善してもテストデータでのパフォーマンスは改善されない不要なパターンが学習されることになります。この現象を過適合と呼び、これについては後の方で説明します。
損失関数とオプティマイザ
モデルをトレーニングするには、損失関数とオプティマイザが必要です。これは二項分類問題であり、モデルは確率(シグモイドアクティベーションを持つ単一ユニットレイヤー)を出力するため、binary_crossentropy 損失関数を使用します。
これは、損失関数の唯一の選択肢ではありません。たとえば、mean_squared_error を使用することもできます。ただし、一般的には、確率を扱うには binary_crossentropy の方が適しているといえます。これは、確率分布間、またはこのケースではグランドトゥルース分布と予測間の「距離」を測定するためです。
後の方で、回帰問題(家の価格を予測するなど)を考察する際に、平均二乗誤差と呼ばれる別の損失関数の使用方法を確認します。
では、オプティマイザと損失関数を使用するようにモデルを構成します。
Step9: 検証セットを作成する
トレーニングの際、モデルが遭遇したことのないデータでモデルの精度を確認したいと思います。そこで、元のトレーニングデータから 10,000 個のサンプルを取り出して検証セットを作成します(ここでテストセットを使用しないのは、トレーニングデータのみを使用してモデルの構築と調整を行った上で、テストデータを一度だけ使用して精度を評価することを目標としているからです)。
Step10: モデルのトレーニング
モデルを 512 サンプルのミニバッチで 40 エポック、トレーニングします。これは、x_train と y_train テンソルのすべてのサンプルを 40 回イテレーションします。トレーニング中、検証セットの 10,000 個のサンプルで、モデルの損失と精度を監視します。
Step11: モデルを評価する
モデルのパフォーマンスを見てみましょう。2 つの値が返されます。損失(誤差、値が低いほど良)と正確率です。
Step12: このかなり単純なアプローチで、約 87% の正解率が達成されます。より高度なアプローチを使えば、95% に近づくでしょう。
経時的な精度と損失のグラフを作成する
model.fit() は、トレーニング中に発生したすべての情報を詰まったディクショナリを含む History オブジェクトを返します。
Step13: トレーニングと検証中に監視されている各メトリックに対して 1 つずつ、計 4 つのエントリがあります。このエントリを使用して、トレーニングと検証の損失とトレーニングと検証の精度を比較したグラフを作成することができます。 | Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
Explanation: Text classification with movie reviews
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
<td><a href="https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">See TF Hub models</a></td>
</table>
This notebook classifies movie reviews as positive or negative using the text of the review. This is an example of binary classification, an important and widely applicable kind of machine learning problem.
We will use the IMDB dataset, which contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and test sets are balanced, meaning they contain an equal number of positive and negative reviews.
This notebook uses tf.keras, a high-level API to build and train models in TensorFlow, and TensorFlow Hub, a library and platform for transfer learning. For a more advanced text classification tutorial using tf.keras, see the MLCC Text Classification Guide.
More models
Here you can find more expressive or performant models that you could use to generate the text embedding.
Setup
End of explanation
train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"],
batch_size=-1, as_supervised=True)
train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)
Explanation: Download the IMDB dataset
The IMDB dataset is available on TensorFlow datasets. The following code downloads the IMDB dataset to your machine (or the Colab runtime).
End of explanation
print("Training entries: {}, test entries: {}".format(len(train_examples), len(test_examples)))
Explanation: Explore the data
Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 is a negative review and 1 is a positive review.
End of explanation
train_examples[:10]
Explanation: Let's print the first 10 examples.
End of explanation
train_labels[:10]
Explanation: Let's also print the first 10 labels.
End of explanation
model = "https://tfhub.dev/google/nnlm-en-dim50/2"
hub_layer = hub.KerasLayer(model, input_shape=[], dtype=tf.string, trainable=True)
hub_layer(train_examples[:3])
Explanation: Build the model
The neural network is created by stacking layers. This requires three main architectural decisions:
How to represent the text?
How many layers to use in the model?
How many hidden units to use for each layer?
In this example, the input data consists of sentences. The labels to predict are either 0 or 1.
One way to represent the text is to convert sentences into embedding vectors. Using a pre-trained text embedding as the first layer gives two benefits:
We don't have to worry about text preprocessing.
We can benefit from transfer learning.
For this example we will use a model from TensorFlow Hub called google/nnlm-en-dim50/2.
There are two other models to test for the sake of this tutorial:
google/nnlm-en-dim50-with-normalization/2 - same as google/nnlm-en-dim50/2, but with additional text normalization to remove punctuation. This can improve coverage of in-vocabulary embeddings for tokens in your input text.
google/nnlm-en-dim128-with-normalization/2 - a larger model with an embedding dimension of 128 instead of the smaller 50.
Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. Note that the output shape of the produced embeddings is (num_examples, embedding_dimension).
End of explanation
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
Explanation: Let's now build the full model.
End of explanation
model.compile(optimizer='adam',
loss=tf.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy')])
Explanation: The layers are stacked sequentially to build the classifier:
The first layer is a TensorFlow Hub layer. This layer uses a pre-trained SavedModel that maps a sentence into its embedding vector. The model that we are using (google/nnlm-en-dim50/2) splits the sentence into tokens, embeds each token and then combines the embeddings. The resulting dimensions are: (num_examples, embedding_dimension).
This fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units.
The last layer is densely connected with a single output node. It outputs logits: the log-odds of the true class, according to the model.
Hidden units
The above model has two intermediate or "hidden" layers between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer; in other words, the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space) or more layers, then the network can learn more complex representations. However, it also makes the network more computationally expensive and may lead to learning unwanted patterns: patterns that improve performance on training data but not on the test data. This is called overfitting, and we'll explore it later.
Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the binary_crossentropy loss function.
This isn't the only choice for a loss function; you could, for instance, choose mean_squared_error. But, generally, binary_crossentropy is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
Later, when we explore regression problems (say, to predict the price of a house), we'll see how to use another loss function called mean squared error.
Now, configure the model to use an optimizer and a loss function:
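As a toy illustration of the from_logits=True convention (not part of the tutorial's pipeline), the loss can be evaluated directly on raw logits, without applying a sigmoid first:

```python
bce = tf.losses.BinaryCrossentropy(from_logits=True)
# Two made-up examples: true label 1 with a confident positive logit,
# true label 0 with a moderately negative logit.
print(bce(y_true=[1.0, 0.0], y_pred=[2.0, -1.5]).numpy())
```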
End of explanation
x_val = train_examples[:10000]
partial_x_train = train_examples[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
Explanation: Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. Create a validation set by setting apart 10,000 examples from the original training data. (Why not use the test set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy.)
End of explanation
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
Explanation: Train the model
Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the x_train and y_train tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set.
End of explanation
results = model.evaluate(test_examples, test_labels)
print(results)
Explanation: Evaluate the model
And let's see how the model performs. Two values will be returned: loss (a number which represents our error, lower values are better) and accuracy.
End of explanation
history_dict = history.history
history_dict.keys()
Explanation: This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
Create a graph of accuracy and loss over time
model.fit() returns a History object that contains a dictionary with everything that happened during training.
End of explanation
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Explanation: There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy.
End of explanation |
9,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using pymldb Tutorial
Interactions with MLDB occurs via a REST API. Interacting with a REST API over HTTP from a Notebook interface can be a little bit laborious if you're using a general-purpose Python library like requests directly, so MLDB comes with a Python library called pymldb to ease the pain.
Connections
The pymldb library includes a class called Connection. The recommended usage pattern is shown here
Step1: Accessing the REST API
Once you have a connection object, you can easily make calls to the REST API
Step2: Here we create a dataset and insert two rows of two columns into it
Step3: SQL Queries
Now that we have a dataset, we can use the query() method on the connection to run an SQL query and get the results back as a Pandas DataFrame | Python Code:
from pymldb import Connection
mldb = Connection("http://localhost")
Explanation: Using pymldb Tutorial
Interactions with MLDB occurs via a REST API. Interacting with a REST API over HTTP from a Notebook interface can be a little bit laborious if you're using a general-purpose Python library like requests directly, so MLDB comes with a Python library called pymldb to ease the pain.
Connections
The pymldb library includes a class called Connection. The recommended usage pattern is shown here:
End of explanation
mldb.get("/v1/types")
#keyword arguments to get() are appended to the GET query string
mldb.get("/v1/types", x="y")
#dictionaries arguments to put() and post() are sent as JSON via PUT or POST
mldb.put("/v1/datasets/sample", {"type": "sparse.mutable"} )
Explanation: Accessing the REST API
Once you have a connection object, you can easily make calls to the REST API:
End of explanation
mldb.put( "/v1/datasets/demo", {"type":"sparse.mutable"})
mldb.post("/v1/datasets/demo/rows", {"rowName": "first", "columns":[["a",1,0],["b",2,0]]})
mldb.post("/v1/datasets/demo/rows", {"rowName": "second", "columns":[["a",3,0],["b",4,0]]})
mldb.post("/v1/datasets/demo/commit")
Explanation: Here we create a dataset and insert two rows of two columns into it:
End of explanation
df = mldb.query("select * from demo")
print type(df)
df
Explanation: SQL Queries
Now that we have a dataset, we can use the query() method on the connection to run an SQL query and get the results back as a Pandas DataFrame:
End of explanation |
9,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This challenge will get you familiar with the basic elements of Python by programming a simple card game. We will create a custom class to represent each player in the game, which will store information about their current pot, as well as a series of methods defining how they play the game. We will also build several functions to control the flow of the game and get data back at the end.
We will start by importing the 'random' library, which will allow us to use its functions for picking a random entry from a list.
Step1: First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.
Step2: Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
Step3: Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's '.play() function.
Step4: Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.
Step5: Now we are ready to start the game. First we create an empy list to store the collection of players in the game.
Step6: Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.
Step7: Once the players are created, we will create a loop to run the game a certain amount of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.
Step8: Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players. | Python Code:
import random
Explanation: This challenge will get you familiar with the basic elements of Python by programming a simple card game. We will create a custom class to represent each player in the game, which will store information about their current pot, as well as a series of methods defining how they play the game. We will also build several functions to control the flow of the game and get data back at the end.
We will start by importing the 'random' library, which will allow us to use its functions for picking a random entry from a list.
End of explanation
gameStake = 50
cards = range(10)
Explanation: First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.
End of explanation
class Player:
# in the __init__() function, use the two input variables to initialize the ID and starting pot of each player
def __init__(self, inputID, startingPot):
# [CREATE YOUR INITIALIZATIONS HERE]
# make sure you initialize two local variables to store a unique ID for each player
# and the player's current 'pot' of money
# create a function for playing the game. This function starts by taking an input for the dealer's card
# and picking a random number from the 'cards' list for the player's card
def play(self, dealerCard):
# we use the random.choice() function to select a random item from a list
playerCard = random.choice(cards)
# here we should have a conditional that tests the player's card value against the dealer card
# and returns a statement saying whether the player won or lost the hand
# before returning the statement, make sure to either add or subtract the stake from the player's pot so that
# the 'pot' variable tracks the player's money
if playerCard < dealerCard:
# [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]
else:
# [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]
# create an accessor function to return the current value of the player's pot
def returnPot(self):
# [FILL IN THE RETURN STATEMENT]
# create an accessor function to return the player's ID
def returnID(self):
# [FILL IN THE RETURN STATEMENT]
Explanation: Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
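For reference, one possible way to fill in the placeholders is sketched below; it is not the only valid solution, it simply follows the rules described above (ties go to the player, as in the template's else branch), and it relies on the gameStake, cards and random names defined earlier:

```python
class Player:

    def __init__(self, inputID, startingPot):
        self.ID = inputID           # unique identifier for this player
        self.pot = startingPot      # money the player currently has

    def play(self, dealerCard):
        playerCard = random.choice(cards)
        if playerCard < dealerCard:
            self.pot -= gameStake   # player loses the stake
            return 'player ' + str(self.ID) + ' loses (' + str(playerCard) + ' vs ' + str(dealerCard) + ')'
        else:
            self.pot += gameStake   # player wins the stake (ties included)
            return 'player ' + str(self.ID) + ' wins (' + str(playerCard) + ' vs ' + str(dealerCard) + ')'

    def returnPot(self):
        return self.pot

    def returnID(self):
        return self.ID
```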
End of explanation
def playHand(players):
for player in players:
dealerCard = random.choice(cards)
#[EXECUTE THE PLAY() FUNCTION FOR EACH PLAYER USING THE DEALER CARD, AND PRINT OUT THE RESULTS]
Explanation: Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's '.play() function.
End of explanation
def checkBalances(players):
for player in players:
#[PRINT OUT EACH PLAYER'S BALANCE BY USING EACH PLAYER'S ACCESSOR FUNCTIONS]
Explanation: Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.
End of explanation
players = []
Explanation: Now we are ready to start the game. First we create an empty list to store the collection of players in the game.
End of explanation
for i in range(5):
players.append(Player(i, 500))
Explanation: Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.
End of explanation
for i in range(10):
print('')
print('start game ' + str(i))
playHand(players)
Explanation: Once the players are created, we will create a loop to run the game a certain amount of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.
End of explanation
print('')
print('game results:')
checkBalances(players)
Explanation: Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players.
End of explanation |
9,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tarea 2, parte 2
<hr>
Pregunta 1
Según <a href="https
Step1: <hr>
Pregunta 3
A simple vista, aproximadamente, se tiene
$T_T = 0.7$ d
$T_F = 0.4$ d
$\Delta F = 0.003$
Con esto, se calculan los cuatro parámetros a partir de observables (sin considerar en este punto una ley de potencias que relacione masa y radio).
Como las ecuaciones son las del trabajo de Seager & Mallén-Ornelas, las suposiciones necesarias para determinar las cantidades son las que allí se hacen.
Step2: <hr>
Pregunta 4
Se calculan los coeficientes según se detalla en <a href="https
Step3: Se ejecuta batman como se explica en la documentación, entregando como parámetros los valores obtenidos a lo largo de este trabajo.
Step4: <hr>
Pregunta Bonus
Se repiten los pasos para el planeta anterior. Dado que los valores del flujo son más dispersos, se realiza el paso de median filter con ventanas de 25 valores y se centra el gráfico en un posible tránsito, que corresponde al sector alrededor de la zona de mínimo flujo (obviando un punto, seguramente espuria, en phase ~ 0.35). Si el planeta fuera del radio de la Tierra y su estrella fuera del radio del Sol, entonces
$$R_p/R_* = 0.009 \rightarrow \Delta F = 0,000081$$
Esto se acerca a lo que muestra el gráfico. | Python Code:
import numpy as np
from scipy.signal import medfilt
import matplotlib.pyplot as plt
import kplr
%matplotlib inline
client = kplr.API()
koi = client.koi(1274.01)
lcs = koi.get_light_curves(short_cadence=True)
p = 704.2
time, flux, ferr, med = [], [], [], []
for lc in lcs:
with lc.open() as f:
# The lightcurve data are in the first FITS HDU.
hdu_data = f[1].data
time.append(hdu_data["time"][~np.isnan(hdu_data["pdcsap_flux"])])
flux.append(hdu_data["pdcsap_flux"][~np.isnan(hdu_data["pdcsap_flux"])])
ferr.append(hdu_data["pdcsap_flux_err"][~np.isnan(hdu_data["pdcsap_flux"])])
# Ignora los NaN al hacer append
normFlux, normFerr, phase = flux, ferr, time
for i in range(0,len(flux)):
med.append(np.median(flux[i]))
prom = np.mean(med)
for i in range(0,len(flux)):
normFlux[i] = normFlux[i] - (med[i] - prom)
normFlux[i] = medfilt(normFlux[i], 11)
fig, ax = plt.subplots(2,1,figsize=(15,20))
for i in range(0,len(ax)):
ax[i].set_ylim(0.996,1.0007)
ax[i].set_title('KOI-1274.01')
ax[i].set_xlabel('Phase',size=16)
ax[i].set_ylabel('Normalized flux',size=14)
for i in range(0,len(normFlux)):
normFlux[i] = normFlux[i]/prom
normFerr[i] = normFerr[i]/prom
phase[i] = time[i]/p %1
ax[0].errorbar(phase[i], normFlux[i], normFerr[i], fmt='g.', ecolor='green', ms = 3)
ax[0].plot(phase[i], normFlux[i],'k.')
ax[1].errorbar(phase[i], normFlux[i], normFerr[i], fmt='g.', ecolor='green', ms = 3)
ax[1].plot(phase[i], normFlux[i],'k--', alpha=.2)
ax[1].set_xlim(0.699,0.7005)
plt.show()
plt.close()
Explanation: Tarea 2, parte 2
<hr>
Pregunta 1
Según <a href="https://ui.adsabs.harvard.edu/#abs/2003ApJ...585.1038S/abstract">Seager & Mallén-Ornelas (2003)</a>,
La geometría del tránsito puede ser descrita por (se presentan imágenes del trabajo de Seager & Mallén-Ornelas):
<img src="transitshape.png">
y la duración total como:
<img src="transittotaltime.png">
<img src="fig1.png">
<em>a</em> es el semieje mayor.
A partir de las características de la curva de luz de un tránsito, teniendo en cuenta la geometría del evento y la Tercera Ley de Kepler, se pueden obtener cuatro parámetros derivables de observables del sistema:
<ol>
<li><strong>La razón entre el radio planetario y el estelar</strong> $$R_P/R_* = \sqrt{\Delta F}$$ directamente de definir $\Delta F \equiv (F_{no transit}-F_{transit})/F_{no transit} = (R_P/R*)^2$</li>
<li><strong>El parámetro de impacto</strong>
<img src="b.png">
definido como la distancia proyectada entre el los centros del planeta y de la estrella durante la mitad del tránsito, en términos de $R_*$. Se deriva directamente de la ecuación anterior y de la ecuación de la forma del tránsito.</li>
<li><strong>La razón entre el semieje mayor de la órbita y el radio estelar</strong>
<img src="ar.png">
a partir de la ecuación de la duración del tránsito.</li>
<li><strong>La densidad estelar</strong>
<img src="rho.png">
a partir de la ecuación anterior y de la Tercera Ley de Kepler cuando $M_P \ll M_*$. Depende del parámetro de impacto puesto que $b$ afecta la duración del tránsito.
</ol>
Si se considera la relación masa-radio para una estrella
$$R_* = k M_*^x$$
para algunas constantes $k,x$ se obtiene un sistema de cinco ecuaciones y cinco incógnitas, y se pueden derivar las cantidades físicas una por una.
<hr>
Pregunta 2
En este paso se realiza una estandarización de los cuartos de modo que queden en torno al promedio de las medianas. Luego, se realizan dos pasos de <em>median filter</em> con ventanas de 11 valores y se normaliza el flujo. Para el período sugerido (704.2 días), se grafica el flujo en función de la fase. En verde se incluyen las barras de error asociadas al flujo.
End of explanation
df = 0.003
tt = 0.7
tf = 0.4
sintf = np.sin(tf*np.pi/p)**2 # un par de variables auxiliares
sintt = np.sin(tt*np.pi/p)**2
ratio = np.sqrt(df) #Rp/R*
b = np.sqrt( ((1-ratio)**2 - (sintf)/(sintt) *(1+ratio)**2) /(1-(sintf/sintt)) )
aR = np.sqrt( ((1+ratio)**2 - b**2 *(1-sintt)) /sintt )
i = np.arccos(b/aR)
i = np.degrees(i)
rho = aR**3 * 365.25**2 / 215**3 / p**2
print 'Rp/R* \t = \t' + repr(ratio)
print 'b \t = \t' + repr(b)
print 'a/R* \t = \t' + repr(aR)
print 'i \t = \t' + repr(i)
print 'rho \t = \t' + repr(rho) + ' densidades solares'
Explanation: <hr>
Pregunta 3
A simple vista, aproximadamente, se tiene
$T_T = 0.7$ d
$T_F = 0.4$ d
$\Delta F = 0.003$
Con esto, se calculan los cuatro parámetros a partir de observables (sin considerar en este punto una ley de potencias que relacione masa y radio).
Como las ecuaciones son las del trabajo de Seager & Mallén-Ornelas, las suposiciones necesarias para determinar las cantidades son las que allí se hacen.
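As a possible extension (not part of the original homework), the system of equations can be closed with the stellar mass-radius power law R_* = k M_*^x mentioned above. The calibration k = 1, x = 0.8 used here is an assumed main-sequence value, so treat the resulting numbers as a sketch rather than a definitive result.
# Sketch: derive stellar and planetary parameters from rho using an assumed
# main-sequence mass-radius relation R_* = k * M_***x (k = 1, x = 0.8 assumed).
k, x = 1.0, 0.8
M_star = (rho * k**3)**(1.0/(1.0 - 3.0*x))   # stellar mass [solar masses]
R_star = k * M_star**x                       # stellar radius [solar radii]
R_p = ratio * R_star * 109.2                 # planetary radius [Earth radii]; 1 R_sun ~ 109.2 R_earth
print 'M* \t = \t' + repr(M_star) + ' masas solares'
print 'R* \t = \t' + repr(R_star) + ' radios solares'
print 'Rp \t = \t' + repr(R_p) + ' radios terrestres'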
End of explanation
from scipy.optimize import leastsq
from scipy.interpolate import UnivariateSpline
import scipy.integrate as integrate
w, r = np.loadtxt('kepler_response_hires1.txt', unpack=True)
w = 10*w
S = UnivariateSpline(w,r,s=0,k=1)
min_w = min(w)
max_w = max(w)
idx = np.where((w>min_w)&(w<max_w))[0]
S_wav = np.append(np.append(min_w,w[idx]),max_w)
S_res = np.append(np.append(S(min_w),r[idx]),S(max_w))
I = np.array([])
wavelengths = np.array([])
f = open('grav_4.5_lh_1.25.dat','r')
counter = 0
while(True):
l = f.readline()
if(l==''):
break
# If no jump of line or comment, save the intensities:
if(l[0]!='#' and l[:3]!='\n'):
splitted = l.split('\t')
if(len(splitted)==18):
splitted[-1] = (splitted[-1])[:-1] # The last one always has a jump of line (\n), so erase it.
wavelength = np.double(splitted[0])*10 # Convert wavelengths, which are in nanometers, to angstroms.
intensities = np.double(np.array(splitted[1:])) # Get the intensities.
ndigits = len(str(int(intensities[1])))
# Only if I(1) is different from zero, fit the LDs:
if(intensities[0]!=0.0):
intensities[1:] = intensities[1:]/1e5 # Kurucz doesn't put points on his files (e.g.: 0.8013 is 8013).
intensities[1:] = intensities[1:]*intensities[0] # All the rest of the intensities are normalized w/r to the center one.
if(counter == 0):
I = intensities
else:
I = np.vstack((I,intensities))
wavelengths = np.append(wavelengths,wavelength)
counter = counter + 1
f.close()
mu = np.array([1.0,0.9,0.8,0.7,0.6,0.5,0.4,0.3,0.25,0.2,0.15,0.125,0.1,0.075,0.05,0.025,0.01])
# Define the number of mu angles at which we will perform the integrations:
nmus = len(mu)
# Now integrate intensity through each angle:
I_l = np.array([])
for i in range(nmus):
# Interpolate the intensities:
Ifunc = UnivariateSpline(wavelengths,I[:,i],s=0,k=1)
integrand = S_res*Ifunc(S_wav)
integration_results = np.trapz(integrand, x=S_wav)
I_l = np.append(I_l,integration_results)
I0 = I_l/(I_l[0]) # Normalize profile with respect to I(mu = 1):
# Define A matrix for the linear system:
A = np.zeros([2,2])
# Define b vector for the linear system:
b = np.zeros(2)
# Obtain the alpha_n_k and beta_k that fill the A matrix and b vector:
for n in range(1,3,1):
for k in range(1,3,1):
A[n-1,k-1] = sum(((1.0-mu)**n)*((1.0-mu)**k))
b[n-1] = sum(((1.0-mu)**n)*(1.0-I0))
u = list(np.linalg.solve(A,b))
print u
Explanation: <hr>
Pregunta 4
Se calculan los coeficientes según se detalla en <a href="https://arxiv.org/pdf/1503.07020v3.pdf">Espinoza & Jordán 2015</a>, a partir de una versión modificada del código disponible en <a href="https://github.com/nespinoza/limb-darkening">https://github.com/nespinoza/limb-darkening</a>.
End of explanation
import batman
params = batman.TransitParams() #object to store transit parameters
params.t0 = 0. #time of inferior conjunction
params.per = p #orbital period
params.rp = ratio #planet radius (in units of stellar radii)
params.a = aR #semi-major axis (in units of stellar radii)
params.inc = i #orbital inclination (in degrees)
params.ecc = 0. #eccentricity
params.w = 90. #longitude of periastron (in degrees)
params.limb_dark = "quadratic" #limb darkening model
params.u = u #limb darkening coefficients
t = np.linspace(-0.025, 0.025, 100) #times at which to calculate light curve
m = batman.TransitModel(params, t) #initializes model
fluxBatman = m.light_curve(params) #calculates light curve
plt.plot(t, fluxBatman)
plt.xlabel("Time from central transit")
plt.ylabel("Relative flux")
plt.show()
##############
#oc
Explanation: Se ejecuta batman como se explica en la documentación, entregando como parámetros los valores obtenidos a lo largo de este trabajo.
End of explanation
koi = client.koi(7016.01)
lcs = koi.get_light_curves(short_cadence=True)
p = koi.koi_period
time, flux, ferr, med = [], [], [], []
for lc in lcs:
with lc.open() as f:
# The lightcurve data are in the first FITS HDU.
hdu_data = f[1].data
time.append(hdu_data["time"][~np.isnan(hdu_data["pdcsap_flux"])])
flux.append(hdu_data["pdcsap_flux"][~np.isnan(hdu_data["pdcsap_flux"])])
ferr.append(hdu_data["pdcsap_flux_err"][~np.isnan(hdu_data["pdcsap_flux"])])
# Ignora los NaN al hacer append
normFlux, normFerr, phase = flux, ferr, time
for i in range(0,len(flux)):
med.append(np.median(flux[i]))
prom = np.mean(med)
for i in range(0,len(flux)):
normFlux[i] = normFlux[i] - (med[i] - prom)
normFlux[i] = medfilt(normFlux[i], 25)
fig, ax = plt.subplots(2,1,figsize=(15,20))
for i in range(0,len(ax)):
ax[i].set_ylim(0.996,1.0007)
ax[i].set_title('KOI-7016.01')
ax[i].set_xlabel('Phase',size=16)
ax[i].set_ylabel('Normalized flux',size=14)
for i in range(0,len(normFlux)):
normFlux[i] = normFlux[i]/prom
normFerr[i] = normFerr[i]/prom
phase[i] = time[i]/p %1
ax[0].errorbar(phase[i], normFlux[i], normFerr[i], fmt='g.', ecolor='green', ms = 3)
ax[0].plot(phase[i], normFlux[i],'k.')
ax[1].errorbar(phase[i], normFlux[i], normFerr[i], fmt='g.', ecolor='green', ms = 3)
ax[1].plot(phase[i], normFlux[i],'k.', alpha=.2)
ax[1].set_xlim(0.762,0.782)
ax[1].set_ylim(0.9985,1.001)
plt.show()
plt.close()
Explanation: <hr>
Pregunta Bonus
Se repiten los pasos para el planeta anterior. Dado que los valores del flujo son más dispersos, se realiza el paso de median filter con ventanas de 25 valores y se centra el gráfico en un posible tránsito, que corresponde al sector alrededor de la zona de mínimo flujo (obviando un punto, seguramente espuria, en phase ~ 0.35). Si el planeta fuera del radio de la Tierra y su estrella fuera del radio del Sol, entonces
$$R_p/R_* = 0.009 \rightarrow \Delta F = 0,000081$$
Esto se acerca a lo que muestra el gráfico.
End of explanation |
9,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Eight schools
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: The Data
From Bayesian Data Analysis, section 5.5 (Gelman et al. 2013)
Step4: Model
To capture the data, we use a hierarchical normal model. It follows the generative process,
$$
\begin{align}
\mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \
\log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \
\text{for } & i=1\ldots 8
Step5: Bayesian Inference
Given data, we perform Hamiltonian Monte Carlo (HMC) to calculate the posterior distribution over the model's parameters.
Step6: We can observe the shrinkage toward the group avg_effect above.
Step7: Criticism
To get the posterior predictive distribution, i.e., a model of new data $y^*$ given the observed data $y$
Step8: We can look at the residuals between the treatment effects data and the predictions of the model posterior. These correspond with the plot above which shows the shrinkage of the estimated effects toward the population average.
Step9: Because we have a distribution of predictions for each school, we can consider the distribution of residuals as well. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd
import warnings
tf.enable_v2_behavior()
plt.style.use("ggplot")
warnings.filterwarnings('ignore')
Explanation: Eight schools
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Eight_Schools"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The eight schools problem (Rubin 1981) considers the effectiveness of SAT coaching programs conducted in parallel at eight schools. It has become a classic problem (Bayesian Data Analysis, Stan) that illustrates the usefulness of hierarchical modeling for sharing information between exchangeable groups.
The implementation below is an adaptation of an Edward 1.0 tutorial.
Imports
End of explanation
num_schools = 8 # number of schools
treatment_effects = np.array(
[28, 8, -3, 7, -1, 1, 18, 12], dtype=np.float32) # treatment effects
treatment_stddevs = np.array(
[15, 10, 16, 11, 9, 11, 10, 18], dtype=np.float32) # treatment SE
fig, ax = plt.subplots()
plt.bar(range(num_schools), treatment_effects, yerr=treatment_stddevs)
plt.title("8 Schools treatment effects")
plt.xlabel("School")
plt.ylabel("Treatment effect")
fig.set_size_inches(10, 8)
plt.show()
Explanation: The Data
From Bayesian Data Analysis, section 5.5 (Gelman et al. 2013):
A study was performed for the Educational Testing Service to analyze the effects of special coaching programs for SAT-V (Scholastic Aptitude Test-Verbal) in each of eight high schools. The outcome variable in each study was the score on a special administration of the SAT-V, a standardized multiple choice test administered by the Educational Testing Service and used to help colleges make admissions decisions; the scores can vary between 200 and 800, with mean about 500 and standard deviation about 100. The SAT examinations are designed to be resistant to short-term efforts directed specifically toward improving performance on the test; instead they are designed to reflect knowledge acquired and abilities developed over many years of education. Nevertheless, each of the eight schools in this study considered its short-term coaching program to be very successful at increasing SAT scores. Also, there was no prior reason to believe that any of the eight programs was more effective than any other or that some were more similar in effect to each other than to any other.
For each of the eight schools ($J = 8$), we have an estimated treatment effect $y_j$ and a standard error of the effect estimate $\sigma_j$. The treatment effects in the study were obtained by a linear regression on the treatment group using PSAT-M and PSAT-V scores as control variables. As there was no prior belief that any of the schools were more or less similar or that any of the coaching programs would be more effective, we can consider the treatment effects as exchangeable.
End of explanation
model = tfd.JointDistributionSequential([
tfd.Normal(loc=0., scale=10., name="avg_effect"), # `mu` above
tfd.Normal(loc=5., scale=1., name="avg_stddev"), # `log(tau)` above
tfd.Independent(tfd.Normal(loc=tf.zeros(num_schools),
scale=tf.ones(num_schools),
name="school_effects_standard"), # `theta_prime`
reinterpreted_batch_ndims=1),
lambda school_effects_standard, avg_stddev, avg_effect: (
tfd.Independent(tfd.Normal(loc=(avg_effect[..., tf.newaxis] +
tf.exp(avg_stddev[..., tf.newaxis]) *
school_effects_standard), # `theta` above
scale=treatment_stddevs),
name="treatment_effects", # `y` above
reinterpreted_batch_ndims=1))
])
def target_log_prob_fn(avg_effect, avg_stddev, school_effects_standard):
  """Unnormalized target density as a function of states."""
return model.log_prob((
avg_effect, avg_stddev, school_effects_standard, treatment_effects))
Explanation: Model
To capture the data, we use a hierarchical normal model. It follows the generative process,
$$
\begin{align}
\mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \
\log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \
\text{for } & i=1\ldots 8:\
& \theta_i \sim \text{Normal}\left(\text{loc}{=}\mu,\ \text{scale}{=}\tau \right) \
& y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
\end{align}
$$
where $\mu$ represents the prior average treatment effect and $\tau$ controls how much variance there is between schools. The $y_i$ and $\sigma_i$ are observed. As $\tau \rightarrow \infty$, the model approaches the no-pooling model, i.e., each of the school treatment effect estimates are allowed to be more independent. As $\tau \rightarrow 0$, the model approaches the complete-pooling model, i.e., all of the school treatment effects are closer to the group average $\mu$. To restrict the standard deviation to be positive, we draw $\tau$ from a lognormal distribution (which is equivalent to drawing $log(\tau)$ from a normal distribution).
Following Diagnosing Biased Inference with Divergences, we transform the model above into an equivalent non-centered model:
$$
\begin{align}
\mu &\sim \text{Normal}(\text{loc}{=}0,\ \text{scale}{=}10) \
\log\tau &\sim \text{Normal}(\text{loc}{=}5,\ \text{scale}{=}1) \
\text{for } & i=1\ldots 8:\
& \theta_i' \sim \text{Normal}\left(\text{loc}{=}0,\ \text{scale}{=}1 \right) \
& \theta_i = \mu + \tau \theta_i' \
& y_i \sim \text{Normal}\left(\text{loc}{=}\theta_i,\ \text{scale}{=}\sigma_i \right)
\end{align}
$$
We reify this model as a JointDistributionSequential instance:
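As a quick sanity check (an addition, not part of the original notebook), we can draw a single joint sample from the generative model and evaluate the unnormalized posterior density of the observed data at that draw. This exercises both model.sample() and target_log_prob_fn before committing to the full HMC run.
# Added sanity check: one forward sample from the prior and one log-prob evaluation.
avg_effect_draw, avg_stddev_draw, school_std_draw, simulated_effects = model.sample()
print("simulated treatment effects:", simulated_effects.numpy())
print("target log prob at this draw:",
      target_log_prob_fn(avg_effect_draw, avg_stddev_draw, school_std_draw).numpy())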
End of explanation
num_results = 5000
num_burnin_steps = 3000
# Improve performance by tracing the sampler using `tf.function`
# and compiling it using XLA.
@tf.function(autograph=False, jit_compile=True)
def do_sampling():
return tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
tf.zeros([], name='init_avg_effect'),
tf.zeros([], name='init_avg_stddev'),
tf.ones([num_schools], name='init_school_effects_standard'),
],
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.4,
num_leapfrog_steps=3))
states, kernel_results = do_sampling()
avg_effect, avg_stddev, school_effects_standard = states
school_effects_samples = (
avg_effect[:, np.newaxis] +
np.exp(avg_stddev)[:, np.newaxis] * school_effects_standard)
num_accepted = np.sum(kernel_results.is_accepted)
print('Acceptance rate: {}'.format(num_accepted / num_results))
fig, axes = plt.subplots(8, 2, sharex='col', sharey='col')
fig.set_size_inches(12, 10)
for i in range(num_schools):
axes[i][0].plot(school_effects_samples[:,i].numpy())
axes[i][0].title.set_text("School {} treatment effect chain".format(i))
sns.kdeplot(school_effects_samples[:,i].numpy(), ax=axes[i][1], shade=True)
axes[i][1].title.set_text("School {} treatment effect distribution".format(i))
axes[num_schools - 1][0].set_xlabel("Iteration")
axes[num_schools - 1][1].set_xlabel("School effect")
fig.tight_layout()
plt.show()
print("E[avg_effect] = {}".format(np.mean(avg_effect)))
print("E[avg_stddev] = {}".format(np.mean(avg_stddev)))
print("E[school_effects_standard] =")
print(np.mean(school_effects_standard[:, ]))
print("E[school_effects] =")
print(np.mean(school_effects_samples[:, ], axis=0))
# Compute the 95% interval for school_effects
school_effects_low = np.array([
np.percentile(school_effects_samples[:, i], 2.5) for i in range(num_schools)
])
school_effects_med = np.array([
np.percentile(school_effects_samples[:, i], 50) for i in range(num_schools)
])
school_effects_hi = np.array([
np.percentile(school_effects_samples[:, i], 97.5)
for i in range(num_schools)
])
fig, ax = plt.subplots(nrows=1, ncols=1, sharex=True)
ax.scatter(np.array(range(num_schools)), school_effects_med, color='red', s=60)
ax.scatter(
np.array(range(num_schools)) + 0.1, treatment_effects, color='blue', s=60)
plt.plot([-0.2, 7.4], [np.mean(avg_effect),
np.mean(avg_effect)], 'k', linestyle='--')
ax.errorbar(
np.array(range(8)),
school_effects_med,
yerr=[
school_effects_med - school_effects_low,
school_effects_hi - school_effects_med
],
fmt='none')
ax.legend(('avg_effect', 'HMC', 'Observed effect'), fontsize=14)
plt.xlabel('School')
plt.ylabel('Treatment effect')
plt.title('HMC estimated school treatment effects vs. observed data')
fig.set_size_inches(10, 8)
plt.show()
Explanation: Bayesian Inference
Given data, we perform Hamiltonian Monte Carlo (HMC) to calculate the posterior distribution over the model's parameters.
End of explanation
print("Inferred posterior mean: {0:.2f}".format(
np.mean(school_effects_samples[:,])))
print("Inferred posterior mean se: {0:.2f}".format(
np.std(school_effects_samples[:,])))
Explanation: We can observe the shrinkage toward the group avg_effect above.
End of explanation
sample_shape = [5000]
_, _, _, predictive_treatment_effects = model.sample(
value=(tf.broadcast_to(np.mean(avg_effect, 0), sample_shape),
tf.broadcast_to(np.mean(avg_stddev, 0), sample_shape),
tf.broadcast_to(np.mean(school_effects_standard, 0),
sample_shape + [num_schools]),
None))
fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
sns.kdeplot(predictive_treatment_effects[:, 2*i].numpy(),
ax=ax[0], shade=True)
ax[0].title.set_text(
"School {} treatment effect posterior predictive".format(2*i))
sns.kdeplot(predictive_treatment_effects[:, 2*i + 1].numpy(),
ax=ax[1], shade=True)
ax[1].title.set_text(
"School {} treatment effect posterior predictive".format(2*i + 1))
plt.show()
# The mean predicted treatment effects for each of the eight schools.
prediction = np.mean(predictive_treatment_effects, axis=0)
Explanation: Criticism
To get the posterior predictive distribution, i.e., a model of new data $y^*$ given the observed data $y$:
$$ p(y^|y) \propto \int_\theta p(y^ | \theta)p(\theta |y)d\theta$$
we override the values of the random variables in the model to set them to the mean of the posterior distribution, and sample from that model to generate new data $y^*$.
End of explanation
treatment_effects - prediction
Explanation: We can look at the residuals between the treatment effects data and the predictions of the model posterior. These correspond with the plot above which shows the shrinkage of the estimated effects toward the population average.
End of explanation
residuals = treatment_effects - predictive_treatment_effects
fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)
fig.set_size_inches(12, 10)
fig.tight_layout()
for i, ax in enumerate(axes):
sns.kdeplot(residuals[:, 2*i].numpy(), ax=ax[0], shade=True)
ax[0].title.set_text(
"School {} treatment effect residuals".format(2*i))
sns.kdeplot(residuals[:, 2*i + 1].numpy(), ax=ax[1], shade=True)
ax[1].title.set_text(
"School {} treatment effect residuals".format(2*i + 1))
plt.show()
Explanation: Because we have a distribution of predictions for each school, we can consider the distribution of residuals as well.
End of explanation |
9,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prediction using the bottom up method
This notebook details the process of predicting which homework a notebook came from, after featurizing the notebook with the bottom-up method. This is done by gathering all templates in each notebook after running the algorithm, then using CountVectorizer to featurize the notebooks, and finally using random forests to make the prediction
Step1: Inter and Intra Similarities
The first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below
Step2: Actual Prediction
While the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows
Step3: Clustering
Lastly, we try unsupervised learning, clustering based on the features we've extracted, and measure using silhouette score.
Step4: Trying to restrict features
The problem above is that there are too many unimportant features -- all this noise makes it hard to separate the different classes. To try to counteract this, I'll try ranking the features using tf-idf and only take some of them.
Step5: What's happening
Figuring out what is going on is a bit difficult, but we can look at the top templates generated from the random forest, and see why they might have been chosen | Python Code:
import sys
home_directory = '/dfs/scratch2/fcipollone'
sys.path.append(home_directory)
import numpy as np
from nbminer.notebook_miner import NotebookMiner
hw_filenames = np.load('../homework_names_jplag_combined_per_student.npy')
hw_notebooks = [[NotebookMiner(filename) for filename in temp[:59]] for temp in hw_filenames]
from nbminer.pipeline.pipeline import Pipeline
from nbminer.features.features import Features
from nbminer.preprocess.get_ast_features import GetASTFeatures
from nbminer.preprocess.get_imports import GetImports
from nbminer.preprocess.resample_by_node import ResampleByNode
from nbminer.encoders.ast_graph.ast_graph import ASTGraphReducer
from nbminer.preprocess.feature_encoding import FeatureEncoding
from nbminer.encoders.cluster.kmeans_encoder import KmeansEncoder
from nbminer.results.similarity.jaccard_similarity import NotebookJaccardSimilarity
from nbminer.results.prediction.corpus_identifier import CorpusIdentifier
a = Features(hw_notebooks[2], 'hw2')
a.add_notebooks(hw_notebooks[3], 'hw3')
a.add_notebooks(hw_notebooks[4], 'hw4')
a.add_notebooks(hw_notebooks[5], 'hw5')
gastf = GetASTFeatures()
rbn = ResampleByNode()
gi = GetImports()
agr = ASTGraphReducer(a, threshold=8, split_call=False)
ci = CorpusIdentifier()
pipe = Pipeline([gastf, rbn, gi, agr, ci])
a = pipe.transform(a)
import tqdm
X, y = ci.get_data_set()
similarities = np.zeros((len(X), len(X)))
for i in tqdm.tqdm(range(len(X))):
for j in range(len(X)):
if len(set.union(set(X[i]), set(X[j]))) == 0:
continue
similarities[i][j] = len(set.intersection(set(X[i]), set(X[j]))) / (len(set.union(set(X[i]), set(X[j]))))
Explanation: Prediction using the bottom up method
This notebook details the process of predicting which homework a notebook came from, after featurizing the notebook with the bottom-up method. This is done by gathering all templates in each notebook after running the algorithm, then using CountVectorizer to featurize the notebooks, and finally using random forests to make the prediction
End of explanation
def get_avg_inter_intra_sims(X, y, val):
inter_sims = []
intra_sims = []
for i in range(len(X)):
for j in range(i+1, len(X)):
if y[i] == y[j] and y[i] == val:
intra_sims.append(similarities[i][j])
else:
inter_sims.append(similarities[i][j])
return np.array(intra_sims), np.array(inter_sims)
for i in np.unique(y):
intra_sims, inter_sims = get_avg_inter_intra_sims(X, y, i)
print('Mean intra similarity for hw',i,'is',np.mean(intra_sims),'with std',np.std(intra_sims))
print('Mean inter similarity for hw',i,'is',np.mean(inter_sims),'with std',np.std(inter_sims))
print('----')
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 5, 15
def get_all_sims(X, y, val):
sims = []
for i in range(len(X)):
for j in range(i+1, len(X)):
if y[i] == val or y[j] == val:
sims.append(similarities[i][j])
return sims
fig, axes = plt.subplots(4)
for i in range(4):
axes[i].hist(get_all_sims(X,y,i), bins=30)
axes[i].set_xlabel("Similarity Value")
axes[i].set_ylabel("Number of pairs")
Explanation: Inter and Intra Similarities
The first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below
End of explanation
import sklearn
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
X, y = ci.get_data_set()
countvec = sklearn.feature_extraction.text.CountVectorizer()
X_list = [" ".join(el) for el in X]
countvec.fit(X_list)
X = countvec.transform(X_list)
p = np.random.permutation(len(X.todense()))
X = X.todense()[p]
y = np.array(y)[p]
clf = sklearn.ensemble.RandomForestClassifier(n_estimators=400, max_depth=3)
scores = cross_val_score(clf, X, y, cv=10)
print(scores)
print(np.mean(scores))
from sklearn.ensemble import AdaBoostClassifier
clf = sklearn.ensemble.AdaBoostClassifier(n_estimators=700)
scores = cross_val_score(clf, X, y, cv=10)
print(scores)
print(np.mean(scores))
X.shape
Explanation: Actual Prediction
While the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows:
Split the data into train and test
Vectorize based on templates that exist
Build a random forest classifier that uses this feature representation, and measure the performance
End of explanation
X, y = ci.get_data_set()
countvec = sklearn.feature_extraction.text.CountVectorizer()
X_list = [" ".join(el) for el in X]
countvec.fit(X_list)
X = countvec.transform(X_list)
clusterer = sklearn.cluster.KMeans(n_clusters = 4).fit(X)
cluster_score = (sklearn.metrics.silhouette_score(X, clusterer.labels_))
cheat_score = (sklearn.metrics.silhouette_score(X, y))
print('Silhouette score using the actual labels:', cheat_score)
print('Silhouette score using the cluster labels:', cluster_score)
x_reduced = sklearn.decomposition.PCA(n_components=2).fit_transform(X.todense())
plt.rcParams['figure.figsize'] = 5, 10
fig, axes = plt.subplots(2)
axes[0].scatter(x_reduced[:,0], x_reduced[:,1], c=y)
axes[0].set_title('PCA Reduced notebooks with original labels')
axes[1].scatter(x_reduced[:,0], x_reduced[:,1], c=clusterer.labels_)
axes[1].set_title('PCA Reduced notebooks with kmean cluster labels')
Explanation: Clustering
Lastly, we try unsupervised learning, clustering based on the features we've extracted, and measure using silhouette score.
End of explanation
X, y = ci.get_data_set()
countvec = sklearn.feature_extraction.text.TfidfVectorizer()
X_list = [" ".join(el) for el in X]
countvec.fit(X_list)
X = countvec.transform(X_list)
#X = X.todense()
feature_array = np.array(countvec.get_feature_names())
# rank features by their total tf-idf weight across the corpus
tfidf_sorting = np.argsort(np.asarray(X.sum(axis=0)).ravel())[::-1]
top_n = feature_array[tfidf_sorting][:4]
print(top_n)
X, y = ci.get_data_set()
countvec = sklearn.feature_extraction.text.CountVectorizer()
X_list = [" ".join([val for val in el if val in top_n]) for el in X]
countvec.fit(X_list)
X = countvec.transform(X_list)
X = X.todense()
np.array([X[n,0] for n in range(len(X))]).shape
x_reduced = sklearn.decomposition.PCA(n_components=2).fit_transform(X)
print(x_reduced.shape)
plt.rcParams['figure.figsize'] = 5, 5
plt.scatter(x_reduced[:,0], x_reduced[:,1], c=y)
Explanation: Trying to restrict features
The problem above is that there are too many unimportant features -- all this noise makes it hard to separate the different classes. To try to counteract this, I'll try ranking the features using tf-idf and only take some of them.
End of explanation
'''
Looking at the output below, it's clear that the bottom up method is recognizing very specific
structures of ast graph, which makes sense because some structures are exactly repeated in
homeworks. For example:
treatment = pd.Series([0]*4 + [1]*2)
is a line in all of the homework one notebooks.
Example function call lines:
var = call(a_bunch_of_operations)
var = a[10] + call(10)
var = sklearn.linear_model.linear_regression(X[a:b], y[a:b])
'''
clf.fit(X,y)
fnames= countvec.get_feature_names()
clfi = clf.feature_importances_
sa = []
for i in range(len(clfi)):
sa.append((clfi[i], fnames[i]))
sra = [el for el in reversed(sorted(sa))]
import astor
for temp in sra:
temp = temp[1]
print(temp, agr.templates.get_examples(temp)[1])
for i in range(5):
print ('\t',astor.to_source(agr.templates.get_examples(temp)[0][i]))
Explanation: What's happening
Figuring out what is going on is a bit difficult, but we can look at the top templates generated from the random forest, and see why they might have been chosen
End of explanation |
9,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One Queue or Two
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: This notebook presents a case study from Modeling and Simulation in Python. It explores a question related to queueing theory, which is the study of systems that involve waiting in lines, also known as "queues".
Suppose you are designing the checkout area for a new store. There is room for two checkout counters and a waiting area for customers. You can make two lines, one for each counter, or one line that serves both counters.
In theory, you might expect a single line to be better, but it has some practical drawbacks
Step2: Test this function by creating a System object with lam=1/8 and mu=1/5.
Step3: Write an update function that takes as parameters x, which is the total number of customer in the store, including the one checking out; t, which is the number of minutes that have elapsed in the simulation, and system, which is a System object.
If there's a customer checking out, it should use flip to decide whether they are done. And it should use flip to decide if a new customer has arrived.
It should return the total number of customers at the end of the time step.
Step4: Test your function by calling it with x=1, t=0, and the System object you created. If you run it a few times, you should see different results.
Step6: Now we can run the simulation. Here's a version of run_simulation that creates a TimeSeries with the total number of customers in the store, including the one checking out.
Step7: Call run_simulation with your update function and plot the results.
Step9: After the simulation, we can compute L, which is the average number of customers in the system, and W, which is the average time customers spend in the store. L and W are related by Little's Law
Step10: Call compute_metrics with the results from your simulation.
Step11: Parameter sweep
Since we don't know the actual value of $\lambda$, we can sweep through a range of possibilities, from 10% to 80% of the completion rate, $\mu$. (If customers arrive faster than the completion rate, the queue grows without bound. In that case the metrics L and W just depend on how long the store is open.)
Create an array of values for lam.
Step12: Write a function that takes an array of values for lam, a single value for mu, and an update function.
For each value of lam, it should run a simulation, compute L and W, and store the value of W in a SweepSeries.
It should return the SweepSeries.
Step13: Call your function to generate a SweepSeries, and plot it.
Step14: If we imagine that this range of values represents arrival rates on different days, we can use the average value of W, for a range of values of lam, to compare different queueing strategies.
Step16: Analysis
The model I chose for this system is a common model in queueing theory, in part because many of its properties can be derived analytically.
In particular, we can derive the average time in the store as a function of $\mu$ and $\lambda$
Step17: Use this function to plot the theoretical results, then plot your simulation results again on the same graph. How do they compare?
Step18: Multiple servers
Now let's try the other two queueing strategies
Step19: Use this update function to simulate the system, plot the results, and print the metrics.
Step20: Since we have two checkout counters now, we can consider values for $\lambda$ that exceed $\mu$.
Create a new array of values for lam from 10% to 160% of mu.
Step21: Use your sweep function to simulate the two server, one queue scenario with a range of values for lam.
Plot the results and print the average value of W across all values of lam.
Step22: Multiple queues
To simulate the scenario with two separate queues, we need two state variables to keep track of customers in each queue.
Write an update function that takes x1, x2, t, and system as parameters and returns x1 and x2 as return values. f you are not sure how to return more than one return value, see compute_metrics.
When a customer arrives, which queue do they join?
Step23: Write a version of run_simulation that works with this update function.
Step24: Test your functions by running a simulation with a single value of lam.
Step25: Sweep a range of values for lam, plot the results, and print the average wait time across all values of lam.
How do the results compare to the scenario with two servers and one queue. | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: One Queue or Two
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
# Solution goes here
Explanation: This notebook presents a case study from Modeling and Simulation in Python. It explores a question related to queueing theory, which is the study of systems that involve waiting in lines, also known as "queues".
Suppose you are designing the checkout area for a new store. There is room for two checkout counters and a waiting area for customers. You can make two lines, one for each counter, or one line that serves both counters.
In theory, you might expect a single line to be better, but it has some practical drawbacks: in order to maintain a single line, you would have to install rope barriers, and customers might be put off by what seems to be a longer line, even if it moves faster.
So you'd like to check whether the single line is really better and by how much. Simulation can help answer this question.
As we did in the bikeshare model, we'll assume that a customer is equally likely to arrive during any timestep. I'll denote this probability using the Greek letter lambda, $\lambda$, or the variable name lam. The value of $\lambda$ probably varies from day to day, so we'll have to consider a range of possibilities.
Based on data from other stores, you know that it takes 5 minutes for a customer to check out, on average. But checkout times are highly variable: most customers take less than 5 minutes, but some take substantially more. A simple way to model this variability is to assume that when a customer is checking out, they have the same probability of finishing up during each time step. I'll denote this probability using the Greek letter mu, $\mu$, or the variable name mu.
If we choose $\mu=1/5$, the average number of time steps for each checkout will be 5 minutes, which is consistent with the data.
One server, one queue
Write a function called make_system that takes lam and mu as parameters and returns a System object with variables lam, mu, and duration. Set duration, which is the number of time steps to simulate, to 10 hours, expressed in minutes.
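A minimal sketch of one possible solution is shown below (the book publishes its own solutions, so treat this only as an illustration). It assumes the System constructor imported from modsim.py accepts keyword arguments, as it does elsewhere in this series.
# One possible make_system sketch
def make_system(lam, mu):
    return System(lam=lam, mu=mu, duration=10*60)   # 10 hours, in minutes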
End of explanation
# Solution goes here
Explanation: Test this function by creating a System object with lam=1/8 and mu=1/5.
End of explanation
# Solution goes here
Explanation: Write an update function that takes as parameters x, which is the total number of customers in the store, including the one checking out; t, which is the number of minutes that have elapsed in the simulation, and system, which is a System object.
If there's a customer checking out, it should use flip to decide whether they are done. And it should use flip to decide if a new customer has arrived.
It should return the total number of customers at the end of the time step.
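One possible sketch of this update function follows; the name update_func1 is arbitrary. It checks for a completed checkout first and then for a new arrival, both using flip from modsim.py.
# One possible update function for a single server and a single queue
def update_func1(x, t, system):
    if x > 0 and flip(system.mu):   # customer at the counter finishes
        x -= 1
    if flip(system.lam):            # new customer arrives
        x += 1
    return x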
End of explanation
# Solution goes here
Explanation: Test your function by calling it with x=1, t=0, and the System object you created. If you run it a few times, you should see different results.
End of explanation
def run_simulation(system, update_func):
    """Simulate a queueing system.
    system: System object
    update_func: function object
    """
x = 0
results = TimeSeries()
results[0] = x
for t in linrange(0, system.duration):
x = update_func(x, t, system)
results[t+1] = x
return results
Explanation: Now we can run the simulation. Here's a version of run_simulation that creates a TimeSeries with the total number of customers in the store, including the one checking out.
End of explanation
# Solution goes here
Explanation: Call run_simulation with your update function and plot the results.
End of explanation
def compute_metrics(results, system):
    """Compute average number of customers and wait time.
    results: TimeSeries of queue lengths
    system: System object
    returns: L, W
    """
L = results.mean()
W = L / system.lam
return L, W
Explanation: After the simulation, we can compute L, which is the average number of customers in the system, and W, which is the average time customers spend in the store. L and W are related by Little's Law:
$L = \lambda W$
Where $\lambda$ is the arrival rate. Here's a function that computes them.
End of explanation
# Solution goes here
Explanation: Call compute_metrics with the results from your simulation.
End of explanation
# Solution goes here
Explanation: Parameter sweep
Since we don't know the actual value of $\lambda$, we can sweep through a range of possibilities, from 10% to 80% of the completion rate, $\mu$. (If customers arrive faster than the completion rate, the queue grows without bound. In that case the metrics L and W just depend on how long the store is open.)
Create an array of values for lam.
End of explanation
# Solution goes here
Explanation: Write a function that takes an array of values for lam, a single value for mu, and an update function.
For each value of lam, it should run a simulation, compute L and W, and store the value of W in a SweepSeries.
It should return the SweepSeries.
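A sketch of one possible sweep function is below. It assumes make_system and an update function like the sketches above (or your own versions) are already defined.
# One possible sweep over arrival rates
def sweep_lam(lam_array, mu, update_func):
    sweep = SweepSeries()
    for lam in lam_array:
        system = make_system(lam, mu)
        results = run_simulation(system, update_func)
        L, W = compute_metrics(results, system)
        sweep[lam] = W
    return sweep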
End of explanation
# Solution goes here
# Solution goes here
Explanation: Call your function to generate a SweepSeries, and plot it.
End of explanation
# Solution goes here
Explanation: If we imagine that this range of values represents arrival rates on different days, we can use the average value of W, for a range of values of lam, to compare different queueing strategies.
End of explanation
def plot_W(lam_array, mu):
    """Plot the theoretical mean wait time.
    lam_array: array of values for `lam`
    mu: probability of finishing a checkout
    """
W_array = 1 / (mu - lam_array)
W_series = make_series(lam_array, W_array)
W_series.plot(style='-', label='analysis')
Explanation: Analysis
The model I chose for this system is a common model in queueing theory, in part because many of its properties can be derived analytically.
In particular, we can derive the average time in the store as a function of $\mu$ and $\lambda$:
$W = 1 / (\mu - \lambda)$
The following function plots the theoretical value of $W$ as a function of $\lambda$.
End of explanation
# Solution goes here
Explanation: Use this function to plot the theoretical results, then plot your simulation results again on the same graph. How do they compare?
End of explanation
# Solution goes here
Explanation: Multiple servers
Now let's try the other two queueing strategies:
One queue with two checkout counters.
Two queues, one for each counter.
The following figure shows the three scenarios:
Write an update function for one queue with two servers.
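One possible sketch: with two servers, up to two customers can be checking out at once, so each busy server gets an independent chance of finishing during the time step.
# One possible update function for two servers sharing one queue
def update_func2(x, t, system):
    if x > 1 and flip(system.mu):   # second server finishes
        x -= 1
    if x > 0 and flip(system.mu):   # first server finishes
        x -= 1
    if flip(system.lam):            # new arrival
        x += 1
    return x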
End of explanation
# Solution goes here
Explanation: Use this update function to simulate the system, plot the results, and print the metrics.
End of explanation
# Solution goes here
Explanation: Since we have two checkout counters now, we can consider values for $\lambda$ that exceed $\mu$.
Create a new array of values for lam from 10% to 160% of mu.
End of explanation
# Solution goes here
# Solution goes here
Explanation: Use your sweep function to simulate the two server, one queue scenario with a range of values for lam.
Plot the results and print the average value of W across all values of lam.
End of explanation
# Solution goes here
Explanation: Multiple queues
To simulate the scenario with two separate queues, we need two state variables to keep track of customers in each queue.
Write an update function that takes x1, x2, t, and system as parameters and returns x1 and x2 as return values. If you are not sure how to return more than one return value, see compute_metrics.
When a customer arrives, which queue do they join?
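One possible sketch is below; it assumes an arriving customer joins the shorter queue, breaking ties in favor of the second queue (either tie-breaking choice is defensible).
# One possible update function for two servers with separate queues
def update_func3(x1, x2, t, system):
    if x1 > 0 and flip(system.mu):
        x1 -= 1
    if x2 > 0 and flip(system.mu):
        x2 -= 1
    if flip(system.lam):            # arrival joins the shorter queue
        if x1 < x2:
            x1 += 1
        else:
            x2 += 1
    return x1, x2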
End of explanation
# Solution goes here
Explanation: Write a version of run_simulation that works with this update function.
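A possible version is sketched here, mirroring the single-queue run_simulation above and recording the total number of customers across both queues.
# One possible two-queue simulation loop
def run_simulation2(system, update_func):
    x1, x2 = 0, 0
    results = TimeSeries()
    results[0] = x1 + x2
    for t in linrange(0, system.duration):
        x1, x2 = update_func(x1, x2, t, system)
        results[t+1] = x1 + x2
    return results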
End of explanation
# Solution goes here
Explanation: Test your functions by running a simulation with a single value of lam.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Sweep a range of values for lam, plot the results, and print the average wait time across all values of lam.
How do the results compare to the scenario with two servers and one queue?
End of explanation |
9,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading and writing raw files
In this example, we read a raw file, plot a segment of MEG data
restricted to MEG channels, and save these data in a new
raw file.
Step1: Show MEG data | Python Code:
# Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(fname)
# Set up pick list: MEG + STI 014 - bad channels
want_meg = True
want_eeg = False
want_stim = False
include = ['STI 014']
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bad channels + 2 more
picks = mne.pick_types(raw.info, meg=want_meg, eeg=want_eeg, stim=want_stim,
include=include, exclude='bads')
some_picks = picks[:5] # take 5 first
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
data, times = raw[some_picks, start:(stop + 1)]
# save 150s of MEG data in FIF file
raw.save('sample_audvis_meg_trunc_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
Explanation: Reading and writing raw files
In this example, we read a raw file, plot a segment of MEG data
restricted to MEG channels, and save these data in a new
raw file.
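As an optional check (not part of the original example), the truncated file can be read back to confirm that roughly 150 seconds of data were written:
# Optional verification of the saved file
raw_trunc = mne.io.read_raw_fif('sample_audvis_meg_trunc_raw.fif')
print(raw_trunc.times[-1])  # should be close to 150 s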
End of explanation
raw.plot()
Explanation: Show MEG data
End of explanation |
9,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
モデル化
Next >> 0_quickstart
Prev >> editing
シミュレーションを行う際に一番最初に行うのは モデル化 である.シミュレーションの結果は,どのようにモデル化を行ったかによって大きく影響される.当然ではあるが.
例えば,単振り子のシミュレーションにおいて,0_quickstartでは 摩擦 による運動の減衰を考えなかったが,これを考えてモデル化を行ってみる.
振子と天井の結点の部分で粘性摩擦を仮定し,角速度に比例した力$-c\dot{\theta}$がはたらくものとする.すると,運動方程式は
\begin{align}
ml\ddot{\theta} = -mg\sin\theta - c\dot{\theta}
\end{align}
となる.
Step1: 振り子の角度の時間変化をグラフにすると,このようになっている.
Step2: 摩擦 という要素を考えることによって運動の様相が変化したことがわかる.
シミュレーションを行う上では,考えたい物理モデルをどのようにモデル化するかによって,得られる結果が大きく変わる.所望のシミュレーションを行うためには,十分な力学の知識が必要となる.
Lagrange の運動方程式
0_quickstartでは,単振り子の運動方程式をニュートンの運動方程式から求めたが,今度は ラグランジュの運動方程式 から求める.
おもりの運動エネルギーは
\begin{align}
T = \frac{1}{2}m(l\dot{\theta})^2
\end{align}
であり,ポテンシャルエネルギー(位置エネルギー)は
\begin{align}
U = - m(-g)(l-l\cos\theta) = mgl(1 - \cos\theta)
\end{align}
である.したがって,系のラグランジアンは
\begin{align}
L = T - U = \frac{1}{2}m(l\dot{\theta})^2 - mgl(1 - \cos\theta)
\end{align}
であり,ラグランジュの運動方程式は
\begin{align}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) - \frac{\partial L}{\partial \theta} = 0
\end{align}
である.項を一つ一つ丁寧に計算をすると,
\begin{align}
\frac{\partial L}{\partial \dot{\theta}} = \frac{\partial }{\partial \dot{\theta}} \left( \frac{1}{2}m(l\dot{\theta})^2 - mgl(1 - \cos\theta) \right) = ml^2\dot{\theta}
\end{align}
\begin{align}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) = \frac{d}{dt} (ml^2\dot{\theta}) = ml^2\ddot{\theta}
\end{align}
\begin{align}
\frac{\partial L}{\partial \theta} = \frac{\partial }{\partial \theta} \left( \frac{1}{2}m(l\dot{\theta})^2 - mgl(1 - \cos\theta) \right) = -mgl \sin\theta
\end{align}
より,
\begin{align}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) - \frac{\partial L}{\partial \theta} = ml^2\ddot{\theta} - (-mgl \sin\theta) = 0
\end{align}
よって,
\begin{align}
ml^2\ddot{\theta} + mgl \sin\theta = 0
\end{align}
である.式を整理すると,
\begin{align}
\ddot{\theta} = -\frac{g}{l} \sin\theta
\end{align}
となっており,ニュートンの運動方程式から導出したものと同じ結果が得られたことがわかる.
Lagrange の運動方程式を SymPy で計算する
Lagrange の運動方程式は運動の自由度についてのミニマムな運動方程式を記述することができる.しかし,ラグランジアンとその偏微分の計算は複雑になりがちである.単振り子の例については,運動の自由度は1であり,かつ非常にシンプルな状況であるため手で計算してもよいのだが,これが他リンク系となったり,運動を2次元から3次元に拡張したりしたときには,もはや手計算で求める気力が起こらない.
そこで, Python を使って Lagrange の運動方程式を導く. SymPy の LagrangesMethod クラス を用いる.
Step3: 定数については,m = sym.symbols('m') のように定義する.
なお,時間$t$については必ず t = sym.symbols('t') を定義する必要がある.
時間とともに変化する値(一般化座標)については, theta = me.dynamicsymbols('theta') のように定義する.また,これの微分(一般化速度)については, dtheta = me.dynamicsymbols('theta', 1) のようにして定義をする.
Step4: 物理モデルに必要な定数・変数を全て定義してから,力学的エネルギーをそれぞれ記述し,ラグランジアンについても計算する.
Step5: LM = me.LagrangesMethod(ラグランジアン, [一般化座標の配列]) という関数で,ラグランジュの運動方程式を定義する.
LM.form_lagranges_equations() でラグランジュの運動方程式が出力される. | Python Code:
import numpy as np
from scipy.integrate import odeint
from math import sin
''' constants '''
m = 1 # mass of the pendulum [kg]
l = 1 # length of the pendulum [m]
g = 10 # Gravitational acceleration [m/s^2]
c = 0.3 # Damping constant [kg.m/(rad.s)]
''' time setting '''
t_end = 10 # simulation time [s]
t_fps = 50 # frames per second; controls the smoothness of the produced graph and animation
t_step = 1/t_fps
t = np.arange(0, t_end, t_step)
''' initial value '''
theta_init = 0 # initial value of theta [rad]
dtheta_init = 1 # initial value of dot theta [rad/s]
s_init = np.array([theta_init, dtheta_init])
def odefunc(s, t):
theta = s[0]
dtheta = s[1]
ddtheta = -g/l*sin(theta) - c*dtheta/(m*l)  # <- Equation of motion. *** THIS CODE CHANGED ***
return np.r_[dtheta, ddtheta]
s = odeint(odefunc, s_init, t)
print('ODE calculation finished.')
Explanation: モデル化
Next >> 0_quickstart
Prev >> editing
シミュレーションを行う際に一番最初に行うのは モデル化 である.シミュレーションの結果は,どのようにモデル化を行ったかによって大きく影響される.当然ではあるが.
例えば,単振り子のシミュレーションにおいて,0_quickstartでは 摩擦 による運動の減衰を考えなかったが,これを考えてモデル化を行ってみる.
振子と天井の結点の部分で粘性摩擦を仮定し,角速度に比例した力$-c\dot{\theta}$がはたらくものとする.すると,運動方程式は
\begin{align}
ml\ddot{\theta} = -mg\sin\theta - c\dot{\theta}
\end{align}
となる.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure()
plt.plot(t, s[:, 0])
plt.xlabel('t [s]')
plt.ylabel('theta [rad]')
plt.show()
Explanation: 振り子の角度の時間変化をグラフにすると,このようになっている.
End of explanation
import sympy as sym
import sympy.physics.mechanics as me
Explanation: 摩擦 という要素を考えることによって運動の様相が変化したことがわかる.
シミュレーションを行う上では,考えたい物理モデルをどのようにモデル化するかによって,得られる結果が大きく変わる.所望のシミュレーションを行うためには,十分な力学の知識が必要となる.
Lagrange の運動方程式
0_quickstartでは,単振り子の運動方程式をニュートンの運動方程式から求めたが,今度は ラグランジュの運動方程式 から求める.
おもりの運動エネルギーは
\begin{align}
T = \frac{1}{2}m(l\dot{\theta})^2
\end{align}
であり,ポテンシャルエネルギー(位置エネルギー)は
\begin{align}
U = - m(-g)(l-l\cos\theta) = mgl(1 - \cos\theta)
\end{align}
である.したがって,系のラグランジアンは
\begin{align}
L = T - U = \frac{1}{2}m(l\dot{\theta})^2 - mgl(1 - \cos\theta)
\end{align}
であり,ラグランジュの運動方程式は
\begin{align}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) - \frac{\partial L}{\partial \theta} = 0
\end{align}
である.項を一つ一つ丁寧に計算をすると,
\begin{align}
\frac{\partial L}{\partial \dot{\theta}} = \frac{\partial }{\partial \dot{\theta}} \left( \frac{1}{2}m(l\dot{\theta})^2 - mgl(1 - \cos\theta) \right) = ml^2\dot{\theta}
\end{align}
\begin{align}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) = \frac{d}{dt} (ml^2\dot{\theta}) = ml^2\ddot{\theta}
\end{align}
\begin{align}
\frac{\partial L}{\partial \theta} = \frac{\partial }{\partial \theta} \left( \frac{1}{2}m(l\dot{\theta})^2 - mgl(1 - \cos\theta) \right) = -mgl \sin\theta
\end{align}
より,
\begin{align}
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot{\theta}} \right) - \frac{\partial L}{\partial \theta} = ml^2\ddot{\theta} - (-mgl \sin\theta) = 0
\end{align}
よって,
\begin{align}
ml^2\ddot{\theta} + mgl \sin\theta = 0
\end{align}
である.式を整理すると,
\begin{align}
\ddot{\theta} = -\frac{g}{l} \sin\theta
\end{align}
となっており,ニュートンの運動方程式から導出したものと同じ結果が得られたことがわかる.
Lagrange の運動方程式を SymPy で計算する
Lagrange の運動方程式は運動の自由度についてのミニマムな運動方程式を記述することができる.しかし,ラグランジアンとその偏微分の計算は複雑になりがちである.単振り子の例については,運動の自由度は1であり,かつ非常にシンプルな状況であるため手で計算してもよいのだが,これが他リンク系となったり,運動を2次元から3次元に拡張したりしたときには,もはや手計算で求める気力が起こらない.
そこで, Python を使って Lagrange の運動方程式を導く. SymPy の LagrangesMethod クラス を用いる.
End of explanation
''' Define constants and generalized coordinates '''
t = sym.symbols('t')
l, m, g = sym.symbols('l m g')
theta = me.dynamicsymbols('theta')
dtheta = me.dynamicsymbols('theta', 1)
Explanation: Constants are defined with, e.g., m = sym.symbols('m').
Note that time $t$ must always be defined as well, with t = sym.symbols('t').
Quantities that change with time (the generalized coordinates) are defined with, e.g., theta = me.dynamicsymbols('theta'), and their time derivatives (the generalized velocities) with dtheta = me.dynamicsymbols('theta', 1).
End of explanation
''' Kinetic energy '''
T = m*(l*dtheta)**2/2
''' Potential energy '''
U = -m*(-g)*(l - l*sym.cos(theta))
''' Lagurangian '''
L = T - U
Explanation: After defining all the constants and variables needed by the physical model, write down each form of mechanical energy and then compute the Lagrangian.
End of explanation
''' Calculating the eom '''
LM = me.LagrangesMethod(L, [theta])
print(LM.form_lagranges_equations())
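# Optional check (added illustration, not part of the original notebook): solve the
# symbolic equation of motion for the angular acceleration and compare it with the
# hand-derived result ddtheta = -g/l*sin(theta).
ddtheta = me.dynamicsymbols('theta', 2)
print(sym.solve(LM.form_lagranges_equations()[0], ddtheta))  # expected: [-g*sin(theta)/l]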
Explanation: Lagrange's equations of motion are set up with LM = me.LagrangesMethod(Lagrangian, [list of generalized coordinates]).
LM.form_lagranges_equations() then returns Lagrange's equation of motion.
End of explanation |
9,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GA4GH IPython Example Notebook
This notebook provides an overview of how to call the GA4GH reference server from an IPython notebook. Before running this notebook
Step1: Great! Now we can run calls to query ReferenceSets, References, Datasets, VariantSets, CallSets, Variants, ReadGroups, ReadGroupSets, & Reads.
Search methods return generators. Here we will explicitly create lists out of these objects to allow us to directly index into the list of instances that the query returned.
Instances have a toJsonDict method that returns a dictionary representation of the object. We'll make use of this to examine the underlying data structures.
Search/Get ReferenceSets
ReferenceSets are lists of References. Think of them like the genome version (ie. grch37, hg19).
Step2: Search/Get References
The References endpoints let you access the DNA sequences that make up the reference genome. The example dataset is from subsets of the first 3 chromosomes
Step3: In addition to fetching metadata about the reference, you can access the base sequence
Step4: Search/Get Dataset
Datasets are a collection of related data. In this example, we'll examine a collection of reads and variants.
Step5: Search/Get VariantSets
VariantSets are a collection of variants and calls that are intended to be analyzed together.
Step6: Search/Get Callset
Callsets are a collection of genotype calls. Callsets apply to the samples within a dataset.
Step7: Search/Get Variants
A Variant represents a change in DNA sequence relative to some reference. For example, a variant could represent a SNP or an insertion. You can think of them as a row in a VCF. Variants can be supported by many calls.
Step8: Search/Get Readgroupsets
A ReadGroupSet is a logical collection of ReadGroups. Typically one ReadGroupSet represents all the reads from one experimental sample.
Step9: Get ReadGroups
A ReadGroup is a set of reads derived from one physical sequencing process.
Step10: Get Reads
Each read alignment describes an alignment with additional information about the fragment and the read. A read alignment object is equivalent to a line in a SAM file.
Since there are often many reads, we won't iterate over all of them. Instead we can summerize the query and then output the dictionary representing the last one. | Python Code:
import ga4gh.client
baseURL = "http://localhost:8000"
client = ga4gh.client.HttpClient(baseURL)
Explanation: GA4GH IPython Example Notebook
This notebook provides an overview of how to call the GA4GH reference server from an IPython notebook. Before running this notebook:
git clone https://github.com/ga4gh/server.git -b develop
Download the example data (if $scripts/download_data.py$ doesn't work, wget https://github.com/ga4gh/server/releases/download/data/ga4gh-example-data-v3.2.tar and tar -xvf in the server/ directory)
Launch an instance of the reference server on localhost ("python server_dev.py")
Note: If you have trouble importing the ga4gh module, either symlink this notebook into the /server directory or add the path to the GA4GH development repo to your PYTHONPATH.
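For example, a minimal sketch (not from the original notebook; the clone location below is hypothetical), run before importing the client:
import sys
sys.path.insert(0, "/path/to/ga4gh/server")  # hypothetical path to your clone of the repo
import ga4gh.client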
Connect to GA4GH Server
End of explanation
referenceSets = list(client.search_reference_sets())
print("ReferenceSets")
for referenceSet in referenceSets:
print("NCBI Taxon Id: {}".format(referenceSet.ncbiTaxonId))
referenceSet = client.get_reference_set(referenceSets[0].id)
referenceSet.toJsonDict()
Explanation: Great! Now we can run calls to query ReferenceSets, References, Datasets, VariantSets, CallSets, Variants, ReadGroups, ReadGroupSets, & Reads.
Search methods return generators. Here we will explicitly create lists out of these objects to allow us to directly index into the list of instances that the query returned.
Instances have a toJsonDict method that returns a dictionary representation of the object. We'll make use of this to examine the underlying data structures.
Search/Get ReferenceSets
ReferenceSets are lists of References. Think of them like the genome version (ie. grch37, hg19).
End of explanation
references = list(client.search_references(referenceSet.id))
print("References")
for reference in references:
print("Name: {}, Length: {}".format(reference.name, reference.length))
reference = client.get_reference(references[0].id)
reference.toJsonDict()
Explanation: Search/Get References
The References endpoints let you access the DNA sequences that make up the reference genome. The example dataset is from subsets of the first 3 chromosomes:
End of explanation
client.listReferenceBases(references[0].id, start=10000, end=10100)
Explanation: In addition to fetching metadata about the reference, you can access the base sequence:
End of explanation
datasets = list(client.search_datasets())
print("Datasets")
for dataset in datasets:
print("Name: {}".format(dataset.name))
dataset = client.get_dataset(datasets[0].id)
dataset.toJsonDict()
Explanation: Search/Get Dataset
Datasets are a collection of related data. In this example, we'll examine a collection of reads and variants.
End of explanation
variantSets = list(client.search_variant_sets(dataset.id))
print("VariantSets")
for variantSet in variantSets:
print("Name: {}".format(variantSet.name))
variantSetId = variantSets[0].id
variantSet = client.get_variant_set(variantSetId)
variantSet.toJsonDict()
Explanation: Search/Get VariantSets
VariantSets are a collection of variants and calls that are intended to be analyzed together.
End of explanation
callSets = list(client.search_call_sets(variantSetId))
print("CallSets")
for callSet in callSets:
print("Name: {}".format(callSet.name))
callSet = client.get_call_set(callSets[0].id)
callSet.toJsonDict()
Explanation: Search/Get Callset
Callsets are a collection of genotype calls. Callsets apply to the samples within a dataset.
End of explanation
variants = list(client.search_variants(variantSetId, start=100000, end=101000, referenceName = "1"))
print("Variants")
for variant in variants:
print("Reference Name: {}, Start: {}, Name: {}".format(variant.referenceName, variant.start, variant.names[0]))
variant = client.get_variant(variants[0].id)
variant.toJsonDict()
Explanation: Search/Get Variants
A Variant represents a change in DNA sequence relative to some reference. For example, a variant could represent a SNP or an insertion. You can think of them as a row in a VCF. Variants can be supported by many calls.
End of explanation
readGroupSets = list(client.search_read_group_sets(dataset.id))
print("ReadGroupSets")
for readGroup in readGroupSets:
print("Name: {}".format(readGroup.name))
readGroupSet = client.get_read_group_set(readGroupSets[0].id)
readGroupSet.toJsonDict()
Explanation: Search/Get Readgroupsets
A ReadGroupSet is a logical collection of ReadGroups. Typically one ReadGroupSet represents all the reads from one experimental sample.
End of explanation
readGroup = client.get_read_group(readGroupSet.readGroups[0].id)
readGroup.toJsonDict()
Explanation: Get ReadGroups
A ReadGroup is a set of reads derived from one physical sequencing process.
End of explanation
readGroupIds = [readGroup.id for readGroup in readGroupSet.readGroups]
readGroupDescriptions = [readGroup.description for readGroup in readGroupSet.readGroups]
reads = client.search_reads(readGroupIds, reference.id)
print("Read Alignments")
# The server paginates reads by default; here we iterate over the reads
# generator to make the appropriate requests to the server
count = 0
for read in reads:
count += 1
print("{} reads in Readgroups: {} on Reference: {}".format(
count, ', '.join(readGroupDescriptions), reference.name))
read.toJsonDict()
Explanation: Get Reads
Each read alignment describes an alignment with additional information about the fragment and the read. A read alignment object is equivalent to a line in a SAM file.
Since there are often many reads, we won't iterate over all of them. Instead we can summarize the query and then output the dictionary representing the last one.
End of explanation |
9,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a neural network on MNIST with Keras
This simple example demonstrates how to plug TensorFlow Datasets (TFDS) into a Keras model.
Copyright 2020 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step1: Step 1
Step3: Build a training pipeline
Apply the following transformations.
tf.data.Dataset.map
Step4: Build an evaluation pipeline
The test pipeline is similar to the training pipeline, with a few small differences.
There is no need to call tf.data.Dataset.shuffle.
Caching is done after batching because batches can be identical between epochs.
Step5: 手順 2 | Python Code:
import tensorflow as tf
import tensorflow_datasets as tfds
Explanation: Training a neural network on MNIST with Keras
This simple example demonstrates how to plug TensorFlow Datasets (TFDS) into a Keras model.
Copyright 2020 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/datasets/keras_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/datasets/keras_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/datasets/keras_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/datasets/keras_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
End of explanation
(ds_train, ds_test), ds_info = tfds.load(
'mnist',
split=['train', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True,
)
Explanation: Step 1: Create an input pipeline
Start by building a valid input pipeline, referring to the following guides.
TFDS performance guide
tf.data performance guide
Load the dataset
Load the MNIST dataset with the following arguments.
shuffle_files: The MNIST data is stored in a single file, but for larger datasets that span multiple files on disk, it is good practice to shuffle them during training.
as_supervised: Returns a tuple (img, label) instead of a dict {'image': img, 'label': label}.
End of explanation
def normalize_img(image, label):
    """Normalizes images: `uint8` -> `float32`."""
return tf.cast(image, tf.float32) / 255., label
ds_train = ds_train.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(128)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
Explanation: Build a training pipeline
Apply the following transformations.
tf.data.Dataset.map: TFDS provides images as tf.uint8 while the model expects tf.float32, so normalize the images.
tf.data.Dataset.cache: If the dataset fits in memory, caching it before shuffling improves performance.<br> Note: apply random transformations after caching.
tf.data.Dataset.shuffle: For true randomness, set the shuffle buffer to the full dataset size.<br> Note: for large datasets that do not fit in memory, use buffer_size=1000 if your system allows it.
tf.data.Dataset.batch: Batch after shuffling to get unique batches at each epoch.
tf.data.Dataset.prefetch: End the pipeline by prefetching for better performance.
End of explanation
ds_test = ds_test.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(128)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)
Explanation: Build an evaluation pipeline
The test pipeline is similar to the training pipeline, with the following small differences.
There is no need to call tf.data.Dataset.shuffle.
Caching is done after batching because batches can be identical between epochs.
End of explanation
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
optimizer=tf.keras.optimizers.Adam(0.001),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
model.fit(
ds_train,
epochs=6,
validation_data=ds_test,
)
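# Optional follow-up (added illustration, not part of the original notebook): the evaluation
# pipeline built above can also be passed to Model.evaluate to report the final test metrics.
model.evaluate(ds_test)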
Explanation: Step 2: Create and train the model
Plug the TFDS input pipeline into a simple Keras model, then compile and train the model.
End of explanation |
9,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Correlating microstripline model to measurement
Target
The aim of this example is to correlate the microstripline model to the measurement over 4 frequency decades from 1MHz to 5GHz.
Plan
Two different lengths of microstripline are measured;
Multiline method is used to compute the frequency dependent relative permittivity and loss angle of the dielectric;
Microstripline model is fitted to the computed parameters by optimization;
Checking the results by embedding the connectors and comparison against measurement;
Step1: Measurement of two microstripline with different lengths
The measurements were performed on the 21st of March 2017 on an Anritsu MS46524B 20GHz Vector Network Analyser. The setup is a linear frequency sweep from 1MHz to 10GHz with 10'000 points. Output power is 0dBm, IF bandwidth is 1kHz and neither averaging nor smoothing are used.
The frequency range of interest is limited from 1MHz to 5GHz, but the measurement are up to 10GHz.
MSLxxx is a L long, W wide, T thick copper microstripline on a H height substrate with bottom ground plane.
| Name | L (mm) | W (mm) | H (mm) | T (um) | Substrate |
|
Step2: The measured data show that the electrical length of MSL200 is approximately twice that of MSL100. The frequency spacing between Return Loss dips is approximately half as large for MSL200 compared to MSL100. This is consistent with the physical dimensions if the small connector length is neglected.
The MSL200 Insertion Loss is also about twice that of MSL100, which is consistent, as a longer path brings more attenuation.
Return Loss below -20dB is usually considered to be fair for a microstripline; it corresponds to 1% of the power being reflected.
Dielectric effective relative permittivity extraction by multiline method
The phase of the measurements transmission parameter are subtracted. Because connectors are present on both DUTs, their length effect is canceled and the remaining phase difference is related to the difference of the DUTs length.
Knowing the physical length $\Delta L$ and the phase $\Delta \phi$, the effective relative permittivity constant $\epsilon_{r,eff}$ can be computed from the relation
$$\left\{ \begin{array}{ll}
\lambda = \frac{c_0}{f \cdot \sqrt{\epsilon_{r,eff}}} \\
\phi = \frac{2\pi L}{\lambda}
\end{array} \right. \implies
\epsilon_{r,eff} = \left( \frac{\Delta \phi \cdot c_0}{2 \pi f \cdot \Delta L} \right)^2 $$
In the same idea, the difference of Insertion Loss of the two DUT gives the Insertion Loss of the difference of the length and cancel connectors effects.
Step3: The effective relative permittivity of the geometry shows a dispersion effect at low frequency which can be modelled by a wideband Debye model such as Djordjevic/Svensson implementation of skrf microstripline media. The value then increase slowly with frequency which correspond roughly to the Kirschning and Jansen dispersion model.
The Insertion Loss seems proportional to frequency, which indicate a predominance of the dielectric losses. Conductor losses are related to the square-root of frequency. Radiation losses are neglected.
Fit microstripline model to the computed parameters by optimization
Effective relative permittivity
Microstrip media model with the physical dimensions of the measured microstriplines is fitted to the computed $\epsilon_{r,eff}$ by optimization of $\epsilon_r$ and tand of the substrate at 1GHz. The dispersion model used to account for frequency variation of the parameters are Djordjevic/Svensson and Kirschning and Jansen.
Step4: As a sanity check, the model data are compared with the computed parameters
Step5: The model results shows a reasonable agreement with the measured $\epsilon_{r,eff}$ and Insertion Loss values.
Checking the results
If the model is now plotted against the measurement of the same length, the plot shows no agreement. This is because the connector effects are not captured by the model.
Step6: Connector delay and loss estimation
The delay of the connector is estimated by fitting a line to its phase contribution vs frequency.
The phase and loss of the two connectors are computed by subtracting the phase and loss computed without the connectors from the measurement of the same length.
Step7: The phase of the model shows a good agreement, while the Insertion Loss seems to have a reasonable agreement and is small whatsoever.
Connector impedance adjustment by time-domain reflectometry
Time-domain step responses of measurement and model are used to adjust the connector model characteristic impedance.
The plots show the connector having an inductive behaviour (positive peak) and the microstripline being a bit too capacitive (negative plateau).
Characteristic impedance of the connector is tuned by trial-and-error until a reasonable agreement is achieved. Optimization could have been used instead.
Step8: Final comparison | Python Code:
%load_ext autoreload
%autoreload 2
import skrf as rf
import numpy as np
from numpy import real, log10, sum, absolute, pi, sqrt
import matplotlib.pyplot as plt
from scipy.optimize import minimize, differential_evolution
rf.stylely()
Explanation: Correlating microstripline model to measurement
Target
The aim of this example is to correlate the microstripline model to the measurement over 4 frequency decades from 1MHz to 5GHz.
Plan
Two different lengths of microstripline are measured;
Multiline method is used to compute the frequency dependent relative permittivity and loss angle of the dielectric;
Microstripline model is fitted to the computed parameters by optimization;
Checking the results by embedding the connectors and comparison against measurement;
End of explanation
# Load raw measurements
MSL100_raw = rf.Network('MSL100.s2p')
MSL200_raw = rf.Network('MSL200.s2p')
# Keep only the data from 1MHz to 5GHz
MSL100 = MSL100_raw['1-5000mhz']
MSL200 = MSL200_raw['1-5000mhz']
plt.figure()
plt.title('Measured data')
MSL100.plot_s_db()
MSL200.plot_s_db()
plt.show()
Explanation: Measurement of two microstriplines with different lengths
The measurements were performed on the 21st of March 2017 on an Anritsu MS46524B 20GHz Vector Network Analyser. The setup is a linear frequency sweep from 1MHz to 10GHz with 10'000 points. Output power is 0dBm, IF bandwidth is 1kHz and neither averaging nor smoothing are used.
The frequency range of interest is limited from 1MHz to 5GHz, but the measurements extend up to 10GHz.
MSLxxx is a L long, W wide, T thick copper microstripline on a H height substrate with bottom ground plane.
| Name | L (mm) | W (mm) | H (mm) | T (um) | Substrate |
| :--- | ---: | ---: | ---: | ---: | :--- |
| MSL100 | 100 | 3.00 | 1.55 | 50 | FR-4 |
| MSL200 | 200 | 3.00 | 1.55 | 50 | FR-4 |
The milling of the artwork is performed mechanically with a lateral wall of 45°. A small top ground plane chunk connected by a vias array to bottom ground is provided to solder the connector top ground legs and provide some coplanar-like transition from coax to microstrip.
The relative permittivity of the dielectric was assumed to be approximately 4.5 for design purposes.
End of explanation
c0 = 3e8
f = MSL100.f
deltaL = 0.1
deltaPhi = np.unwrap(np.angle(MSL100.s[:,1,0])) - np.unwrap(np.angle(MSL200.s[:,1,0]))
Er_eff = np.power(deltaPhi * c0 / (2 * np.pi * f * deltaL), 2)
Loss_mea = 20 * log10(absolute(MSL200.s[:,1,0] / MSL100.s[:,1,0]))
plt.figure()
plt.suptitle('Effective relative permittivity and loss')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, Er_eff)
plt.ylabel('$\epsilon_{r,eff}$')
plt.subplot(2,1,2)
plt.plot(f * 1e-9, Loss_mea)
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.show()
Explanation: The measured data show that the electrical length of MSL200 is approximately twice that of MSL100. The frequency spacing between Return Loss dips is approximately half as large for MSL200 compared to MSL100. This is consistent with the physical dimensions if the small connector length is neglected.
The MSL200 Insertion Loss is also about twice that of MSL100, which is consistent, as a longer path brings more attenuation.
Return Loss below -20dB is usually considered to be fair for a microstripline; it corresponds to 1% of the power being reflected.
Dielectric effective relative permittivity extraction by multiline method
The phases of the measured transmission parameters are subtracted. Because connectors are present on both DUTs, their length effect is canceled and the remaining phase difference is related to the difference of the DUT lengths.
Knowing the physical length $\Delta L$ and the phase $\Delta \phi$, the effective relative permittivity $\epsilon_{r,eff}$ can be computed from the relation
$$\left\{ \begin{array}{ll}
\lambda = \frac{c_0}{f \cdot \sqrt{\epsilon_{r,eff}}} \\
\phi = \frac{2\pi L}{\lambda}
\end{array} \right. \implies
\epsilon_{r,eff} = \left( \frac{\Delta \phi \cdot c_0}{2 \pi f \cdot \Delta L} \right)^2 $$
In the same way, the difference of the Insertion Losses of the two DUTs gives the Insertion Loss of the length difference and cancels the connector effects.
End of explanation
from skrf.media import MLine
W = 3.00e-3
H = 1.51e-3
T = 50e-6
L = 0.1
Er0 = 4.5
tand0 = 0.02
f_epr_tand = 1e9
x0 = [Er0, tand0]
def model(x, freq, Er_eff, L, W, H, T, f_epr_tand, Loss_mea):
ep_r = x[0]
tand = x[1]
m = MLine(frequency=freq, z0=50, w=W, h=H, t=T,
ep_r=ep_r, mu_r=1, rho=1.712e-8, tand=tand, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
DUT = m.line(L, 'm', embed=True, z0=m.Z0_f)
Loss_mod = 20 * log10(absolute(DUT.s[:,1,0]))
return sum((real(m.ep_reff_f) - Er_eff)**2) + 0.01*sum((Loss_mod - Loss_mea)**2)
res = minimize(model, x0, args=(MSL100.frequency, Er_eff, L, W, H, T, f_epr_tand, Loss_mea),
bounds=[(4.2, 4.7), (0.001, 0.1)])
Er = res.x[0]
tand = res.x[1]
print('Er={:.3f}, tand={:.4f} at {:.1f} GHz.'.format(Er, tand, f_epr_tand * 1e-9))
Explanation: The effective relative permittivity of the geometry shows a dispersion effect at low frequency, which can be modelled by a wideband Debye model such as the Djordjevic/Svensson implementation of the skrf microstripline media. The value then increases slowly with frequency, which corresponds roughly to the Kirschning and Jansen dispersion model.
The Insertion Loss seems proportional to frequency, which indicates a predominance of the dielectric losses. Conductor losses are related to the square root of frequency. Radiation losses are neglected.
Fit microstripline model to the computed parameters by optimization
Effective relative permittivity
A microstrip media model with the physical dimensions of the measured microstriplines is fitted to the computed $\epsilon_{r,eff}$ by optimization of $\epsilon_r$ and tand of the substrate at 1GHz. The dispersion models used to account for the frequency variation of the parameters are Djordjevic/Svensson and Kirschning and Jansen.
End of explanation
m = MLine(frequency=MSL100.frequency, z0=50, w=W, h=H, t=T,
ep_r=Er, mu_r=1, rho=1.712e-8, tand=tand, rough=0.15e-6,
f_low=1e3, f_high=1e12, f_epr_tand=f_epr_tand,
diel='djordjevicsvensson', disp='kirschningjansen')
DUT = m.line(L, 'm', embed=True, z0=m.Z0_f)
DUT.name = 'DUT'
Loss_mod = 20 * log10(absolute(DUT.s[:,1,0]))
plt.figure()
plt.suptitle('Measurement vs Model')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, Er_eff, label='Measured')
plt.plot(f * 1e-9, real(m.ep_reff_f), label='Model')
plt.ylabel('$\epsilon_{r,eff}$')
plt.legend()
plt.subplot(2,1,2)
plt.plot(f * 1e-9, Loss_mea, label='Measured')
plt.plot(f * 1e-9, Loss_mod, label='Model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
Explanation: As a sanity check, the model data are compared with the computed parameters
End of explanation
plt.figure()
plt.title('Measured vs modelled data')
MSL100.plot_s_db()
DUT.plot_s_db(0, 0, color='k')
DUT.plot_s_db(1, 0, color='k')
plt.show()
Explanation: The model results show a reasonable agreement with the measured $\epsilon_{r,eff}$ and Insertion Loss values.
Checking the results
If the model is now plotted against the measurement of the same length, the plot shows no agreement. This is because the connector effects are not captured by the model.
End of explanation
phi_conn = np.unwrap(np.angle(MSL100.s[:,1,0])) + deltaPhi
z = np.polyfit(f, phi_conn, 1)
p = np.poly1d(z)
delay = -z[0]/(2*np.pi)/2
print('Connector delay: {:.0f} ps'.format(delay * 1e12))
loss_conn_db = 20 * log10(absolute(MSL100.s[:,1,0])) - Loss_mea
alpha = 1.6*np.log(10)/20 * np.sqrt(f/1e9)
beta = 2*np.pi*f/c0
gamma = alpha + 1j*beta
mf = rf.media.DefinedGammaZ0(m.frequency, z0=50, gamma=gamma)
left = mf.line(delay*1e9, 'ns', embed=True, z0=53.2)
right = left.flipped()
check = left ** right
plt.figure()
plt.suptitle('Connector effects')
plt.subplot(2,1,1)
plt.plot(f * 1e-9, phi_conn, label='measured')
plt.plot(f * 1e-9, np.unwrap(np.angle(check.s[:,1,0])), label='model')
plt.ylabel('phase (rad)')
plt.legend()
plt.subplot(2,1,2)
plt.plot(f * 1e-9, loss_conn_db, label='Measured')
plt.plot(f * 1e-9, 20*np.log10(np.absolute(check.s[:,1,0])), label='model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
Explanation: Connector delay and loss estimation
The delay of the connector is estimated by fitting a line to its phase contribution vs frequency.
The phase and loss of the two connectors are computed by subtracting the phase and loss computed without the connectors from the measurement of the same length.
End of explanation
mod = left ** DUT ** right
MSL100_dc = MSL100.extrapolate_to_dc(kind='linear')
DUT_dc = mod.extrapolate_to_dc(kind='linear')
plt.figure()
plt.suptitle('Left-right and right-left TDR')
plt.subplot(2,1,1)
MSL100_dc.s11.plot_s_time_step(pad=2000, window='hamming', label='Measured L-R')
DUT_dc.s11.plot_s_time_step(pad=2000, window='hamming', label='Model L-R')
plt.xlim(-2, 4)
plt.subplot(2,1,2)
MSL100_dc.s22.plot_s_time_step(pad=2000, window='hamming', label='Measured R-L')
DUT_dc.s22.plot_s_time_step(pad=2000, window='hamming', label='Model R-L')
plt.xlim(-2, 4)
plt.tight_layout()
plt.show()
Explanation: The phase of the model shows good agreement, while the Insertion Loss is in reasonable agreement and is small in any case.
Connector impedance adjustment by time-domain reflectometry
Time-domain step responses of measurement and model are used to adjust the connector model characteristic impedance.
The plots show the connector having an inductive behaviour (positive peak) and the microstripline being a bit too capacitive (negative plateau).
Characteristic impedance of the connector is tuned by trial-and-error until a reasonable agreement is achieved. Optimization could have been used instead.
End of explanation
plt.figure()
plt.title('Measured vs modelled data')
MSL100.plot_s_db()
mod.name = 'Model'
mod.plot_s_db(0, 0, color='k')
mod.plot_s_db(1, 0, color='k')
plt.show()
Explanation: Final comparison
End of explanation |
9,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises 2
Exercise 1
Write a function that takes a list of elements as a parameter and returns True if the list has elements and False otherwise.
Step1: Exercise 2
Given the list $L = [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]$, write Python expressions that produce the following results
Step2: Exercise 3
Given a list of integers $a$ and an integer $n$, define
Step3: Exercise 4
Use the range function to generate lists of elements.
Automatically generate the following list
Step4: Exercise 5
The upper method of the string type generates a new string in which all characters are upper case.
Dada la siguiente cadena | Python Code:
def tieneElementos(milista):
return len(milista) > 0
print(tieneElementos([]))
print(tieneElementos([1, 3, 96]))
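# Added note (not part of the original solution): an empty list is falsy in Python, so an
# equivalent, more idiomatic check is simply bool(milista).
print(bool([]), bool([1, 3, 96]))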
Explanation: Exercises 2
Exercise 1
Write a function that takes a list of elements as a parameter and returns True if the list has elements and False otherwise.
End of explanation
# Sol:
l = [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
print(l[0])
print(l[-1])
print(l[-2], l[-1])
print(l[-2:])
print()
print(l[::2])
print(l[1::2])
print()
print(len(l)-1)
Explanation: Exercise 2
Given the list $L = [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]$, write Python expressions that produce the following results:
The first element of the list.
The last element of the list (without knowing the length of the list).
The sublist with the last two elements. (Sol: [8, 9])
The sublist of the elements in even positions. (Sol: [1, 3, 5, 7, 9])
The sublist of the elements in odd positions. (Sol: [2, 4, 6, 8])
End of explanation
# Sol:
# Define the list 'a'
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 2, 4, 6, 8, 1, 4]
# Define the function 'repeticiones()' (repetitions), which counts how many times n appears in a
def repeticiones(a, n):
    # a.count(n) walks the list 'a' and counts how many times the value n occurs
    return a.count(n)
# Show the result
print('The number 4 appears', repeticiones(a, 4), 'times in the list', a)
# Using the list 'a' defined above, find the number of elements of the list to compute
# the percentage of occurrences of the number 'n'
def porcentaje(a, n):
    nelementos = len(a)
    nrepeticiones = a.count(n)
    return nrepeticiones / nelementos * 100
print("Which amounts to", porcentaje(a, 4), '%')
# 4 appears 3 times in the list [1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 2, 4, 6, 8, 1, 4]
# which amounts to 18.75 %
Explanation: Exercise 3
Given a list of integers $a$ and an integer $n$, define:
1. a function that computes the number of times $n$ appears in $a$.
2. a function that computes the percentage of times $n$ appears in $a$.
Note: use the len and count methods.
* You can run list.count? to get help on the count method.
* You can run help(list.count) to get help on the count method.
End of explanation
# Sol:
# Create a variable 'lista' whose range starts at 10 and ends at 100, with a step of 10.
# Its mathematical expression would be [10,100), which includes 10 (start) but not 100 (end),
# hence the output [10, 20, 30, 40, 50, 60, 70, 80, 90]
lista = range(10,100,10)
list(lista)
Explanation: Exercise 4
Use the range function to generate lists of elements.
Automatically generate the following list: [10, 20, 30, 40, 50, 60, 70, 80, 90]
End of explanation
# Sol:
# Define the string as 's'
s = 'Hola, me llamo Iñigo Montoya, tú mataste a mi padre ...Prepárate a morir!'
# Define a new string 'sMayus' with every letter capitalized; since we cannot modify a string
# we must create a new one derived from the original string
sMayus = s.upper()
# Show the result
print(sMayus)
# Take the value of 's' and put it all in lower case
print(s.lower())
s, sMayus
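# Added sketch (not part of the original solution): the remaining tasks of this exercise -
# split, word count, title and dir(str) - can be handled along the same lines.
words = s.split()            # split on whitespace -> list of words
print(words)
print(len(words))            # number of words in the string
print(s.title())             # capitalise the first letter of every word
print(dir(str)[:10])         # a few of the available str methods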
Explanation: Exercise 5
The upper method of the string type generates a new string in which all characters are upper case.
Given the following string:
'Hola, me llamo Iñigo Montoya, tú mataste a mi padre. ... Prepárate a morir'
Generate the string in upper case
Run the split method. What does it do?
Write the Python expression that computes the number of words in the string.
Run the lower method.
Run the title method.
Run dir(str) to list the available methods.
End of explanation |
9,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>WorkCamp # Machine Learning - ## Fundamentals - ###2018</h1>
<h2>Practical exercise</h2>
<h3>Example xx # Working with sensor data ## Feature selection</h3>
Problem statement
Step1: Problem description
Step2: <h3>Descriptive statistics</h3>
Step3: <h3>Visualising the data</h3>
Step4: <h3>Univariate feature selection</h3>
Statistical tests can be used to select the features that are most strongly related to the output variable. The scikit-learn library provides the SelectKBest class, which can be used with a number of different statistical tests to select the most relevant features. The following example uses the chi-squared (chi2) statistical test for non-negative features to select 5 of the best features from the sensor data.
Step5: In the sensor dataset the sensors Sens-5, Sens-7, Sens-8, Sens-9 and Sens-10 are therefore particularly relevant.
<h3>Recursive feature elimination</h3>
Recursive feature elimination (RFE) works by recursively removing attributes and building a model on the attributes that remain. The model accuracy is used to determine which attributes (and combinations of attributes) contribute most to predicting the target attribute. The following example uses RFE with the logistic regression algorithm to select the 3 most important features. The choice of algorithm does not matter much, as long as it is skilful and consistent.
Step6: Recursive feature elimination selects the same sensors as the univariate selection.
<h3>Principal Component Analysis</h3>
Principal Component Analysis (PCA) uses linear algebra to transform the dataset into a compressed form. This is generally referred to as a data-reduction technique. One property of PCA is that we can choose the number of dimensions, or principal components, of the transformed result. In the following example we use PCA and select 3 principal components.
Step7: <h3>Estimating feature importance</h3>
Random forests and extra trees can be used to estimate the importance of features. In the following example we construct an ExtraTreesClassifier for the sensor dataset.
Step8: The feature-importance estimate selects the same sensors as the univariate selection.
<h3>Weiteres Beispiel</h3> | Python Code:
# Load the required modules (this can take a moment!)
# We import the modules explicitly so that you can see everything that is needed
# Be careful, though: the modules are then referred to differently than in the usual convention,
# for example pyplot instead of plt
from matplotlib import pyplot
pyplot.rcParams["figure.figsize"] = (15,12)
%matplotlib inline
import numpy as np  # not actually needed here
from pandas import read_csv
from pandas import set_option
from pandas.plotting import scatter_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
Explanation: <h1>WorkCamp # Machine Learning - ## Fundamentals - ###2018</h1>
<h2>Practical exercise</h2>
<h3>Example xx # Working with sensor data ## Feature selection</h3>
Problem statement:<br>
In this Jupyter notebook you will work through a case study on preparing sensor data and selecting the most relevant features, which is necessary for some machine learning algorithms.
After completing this notebook you should know:
<ul>
<li>How to work through a predictive modelling problem for a classification question from end to end.
<li>How to load previously unseen data into pandas DataFrames (csv, xlsx, xls, xml, json, hdf5 etc.).
<li>How to analyse unknown data with descriptive statistics in Python.
<li>How to visualise unknown data with Python libraries.
<li>How to save and document the generated plots.
<li>How to use data transformations, for example normalisation or standardisation, to improve model performance.
<li>How to use algorithm or hyperparameter tuning to improve model performance.
<li>How to use ensemble methods and tune their parameters to improve model performance.
<li>How to use cross-validation to assess the performance of ML algorithms.
<li>On what basis the classification algorithms used are evaluated (classification report, confusion matrix).
</ul>
The modules and libraries are all directly available in the <b>Anaconda scikit-learn</b> environment for machine learning.<br>
<b>Working with time series:</b><br>
In particular, when working with time series, statsmodels and its classes, libraries and modules are loaded in addition if necessary.<br>
<b>Tip:</b><br>
<b>If statsmodels is not available in your installation, install it with !pip install statsmodels in a Jupyter cell.</b><br>
Information on statsmodels can be found here: http://www.statsmodels.org/<br>
##Possibly add a structure diagram here
##Possibly show the procedure again as a flow model
End of explanation
# Load the data [12100 records with 10 sensors and one Label column, a (12100x11) matrix]
url = 'sensordaten-10.csv'
datensatz = read_csv(url, sep=';', header=0)
Explanation: Problem description:<br>
The focus of this project is the dataset "sensordaten-10.csv". The problem is to predict good and bad workpieces from the 10 sensor readings. Each pattern is a set of 10 numbers, and the sensors cover different value ranges. The label assigned to each data row contains 0 or 1: if the workpiece is judged to be good there is a 1 in the Label column, otherwise a 0.<br>
<b>Task:</b><br>
Load the data and get a first overview.<br>
End of explanation
# Output df.shape
print(datensatz.shape)
# Output df.dtypes
# The Label column contains the classification 0 or 1
set_option('display.max_rows', 50)
print(datensatz.dtypes)
# Output df.head with an enlarged display width
set_option('display.width', 100)
print(datensatz.head(20))
# Output df.describe() with 4 decimal places
set_option('precision', 4)
print(datensatz.describe())
# Output the class distribution in the Label column
print(datensatz.groupby('Label').size())
Explanation: <h3>Descriptive statistics</h3>
End of explanation
# Output a histogram
pyplot.rcParams["figure.figsize"] = (15,12)
datensatz.hist()
pyplot.show()
Explanation: <h3>Visualising the data</h3>
End of explanation
from numpy import set_printoptions
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
# Pass the data values to an array
array = datensatz.values
X = array[:,0:10]
Y = array[:,10]
# Feature extraction
test = SelectKBest(score_func=chi2, k=5)
fit = test.fit(X, Y)
# Summary of the results
set_printoptions(precision=3)
print(fit.scores_)
features = fit.transform(X)
# Selected features
print(features[0:9,:])
Explanation: <h3>Univariate feature selection</h3>
Statistical tests can be used to select the features that are most strongly related to the output variable. The scikit-learn library provides the SelectKBest class, which can be used with a number of different statistical tests to select the most relevant features. The following example uses the chi-squared (chi2) statistical test for non-negative features to select 5 of the best features from the sensor data.
End of explanation
# Recursive feature elimination
# Load the RFE module
from sklearn.feature_selection import RFE
# Use logistic regression as the algorithm
from sklearn.linear_model import LogisticRegression
# Pass the values in datensatz to an array2
array2 = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array2[:,0:10]
Y = array2[:,10]
# feature extraction
model = LogisticRegression()
rfe = RFE(model, 3)
fit = rfe.fit(X, Y)
print("Num Features: %d" % fit.n_features_)
print("Selected Features: %s" % fit.support_)
print("Feature Ranking: %s" % fit.ranking_)
Explanation: In the sensor dataset the sensors Sens-5, Sens-7, Sens-8, Sens-9 and Sens-10 are therefore particularly relevant.
<h3>Recursive feature elimination</h3>
Recursive feature elimination (RFE) works by recursively removing attributes and building a model on the attributes that remain. The model accuracy is used to determine which attributes (and combinations of attributes) contribute most to predicting the target attribute. The following example uses RFE with the logistic regression algorithm to select the 3 most important features. The choice of algorithm does not matter much, as long as it is skilful and consistent.
End of explanation
# Principal component analysis of the data
# Load the PCA module from sklearn.decomposition
from sklearn.decomposition import PCA
# Pass the values in datensatz to an array3
array3 = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array3[:,0:10]
Y = array3[:,10]
# feature extraction
pca = PCA(n_components=3)
fit = pca.fit(X)
# summarize components
print("Explained Variance: %s" % fit.explained_variance_ratio_)
print(fit.components_)
Explanation: Recursive feature elimination selects the same sensors as the univariate selection.
<h3>Principal Component Analysis</h3>
Principal Component Analysis (PCA) uses linear algebra to transform the dataset into a compressed form. This is generally referred to as a data-reduction technique. One property of PCA is that we can choose the number of dimensions, or principal components, of the transformed result. In the following example we use PCA and select 3 principal components.
End of explanation
# Estimate feature importance
# Load the ExtraTreesClassifier module from sklearn.ensemble
from sklearn.ensemble import ExtraTreesClassifier
# Pass the values in datensatz to an array4
array4 = datensatz.values
# Split the array into the dependent variable Y and the independent variables X
X = array4[:,0:10]
Y = array4[:,10]
# feature extraction
model = ExtraTreesClassifier()
model.fit(X, Y)
print(model.feature_importances_)
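# Added sketch (not in the original): the explanation below also mentions Random Forests as an
# alternative estimator of feature importance; RandomForestClassifier is already imported above.
model_rf = RandomForestClassifier()
model_rf.fit(X, Y)
print(model_rf.feature_importances_)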
Explanation: <h3>Estimating feature importance</h3>
Random forests and extra trees can be used to estimate the importance of features. In the following example we construct an ExtraTreesClassifier for the sensor dataset.
End of explanation
# Feature Importance with Extra Trees Classifier
from sklearn.ensemble import ExtraTreesClassifier
# Assign the file name to the variable dateiname
dateiname = 'pima-indians-diabetes.data.csv'
# Define the column names for the DataFrame
namen = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
# Read the data into a pandas DataFrame with read_csv()
df = read_csv(dateiname, names=namen)
# Pass the values in df to an array5
array5 = df.values
# Split the array into the dependent variable Y and the independent variables X - here the class is in column 9
X = array5[:,0:8]
Y = array5[:,8]
# feature extraction
model = ExtraTreesClassifier()
model.fit(X, Y)
print(model.feature_importances_)
Explanation: The feature-importance estimate selects the same sensors as the univariate selection.
<h3>A further example</h3>
End of explanation |
9,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RateChar
RateChar is a tool for performing generalised supply-demand analysis (GSDA) [2,3]. This entails the generation of the data needed to draw rate characteristic plots for all the variable species of a metabolic model through parameter scans, and the subsequent visualisation of these data in the form of ScanFig objects.
Features
Performs parameter scans for any variable species of a metabolic model
Stores results in a structure similar to Data2D.
Saving of raw parameter scan data, together with metabolic control analysis results to disk.
Saving of RateChar sessions to disk for later use.
Generates rate characteristic plots from parameter scans (using ScanFig).
Can perform parameter scans of any variable species with outputs for relevant response, partial response, elasticity and control coefficients (with data stored as Data2D objects).
Usage and Feature Walkthrough
Workflow
Performing GSDA with RateChar usually requires taking the following steps
Step1: Default parameter scan settings relating to a specific RateChar session can also be specified during instantiation
Step2: min_concrange_factor
Step3: Various optional arguments, similar to those used during object instantiation, can be used to override the default settings and customise any parameter scan
Step4: Accessing Results
Parameter Scan Results
Parameter scan results for any particular species are saved as an attribute of the RateChar object under the name of that species. These RateCharData objects are similar to Data2D objects with parameter scan results being accessible through a scan_results DotDict
Step5: .. note
Step6: Finally data needed to draw lines relating to metabolic control analysis coefficients are also included in scan_results. Data is supplied in 3 different forms
Step7: Metabolic Control Analysis Results
In addition to being able to access the data that will be used to draw rate characteristic plots, the user also has access to the values of the metabolic control analysis coefficients at the steady state of any particular species via the mca_results field. This field represents a DotDict dictionary-like object (like scan_results); however, as each key maps to exactly one result, the data can be displayed as a table (see Basic Usage)
Step8: Naturally, coefficients can also be accessed individually
Step9: Plotting Results
One of the strengths of generalised supply-demand analysis is that it provides an intuitive visual framework for inspecting results through the use of rate characteristic plots. Naturally this is therefore the main focus of RateChar. Parameter scan results for any particular species can be visualised as a ScanFig object through the plot method
Step10: Plots generated by RateChar do not have widgets for each individual line; lines are enabled or disabled in batches according to the category they belong to. By default the Fluxes, Demand and Supply categories are enabled when plotting. To display the partial response coefficient lines together with the flux lines for J_R3, for instance, we would click the J_R3 and the Partial Response Coefficients buttons (in addition to those that are enabled by default).
Step11: Modifying the status of individual lines is still supported, but has to take place via the toggle_line method. As an example, prcJR3_S3_R4 can be disabled as follows
Step12: .. note
Step13: When no path is supplied the dataset will be saved to the default directory (which should be "~/Pysces/lin4_fb/ratechar/save_data.npz" in this case).
Step14: Similarly results may be loaded using the load_session method, either with or without a specified path
Step15: Saving Results
Results may also be exported in csv format either to a specified location or to the default directory. Unlike saving of sessions results are spread over multiple files, so here an existing folder must be specified
Step16: A subdirectory will be created for each metabolite with the files ec_results_N, rc_results_N, prc_results_N, flux_results_N and mca_summary_N (where N is a number starting at "0" which increments after each save operation to prevent overwriting files). | Python Code:
import pysces
import psctb
from os import path
from sys import platform
mod = pysces.model('lin4_fb.psc')
rc = psctb.RateChar(mod)
Explanation: RateChar
RateChar is a tool for performing generalised supply-demand analysis (GSDA) [2,3]. This entails the generation of the data needed to draw rate characteristic plots for all the variable species of a metabolic model through parameter scans, and the subsequent visualisation of these data in the form of ScanFig objects.
Features
Performs parameter scans for any variable species of a metabolic model
Stores results in a structure similar to Data2D.
Saving of raw parameter scan data, together with metabolic control analysis results to disk.
Saving of RateChar sessions to disk for later use.
Generates rate characteristic plots from parameter scans (using ScanFig).
Can perform parameter scans of any variable species with outputs for relevant response, partial response, elasticity and control coefficients (with data stored as Data2D objects).
Usage and Feature Walkthrough
Workflow
Performing GSDA with RateChar usually requires taking the following steps:
Instantiation of RateChar object (optionally specifying default settings).
Performing a configurable parameter scan of any combination of variable species (or loading previously saved results).
Accessing scan results through RateCharData objects corresponding to the names of the scanned species that can be found as attributes of the instantiated RateChar object.
Plotting results of a particular species using the plot method of the RateCharData object corresponding to that species.
Further analysis using the do_mca_scan method.
Session/Result saving if required.
Further Analysis
.. note:: Parameter scans are performed for a range of concentration values between two set values. By default the minimum and maximum scan range values are calculated relative to the steady-state concentration of the species for which a scan is performed, using a division and a multiplication factor respectively. Minimum and maximum values may also be explicitly specified. Furthermore the number of points for which a scan is performed may also be specified. Details of how to access these options will be discussed below.
Object Instantiation
Like most tools provided in PySCeSToolbox, instantiation of a RateChar object requires a pysces model object (PysMod) as an argument. A RateChar session will typically be initiated as follows (here we will use the included lin4_fb.psc model):
End of explanation
rc = psctb.RateChar(mod,min_concrange_factor=100,
max_concrange_factor=100,
scan_points=255,
auto_load=False)
Explanation: Default parameter scan settings relating to a specific RateChar session can also be specified during instantiation:
End of explanation
mod.species
rc.do_ratechar()
Explanation: min_concrange_factor : The steady state division factor for calculating scan range minimums (default: 100).
max_concrange_factor : The steady state multiplication factor for calculating scan range maximums (default: 100).
scan_points : The number of concentration sample points that will be taken during parameter scans (default: 256).
auto_load : If True RateChar will try to load saved data from a previous session during instantiation. Saved data is unaffected by the above options and is only subject to the settings specified during the session where it was generated. (default: False).
The settings specified with these optional arguments take effect when the corresponding arguments are not specified during a parameter scan.
Parameter Scan
After object instantiation, parameter scans may be performed for any of the variable species using the do_ratechar method. By default do_ratechar will perform parameter scans for all variable metabolites using the settings specified during instantiation. For saving/loading see Saving/Loading Sessions below.
End of explanation
rc.do_ratechar(fixed=['S1','S3'], scan_min=0.02, max_concrange_factor=110, scan_points=200)
Explanation: Various optional arguments, similar to those used during object instantiation, can be used to override the default settings and customise any parameter scan:
fixed : A string or list of strings specifying the species for which to perform a parameter scan. The string 'all' specifies that all variable species should be scanned. (default: all)
scan_min : The minimum value of the scan range, overrides min_concrange_factor (default: None).
scan_max : The maximum value of the scan range, overrides max_concrange_factor (default: None).
min_concrange_factor : The steady state division factor for calculating scan range minimums (default: None)
max_concrange_factor : The steady state multiplication factor for calculating scan range maximums (default: None).
scan_points : The number of concentration sample points that will be taken during parameter scans (default: None).
solver : An integer value that specifies which solver to use (0:Hybrd,1:NLEQ,2:FINTSLV). (default: 0).
.. note:: For details on different solvers see the PySCeS documentation:
For example in a scenario where we only wanted to perform parameter scans of 200 points for the metabolites S1 and S3 starting at a value of 0.02 and ending at a value 110 times their respective steady-state values the method would be called as follows:
End of explanation
# Each key represents a field through which results can be accessed
sorted(rc.S3.scan_results.keys())
Explanation: Accessing Results
Parameter Scan Results
Parameter scan results for any particular species are saved as an attribute of the RateChar object under the name of that species. These RateCharData objects are similar to Data2D objects with parameter scan results being accessible through a scan_results DotDict:
End of explanation
# Single value results
# scan_min value
rc.S3.scan_results.scan_min
# fixed metabolite name
rc.S3.scan_results.fixed
# 1-dimensional ndarray results (only every 10th value of 200 value arrays)
# scan_range values
rc.S3.scan_results.scan_range[::10]
# J_R3 values for scan_range
rc.S3.scan_results.J_R3[::10]
# total_supply values for scan_range
rc.S3.scan_results.total_supply[::10]
# Note that J_R3 and total_supply are equal in this case, because S3
# only has a single supply reaction
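# Added illustration: as the note below points out, scan_results behaves like a normal
# dictionary, so the same value can also be reached with bracket notation.
rc.S3.scan_results['scan_min']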
Explanation: .. note:: The DotDict data structure is essentially a dictionary with additional functionality for displaying results in table form (when appropriate) and for accessing data using dot notation in addition to the normal dictionary bracket notation.
In the above dictionary-like structure each field can represent different types of data, the most simple of which is a single value, e.g., scan_min and fixed, or a 1-dimensional numpy ndarray which represent input (scan_range) or output (J_R3, J_R4, total_supply):
End of explanation
# Metabolic Control Analysis coefficient line data
# Names of elasticity coefficients related to the 'S3' parameter scan
rc.S3.scan_results.ec_names
# The x, y coordinates for two points that will be used to plot a
# visual representation of ecR3_S3
rc.S3.scan_results.ecR3_S3
# The x,y coordinates for two points that will be used to plot a
# visual representation of ecR4_S3
rc.S3.scan_results.ecR4_S3
# The ecR3_S3 and ecR4_S3 data collected into a single array
# (horizontally stacked).
rc.S3.scan_results.ec_data
Explanation: Finally, data needed to draw lines relating to metabolic control analysis coefficients are also included in scan_results. Data is supplied in 3 different forms: lists of names of the coefficients (under ec_names, prc_names, etc.), 2-dimensional arrays with exactly 4 values (representing 2 sets of x,y coordinates) that will be used to plot coefficient lines, and 2-dimensional arrays that collect the coefficient line data for each coefficient type into single arrays (under ec_data, prc_data, etc.).
End of explanation
# Metabolic control analysis coefficient results
rc.S3.mca_results
Explanation: Metabolic Control Analysis Results
In addition to being able to access the data that will be used to draw rate characteristic plots, the user also has access to the values of the metabolic control analysis coefficients at the steady state of any particular species via the mca_results field. This field represents a DotDict dictionary-like object (like scan_results); however, as each key maps to exactly one result, the data can be displayed as a table (see Basic Usage):
End of explanation
# Control coefficient ccJR3_R1 value
rc.S3.mca_results.ccJR3_R1
Explanation: Naturally, coefficients can also be accessed individually:
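Because mca_results is a DotDict like scan_results, the available coefficient names can be listed in the same way; an added illustration:
# List the names of all metabolic control analysis coefficients for S3
sorted(rc.S3.mca_results.keys())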
End of explanation
# Rate characteristic plot for 'S3'.
S3_rate_char_plot = rc.S3.plot()
Explanation: Plotting Results
One of the strengths of generalised supply-demand analysis is that it provides an intuitive visual framework for inspecting results through the use of rate characteristic plots. Naturally this is therefore the main focus of RateChar. Parameter scan results for any particular species can be visualised as a ScanFig object through the plot method:
End of explanation
# Display plot via `interact` and enable certain lines by clicking category buttons.
# The two method calls below are equivalent to clicking the 'J_R3'
# and 'Partial Response Coefficients' buttons:
# S3_rate_char_plot.toggle_category('J_R3',True)
# S3_rate_char_plot.toggle_category('Partial Response Coefficients',True)
S3_rate_char_plot.interact()
Explanation: Plots generated by RateChar do not have widgets for each individual line; lines are enabled or disabled in batches according to the category they belong to. By default the Fluxes, Demand and Supply categories are enabled when plotting. To display the partial response coefficient lines together with the flux lines for J_R3, for instance, we would click the J_R3 and the Partial Response Coefficients buttons (in addition to those that are enabled by default).
End of explanation
S3_rate_char_plot.toggle_line('prcJR3_S3_R4', False)
S3_rate_char_plot.show()
Explanation: Modifying the status of individual lines is still supported, but has to take place via the toggle_line method. As an example, prcJR3_S3_R4 can be disabled as follows:
End of explanation
# This points to a file under the Pysces directory
save_file = '~/Pysces/rc_doc_example.npz'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32':
save_file = psctb.utils.misc.unix_to_windows_path(save_file)
else:
save_file = path.expanduser(save_file)
rc.save_session(file_name = save_file)
Explanation: .. note:: For more details on saving see the sections Saving and Default Directories and ScanFig under Basic Usage.
Saving
Saving/Loading Sessions
RateChar sessions can be saved for later use. This is especially useful when working with large data sets that take some time to generate. Data sets can be saved to any arbitrary location by supplying a path:
End of explanation
rc.save_session() # to "~/Pysces/lin4_fb/ratechar/save_data.npz"
Explanation: When no path is supplied the dataset will be saved to the default directory (which should be "~/Pysces/lin4_fb/ratechar/save_data.npz" in this case).
End of explanation
rc.load_session(save_file)
# OR
rc.load_session() # from "~/Pysces/lin4_fb/ratechar/save_data.npz"
Explanation: Similarly results may be loaded using the load_session method, either with or without a specified path:
End of explanation
# This points to a subdirectory under the Pysces directory
save_folder = '~/Pysces/lin4_fb/'
# Correct path depending on platform - necessary for platform independent scripts
if platform == 'win32':
save_folder = psctb.utils.misc.unix_to_windows_path(save_folder)
else:
save_folder = path.expanduser(save_folder)
rc.save_results(save_folder)
Explanation: Saving Results
Results may also be exported in csv format either to a specified location or to the default directory. Unlike saving of sessions results are spread over multiple files, so here an existing folder must be specified:
End of explanation
# Otherwise results will be saved to the default directory
rc.save_results(save_folder) # to sub folders in "~/Pysces/lin4_fb/ratechar/
Explanation: A subdirectory will be created for each metabolite with the files ec_results_N, rc_results_N, prc_results_N, flux_results_N and mca_summary_N (where N is a number starting at "0" which increments after each save operation to prevent overwriting files).
End of explanation |
9,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustering
Clustering techniques are unsupervised learning algorithms that try to group unlabelled data into "clusters", using the (typically spatial) structure of the data itself.
The easiest way to demonstrate how clustering works is to simply generate some data and show them in action. We'll start off by importing the libraries we'll be using today.
Step1: Create data
Step2: To generate our data, we're going to pick 6 random points, which we'll call centroids, and for each point we're going to generate 250 random points about it.
Step3: Below we can see each centroid marked w/ X, and the coloring associated to each respective cluster.
Step4: Mean shift
Most people that have come across clustering algorithms have learnt about k-means. Mean shift clustering is a newer and less well-known approach, but it has some important advantages
Step5: This person at the science march certainly remembered!
<img src="http
Step6: We can see that mean shift clustering has almost reproduced our original clustering. The one exception are the very close clusters, but if we really wanted to differentiate them we could lower the bandwidth.
What is impressive is that this algorithm nearly reproduced the original clusters without telling it how many clusters there should be.
Step7: We should be able to accelerate this algorithm with a GPU.
Broadcasting
Step8: Pytorch does not support broadcasting, therefore I have replaced the operators with broadcasting versions.
Step9: GPU-accelerated mean shift in pytorch
One advantage of pytorch is that it's very similar to numpy. For instance, the definition of gaussian is identical, except for the namespace.
Step10: And the implementation of meanshift is nearly identical too!
Step11: This implementation actually takes longer. Oh dear! What do you think is causing this?
Step12: All the computation is happening in the <tt>for</tt> loop, which isn't accelerated by pytorch. Each iteration launches a new cuda kernel, which takes time and slows the algorithm down as a whole. Furthermore, each iteration doesn't have enough processing to do to fill up all of the threads of the GPU. But at least the results are correct...
Step13: GPU batched algorithm
To truly accelerate the algorithm, we need to be performing updates on a batch of points per iteration, instead of just one as we were doing.
Step14: Although each iteration still has to launch a new cuda kernel, there are now fewer iterations, and the acceleration from updating a batch of points more than makes up for it.
Step15: That's more like it! We've gone from 914ms to 44ms, which is a speedup of over 2000%. Oh, and it even gives the right answer | Python Code:
%matplotlib inline
import math, numpy as np, matplotlib.pyplot as plt, operator, torch
Explanation: Clustering
Clustering techniques are unsupervised learning algorithms that try to group unlabelled data into "clusters", using the (typically spatial) structure of the data itself.
The easiest way to demonstrate how clustering works is to simply generate some data and show them in action. We'll start off by importing the libraries we'll be using today.
End of explanation
n_clusters=6
n_samples =250
Explanation: Create data
End of explanation
centroids = np.random.uniform(-35, 35, (n_clusters, 2))
slices = [np.random.multivariate_normal(centroids[i], np.diag([5., 5.]), n_samples)
for i in range(n_clusters)]
data = np.concatenate(slices).astype(np.float32)
Explanation: To generate our data, we're going to pick 6 random points, which we'll call centroids, and for each point we're going to generate 250 random points about it.
End of explanation
def plot_data(centroids, data, n_samples):
colour = plt.cm.rainbow(np.linspace(0,1,len(centroids)))
for i, centroid in enumerate(centroids):
samples = data[i*n_samples:(i+1)*n_samples]
plt.scatter(samples[:,0], samples[:,1], c=colour[i], s=1)
plt.plot(centroid[0], centroid[1], markersize=10, marker="x", color='k', mew=5)
plt.plot(centroid[0], centroid[1], markersize=5, marker="x", color='m', mew=2)
plot_data(centroids, data, n_samples)
Explanation: Below we can see each centroid marked w/ X, and the coloring associated to each respective cluster.
End of explanation
def gaussian(d, bw):
return np.exp(-0.5*((d/bw))**2) / (bw*math.sqrt(2*math.pi))
Explanation: Mean shift
Most people that have come across clustering algorithms have learnt about k-means. Mean shift clustering is a newer and less well-known approach, but it has some important advantages:
* It doesn't require selecting the number of clusters in advance, but instead just requires a bandwidth to be specified, which can be easily chosen automatically
* It can handle clusters of any shape, whereas k-means (without using special extensions) requires that clusters be roughly ball shaped.
The algorithm is as follows:
* For each data point x in the sample X, find the distance between that point x and every other point in X
* Create weights for each point in X by using the Gaussian kernel of that point's distance to x
* This weighting approach penalizes points further away from x
* The rate at which the weights fall to zero is determined by the bandwidth, which is the standard deviation of the Gaussian
* Update x as the weighted average of all other points in X, weighted based on the previous step
This will iteratively push points that are close together even closer until they are next to each other.
So here's the definition of the gaussian kernel, which you may remember from high school...
End of explanation
def meanshift(data):
X = np.copy(data)
for it in range(5):
for i, x in enumerate(X):
dist = np.sqrt(((x-X)**2).sum(1))
weight = gaussian(dist, 2.5)
X[i] = (np.expand_dims(weight,1)*X).sum(0) / weight.sum()
return X
%time X=meanshift(data)
Explanation: This person at the science march certainly remembered!
<img src="http://i.imgur.com/nijQLHw.jpg" width=400>
In our implementation, we choose the bandwidth to be 2.5.
One easy way to choose bandwidth is to find which bandwidth covers one third of the data.
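The code above simply hard-codes 2.5; one rough way to turn the "covers one third of the data" rule into code (an interpretation of that heuristic, not the author's implementation, and it pulls in scipy as an extra dependency) is to take the 33rd percentile of the pairwise distances:
# Sketch: estimate a bandwidth as the 33rd percentile of all pairwise distances
from scipy.spatial.distance import pdist
np.percentile(pdist(data), 33)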
End of explanation
plot_data(centroids+2, X, n_samples)
Explanation: We can see that mean shift clustering has almost reproduced our original clustering. The one exception is the very close clusters, but if we really wanted to differentiate them we could lower the bandwidth.
What is impressive is that this algorithm nearly reproduced the original clusters without telling it how many clusters there should be.
End of explanation
v=np.array([1,2,3]); v, v.shape
m=np.array([v,v*2,v*3]); m, m.shape
m+v
v1=np.expand_dims(v,-1); v1, v1.shape
m+v1
Explanation: We should be able to accelerate this algorithm with a GPU.
Broadcasting
End of explanation
def unit_prefix(x, n=1):
for i in range(n): x = x.unsqueeze(0)
return x
def align(x, y, start_dim=2):
xd, yd = x.dim(), y.dim()
if xd > yd: y = unit_prefix(y, xd - yd)
elif yd > xd: x = unit_prefix(x, yd - xd)
xs, ys = list(x.size()), list(y.size())
nd = len(ys)
for i in range(start_dim, nd):
td = nd-i-1
if ys[td]==1: ys[td] = xs[td]
elif xs[td]==1: xs[td] = ys[td]
return x.expand(*xs), y.expand(*ys)
def aligned_op(x,y,f): return f(*align(x,y,0))
def add(x, y): return aligned_op(x, y, operator.add)
def sub(x, y): return aligned_op(x, y, operator.sub)
def mul(x, y): return aligned_op(x, y, operator.mul)
def div(x, y): return aligned_op(x, y, operator.truediv)
Explanation: Pytorch does not support broadcasting, therefore I have replaced the operators with broadcasting versions.
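(That statement was accurate for the early PyTorch release this notebook targets; in current PyTorch versions elementwise operators broadcast natively, just like numpy, so the helpers above are no longer needed. A one-line check:)
# Modern PyTorch broadcasts automatically: the result has shape (3, 2)
torch.rand(3, 2) - torch.rand(1, 2)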
End of explanation
def gaussian(d, bw):
return torch.exp(-0.5*((d/bw))**2) / (bw*math.sqrt(2*math.pi))
Explanation: GPU-accelerated mean shift in pytorch
One advantage of pytorch is that it's very similar to numpy. For instance, the definition of gaussian is identical, except for the namespace.
End of explanation
def meanshift(data):
X = torch.FloatTensor(np.copy(data))
for it in range(5):
for i, x in enumerate(X):
dist = torch.sqrt((sub(x, X)**2).sum(1))
weight = gaussian(dist, 3)
num = mul(weight, X).sum(0)
X[i] = num / weight.sum()
return X
Explanation: And the implementation of meanshift is nearly identical too!
End of explanation
%time X = meanshift(data).numpy()
Explanation: This implementation actually takes longer. Oh dear! What do you think is causing this?
End of explanation
plot_data(centroids+2, X, n_samples)
Explanation: All the computation is happening in the <tt>for</tt> loop, which isn't accelerated by pytorch. Each iteration launches a new cuda kernel, which takes time and slows the algorithm down as a whole. Furthermore, each iteration doesn't have enough processing to do to fill up all of the threads of the GPU. But at least the results are correct...
End of explanation
def dist_b(a,b):
return torch.sqrt((sub(a.unsqueeze(0),b.unsqueeze(1))**2).sum(2))
a=torch.rand(2,2)
b=torch.rand(3,2)
dist_b(b, a).squeeze(2)
# def gaussian(d, bw):
# return torch.exp(-0.5*((d/bw))**2) / (bw*math.sqrt(2*math.pi))
def sum_sqz(a,axis): return a.sum(axis).squeeze(axis)
def meanshift(data, bs=500):
n = len(data)
X = torch.FloatTensor(np.copy(data)).cuda()
for it in range(5):
for i in range(0,n,bs):
s = slice(i,min(n,i+bs))
weight = gaussian(dist_b(X, X[s]), 2)
num = sum_sqz(mul(weight, X), 1)
X[s] = div(num, sum_sqz(weight, 1))
return X
Explanation: GPU batched algorithm
To truly accelerate the algorithm, we need to be performing updates on a batch of points per iteration, instead of just one as we were doing.
End of explanation
%time X = meanshift(data).cpu().numpy()
Explanation: Although each iteration still has to launch a new cuda kernel, there are now fewer iterations, and the acceleration from updating a batch of points more than makes up for it.
End of explanation
plot_data(centroids+2, X, n_samples)
Explanation: That's more like it! We've gone from 914ms to 44ms, which is a speedup of over 2000%. Oh, and it even gives the right answer:
End of explanation |
9,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculate mean width and lenght from test images
Step1: Size mean dimension will be used for the resizing process. All the images will be scaled to (149, 149) since it's the average of the test images.
Show some test examples
Step2: Making batches (resized)
Step3: Preprocessing data
Step4: Storing preprocessed batches on disk
Step5: NOTE
Since here the data is already processed and saved as pickle files.
Building the Network
Step6: Testing model | Python Code:
import os, random
from scipy.misc import imread, imresize
width = 0
lenght = 0
num_test_images = len(test_image_names)
for i in range(num_test_images):
path_file = os.path.join(test_root_path, test_image_names[i])
image = imread(path_file)
width += image.shape[0]
lenght += image.shape[1]
width_mean = width//num_test_images
lenght_mean = lenght//num_test_images
dim_size = (width_mean + lenght_mean) // 2
print("Width mean: {}".format(width_mean))
print("Lenght mean: {}".format(lenght_mean))
print("Size mean dimension: {}".format(dim_size))
Explanation: Calculate mean width and length from test images
End of explanation
import matplotlib.pyplot as plt
idx = random.randint(0, num_test_images)
sample_file, sample_name = test_image_names[idx], test_image_names[idx].split('_')[:-1]
path_file = os.path.join(test_root_path, sample_file)
sample_image = imread(path_file)
print("Label:{}, Image:{}, Shape:{}".format('_'.join(sample_name), idx, sample_image.shape))
plt.figure(figsize=(3,3))
plt.imshow(sample_image)
plt.axis('off')
plt.show()
Explanation: Size mean dimension will be used for the resizing process. All the images will be scaled to (149, 149) since it's the average of the test images.
Show some test examples
End of explanation
def get_num_of_samples():
count = 0
for _,character in enumerate(character_directories):
path = os.path.join(train_root_path, character)
count += len(listdir(path))
return count
def get_batch(batch_init, batch_size):
data = {'image':[], 'label':[]}
character_batch_size = batch_size//len(character_directories)
character_batch_init = batch_init//len(character_directories)
character_batch_end = character_batch_init + character_batch_size
for _,character in enumerate(character_directories):
path = os.path.join(train_root_path, character)
images_list = listdir(path)
for i in range(character_batch_init, character_batch_end):
if len(images_list) == 0:
continue
#if this character has small number of features
#we repeat them
if i >= len(images_list):
p = i % len(images_list)
else:
p = i
path_file = os.path.join(path, images_list[p])
image = imread(path_file)
#all with the same shape
image = imresize(image, (dim_size, dim_size))
data['image'].append(image)
data['label'].append(character)
return data
def get_batches(num_batches, batch_size, verbose=False):
#num max of samples
num_samples = get_num_of_samples()
#check number of batches with the maximum
max_num_batches = num_samples//batch_size - 1
if verbose:
print("Number of samples:{}".format(num_samples))
print("Batches:{} Size:{}".format(num_batches, batch_size))
assert num_batches <= max_num_batches, "Surpassed the maximum number of batches"
for i in range(0, num_batches):
init = i * batch_size
if verbose:
print("Batch-{} yielding images from {} to {}...".format(i, init, init+batch_size))
yield get_batch(init, batch_size)
#testing generator
batch_size = 500
for b in get_batches(10, batch_size, verbose=True):
print("\t|- retrieved {} images".format(len(b['image'])))
Explanation: Making batches (resized)
End of explanation
from sklearn import preprocessing
#num characters
num_characters = len(character_directories)
#normalize
def normalize(x):
#we use the feature scaling to have all the batches
#in the same space, that is (0,1)
return (x - np.amin(x))/(np.amax(x) - np.amin(x))
#one-hot encode
lb = preprocessing.LabelBinarizer()
lb = lb.fit(character_directories)
def one_hot(label):
return lb.transform([label])
Explanation: Preprocessing data
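As a quick added check of the feature-scaling helper defined above, normalize maps any array onto the (0, 1) range:
# Sketch: the minimum maps to 0.0, the midpoint to 0.5 and the maximum to 1.0
normalize(np.array([0., 127.5, 255.]))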
End of explanation
num_batches = 40
batch_size = 500
import pickle
import numpy as np
cnt_images = 0
for cnt, b in enumerate(get_batches(num_batches, batch_size)):
data = {'image':[], 'label':[]}
for i in range( min(len(b['image']), batch_size) ):
image = np.array( b['image'][i] )
label = np.array( b['label'][i] )
#label = label.reshape([-1,:])
if len(image.shape) == 3:
data['image'].append(normalize(image))
data['label'].append(one_hot(label)[-1,:])
cnt_images += 1
else:
print("Dim image < 3")
with open("simpson_train_{}.pkl".format(cnt), 'wb') as file:
pickle.dump(data, file, pickle.HIGHEST_PROTOCOL)
print("Loaded {} train images and stored on disk".format(cnt_images))
#testing load from file
import pickle
with open('simpson_train_0.pkl', 'rb') as file:
data = pickle.load(file)
print("Example of onehot encoded:\n{}".format(data['label'][0]))
print("Data shape: {}".format(data['image'][0].shape))
Explanation: Storing preprocessed batches on disk
End of explanation
import torch
import torchvision
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
import torch.nn as nn
import torch.nn.functional as F
num_characters = 47
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 32, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(32, 64, 5)
self.fc1 = nn.Linear(64 * 34 * 34, num_characters)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
#print("shape: {}".format(x.size()))
x = x.view(x.size(0), -1)
x = self.fc1(x)
return x
net = Net()
#move the neural network to the GPU
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
net = nn.DataParallel(net)
net.to(device)
import torch.optim as optim
loss_fn = nn.CrossEntropyLoss() #buit-in softmax, we can use logits directly
optimizer = optim.Adam(net.parameters())
import os
import pickle
from sklearn.model_selection import train_test_split
def getDatasetsFromPickle(file):
#print("Processing: {}".format(fname))
data = pickle.load(file)
X_train, X_val, y_train, y_val = train_test_split(data['image'], data['label'], test_size=0.2)
inputs_train, labels_train = torch.FloatTensor(X_train), torch.FloatTensor(y_train)
inputs_val, labels_val = torch.FloatTensor(X_val), torch.FloatTensor(y_val)
#permute image as (samples, x, y, channels) to (samples, channels, x, y)
inputs_train = inputs_train.permute(0, 3, 1, 2)
inputs_val = inputs_val.permute(0, 3, 1, 2)
#move the inputs and labels to the GPU
return inputs_train.to(device), labels_train.to(device), inputs_val.to(device), labels_val.to(device)
stats = {'train_loss':[], 'val_loss':[], 'acc':[]}
for epoch in range(3): # loop over the dataset multiple times
for i in range(100):
fname = "simpson_train_{}.pkl".format(i)
if os.path.exists(fname):
with open(fname, 'rb') as file:
#retrieve the data
inputs_train, labels_train, inputs_val, labels_val = getDatasetsFromPickle(file)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs_train)
#cross entropy loss doesn't accept onehot encoded targets
# |-> use the index class instead
lbls_no_onehot_encoded = torch.argmax(labels_train, dim=1)
loss = loss_fn(outputs, lbls_no_onehot_encoded)
loss.backward()
optimizer.step()
#statistics
stats['train_loss'].append(loss.item())
with torch.no_grad():
outputs = net(inputs_val)
label_val_classes = torch.argmax(labels_val, dim=1)
output_classes = torch.argmax(outputs, dim=1)
stats['val_loss'].append( loss_fn(outputs, label_val_classes).item() )
stats['acc'].append( (output_classes == label_val_classes).sum().item() / label_val_classes.size(0) )
#printouts
if i % 20 == 19:
printout = "Epoch: {} Batch: {} Training loss: {:.3f} Validation loss: {:.3f} Accuracy: {:.3f}"
print(printout.format(epoch + 1, i + 1, stats['train_loss'][-1], stats['val_loss'][-1], stats['acc'][-1],))
else:
break
print('Finished Training')
import matplotlib.pyplot as plt
plt.plot(stats['train_loss'], label='Train Loss')
plt.plot(stats['val_loss'], label='Validation Loss')
plt.plot(stats['acc'], label='Accuracy')
plt.legend()
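A common follow-up once training finishes (not part of the original notebook; the file name here is just a placeholder) is to checkpoint the learned weights so the network can be reloaded later:
# Save the trained parameters to disk (hypothetical file name)
torch.save(net.state_dict(), 'simpsons_cnn.pt')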
Explanation: NOTE
From this point on, the data has already been processed and saved as pickle files.
Building the Network
End of explanation
import warnings
warnings.filterwarnings('ignore')
#select random image
idx = random.randint(0, num_test_images)
sample_file, sample_name = test_image_names[idx], test_image_names[idx].split('_')[:-1]
path_file = os.path.join(test_root_path, sample_file)
#read them
test_image = normalize(imresize(imread(path_file), (dim_size, dim_size)))
test_label_onehot = one_hot('_'.join(sample_name))[-1,:]
#move to tensors
test_image, test_label_onehot = torch.FloatTensor(test_image), torch.FloatTensor(test_label_onehot)
#permute image as (samples, x, y, channels) to (samples, channels, x, y)
test_image = test_image.permute(2, 0, 1)
test_image.unsqueeze_(0)
#move to GPU
test_image, test_label_onehot = test_image.to(device), test_label_onehot.to(device)
##
with torch.no_grad():
output = net(test_image)
predicted_character = torch.argmax(output.data, 1)
actual_character = torch.argmax(test_label_onehot)
print("Right!!") if (predicted_character == actual_character) else print("Wrong..")
#showing
actual_name = ' '.join([s.capitalize() for s in sample_name])
print("Label: {}".format(actual_name))
pred_name = lb.inverse_transform(output.cpu().numpy()).item() #copy from cuda to cpu, then to numpy
prediction = ' '.join([s.capitalize() for s in pred_name.split('_')])
print("Prediction: {}".format(prediction))
plt.figure(figsize=(3,3))
plt.imshow(test_image.permute(0, 2, 3, 1).squeeze())
plt.axis('off')
plt.show()
Explanation: Testing model
End of explanation |
9,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preferential Attachment
Outline
- Basic Simulation and Plot
Step1: Simulation
Step2: Experiment 1
- Alpha = 1,
- hypothesis
Step3: Experiment 2
- Alpha < 1,
- Hypothesis
Step4: Experiment 3
- Alpha > 1,
- Hypothesis
Step5: Networkx
- computes the preferential attachment score,
- Preferential Attachment Score | Python Code:
import networkx as netx
import numpy as np
import matplotlib.pyplot as plt
import warnings
import random
import itertools
def power_law_graph(G):
histo = netx.degree_histogram(G)
_ = plt.loglog(histo, 'b-', marker='o')
_ = plt.ylabel("k(x)")
_ = plt.xlabel("k")
plt.show()
def plot(T,skew=True):
plt.axis('off')
pos=netx.spring_layout(T)
if skew:
D = dict(netx.degree(T))
sizes = [v * 200 for k,v in D.items()]
netx.draw_networkx(T, pos=pos, with_labels=True, nodelist=D.keys(), node_size=sizes)
else:
netx.draw_networkx(T, pos=pos, with_labels=True)
plt.show()
def hist(T,bins=25):
degree_lookup = dict(netx.degree(T))
degrees = list(degree_lookup.values())
_ = plt.title("Degree Distribution")
_ = plt.hist(degrees,bins=bins)
plt.show()
def generate(nodes):
n = 0
while n < nodes:
n = n + 1
yield n
# number of attachments per round
m = 1
# nodes at time t = 0
a = 1
t0 = m + a
# build graph
G = netx.Graph()
paths = list(generate(t0))
G.add_path(paths)
plot(G)
## builds a distribution node sequence
def linear_distribute(G,disregard=[]):
for node in G.nodes(data = False):
if node in disregard:
continue
for edge in netx.edges(G,node):
yield node
# randomly gets attachments of the new node
def get_linear_attachments(G,m=1):
if m < 1:
return []
nodes = []
i = 0
while i < m:
distribution = list(linear_distribute(G,disregard=nodes))
nodes.append(random.choice(distribution))
i = i + 1
return nodes
def simulate(G,node,m=1,rounds=100):
while node <= rounds:
attachments = get_linear_attachments(G,m=m)
G.add_node(node)
for attachment in attachments:
G.add_edge(node,attachment)
node = node + 1
Explanation: Preferential Attachment
Outline
- Basic Simulation and Plot: alpha = 1
- Experiment 1: alpha = 1
- Experiment 2: alpha < 1
- Experiment 3: alpha > 1
- How Networkx Preferential Attachment Differs
- Based on the paper: https://arxiv.org/pdf/cond-mat/0104131.pdf
End of explanation
T = G.copy()
simulate(T,m+1,m,rounds=15)
plot(T,skew=True)
Explanation: Simulation
End of explanation
# ramp up...
T1 = G.copy()
simulate(T1,m+1,m,rounds=3000)
hist(T1)
power_law_graph(T1)
Explanation: Experiment 1
- Alpha = 1,
- hypothesis: the degree distribution should follow a power law,
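A rough added check of this hypothesis (an illustration, not part of the original analysis) is to fit a straight line to the log-log degree histogram and look at its slope:
# Sketch: estimate the power-law exponent from the degree histogram of T1
histo = np.array(netx.degree_histogram(T1))
k = np.arange(len(histo))
mask = (histo > 0) & (k > 0)
slope, intercept = np.polyfit(np.log(k[mask]), np.log(histo[mask]), 1)
slope  # a slope of roughly -3 is expected for linear preferential attachment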
End of explanation
def draw(G,disregard=[],alpha=1):
nodes = { }
lookup = dict(G.degree())
denominator = 0.0
for k,v in lookup.items():
if k in disregard:
continue
nodes[k] = (v ** alpha)
denominator = denominator + nodes[k]
#print(np.array(list(nodes.values()))/denominator)
drawing = random.uniform(0,1)
key = None
bottom = 0.0
top = 0.0
for k,v in nodes.items():
key = k
top = bottom + (v / denominator)
if (drawing >= bottom) & (drawing < top):
key = k
break
bottom = top
# print(drawing, " ", bottom, "-", top, ":", key)
return key
# randomly gets attachments of the new node
def get_attachments(G,alpha=1,m=1):
if m < 1:
return []
nodes = []
i = 0
while i < m:
node = draw(G,disregard=nodes,alpha=alpha)
nodes.append(node)
i = i + 1
# print()
return nodes
def simulate2(G,node,alpha=1,m=1,rounds=100):
while node <= rounds:
attachments = get_attachments(G,alpha,m=m)
G.add_node(node)
for attachment in attachments:
G.add_edge(node,attachment)
node = node + 1
T2 = G.copy()
simulate2(T2,m+1,.5,m,3000)
hist(T2)
power_law_graph(T2)
Explanation: Experiment 2
- Alpha < 1,
- Hypothesis: degree distribution should be a stretched exponential,
End of explanation
T3 = G.copy()
simulate2(T3,m+1,3.5,m,3000)
hist(T3)
nodes = sorted(list(dict(T3.degree()).values()),reverse=True)
# print segment of list out ...
nodes[0:5]
power_law_graph(T3)
Explanation: Experiment 3
- Alpha > 1,
- Hypothesis: the degree distribution should show a gelation-like phenomenon,
- means m nodes should have a majority of the connections
End of explanation
T4 = netx.generators.random_graphs.barabasi_albert_graph(8,1)
plot(T4)
scores = netx.preferential_attachment(G=T4)
for u,v,score in scores:
print('(%d, %d) -> score = %d' % (u, v, score))
len(list(T4.neighbors(1))) * len(list(T4.neighbors(7)))
Explanation: Networkx
- computes the preferential attachment score,
- Preferential Attachment Score: indicates the likelihood of a new link forming between two nodes (u, v); it is the product of the number of neighbours (friends) of u and of v (1).
References:
https://books.google.com/books?id=F2cPpfcpor8C&pg=PA294&lpg=PA294&dq=what+is+preferential+attachment+score&source=bl&ots=G1yfDyOsO_&sig=JcuDjT9wQyH4a0mHcBzNgxIoalU&hl=en&sa=X&ved=0ahUKEwjAjqb6p6fXAhVPVWMKHU5cBTEQ6AEIazAL#v=onepage&q=what%20is%20preferential%20attachment%20score&f=false
End of explanation |
9,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tips for Selecting Columns in a DataFrame
Notebook to accompany this post.
Step1: Build a mapping list so we can see the index of all the columns
Step2: We can also build a dictionary
Step3: Use iloc to select just the second column (Unique Squirrel ID)
Step4: Pass a list of integers to select multiple columns by index
Step5: We can also pass a slice object to select a range of columns
Step6: If we want to combine the list and slice notation, we need to use numpy.r_ to process the data into an appropriate format.
Step7: We can pass the output of np.r_ to .iloc to use multiple selection approaches
Step8: We can use the same notation when reading in a csv as well
Step9: We can also select columns using a boolean array
Step10: A lambda function can be useful for combining into 1 line.
Step11: A more complex example
Step12: Combining index and boolean arrays | Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv(
'https://data.cityofnewyork.us/api/views/vfnx-vebw/rows.csv?accessType=DOWNLOAD&bom=true&format=true'
)
Explanation: Tips for Selecting Columns in a DataFrame
Notebook to accompany this post.
End of explanation
col_mapping = [f"{c[0]}:{c[1]}" for c in enumerate(df.columns)]
col_mapping
Explanation: Build a mapping list so we can see the index of all the columns
End of explanation
col_mapping_dict = {c[0]:c[1] for c in enumerate(df.columns)}
col_mapping_dict
Explanation: We can also build a dictionary
End of explanation
df.iloc[:, 2]
Explanation: Use iloc to select just the second column (Unique Squirrel ID)
End of explanation
df.iloc[:, [0,1,2]]
Explanation: Pass a list of integers to select multiple columns by index
End of explanation
df.iloc[:, 0:3]
Explanation: We can also pass a slice object to select a range of columns
End of explanation
np.r_[0:3,15:19,24,25]
Explanation: If we want to combine the list and slice notation, we need to use numpy.r_ to process the data into an appropriate format.
End of explanation
df.iloc[:, np.r_[0:3,15:19,24,25]]
Explanation: We can pass the output of np.r_ to .iloc to use multiple selection approaches
End of explanation
df_2 = pd.read_csv(
'https://data.cityofnewyork.us/api/views/vfnx-vebw/rows.csv?accessType=DOWNLOAD&bom=true&format=true',
usecols=np.r_[1,2,5:8,15:25],
)
df_2.head()
Explanation: We can use the same notation when reading in a csv as well
End of explanation
run_cols = df.columns.str.contains('run', case=False)
run_cols
df.iloc[:, run_cols].head()
Explanation: We can also select columns using a boolean array
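As an aside that is not in the original post, the same boolean mask can also be handed to .loc, which selects columns by label or mask:
# Equivalent selection using .loc with the boolean mask built above
df.loc[:, run_cols].head()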
End of explanation
df.iloc[:, lambda df:df.columns.str.contains('run', case=False)].head()
Explanation: A lambda function can be useful for combining into 1 line.
End of explanation
df.iloc[:, lambda df: df.columns.str.contains('district|precinct|boundaries',
case=False)].head()
Explanation: A more complex example
End of explanation
location_cols = df.columns.str.contains('district|precinct|boundaries',
case=False)
location_cols
location_indices = [i for i, col in enumerate(location_cols) if col]
location_indices
df.iloc[:, np.r_[0:3,location_indices]].head()
Explanation: Combining index and boolean arrays
End of explanation |
9,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
scikit-learn is a machine learning library for python, with a very easy to use API and great documentation.
Step1: Lets load up our trajectory. This is the trajectory that we generated in
the "Running a simulation in OpenMM and analyzing the results with mdtraj"
example.
Step2: Create a two component PCA model, and project our data down into this
reduced dimensional space. Using just the cartesian coordinates as
input to PCA, it's important to start with some kind of alignment.
Step3: Now we can plot the data on this projection.
Step4: Lets try cross-checking our result by using a different feature space that isn't sensitive to alignment, and instead to "featurize" our trajectory by computing the pairwise distance between every atom in each frame, and using that as our high dimensional input space for PCA. | Python Code:
%matplotlib inline
from __future__ import print_function
import mdtraj as md
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
Explanation: scikit-learn is a machine learning library for python, with a very easy to use API and great documentation.
End of explanation
traj = md.load('ala2.h5')
traj
Explanation: Let's load up our trajectory. This is the trajectory that we generated in
the "Running a simulation in OpenMM and analyzing the results with mdtraj"
example.
End of explanation
pca1 = PCA(n_components=2)
traj.superpose(traj, 0)
reduced_cartesian = pca1.fit_transform(traj.xyz.reshape(traj.n_frames, traj.n_atoms * 3))
print(reduced_cartesian.shape)
Explanation: Create a two component PCA model, and project our data down into this
reduced dimensional space. Using just the cartesian coordinates as
input to PCA, it's important to start with some kind of alignment.
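(A quick optional check, added here and not in the original example, is to look at how much of the variance the two components capture:)
# Fraction of total variance explained by each of the two components
pca1.explained_variance_ratio_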
End of explanation
plt.figure()
plt.scatter(reduced_cartesian[:, 0], reduced_cartesian[:,1], marker='x', c=traj.time)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('Cartesian coordinate PCA: alanine dipeptide')
cbar = plt.colorbar()
cbar.set_label('Time [ps]')
Explanation: Now we can plot the data on this projection.
End of explanation
pca2 = PCA(n_components=2)
from itertools import combinations
# this python function gives you all unique pairs of elements from a list
atom_pairs = list(combinations(range(traj.n_atoms), 2))
pairwise_distances = md.geometry.compute_distances(traj, atom_pairs)
print(pairwise_distances.shape)
reduced_distances = pca2.fit_transform(pairwise_distances)
plt.figure()
plt.scatter(reduced_distances[:, 0], reduced_distances[:,1], marker='x', c=traj.time)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('Pairwise distance PCA: alanine dipeptide')
cbar = plt.colorbar()
cbar.set_label('Time [ps]')
Explanation: Let's try cross-checking our result by using a different feature space that isn't sensitive to alignment: instead we "featurize" our trajectory by computing the pairwise distance between every pair of atoms in each frame, and use that as our high-dimensional input space for PCA.
End of explanation |
9,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using MBA
cmin and cmax are coordinates of the bottom-left and the top-right corners of the bounding box containing scattered data. coo and val are arrays containing coordinates and values of the data points.
Step1: Create $n \times n$ regular grid of coordinates to interpolate onto.
Step2: The plot_surface() function constructs MBA class with the given initial grid size, interpolates the input data over regular surface, and plots the results
Step3: The smaller the initial grid size, the smoother the interpolated surface.
Step4: Report some timings and statistics about the constructed hierarchy
Step5: Specifying the initial approximation
By default MBA uses linear approximation as an initial guess. Multilevel B-splines then are used to fit the difference between initial approximation and the actual data. Sometimes it may useful to provide an initial approximation that would better fit the underlying model. Here is a simple example demonstrating this | Python Code:
cmin = [0.0, 0.0]
cmax = [1.0, 1.0]
coo = uniform(0, 1, (7,2))
val = uniform(0, 1, coo.shape[0])
Explanation: Using MBA
cmin and cmax are coordinates of the bottom-left and the top-right corners of the bounding box containing scattered data. coo and val are arrays containing coordinates and values of the data points.
End of explanation
n = 100
s = linspace(0,1,n)
x = array(meshgrid(s,s)).transpose([1,2,0]).copy()
Explanation: Create $n \times n$ regular grid of coordinates to interpolate onto.
End of explanation
def plot_surface(m0):
interp = mba2(cmin, cmax, [m0,m0], coo, val)
error = amax(abs(val - interp(coo))) / amax(abs(val))
v = interp(x)
pcolormesh(s, s, v, cmap='RdBu')
scatter(x=coo[:,0], y=coo[:,1], c=val, cmap='RdBu')
xlim([0,1])
ylim([0,1])
title("$m_0 = {0:}$, error = {1:.3e}".format(m0, error))
colorbar();
Explanation: The plot_surface() function constructs an MBA interpolator with the given initial grid size, interpolates the input data over a regular grid, and plots the results
End of explanation
figure(figsize=(11,5))
subplot(121); plot_surface(2)
subplot(122); plot_surface(10)
tight_layout()
Explanation: The smaller the initial grid size, the smoother the interpolated surface.
End of explanation
%%timeit
interp = mba2(cmin, cmax, [3,3], coo, val)
%%timeit interp = mba2(cmin, cmax, [3,3], coo, val)
v = interp(x)
interp = mba2(cmin, cmax, [3,3], coo, val)
print(interp)
Explanation: Report some timings and statistics about the constructed hierarchy:
End of explanation
def test_initial(x0, y0, init, desc):
interp = mba1([0], [1], [8], x0, y0, init)
x = linspace(0, 1, 100).reshape(100,1)
y = interp(x)
plot(x, y, 'k-')
plot(x, [init(x) for x in x], 'k:')
plot(x0, y0, 'ro')
ylim([0,1])
title(desc)
x = [[0.3], [0.5], [0.7]]
v = [0.45, 0.55, 0.5, ]
figure(figsize=(12, 3))
subplot(131); test_initial(x, v, lambda x: 0.5, 'y = 0.5')
subplot(132); test_initial(x, v, lambda x: x[0], 'y = x')
subplot(133); test_initial(x, v, lambda x: 1-x[0], 'y = 1-x')
tight_layout()
Explanation: Specifying the initial approximation
By default MBA uses a linear approximation as an initial guess. Multilevel B-splines are then used to fit the difference between the initial approximation and the actual data. Sometimes it may be useful to provide an initial approximation that better fits the underlying model. Here is a simple example demonstrating this:
End of explanation |
9,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two Object Tracking
Summary of notebook
<b> Kalman filter
Step1: Target information
Step2: The Kalman Filter Model
Step3: Motion and measurement models
Step4: Priors
Step5: Linear Kalman Filtering
Creating the model
Step6: The Kalman Filter algorithm
Step7: <img src="images/kalman/kalman_hl.gif">
Two object tracking | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
from matplotlib import pylab as plt
from mpl_toolkits import mplot3d
from canonical_gaussian import CanonicalGaussian as CG
from gaussian_mixture import GaussianMixtureModel as GMM
from calc_traj import calc_traj
from range_doppler import *
from util import *
np.set_printoptions(precision=2)
Explanation: Two Object Tracking
Summary of notebook
<b> Kalman filter: PGM implementation </b>
Nearly identical to standard implementation.
This section is just a basis for comparison.
<b> Simulation of the two object tracking </b>
It tracks the objects, but the likelihoods seem incorrect.
End of explanation
names, p, v, w = load_clubs('clubs.csv')
cpi = 40e-3
T = 12
t_sim = np.arange(0, T, cpi)
t1, p1, v1 = calc_traj(p[0, :], v[0, :], w[0, :], t_sim)
t2, p2, v2 = calc_traj(p[-1, :], v[-1, :], w[-1, :], t_sim)
sensor_locations = np.array([[-10, 28.5, 1], [-15, 30.3, 3],
[200, 30, 1.5], [220, -31, 2],
[-30, 0, 0.5], [150, 10, 0.6]])
rd_1 = range_doppler(sensor_locations, p1, v1)
pm_1 = multilateration(sensor_locations, rd_1[:, :, 1])
vm_1 = determine_velocity(t1, pm_1, rd_1[:, :, 0])
rd_2 = range_doppler(sensor_locations, p2, v2)
pm_2 = multilateration(sensor_locations, rd_2[:, :, 1])
vm_2 = determine_velocity(t2, pm_2, rd_2[:, :, 0])
Explanation: Target information
End of explanation
N = 6
if pm_1.shape < pm_2.shape:
M, _ = pm_1.shape
pm_2 = pm_2[:M]
vm_2 = vm_2[:M]
else:
M, _ = pm_2.shape
pm_1 = pm_1[:M]
vm_1 = vm_1[:M]
print(M)
dt = cpi
g = 9.81
sigma_r = 2.5
sigma_q = 0.5
prior_var = 1
Explanation: The Kalman Filter Model
End of explanation
A = np.identity(N)
A[0, 3] = A[1, 4] = A[2, 5] = dt
B = np.zeros((N, N))
B[2, 2] = B[5, 5] = 1
R = np.identity(N)*sigma_r
C = np.identity(N)
Q = np.identity(N)*sigma_q
u = np.zeros((6, 1))
u[2] = -0.5*g*(dt**2)
u[5] = -g*dt
Explanation: Motion and measurement models
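As an added illustration of the motion model encoded by these matrices (not part of the original notebook), a single mean-prediction step x_next = A x + B u applied to a zero state just returns the gravity increments stored in u:
# Sketch: one prediction step of the constant-velocity-plus-gravity model
x_demo = np.zeros((N, 1))
A @ x_demo + B @ u  # only the z position and z velocity change, by -0.5*g*dt**2 and -g*dt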
End of explanation
#Object 1
mu0_1 = np.zeros((N, 1))
mu0_1[:3, :] = p1[0, :].reshape(3, 1)
mu0_1[3:, :] = v[0, :].reshape(3, 1)
prec0_1 = np.linalg.inv(prior_var*np.identity(N))
h0_1 = (prec0_1)@(mu0_1)
g0_1 = -0.5*(mu0_1.T)@(prec0_1)@(mu0_1) -3*np.log(2*np.pi)
#Object 2
mu0_2 = np.zeros((N, 1))
mu0_2[:3, :] = p2[0, :].reshape(3, 1)
mu0_2[3:, :] = v2[0, :].reshape(3, 1)
prec0_2 = np.linalg.inv(prior_var*np.identity(N))
h0_2 = (prec0_2)@(mu0_2)
g0_2 = -0.5*(mu0_2.T)@(prec0_2)@(mu0_2) -3*np.log(2*np.pi)
print(h0_1)
Explanation: Priors
End of explanation
z_t = np.empty((M, N))
z_t[:, :3] = pm_1
z_t[:, 3:] = vm_1
R_in = np.linalg.inv(R)
P_pred = np.bmat([[R_in, -(R_in)@(A)], [-(A.T)@(R_in), (A.T)@(R_in)@(A)]])
M_pred = np.zeros((2*N, 1))
M_pred[:N, :] = (B)@(u)
h_pred = (P_pred)@(M_pred)
g_pred = -0.5*(M_pred.T)@(P_pred)@(M_pred).flatten() -0.5*np.log( np.linalg.det(2*np.pi*R))
Q_in = np.linalg.inv(Q)
P_meas = np.bmat([[(C.T)@(Q_in)@(C), -(C.T)@(Q_in)], [-(Q_in)@(C), Q_in]])
h_meas = np.zeros((2*N, 1))
g_meas = -0.5*np.log( np.linalg.det(2*np.pi*Q))
L, _ = z_t.shape
X = np.arange(0, L)
Z = np.arange(L-1, 2*L-1)
C_X = [CG([X[0]], [N], h0_1, prec0_1, g0_1)]
C_Z = [CG([X[0]], [N], h0_1, prec0_1, g0_1)]
for i in np.arange(1, L):
C_X.append(CG([X[i], X[i-1]], [N, N], h_pred, P_pred, g_pred))
C_Z.append(CG([X[i], Z[i]], [N, N], h_meas, P_meas, g_meas))
Explanation: Linear Kalman Filtering
Creating the model
End of explanation
message_out = [C_X[0]]
prediction = [C_X[0]]
mean = np.zeros((N, L))
for i in np.arange(1, L):
#Kalman Filter Algorithm
C_Z[i].introduce_evidence([Z[i]], z_t[i, :])
marg = (message_out[i-1]*C_X[i]).marginalize([X[i-1]])
message_out.append(marg*C_Z[i])
mean[:, i] = (np.linalg.inv(message_out[i]._prec)@(message_out[i]._info)).reshape((N, ))
#For plotting only
prediction.append(marg)
p_e = mean[:3, :]
fig = plt.figure(figsize=(25, 25))
ax = plt.axes(projection='3d')
ax.plot(p1[:, 0], p1[:, 1], p1[:, 2])
ax.plot(p_e[0, :], p_e[1, :], p_e[2, :], 'or')
ax.set_xlabel('x (m)', fontsize = '20')
ax.set_ylabel('y (m)', fontsize = '20')
ax.set_zlabel('z (m)', fontsize = '20')
ax.set_title('Kalman Filtering', fontsize = '20')
ax.set_ylim([-1, 1])
ax.legend(['Actual Trajectory', 'Estimated trajectory'])
plt.show()
D = 100
t = np.linspace(0, 2*np.pi, D)
xz = np.array([[np.cos(t)], [np.sin(t)]]).reshape((2, D))
gaussians = message_out + prediction + C_Z
ellipses = []
for g in gaussians:
g._vars = [1, 2, 3, 4]
g._dims = [1, 1, 1, 3]
c = g.marginalize([2, 4])
cov = np.linalg.inv(c._prec)
mu = (cov)@(c._info)
U, S, _ = np.linalg.svd(cov)
L = np.diag(np.sqrt(S))
ellipses.append(np.dot((U)@(L), xz) + mu)
for i in np.arange(0, M):
plt.figure(figsize= (15, 15))
message_out = ellipses[i]
prediction = ellipses[i+M]
measurement = ellipses[i+2*M]
plt.plot(p1[:, 0], p1[:, 2], 'k--', label='Trajectory')
plt.plot(message_out[0, :], message_out[1, :], 'r', label='After measurement update')
plt.plot(prediction[0, :], prediction[1, :], 'b', label = 'Recursive prediction')
plt.plot(measurement[0, :], measurement[1, :], 'g', label='Measurement')
plt.xlim([-3.5, 250])
plt.ylim([-3.5, 35])
plt.grid(True)
plt.xlabel('x (m)')
plt.ylabel('z (m)')
plt.legend(loc='upper left')
plt.title('x-z position for t = %d'%i)
plt.savefig('images/kalman/%d.png'%i, format = 'png')
plt.close()
Explanation: The Kalman Filter algorithm: Gaussian belief propagation
End of explanation
fig = plt.figure(figsize=(25, 25))
ax = plt.axes(projection='3d')
ax.plot(p1[:, 0], p1[:, 1], p1[:, 2])
ax.plot(p2[:, 0], p2[:, 1], p2[:, 2], 'or')
ax.set_xlabel('x (m)', fontsize = '20')
ax.set_ylabel('y (m)', fontsize = '20')
ax.set_zlabel('z (m)', fontsize = '20')
ax.set_title('', fontsize = '20')
ax.set_ylim([-20, 20])
ax.legend(['Target 1', 'Target 2'])
plt.show()
L = 10
X_1 = np.arange(0, L).tolist()
X_2 = np.arange(L, 2*L).tolist()
Z_1 = np.arange(2*L, 3*L).tolist()
Z_2 = np.arange(3*L, 4*L).tolist()
z_1 = np.empty((M, N))
z_1[:, :3] = pm_1
z_1[:, 3:] = vm_1
z_2 = np.empty((M, N))
z_2[:, :3] = pm_2
z_2[:, 3:] = vm_2
C_X = [CG([X_1[0]], [N], h0_1, prec0_1, g0_1)*CG([X_2[0]], [N], h0_2, prec0_2, g0_2)]
for i in np.arange(1, L):
C_X.append(CG([X_1[i], X_1[i-1]], [N, N], h_pred, P_pred, g_pred)
*CG([X_2[i], X_2[i-1]], [N, N], h_pred, P_pred, g_pred))
C_Z = [None]
Z_11 = CG([X_1[1], Z_1[1]], [N, N], h_meas, P_meas, g_meas)
Z_11.introduce_evidence([Z_1[1]], z_1[1, :])
Z_22 = CG([X_2[1], Z_2[1]], [N, N], h_meas, P_meas, g_meas)
Z_22.introduce_evidence([Z_2[1]], z_2[1, :])
C_Z.append(Z_11*Z_22)
for i in np.arange(2, L):
Z_11 = CG([X_1[i], Z_1[i]], [N, N], h_meas, P_meas, g_meas)
Z_11.introduce_evidence([Z_1[i]], z_1[i, :])
Z_22 = CG([X_2[i], Z_2[i]], [N, N], h_meas, P_meas, g_meas)
Z_22.introduce_evidence([Z_2[i]], z_2[i, :])
Z_12 = CG([X_1[i], Z_2[i]], [N, N], h_meas, P_meas, g_meas)
Z_12.introduce_evidence([Z_2[i]] ,z_2[i, :])
Z_21 = CG([X_2[i], Z_1[i]], [N, N], h_meas, P_meas, g_meas)
Z_21.introduce_evidence([Z_1[i]], z_1[i, :])
C_Z.append(GMM([0.5*(Z_11*Z_22), 0.5*(Z_12*Z_21)]))
predict = [C_X[0]]
for i in np.arange(1, L):
marg = (C_X[i]*predict[i-1]).marginalize([X_1[i-1], X_2[i-1]])
predict.append(C_Z[i]*marg)
D = 100
t = np.linspace(0, 2*np.pi, D)
xz = np.array([[np.cos(t)], [np.sin(t)]]).reshape((2, D))
ellipses = []
norms = []
i = 0
for p in predict:
if isinstance(p, GMM):
mix = p._mix
else:
mix = [p]
time_step = []
for m in mix:
m._vars = [1, 2, 3, 4]
m._dims = [1, 1, 1, 9]
c = m.marginalize([2, 4])
cov = np.linalg.inv(c._prec)
mu = (cov)@(c._info)
if i == 0:
print(cov)
i = 1
U, S, _ = np.linalg.svd(cov)
lambda_ = np.diag(np.sqrt(S))
norms.append(c._norm)
time_step.append(np.dot((U)@(lambda_), xz) + mu)
ellipses.append(time_step)
for i in np.arange(0, L):
plt.figure(figsize= (15, 15))
plt.plot(p1[1:, 0], p1[1:, 2], 'or', label='Trajectory 1')
plt.plot(p2[1:, 0], p2[1:, 2], 'og', label='Trajectory 2')
for e in ellipses[i]:
plt.plot(e[0, :], e[1, :], 'b')
plt.xlim([-3.5, 25])
plt.ylim([-3.5, 15])
plt.grid(True)
plt.legend(loc='upper left')
plt.xlabel('x (m)')
plt.ylabel('z (m)')
plt.title('x-z position for t = %d'%(i))
plt.savefig('images/two_objects/%d.png'%i, format = 'png')
plt.close()
Explanation: <img src="images/kalman/kalman_hl.gif">
Two object tracking
End of explanation |
9,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Apply logistic regression to categorize whether a county had high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
Step1: 2. Read in the hanford.csv file in the data/ folder
Step2: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data
Step3: 4. Find a reasonable threshold to say exposure is high and recode the data
Step4: 5. Create a logistic regression model
Step5: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50 | Python Code:
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
import statsmodels.formula.api as smf
Explanation: Apply logistic regression to categorize whether a county had high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
End of explanation
df = pd.read_csv("hanford.csv")
Explanation: 2. Read in the hanford.csv file in the data/ folder
End of explanation
df.describe()
df.corr()
Explanation: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data
End of explanation
# I could define "high exposure" as 1.5 x IQR, which would be: Q3-Q1, or 6.41-2.49
high_exposure = 4.08*1.5
df['Exposure'].describe()
Explanation: 4. Find a reasonable threshold to say exposure is high and recode the data
End of explanation
lm = smf.ols(formula="Mortality~Exposure",data=df).fit() #notice the formula regresses Y on X (Y~X)
intercept, slope = lm.params
lm.params
Explanation: 5. Create a logistic regression model
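Note that the cell above actually fits an ordinary least squares line with smf.ols. A sketch of a true logistic fit — assuming we first build a binary "high mortality" indicator, here arbitrarily thresholded at the median — could look like this:
# Added sketch, not part of the original answer
df['high_mortality'] = (df['Mortality'] > df['Mortality'].median()).astype(int)
logit_model = smf.logit(formula="high_mortality~Exposure", data=df).fit()
logit_model.params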
End of explanation
# y = mx + b: predicted mortality (cancer deaths per 100,000 man years) at an exposure of 50
slope*50 + intercept
Explanation: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50
End of explanation |
9,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unit 1
Step1: your explanation here (delete this)
2. If the temperature of an oven is 450 degrees Fahrenheit, what is it in kelvins? | Python Code:
## your code here
## or
## type what you put in calculator (safer to convert cell to markdown)
Explanation: Unit 1: Programming Basics
Lesson 1: Introduction to Python - Pre-activity
Scientific Context: Unit Conversions
The International System of Units (SI) is the modern metric system of measurement. It was designed to provide an organized system of measurement that everyone in the world could use. Due to the fact that an SI base unit is not always the most convenient unit for experimental measurements, unit conversions are a very common computation in science.
Fundamental Principles: Temperature Scales
The Fahrenheit temperature scale defines 32 for the freezing point of water, 212 for the boiling point of water. The difference between these two reference points is divided into 180 equal intervals called degrees. In contrast, the Celsius temperature scale defines 0 for the freezing point of water and 100 for the boiling point of water, thus dividing this same region into 100 parts. The following formula can be used to convert a temperature from its representation on the Fahrenheit scale to the Celsius value:
equation 1:
$$T_{Celsius} = \frac{5}{9}(T_{Fahrenheit} - 32)$$
The Kelvin temperature scale uses the same physical reference points of water as the Celsius scale (0°C for the freezing point of water and 100°C for the boiling point of water), thus the “degree” increment for the two scales is the same. The Kelvin scale, however, shifts the zero point by defining an absolute zero below which temperatures do not exist, i.e., molecular energy is a minimum. This point corresponds to a temperature of -273.15° on the Celsius temperature scale such that:
equation 2:
$$T_{kelvin} = T_{Celsius} + 273.15$$
Temperatures on this scale are called kelvins, not degrees kelvin, and it is the SI unit for temperature.
Pre-Activity Questions
1. If a cup of water is 38 degrees Celsius, what is it in Fahrenheit?
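One possible way to compute it, added here as a sketch, is to invert equation 1 so that F = C*9/5 + 32:
# 38 degrees Celsius expressed in Fahrenheit
38 * 9/5 + 32  # 100.4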
End of explanation
## your code here
## or
## type what you put in calculator (safer to convert cell to markdown)
Explanation: your explanation here (delete this)
2. If the temperature of an oven is 450 degrees Fahrenheit, what is it in kelvins?
End of explanation |
9,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
If, Elif, Else Statements
Programming starts and ends at control flow. Decisions need to be made to carry out the functions you write. We make decisions using the if, elif, and else statements. Like any other programming language, the semantics of the statements work the same, but the syntax differs. Let's try some examples.
Step1: Imagine you've written a magic number program, and the system outputs a message to indicate your correctness. Here, the if checks the most specific condition -> if the user is right. As we move down, we become more general. This is the typical layout of control flow in programming.
Step2: Let's output a message depending on where you are going.
Step3: We can nest ifs, elifs, and elses, too! | Python Code:
you = "ready"
if you == "ready":
print("Vamanos!")
else:
print("What's wrong?")
Explanation: If, Elif, Else Statements
Programming starts and ends at control flow. Decisions need to be made to carry out the functions you write. We make decisions using the if, elif, and else statements. Like any other programming language, the semantics of the statements work the same, but the syntax differs. Let's try some examples.
End of explanation
magic_number = 24
my_guess = 29
if my_guess == magic_number:
print("Correct!")
elif magic_number == (my_guess-1) or magic_number == (my_guess+1):
print("So, so close!")
else:
print("Wrong!")
Explanation: Imagine you've written a magic number program, and the system outputs a message to indicate your correctness. Here, the if checks the most specific condition -> if the user is right. As we move down, we become more general. This is the typical layout of control flow in programming.
End of explanation
my_place = "Grocery store"
if my_place == "work":
print("Clock out early!")
elif my_place == "Grocery store":
print("Run those errands!")
else:
print("Where are you going?")
Explanation: Let's output a message depending on where you are going.
End of explanation
# A simple pet and breed printer
pet = "Dog"
breed = "Labrador"
if pet == "Cat":
print("Pet: cat")
if breed == "Siberian":
print("Breed: Siberian")
else:
print("Breed: Not specified clearly")
elif pet == "Dog":
print("Pet: Dog")
if breed == "Labrador":
print("Breed: Labrador")
elif breed == "Husky":
print("Breed: Husky")
else:
print("Breed: Not specified clearly")
else:
print("I have no idea what that is!")
Explanation: We can nest ifs, elifs, and elses, too!
End of explanation |
9,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: 1. Load a shapefile that represents the river network
First, we need to create a Landlab NetworkModelGrid to represent the river network. Each link on the grid represents a reach of river. Each node represents a break between reaches. All tributary junctions must be associated with grid nodes.
Step2: Alright, let's see what fields we read in with this shapefile
Step3: Great! Looks like we have length (reach length), upstream drainage area (drainage area), x and y vertices of each link/reach (x and y of polyline), and bed elevation (topographic elevation).
Note that "reach_length" is defined by the user, rather than calculated as the minimum distance between nodes. This accounts for channel sinuosity. In this case, "reach_length" could be equivalently calculated as the cumulative distance between verticies defined by x and y of polyline.
Step4: Our network consists of 29 links between 30 nodes. In the plot above, X and Y represent the plan-view coordinates of the node locations.
Next, we need to populate the grid with the relevant topographic and hydrologic information
Step5: We must distinguish between topographic elevation (the top surface of the bed sediment) and bedrock elevation (the surface of the river in the absence of modeled sediment).
2. Create sediment 'parcels' in a DataRecord
We represent sediment in the network as discrete parcels (or packages) of grains of uniform size and characteristics. Each parcel is tracked through the network grid according to sediment transport and stratigraphic constraints.
Parcels are tracked using the Landlab DataRecord.
First, let's create arrays with all of the essential sediment parcel variables
Step6: In order to track sediment motion, we classify parcels as either active (representing mobile surface sediment) or inactive (immobile subsurface) during each timestep. The active parcels are the most recent parcels to arrive in the link. During a timestep, active parcels are transported downstream (increasing their location_in_link, which is a normalized value ranging from 0 to 1) according to a sediment transport formula.
We begin by assigning each parcel an arbitrary (and small) arrival time and location in the link.
Step7: In addition to the required parcel attributes listed above, you can designate optional parcel characteristics, depending on your needs. For example
Step8: We now collect the arrays into a dictionary of variables, some of which will be tracked through time (["item_id", "time"]), and others of which will remain constant through time
Step9: With all of the required attributes collected, we can create the parcels DataRecord. Often, parcels will eventually transport off of the downstream-most link. To track these parcels, we have designated a "dummy_element" here, which has index value -2.
Step10: 3. Run the NetworkSedimentTransporter
With the parcels and grid set up, we can move on to setting up the model.
Step11: Before running the NST, we need to determine flow direction on the grid (upstream and downstream for each link). To do so, we initialize and run a Landlab flow director component
Step12: Then, we initialize the network sediment transporter
Step13: Now we are ready to run the model forward in time
Step14: 4. Plot the model results
There are landlab plotting tools specific to the NetworkSedimentTransporter. In particular, plot_network_and_parcels creates a plan-view map of the network and parcels (represented as dots along the network). We can color both the parcels and the links by attributes.
Here, we demonstrate one example use of plot_network_and_parcels. For a thorough tutorial on the plotting tools, see this notebook.
We can color links by values that we calculate. For example, if we are curious about the fate of sediment that started out on link 27, we might want to plot the total volume of sediment that originated on link 27 during a later timestep
Step15: Non-network plotting
The results of the NST can be visualized by directly accessing information about the grid, the parcels, and by accessing variables stored after the run of NST.
As a simple example, we can plot the total volume of parcels on the grid through time. As parcels exit the grid, the total volume decreases.
Step16: We can also plot individual parcel characteristics. The plot below shows the total transport distance of each parcel through the whole model run as a function of the parcel's grain size (during the final timestep).
Step17: The plot below is an example of accessing variables associated with the grid (grid.at_link.X, or grid.at_node.X), as well as a variable associated with this instance of NetworkModelGrid (nmg.X) | Python Code:
import warnings
warnings.filterwarnings('ignore')
import os
import pathlib
import matplotlib.pyplot as plt
import numpy as np
from landlab.components import FlowDirectorSteepest, NetworkSedimentTransporter
from landlab.data_record import DataRecord
from landlab.grid.network import NetworkModelGrid
from landlab.plot import graph
from landlab.io import read_shapefile
from landlab import ExampleData
from landlab.plot import plot_network_and_parcels
%matplotlib inline
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Using the Landlab NetworkSedimentTransporter component starting with a shapefile river network
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial illustrates how to model the transport of coarse sediment through a river network using the NetworkSedimentTransporter Landlab component. For an equivalent tutorial demonstrating initialization of the NetworkSedimentTransporter with a synthetic network model grid, see here.
In this example we will:
- load a river network shapefile to create a Landlab grid to represent a river network
- create sediment "parcels" that will transport through the river network, represented as items in a Landlab DataRecord
- run the component
- plot the results of the model run
Import the necessary libraries, plus a bit of magic so that we can plot within this notebook:
End of explanation
datadir = ExampleData("io/shapefile", case="methow").base
shp_file = datadir / "MethowSubBasin.shp"
points_shapefile = datadir / "MethowSubBasin_Nodes_4.shp"
grid = read_shapefile(
shp_file,
points_shapefile=points_shapefile,
node_fields=["usarea_km2", "Elev_m"],
link_fields=["usarea_km2", "Length_m"],
link_field_conversion={"usarea_km2": "drainage_area", "Slope":"channel_slope", "Length_m":"reach_length"},
node_field_conversion={
"usarea_km2": "drainage_area",
"Elev_m": "topographic__elevation",
},
threshold=0.01,
)
Explanation: 1. Load a shapefile that represents the river network
First, we need to create a Landlab NetworkModelGrid to represent the river network. Each link on the grid represents a reach of river. Each node represents a break between reaches. All tributary junctions must be associated with grid nodes.
End of explanation
grid.at_link.keys()
grid.at_node.keys()
Explanation: Alright, let's see what fields we read in with this shapefile:
End of explanation
graph.plot_graph(grid, at="node,link")
grid.number_of_links
grid.number_of_nodes
Explanation: Great! Looks like we have length (reach length), upstream drainage area (drainage area), x and y vertices of each link/reach (x and y of polyline), and bed elevation (topographic elevation).
Note that "reach_length" is defined by the user, rather than calculated as the minimum distance between nodes. This accounts for channel sinuosity. In this case, "reach_length" could be equivalently calculated as the cumulative distance between verticies defined by x and y of polyline.
End of explanation
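As a quick illustration of the sinuosity point above, here is a minimal sketch (not from the original notebook, using made-up vertex coordinates) of how a sinuous reach length could be computed as the cumulative distance between polyline vertices, compared with the straight-line distance between the reach end points:
import numpy as np
# Made-up plan-view vertex coordinates (m) for a single sinuous reach
x_verts = np.array([0.0, 40.0, 90.0, 150.0])
y_verts = np.array([0.0, 30.0, 10.0, 40.0])
# Straight-line distance between the two end nodes of the reach
straight_length = np.hypot(x_verts[-1] - x_verts[0], y_verts[-1] - y_verts[0])
# Sinuous reach length: sum of the segment lengths between consecutive vertices
sinuous_length = np.sum(np.hypot(np.diff(x_verts), np.diff(y_verts)))
print(straight_length, sinuous_length)  # the sinuous length is the larger of the two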
grid.at_node["bedrock__elevation"] = grid.at_node["topographic__elevation"].copy()
grid.at_link["channel_width"] = 1 * np.ones(grid.number_of_links) # m
grid.at_link["flow_depth"] = 0.5 * np.ones(grid.number_of_links) # m
Explanation: Our network consists of 29 links between 30 nodes. In the plot above, X and Y represent the plan-view coordinates of the node locations.
Next, we need to populate the grid with the relevant topographic and hydrologic information:
End of explanation
# element_id is the link on which the parcel begins.
element_id = np.repeat(np.arange(grid.number_of_links), 50)
element_id = np.expand_dims(element_id, axis=1)
volume = 1*np.ones(np.shape(element_id)) # (m3)
active_layer = np.ones(np.shape(element_id)) # 1= active, 0 = inactive
density = 2650 * np.ones(np.size(element_id)) # (kg/m3)
abrasion_rate = 0 * np.ones(np.size(element_id)) # (mass loss /m)
# Lognormal GSD
medianD = 0.15 # m
mu = np.log(medianD)
sigma = np.log(2) #assume that D84 = sigma*D50
np.random.seed(0)
D = np.random.lognormal(
mu,
sigma,
np.shape(element_id)
) # (m) the diameter of grains in each parcel
Explanation: We must distinguish between topographic elevation (the top surface of the bed sediment) and bedrock elevation (the surface of the river in the absence of modeled sediment).
2. Create sediment 'parcels' in a DataRecord
We represent sediment in the network as discrete parcels (or packages) of grains of uniform size and characteristics. Each parcel is tracked through the network grid according to sediment transport and stratigraphic constraints.
Parcels are tracked using the Landlab DataRecord.
First, let's create arrays with all of the essential sediment parcel variables:
End of explanation
time_arrival_in_link = np.random.rand(np.size(element_id), 1)
location_in_link = np.random.rand(np.size(element_id), 1)
Explanation: In order to track sediment motion, we classify parcels as either active (representing mobile surface sediment) or inactive (immobile subsurface) during each timestep. The active parcels are the most recent parcels to arrive in the link. During a timestep, active parcels are transported downstream (increasing their location_in_link, which is a normalized value ranging from 0 to 1) according to a sediment transport formula.
We begin by assigning each parcel an arbitrary (and small) arrival time and location in the link.
End of explanation
lithology = ["quartzite"] * np.size(element_id)
Explanation: In addition to the required parcel attributes listed above, you can designate optional parcel characteristics, depending on your needs. For example:
End of explanation
variables = {
"abrasion_rate": (["item_id"], abrasion_rate),
"density": (["item_id"], density),
"lithology": (["item_id"], lithology),
"time_arrival_in_link": (["item_id", "time"], time_arrival_in_link),
"active_layer": (["item_id", "time"], active_layer),
"location_in_link": (["item_id", "time"], location_in_link),
"D": (["item_id", "time"], D),
"volume": (["item_id", "time"], volume)
}
Explanation: We now collect the arrays into a dictionary of variables, some of which will be tracked through time (["item_id", "time"]), and others of which will remain constant through time :
End of explanation
items = {"grid_element": "link", "element_id": element_id}
parcels = DataRecord(
grid,
items=items,
time=[0.0],
data_vars=variables,
dummy_elements={"link": [NetworkSedimentTransporter.OUT_OF_NETWORK]},
)
Explanation: With all of the required attributes collected, we can create the parcels DataRecord. Often, parcels will eventually transport off of the downstream-most link. To track these parcels, we have designated a "dummy_element" here, which has index value -2.
End of explanation
timesteps = 10 # total number of timesteps
dt = 60 * 60 * 24 *2 # length of timestep (seconds)
Explanation: 3. Run the NetworkSedimentTransporter
With the parcels and grid set up, we can move on to setting up the model.
End of explanation
fd = FlowDirectorSteepest(grid, "topographic__elevation")
fd.run_one_step()
Explanation: Before running the NST, we need to determine flow direction on the grid (upstream and downstream for each link). To do so, we initialize and run a Landlab flow director component:
End of explanation
nst = NetworkSedimentTransporter(
grid,
parcels,
fd,
bed_porosity=0.3,
g=9.81,
fluid_density=1000,
transport_method="WilcockCrowe",
)
Explanation: Then, we initialize the network sediment transporter:
End of explanation
for t in range(0, (timesteps * dt), dt):
nst.run_one_step(dt)
print("Model time: ", t/(60*60*24), "days passed")
Explanation: Now we are ready to run the model forward in time:
End of explanation
timestep_of_interest = 6
originating_link = 27
#filter the parcels to calculate total volumes of only the parcels that originated in the chosen link
parcelfilter = np.zeros_like(
parcels.dataset.element_id, dtype=bool
)
parcelfilter[:, timestep_of_interest] = (parcels.dataset.element_id[:,0] == originating_link)
vol_orig_link = parcels.calc_aggregate_value(
np.sum,
"volume",
at="link",
filter_array=parcelfilter,
fill_value=0.0
)
fig = plot_network_and_parcels(
grid, parcels,
link_attribute=vol_orig_link,
link_attribute_title = "Vol of sed originating on link x",
network_linewidth = 5,
parcel_alpha = 0
)
Explanation: 4. Plot the model results
There are landlab plotting tools specific to the NetworkSedimentTransporter. In particular, plot_network_and_parcels creates a plan-view map of the network and parcels (represented as dots along the network). We can color both the parcels and the links by attributes.
Here, we demonstrate one example use of plot_network_and_parcels. For a thorough tutorial on the plotting tools, see this notebook.
We can color links by values that we calculate. For example, if we are curious about the fate of sediment that started out on link 27, we might want to plot the total volume of sediment that originated on link 27 during a later timestep:
End of explanation
parcel_vol_on_grid = parcels.dataset["volume"].values
parcel_vol_on_grid[parcels.dataset["element_id"].values==-2]=0
#plt.figure(figsize=(8,6))
plt.plot(np.asarray(parcels.time_coordinates)/(60*60*24),
np.sum(parcel_vol_on_grid, axis=0),
'-',
linewidth=3,
alpha=0.5
)
plt.ylabel('Total volume of parcels on grid $[m^3]$')
plt.xlabel('Time [days]')
plt.show()
Explanation: Non-network plotting
The results of the NST can be visualized by directly accessing information about the grid, the parcels, and by accessing variables stored after the run of NST.
As a simple example, we can plot the total volume of parcels on the grid through time. As parcels exit the grid, the total volume decreases.
End of explanation
plt.loglog(parcels.dataset.D[:,-1],
nst._distance_traveled_cumulative,
'.'
)
plt.xlabel('Parcel grain size (m)')
plt.ylabel('Cumulative parcel travel distance (m)')
# Note: some of the smallest grain travel distances can exceed the length of the
# grid by "overshooting" during a single timestep of high transport rate
Explanation: We can also plot individual parcel characteristics. The plot below shows the total transport distance of each parcel through the whole model run as a function of the parcel's grain size (during the final timestep).
End of explanation
plt.plot(grid.at_link["channel_slope"],
nst.d_mean_active,
'.')
plt.xlabel('Channel slope (m/m)')
plt.ylabel('Mean grain size of active layer (m)')
Explanation: The plot below is an example of accessing variables associated with the grid (grid.at_link.X, or grid.at_node.X), as well as a variable associated with this instance of NetworkModelGrid (nmg.X):
End of explanation |
9,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1D Optimal Classifier Compared With a Simple Neural Network
This tutorial is part of the EFI Data Analytics for Physics workshop. It is meant for the beginner HEP undergraduate or graduate student (or postdoc/faculty) who wants to get started implementing a simple feed-forward Neural Network in Python with Keras. A more advanced Keras tutorial on "Deep learning with images" will follow later in the afternoon. This tutorial aims to teach you key concepts and apply what you have learned in the talks and morning hands-on sessions on using Python for scientific computing with Jupyter notebooks. Basic familiarity with Python, Jupyter, and numpy is assumed. The tutorial uses Matplotlib, which is the de-facto plotting utility for Python. Don't worry if you are not yet familiar with Matplotlib; we will provide what you need. Basic Keras functionality will be briefly explained as it is introduced in the exercises below. Below are some suggested exercises for you to think about and discuss with your fellow mates. However, feel free to play around and modify the notebook as you like.
This tutorial was developed by Ben Nachman (UC Berkeley) and Joakim Olsson (University of Chicago). If you are interested in a similar tutorial using some ATLAS simulated data, there is another tutorial for that by Amir Farbin at the University of Texas at Arlington.
Exercises
1.) Run through the notebook
We recommend that you first run and read through the tutorial carefully one time; don't hesitate to ask us questions if something is not clear
Step1: 1D Optimal Classifier
We will start by implementing the optimal (Gaussian) 1D classifier, like Ben Nachman explained in his talk
"Introduction to pattern recognition algorithms and Machine Learning".
Numpy provides the function numpy.random.normal, which is very useful for sampling a Gaussian with a mean 'mu' and standard deviation 'sigma'. You can try out this function yourself in the cell below.
Step2: Let's generate some data. Below we have provided you with a function that gives you two cases
Step3: Now, let's actually run the function and generate some data.
Step4: Often when dealing with data you may want to either save something you generated or load data you got from somebody else. One of the most common formats is HDF5, a "data model, library, and file format for storing and managing data." It is also the most common storage format in data science. h5py provides a python API for HDF5. In most cases, you do not need to know very much about HDF5 or h5py, just how to read/write tensors into/from files, which you can easily pick up from the h5py (Quick Start)[http
Step5: Here's how we can retrieve the data.
Step6: Create histograms and plot the data.
Step7: Next, let's compute the log-likelihood ratio (LLR).
Step8: We can also plot the mapping between the input feature vector and the LLR.
Step9: Don't worry about the 'bumps' towards the edges of the plot, that's just due to low statistics.
The next step is to compute and plot the PDF of the LLR.
Step10: Finally, we can scan a threshold cut on the LLR and make the ROC curve.
Step11: That's it for the first part of this tutorial. Now let's do some Machine Learning!
Implement and train a simple Neural Network in Keras
Before one gets to the actual training, it's often necessary to transform the data into a numpy array of the correct shape. To better understand what's going on below, let's do a quick example.
Step12: Another thing is that Keras requires the 'X' inputs to be formatted in a certain way. Here's a simple example of that.
Step13: Now when we hopefully understand a little more about how to manipulate numpy arrays, let's prepare our actual data for training a Neural Network in Keras.
If you already executed the cells for generating (or loading) data in the first part of the tutorial, you should be good to go, otherwise please scroll up and execute the cell that calls
Step14: Finally we get to the fun stuff: let's implement our first Neural Network in Keras. To learn more about what all this does, you are strongly encouraged to go and read the Keras documentation.
The core data structure of Keras is a model, a way to organize layers. The simplest type of model is the Sequential model, a linear stack of layers. For more complex architectures, you should use the Keras functional API, which allows you to build arbitrary graphs of layers.
Step15: Now let's add some layers to our model with "Dense". Dense implements the operation
Step16: Next, we compile our model and choose 'binary_crossentropy' as our loss function and 'ADAM' (adaptive moment estimation) as the optimizer (an extension of stochastic gradient descent).
Step17: We then train our model on the training data. Keras automatically runs validation on the test data.
You can experiment with the number of 'epochs' and the 'batch size'
Step18: This very simple model should converge very quickly (even after 1-3 epochs). Training more complicated networks can take a very long time (days, weeks, or even months).
The model history keeps track of the loss and accuracy for each epoch. Note that the training above was setup to run on the validation sample at the end of each epoch
Step19: You can plot the loss versus epoch.
Step20: Finally, let's plot the ROC curve and compare it to the result we got from the 1D optimal classifier above.
# Import the print function that is compatible with Python 3
from __future__ import print_function
# Import numpy - the fundamental package for scientific computing with Python
import numpy as np
# Import Python plotting from matplotlib
import matplotlib.pyplot as plt
Explanation: 1D Optimal Classifier Compared With a Simple Neural Network
This tutorial is part of the EFI Data Analytics for Physics workshop. It is meant for the beginner HEP undergraduate or graduate student (or postdoc/faculty) who wants to get started implementing a simple feed-forward Neural Network in Python with Keras. A more advanced Keras tutorial on "Deep learning with images" will follow later in the afternoon. This tutorial aims to teach you key concepts and apply what you have learned in the talks and morning hands-on sessions on using Python for scientific computing with Jupyter notebooks. Basic familiarity with Python, Jupyter, and numpy is assumed. The tutorial uses Matplotlib, which is the de-facto plotting utility for Python. Don't worry if you are not yet familiar with Matplotlib; we will provide what you need. Basic Keras functionality will be briefly explained as it is introduced in the exercises below. Below are some suggested exercises for you to think about and discuss with your fellow mates. However, feel free to play around and modify the notebook as you like.
This tutorial was developed by Ben Nachman (UC Berkeley) and Joakim Olsson (University of Chicago). If you are interested in a similar tutorial using some ATLAS simulated data, there is another tutorial for that by Amir Farbin at the University of Texas at Arlington.
Exercises
1.) Run through the notebook
We recommend that you first run and read through the tutorial carefully one time; don't hesitate to ask us questions if something is not clear :)
2.) Try changing things
After you have done that, it's time to start tweaking some parameters and play around with the code. Here are a few suggestions of what you can try:
2a). Try changing the number of samples (N)
What happens if you reduce the number of samples by an order of magnitude or more? If you don't see any effect, try to reduce it further. Which model does better as the number of events goes to zero? Why?
2b). Vary the Network structure
What happens if you increase the number of neurons in the hidden layer? What happens if you increase/decrease the number of training epochs? What happens if you vary the batch size? What happens if you change the activation function (in particular try switching from 'sigmoid' to 'linear')?
Let's get started and import some packages that we'll need throughout the tutorial.
End of explanation
# Generate 1000 samples from a Gaussian pdf with mu=5 and sigma=2
mu = 5
sigma = 2
number_of_samples = 1000
test_samples = np.random.normal(mu, sigma, number_of_samples)
# Plotting with matplotlib
# First clear the figures
plt.clf()
# Segment the canvas into upper and lower subplots
plt.subplot(211)
# Plot the random numbers
plt.plot(test_samples)
plt.subplot(212)
# Histogram the numbers
plt.hist(test_samples, bins=100)
# Display the canvas
plt.show()
Explanation: 1D Optimal Classifier
We will start by implementing the optimal (Gaussian) 1D classifier, like Ben Nachman explained in his talk
"Introduction to pattern recognition algorithms and Machine Learning".
Numpy provides the function numpy.random.normal, which is very useful for sampling a Gaussian with a mean 'mu' and standard deviation 'sigma'. You can try out this function yourself in the cell below.
End of explanation
# Function that generates N signal and N background samples (note that 'do_simple' is true by default)
def generate_samples(N, do_superposition=False):
# Case 1: Signal and background are each a single Gaussian
if not do_superposition:
# Signal Gaussian has mean 1 and standard deviation 1
mu1, sigma1 = 1, 1
signal = np.random.normal(mu1, sigma1, N)
# Background Gaussian has mean 1 and standard deviation 1
mu1, sigma1 = -1, 1
background = np.random.normal(mu1, sigma1, N)
# Case 2: Signal and background are superpositions of Gaussians
else:
mu1a, sigma1a = -1.1, 0.5
x1a = np.random.normal(mu1a, sigma1a, int(0.6*N))
mu1b, sigma1b = 1, 1
x1b = np.random.normal(mu1b, sigma1b, int(0.4*N))
mu2a, sigma2a = 2, 0.5
x2a = np.random.normal(mu2a, sigma2a, int(0.7*N))
mu2b, sigma2b = -1, 1
x2b = np.random.normal(mu2b, sigma2b, int(0.3*N))
signal = np.append(x1a,x1b)
background = np.append(x2a,x2b)
return signal, background
Explanation: Let's generate some data. Below we have provided you with a function that gives you two cases:
1. 'do_superposition = False': Two Gaussians, both with a standard deviation of 1, but one with mean +1 and the other with mean -1.
2. 'do_superposition = True': Signal and background are the superposition of two Gaussians whose parameters are given below. By construction, the overlap between signal and background is larger in this second case.
You are encouraged to experiment with your own function after you have run through the tutorial.
End of explanation
# If 'do_superposition = True' we get multiple Gaussians
do_superposition = True
# Number of samples
N = 10000000
# Number of bins in the histograms
nbins = 500
# Generate signal and background
signal, background = generate_samples(N, do_superposition)
Explanation: Now, let's actually run the function and generate some data.
End of explanation
import h5py
# create a new file
h5_file = h5py.File("data1.h5", "w")
h5_file.create_dataset('signal', data=signal)
h5_file.create_dataset('background', data=background)
h5_file.close()
Explanation: Often when dealing with data you may want to either save something you generated or load data you got from somebody else. One of the most common formats is HDF5, a "data model, library, and file format for storing and managing data." It is also the most common storage format in data science. h5py provides a Python API for HDF5. In most cases, you do not need to know very much about HDF5 or h5py, just how to read/write tensors into/from files, which you can easily pick up from the h5py [Quick Start](http://docs.h5py.org/en/latest/quick.html).
Even though this tutorial could be completed without storing and retrieving the data, it may be useful to go through that exercise.
Let's start by saving the data you generated to HDF5.
End of explanation
h5_file_readonly = h5py.File('data1.h5','r')
signal = h5_file_readonly['signal'][:]
background = h5_file_readonly['background'][:]
h5_file_readonly.close()
Explanation: Here's how we can retrieve the data.
End of explanation
# Plot the histograms
plt.clf()
plt.hist(signal, 50, density=True, facecolor='blue', alpha=0.75, label='S')
plt.hist(background, 50, density=True, facecolor='red', alpha=0.75, label='B')
plt.xlabel('Input feature x')
plt.ylabel('Probability density (arbitrary units)')
plt.title(r'Signal and Background')
plt.legend(loc='upper right')
plt.axis([-5, 5, 0, 0.7])
plt.grid(True)
plt.show()
Explanation: Create histograms and plot the data.
End of explanation
# Create the histograms, which we will use for calculating the log-likelihood ratio (LLR) below
h_signal = np.histogram(signal, bins=500, range=(-5,5))
h_background = np.histogram(background, bins=500, range=(-5,5))
LL_dict = {} # used only for plotting
LL_dict_bybin = {} # used for computing
for i in range(len(h_signal[0])):
# the if statements are there to account for "binning effects"
if (h_background[0][i] > 0 and h_signal[0][i] > 0):
LL_dict[h_background[1][i]] = np.log(1.*h_signal[0][i]/h_background[0][i])
elif (h_signal[0][i] > 0): # in case background bin = 0
LL_dict[h_background[1][i]] = np.log(100000.) #huge number
elif (h_background[0][i] > 0): # in case signal bin = 0
LL_dict[h_background[1][i]] = np.log(1./100000.) #very small number
else:
LL_dict[h_background[1][i]] = np.log(1.)
LL_dict_bybin[i] = LL_dict[h_background[1][i]]
Explanation: Next, let's compute the log-likelihood ratio (LLR).
End of explanation
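For reference, the quantity binned above is the per-event log-likelihood ratio: the log of the ratio of the signal and background probability densities at x. A minimal sketch of the same idea, written directly from Gaussian densities (this assumes the simple single-Gaussian case with means +1/-1 and unit width, not the superposition case actually generated above):
from scipy.stats import norm
def llr(x, mu_s=1.0, mu_b=-1.0, sigma=1.0):
    # log( p(x | signal) / p(x | background) ) for two Gaussians
    return np.log(norm.pdf(x, mu_s, sigma) / norm.pdf(x, mu_b, sigma))
print(llr(0.0))  # 0 by symmetry: x = 0 is equally likely under signal and background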
# array of 'x' values
xvals = [d for d in LL_dict]
# array of 'y' values
yvals = [LL_dict[d] for d in LL_dict]
xvals = np.array(xvals)
yvals = np.array(yvals)
# Return the indices that result from sorting the array (but do not modify the array itself)
index_sorted = xvals.argsort()
# Sort the arrays
xvals = xvals[index_sorted[::-1]]
yvals = yvals[index_sorted[::-1]]
# Plot the LLR as a function of input feature x
plt.clf()
plt.plot(xvals,yvals)
plt.xlabel('Input feature x')
plt.ylabel('Log Likelihood Ratio')
plt.title(r'LLR as a function of x')
plt.axis([-6,6,-6,6])
plt.show()
Explanation: We can also plot the mapping between the input feature vector and the LLR.
End of explanation
# Number of bins in the histgrams
nbins = 50
# Create histograms
h_signal_yvals = np.histogram([], bins=nbins, range=(-10,10))
h_background_yvals = np.histogram([], bins=nbins, range=(-10,10))
# Fill histograms
for i in range(len(h_signal[0])):
whichbin = np.digitize(LL_dict[h_signal[1][i]], h_signal_yvals[1])
if (whichbin > 49):
whichbin = 49
h_signal_yvals[0][whichbin]+=h_signal[0][i]
h_background_yvals[0][whichbin]+=h_background[0][i]
# Plot the PDF of the LLR
plt.clf()
plt.xlabel('Log Likelihood Ratio')
plt.ylabel('Probability Density (arbitrary units)')
plt.title(r'Signal and Background')
plt.bar(h_signal_yvals[1][:-1],h_signal_yvals[0], width=h_signal_yvals[1][1]-h_signal_yvals[1][0],facecolor='blue', alpha=0.75, label='S')
plt.bar(h_background_yvals[1][:-1],h_background_yvals[0], width=h_background_yvals[1][1]-h_background_yvals[1][0],facecolor='red', alpha=0.75, label='B')
plt.show()
Explanation: Don't worry about the 'bumps' towards the edges of the plot, that's just due to low statistics.
The next step is to compute and plot the PDF of the LLR.
End of explanation
# Make the ROC curve
ROCx = np.zeros(nbins)
ROCy = np.zeros(nbins)
intx = 0.
inty = 0.
for i in range(nbins):
intx+=h_signal_yvals[0][i]
inty+=h_background_yvals[0][i]
for i in range(nbins):
sum_signal = 0.
sum_background = 0.
for j in range(i,len(h_signal_yvals[1])-1):
sum_signal+=h_signal_yvals[0][j]
sum_background+=h_background_yvals[0][j]
ROCx[i] = sum_signal/intx
ROCy[i] = sum_background/inty
# Plot the ROC curve
plt.clf()
plt.axes().set_aspect('equal')
plt.plot(ROCx,ROCy,label="LLR")
plt.plot([0,1],[0,1],linestyle='--',color="#C0C0C0",label="Random")
plt.xlabel('Pr(label signal | signal)')
plt.ylabel('Pr(label signal | background)')
plt.title(r'ROC Curve')
plt.axis([0, 1, 0, 1])
plt.legend(loc='upper left')
plt.show()
Explanation: Finally, we can scan a threshold cut on the LLR and make the ROC curve.
End of explanation
# Say we have two 1x3 arrays (e.g. A=signal, B=background)
A = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=np.float32)
B = np.array([11, 12, 13, 14, 15, 16, 17, 18, 19, 20], dtype=np.float32)
# We want to have labels '1' associated with the signal (A) and labels '0' associated with the background (B)
A_labels = np.ones(10)
B_labels = np.zeros(10)
print('\nA: {}'.format(A))
print('B: {}'.format(B))
print('\nA_labels: {}'.format(A_labels))
print('B_labels: {}\n'.format(B_labels))
# We can concatenate the A and B arrays, and the A_labels and B_labels array like this
C = np.concatenate((A,B))
C_labels = np.concatenate((A_labels,B_labels))
print('\nC: {}'.format(C))
print('C_labels: {}'.format(C_labels))
# Before training on the a dataset one often want to split it up into a 'training set' and a 'test set'
# There is a useful function in scikit-learn that does this for you
# This function also scrambles the examples
from sklearn.model_selection import train_test_split
C_train, C_test, C_labels_train, C_labels_test, = train_test_split(C, C_labels, test_size=3, random_state=1)
# If this seems confusing, taking a look at the print output below should hopefully make things clear
print('\nC_train: {}'.format(C_train))
print('C_labels_train: {}'.format(C_labels_train))
print('\nC_test: {}'.format(C_test))
print('\nC_labels_test: {}'.format(C_labels_test))
Explanation: That's it for the first part of this tutorial. Now let's do some Machine Learning!
Implement and train a simple Neural Network in Keras
Before one gets to the actual training, it's often necessary to transform the data into a numpy array of the correct shape. To better understand what's going on below, let's do a quick example.
End of explanation
A = np.array([1, 2, 3, 4], dtype=np.float32)
print(A)
AT = np.array(A)[np.newaxis].T
print(AT)
Explanation: Another thing is that Keras requires the 'X' inputs to be formatted in a certain way. Here's a simple example of that.
End of explanation
# total number of signal + background events
n_signal = len(signal)
n_background = len(background)
n_total = len(signal) + len(background)
# use 90% of the total number of events for training the network
n_train = int(0.9*n_total)
# use the remaning 10% for testing
n_test = n_total-n_train
# generate an array of ones as signal labels
sig_labels = np.ones(n_signal)
# generate an array of zeros as background labels
bkg_labels = np.zeros(n_background)
# concatenate the signal and background samples
X = np.concatenate((signal,background))
y = np.concatenate((sig_labels,bkg_labels))
# Format the inputs for Keras
X = np.array(X)[np.newaxis].T
# split the dataset into a training and a validation set and scramble the inputs (as illustrated above)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test, = train_test_split(X, y, test_size=n_test, random_state=1)
Explanation: Now that we hopefully understand a little more about how to manipulate numpy arrays, let's prepare our actual data for training a Neural Network in Keras.
If you already executed the cells for generating (or loading) data in the first part of the tutorial, you should be good to go; otherwise, please scroll up and execute the cell that calls:
signal, background = generate_samples(N)
End of explanation
from keras.models import Sequential
model = Sequential()
Explanation: Finally we get to the fun stuff: let's implement our first Neural Network in Keras. To learn more about what all this does, you are strongly encouraged to go and read the Keras documentation.
The core data structure of Keras is a model, a way to organize layers. The simplest type of model is the Sequential model, a linear stack of layers. For more complex architectures, you should use the Keras functional API, which allows you to build arbitrary graphs of layers.
End of explanation
from keras.layers import Dense
# Since our samples are only the X values (of either signal or background), the first layer just has one input dimension
model.add(Dense(1, input_dim=1, kernel_initializer='normal', activation='relu'))
# We then implement only one hidden layer with 8 neurons (you can experiment with changing this number)
n_neurons_hidden = 8
model.add(Dense(n_neurons_hidden, kernel_initializer='normal', activation='relu'))
# Finally we add one output layer
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
Explanation: Now let's add some layers to our model with "Dense". Dense implements the operation: output = activation(dot(input, kernel) + bias) where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True).
In other words, Dense is just the regular densely-connected NN layer.
End of explanation
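As a small numerical illustration of the Dense operation quoted above, here is a hedged numpy sketch (with made-up weights; Keras initializes and stores its own) of what one Dense layer with 8 units and a 'relu' activation computes for a single input value:
x_in = np.array([[0.7]])                    # one sample, one input feature
kernel = np.random.normal(size=(1, 8))      # made-up weights, shape (input_dim, units)
bias = np.zeros(8)                          # bias vector, one entry per unit
pre_activation = np.dot(x_in, kernel) + bias
relu_output = np.maximum(pre_activation, 0.0)   # element-wise 'relu' activation
print(relu_output.shape)                    # (1, 8): one sample, eight unit outputs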
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Print a summary of the model structure
model.summary()
Explanation: Next, we compile our model and choose 'binary_crossentropy' as our loss function and 'ADAM' (adaptive moment estimation) as the optimizer (an extension of stochastic gradient descent).
End of explanation
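For reference, a minimal numpy sketch of the binary cross-entropy loss that 'binary_crossentropy' minimizes, averaged over examples (the labels and predicted probabilities below are made up for illustration):
y_true = np.array([1.0, 0.0, 1.0, 0.0])   # made-up labels
y_prob = np.array([0.9, 0.2, 0.6, 0.4])   # made-up predicted probabilities
bce = -np.mean(y_true * np.log(y_prob) + (1.0 - y_true) * np.log(1.0 - y_prob))
print(bce)   # smaller is better: the predicted probabilities match the labels more closely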
history=model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=3, batch_size=2048)
Explanation: We then train our model on the training data. Keras automatically runs validation on the test data.
You can experiment with the number of 'epochs' and the 'batch size':
- one epoch = One forward pass and one backward pass of all the training examples.
- batch size = The number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.
End of explanation
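One way these two settings interact: the number of gradient updates per epoch is roughly the number of training examples divided by the batch size. A small sketch using the sample sizes and batch size from this notebook (treated here as illustrative numbers):
n_train_examples = int(0.9 * 2 * N)      # 90% of the signal+background samples generated above
batch_size = 2048                        # same value passed to model.fit above
updates_per_epoch = int(np.ceil(n_train_examples / batch_size))
print(updates_per_epoch)  # larger batches mean fewer (but more memory-hungry) updates per epoch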
print(history.history)
Explanation: This very simple model should converge very quickly (even after 1-3 epochs). Training more complicated networks can take a very long time (days, weeks, or even months).
The model history keeps track of the loss and accuracy for each epoch. Note that the training above was set up to run on the validation sample at the end of each epoch:
End of explanation
loss_history=history.history["loss"]
plt.plot(range(len(loss_history)),loss_history)
plt.show()
Explanation: You can plot the loss versus epoch.
End of explanation
# Here we make use of a function in scikit-learn to calculate the ROC curve
from sklearn.metrics import roc_curve, auc
fpr, tpr, _ = roc_curve(y_test, model.predict(X_test))
roc_auc = auc(fpr, tpr)
# Plot the ROC curve from the NN and overlay the ROC curve from the 1D classifier
plt.clf()
plt.axes().set_aspect('equal')
plt.plot(ROCx,ROCy,label="LLR")
plt.plot(tpr,fpr,color='darkorange',label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0,1],[0,1],linestyle='--',color="#C0C0C0",label="Random")
plt.xlabel('Pr(label signal | signal)')
plt.ylabel('Pr(label signal | background)')
plt.title(r'ROC Curve')
plt.axis([0, 1, 0, 1])
plt.legend(loc='upper left')
plt.show()
Explanation: Finally, let's plot the ROC curve and compare it to the result we got from the 1D optimal classifier above.
End of explanation |
9,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1D Wasserstein barycenter demo
This example illustrates the computation of the regularized Wasserstein barycenter
as proposed in [3].
[3] Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L., & Peyré, G. (2015).
Iterative Bregman projections for regularized transportation problems
SIAM Journal on Scientific Computing, 37(2), A1111-A1138.
Step1: Generate data
Step2: Plot data
Step3: Barycenter computation
Step4: Barycentric interpolation | Python Code:
# Author: Remi Flamary <[email protected]>
#
# License: MIT License
import numpy as np
import matplotlib.pylab as pl
import ot
# necessary for 3d plot even if not used
from mpl_toolkits.mplot3d import Axes3D # noqa
from matplotlib.collections import PolyCollection
Explanation: 1D Wasserstein barycenter demo
This example illustrates the computation of the regularized Wasserstein barycenter
as proposed in [3].
[3] Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L., & Peyré, G. (2015).
Iterative Bregman projections for regularized transportation problems
SIAM Journal on Scientific Computing, 37(2), A1111-A1138.
End of explanation
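For orientation, the regularized barycenter computed below can be written as the minimizer of a weighted sum of entropically regularized transport costs (a sketch of the objective following [3], with barycentric weights $w_i$ summing to one and $W_{reg}$ the regularized cost used by ot.bregman.barycenter):
$\hat{a} = \arg\min_{a} \sum_i w_i \, W_{reg}(a, a_i)$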
#%% parameters
n = 100 # nb bins
# bin positions
x = np.arange(n, dtype=np.float64)
# Gaussian distributions
a1 = ot.datasets.make_1D_gauss(n, m=20, s=5) # m= mean, s= std
a2 = ot.datasets.make_1D_gauss(n, m=60, s=8)
# creating matrix A containing all distributions
A = np.vstack((a1, a2)).T
n_distributions = A.shape[1]
# loss matrix + normalization
M = ot.utils.dist0(n)
M /= M.max()
Explanation: Generate data
End of explanation
#%% plot the distributions
pl.figure(1, figsize=(6.4, 3))
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.tight_layout()
Explanation: Plot data
End of explanation
#%% barycenter computation
alpha = 0.2 # 0<=alpha<=1
weights = np.array([1 - alpha, alpha])
# l2bary
bary_l2 = A.dot(weights)
# wasserstein
reg = 1e-3
bary_wass = ot.bregman.barycenter(A, M, reg, weights)
pl.figure(2)
pl.clf()
pl.subplot(2, 1, 1)
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.subplot(2, 1, 2)
pl.plot(x, bary_l2, 'r', label='l2')
pl.plot(x, bary_wass, 'g', label='Wasserstein')
pl.legend()
pl.title('Barycenters')
pl.tight_layout()
Explanation: Barycenter computation
End of explanation
#%% barycenter interpolation
n_alpha = 11
alpha_list = np.linspace(0, 1, n_alpha)
B_l2 = np.zeros((n, n_alpha))
B_wass = np.copy(B_l2)
for i in range(0, n_alpha):
alpha = alpha_list[i]
weights = np.array([1 - alpha, alpha])
B_l2[:, i] = A.dot(weights)
B_wass[:, i] = ot.bregman.barycenter(A, M, reg, weights)
#%% plot interpolation
pl.figure(3)
cmap = pl.cm.get_cmap('viridis')
verts = []
zs = alpha_list
for i, z in enumerate(zs):
ys = B_l2[:, i]
verts.append(list(zip(x, ys)))
ax = pl.gcf().gca(projection='3d')
poly = PolyCollection(verts, facecolors=[cmap(a) for a in alpha_list])
poly.set_alpha(0.7)
ax.add_collection3d(poly, zs=zs, zdir='y')
ax.set_xlabel('x')
ax.set_xlim3d(0, n)
ax.set_ylabel('$\\alpha$')
ax.set_ylim3d(0, 1)
ax.set_zlabel('')
ax.set_zlim3d(0, B_l2.max() * 1.01)
pl.title('Barycenter interpolation with l2')
pl.tight_layout()
pl.figure(4)
cmap = pl.cm.get_cmap('viridis')
verts = []
zs = alpha_list
for i, z in enumerate(zs):
ys = B_wass[:, i]
verts.append(list(zip(x, ys)))
ax = pl.gcf().gca(projection='3d')
poly = PolyCollection(verts, facecolors=[cmap(a) for a in alpha_list])
poly.set_alpha(0.7)
ax.add_collection3d(poly, zs=zs, zdir='y')
ax.set_xlabel('x')
ax.set_xlim3d(0, n)
ax.set_ylabel('$\\alpha$')
ax.set_ylim3d(0, 1)
ax.set_zlabel('')
ax.set_zlim3d(0, B_l2.max() * 1.01)
pl.title('Barycenter interpolation with Wasserstein')
pl.tight_layout()
pl.show()
Explanation: Barycentric interpolation
End of explanation |
9,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulating the 1945 Makran Tsunami using Thetis
The 1945 Makran Tsunami was a large tsunami which originated due to the 1945 Balochistan earthquake. The resulting tsunami is believed to have killed around 4000 people along the coast of modern-day Pakistan, India, Iran and Oman. Tidal records indicate that the tsunami was recorded as far away as the islands of Seychelles and Minicoy. Modern simulations of the tsunami indicate that tsunami elevations would have been observed across the islands of the Maldives as well.
Here we will model the tsunami using elevations from actual fault estimations.
<img src="figures/Map_of_region.png" style="height
Step1: Importing the mesh
Next we will import the mesh (this mesh was actually created using qmesh mentioned in yesterday's lecture). Additionally we will visualise the mesh as well using the firedrake plot utility function. The plot function provides an easy way to plot of firedrake functions.
Step2: The mesh is created using a UTM-based coordinate system, which divides the world in different zones. In each zone coordinates can be accurately represented by a flat x,y plane. The zone that covers our region is UTM zone 43 which is valid between 72 and 78 degrees longitude East.
Setup the bathymetry
Next we will define a bathymetry function. For the purpose of this simulation, to keep things simple, we will assign a constant depth of 2000m. Before creating the bathymetry function and assigning depth we will visualise the bathymetry of the area using the GEBCO bathymetry dataset.
<img src="figures/Depth.png" style="height
Step3: Initial Elevation
Now we will define a function for the initial elevation, using the same P1 function space. This initial elevation function will set the values for the initial elevation in our simulation. It contains a perturbation in the North of the domain that represents the initial stage of a tsunami.
Step4: After defining the initial elevation function we need to set the values of the function, across our mesh. The initial elevation is given to us stored on a regular lon, lat grid. In our simulation we will be using a mesh defined in UTM zone 43 coordinates, and thus we need to perform a coordinate transformation and interpolate between the two grids.
First we extract the coordinates of the mesh (in UTM zone 43) and the values of the initial value function as numpy arrays
Step5: Next we will import the data that we wish to interpolate onto the initial elevation function. Here the data that we wish to interpolate onto our elevation function is the vertical displacement of the free surface due to the earthquake. Given the earthquake fault parameters, which are available from various sources such as USGS, these deformations can be obtained through various means. Here we will use the output obtained from an 'Okada model' utilising single-fault parameters as provided in Arjun et al. We also print the first row of the data; the columns are arranged as Longitude, Latitude, and x, y and z displacement.
Step6: In order to interpolate the data values onto our mesh, we will use a nearest-neighbour interpolator from scipy. We extract the first two columns (lon and lat) as the coordinates, and the fourth column (vertical displacement) as the values we want to interpolate
Step7: intp is now an interpolator object that can interpolate the data at any location using intp(lon, lat). However since our mesh is in Universal Transverse Mercator coordinate system (UTM) coordinates, we first convert the coordinates of the mesh points into lat, lon. This is done using the pyproj library
Step8: lon and lat are now two numpy arrays with longitude and latitude of all mesh points. These can be fed straight into intp to return an array of the interpolated values in these points. By assigning it to the init_elev_data array these will become the values of the init_elev function.
Step9: Let's plot this to check
Step10: Setting Solver Options
Now we have everything we need to setup the solver and set its options. The setup is very similar to what was discussed for the 2D channel example.
Step11: Boundary and Initial Conditions
Next we define the boundary conditions of our model. We will set the velocities at the coastlines and open boundaries to zero. At the open boundaries we set both the normal velocity and elevation to zero, this leads to weakly reflective boundaries.
Step12: Additionally we set the initial conditions for our simulation. Here we use the init_elev we set up above. Initial velocities are zero (by default)
Step13: Now set the solver iterator to start the simulation running. Note that since we do not have any time-varying forcing, we do not need an update_forcings function.
Step14: Further Practice
a. Instead of the constant bathymetry, load the bathymetry file provided in the runfiles folder, visualise the bathymetry, and use this bathymetry to run the model. Notice any difference?
Hint, to load the bathymetry, use | Python Code:
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import scipy.interpolate # used for interpolation
import pyproj # used for coordinate transformations
import math
from thetis import *
Explanation: Simulating the 1945 Makran Tsunami using Thetis
The 1945 Makran Tsunami was a large tsunami which originated due to the 1945 Balochistan earthquake. The resulting tsunami is believed to have killed around 4000 people along the coast of modern-day Pakistan, India, Iran and Oman. Tidal records indicate that the tsunami was recorded as far away as the islands of Seychelles and Minicoy. Modern simulations of the tsunami indicate that tsunami elevations would have been observed across the islands of the Maldives as well.
Here we will model the tsunami using elevations from actual fault estimations.
<img src="figures/Map_of_region.png" style="height:1000px, width:1000px">
As usual we begin by importing the required python libraries and modules.
End of explanation
mesh = Mesh('runfiles/mesh.msh')
plt.figure(figsize=(12, 8))
ax = plt.gca()
triplot(mesh, axes=ax);
Explanation: Importing the mesh
Next we will import the mesh (this mesh was actually created using qmesh mentioned in yesterday's lecture). Additionally we will visualise the mesh as well using the firedrake plot utility function. The plot function provides an easy way to plot of firedrake functions.
End of explanation
P1 = FunctionSpace(mesh, "CG", 1)
bathymetry_2d = Function(P1, name='Bathymetry')
depth = 2000.0
bathymetry_2d.assign(depth)
tricontourf(bathymetry_2d);
Explanation: The mesh is created using a UTM-based coordinate system, which divides the world in different zones. In each zone coordinates can be accurately represented by a flat x,y plane. The zone that covers our region is UTM zone 43 which is valid between 72 and 78 degrees longitude East.
Setup the bathymetry
Next we will define a bathymetry function. For the purpose of this simulation, to keep things simple, we will assign a constant depth of 2000m. Before creating the bathymetry function and assigning depth we will visualise the bathymetry of the area using the GEBCO bathymetry dataset.
<img src="figures/Depth.png" style="height:1000px, width:1000px">
To start off with, however, we will simply use a constant depth of 2000m. The bathymetry function will be defined within the function space of piecewise linear functions ("CG" for Continuous Galerkin and a polynomial degree of 1). We will again visualise the depth to confirm that the value is assigned across the domain.
End of explanation
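As a quick check of the UTM statement above, here is a minimal pyproj sketch (a hypothetical point chosen inside the 72-78 degrees East band quoted above; EPSG:32643 is WGS 84 / UTM zone 43N) converting a lon/lat location to UTM eastings and northings in metres:
utm43 = pyproj.Proj(init='epsg:32643')    # same projection definition used later in this notebook
x_utm, y_utm = utm43(74.0, 20.0)          # hypothetical (lon, lat) point inside UTM zone 43
print(x_utm, y_utm)                       # plan-view coordinates in metres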
init_elev = Function(P1, name = 'init_elev')
Explanation: Initial Elevation
Now we will define a function for the initial elevation, using the same P1 function space. This initial elevation function will set the values for the initial elevation in our simulation. It contains a perturbation in the North of the domain that represents the initial stage of a tsunami.
End of explanation
mesh_coordinates = mesh.coordinates.dat.data
init_elev_data = init_elev.dat.data
Explanation: After defining the initial elevation function we need to set the values of the function, across our mesh. The initial elevation is given to us stored on a regular lon, lat grid. In our simulation we will be using a mesh defined in UTM zone 43 coordinates, and thus we need to perform a coordinate transformation and interpolate between the two grids.
First we extract the coordinates of the mesh (in UTM zone 43) and the values of the initial value function as numpy arrays:
End of explanation
data = np.genfromtxt('runfiles/outputs.txt')
print (np.round(data[0],5))
Explanation: Next we will import the data that we wish to interpolate onto the initial elevation function. Here the data that we wish to interpolate onto our elevation function is the vertical displacement of the free surface due to the earthquake. Given the earthquake fault parameters, which are available from various sources such as USGS, these deformations can be obtained through various means. Here we will use the output obtained from an 'Okada model' utilising single-fault parameters as provided in Arjun et al. We also print the first row of the data; the columns are arranged as Longitude, Latitude, and x, y and z displacement.
End of explanation
intp = scipy.interpolate.NearestNDInterpolator(data[:,0:2], data[:,4])
Explanation: In order to interpolate the data values onto our mesh, we will use a nearest-neighbour interpolator from scipy. We extract the first two columns (lon and lat) as the coordinates, and the fourth column (vertical displacement) as the values we want to interpolate:
End of explanation
outproj = pyproj.Proj(init='epsg:4326')
inproj = pyproj.Proj(init='epsg:32643')
lon, lat = pyproj.transform(inproj, outproj, mesh_coordinates[:,0], mesh_coordinates[:,1])
Explanation: intp is now an interpolator object that can interpolate the data at any location using intp(lon, lat). However, since our mesh is in Universal Transverse Mercator (UTM) coordinates, we first convert the coordinates of the mesh points into lat, lon. This is done using the pyproj library:
End of explanation
init_elev_data[:] = intp(lon, lat)
Explanation: lon and lat are now two numpy arrays with longitude and latitude of all mesh points. These can be fed straight into intp to return an array of the interpolated values in these points. By assigning it to the init_elev_data array these will become the values of the init_elev function.
End of explanation
plt.figure(figsize=(12, 8))
ax = plt.gca()
tricontourf(init_elev, axes=ax, cmap=matplotlib.cm.coolwarm, levels=50);
Explanation: Let's plot this to check:
End of explanation
solver_obj = solver2d.FlowSolver2d(mesh, bathymetry_2d)
options = solver_obj.options
# total duration in seconds
options.simulation_export_time = 900.0
# export interval in seconds
options.simulation_end_time = 3600.0 * 2
options.timestepper_type = 'CrankNicolson'
options.timestep = 5.0
options.output_directory = 'outputs_makran_tsunami'
options.fields_to_export = []
options.fields_to_export_hdf5 = ['elev_2d', 'uv_2d']
Explanation: Setting Solver Options
Now we have everything we need to set up the solver and set its options. The setup is very similar to what was discussed for the 2D channel example.
End of explanation
solver_obj.bnd_functions['shallow_water'] = {
100: {'un': 0.0},
200: {'un': 0.0, 'elev' :0.0}
}
Explanation: Boundary and Initial Conditions
Next we define the boundary conditions of our model. We will set the velocities at the coastlines and open boundaries to zero. At the open boundaries we set both the normal velocity and elevation to zero, which leads to weakly reflective boundaries.
End of explanation
solver_obj.assign_initial_conditions(elev=init_elev)
Explanation: Additionally we set the initial conditions for our simulation. Here we use the init_elev we set up above. Initial velocities are zero (by default):
End of explanation
solver_obj.iterate()
elev = Function(solver_obj.function_spaces.H_2d, name='elev_2d')
uv = Function(solver_obj.function_spaces.U_2d, name='uv_2d')
last_idx = solver_obj.i_export
nrows = math.ceil(last_idx/2)
fig, axes = plt.subplots(nrows, 2, figsize=(16, 6*nrows))
for idx, ax in enumerate(axes.flatten()):
filename = os.path.join(options.output_directory, 'hdf5','Elevation2d_%05d' % idx)
dc = DumbCheckpoint(filename, mode=FILE_READ)
dc.load(elev)
dc.close()
# by specifying cmap=None above, we avoid displaying a colorbar
tricontourf(elev, axes=ax, cmap=matplotlib.cm.coolwarm, levels=50)
# Firedrake sets an automatic colorbar range which therefore changes per timestep
# instead, we want to fix it:
cbar = ax.collections[-1].set_clim(-5, 5)
# we do want to set an appropriate colormap
cbar = ax.collections[-1].set_cmap(matplotlib.cm.coolwarm)
ax.axis('equal')
ax.title.set_text('Export no. %d at t=%.0f' % (idx, options.simulation_export_time*idx))
plt.tight_layout()
Explanation: Now set the solver iterator to start the simulation running. Note that since we do not have any time-varying forcing, we do not need an update_forcings function.
End of explanation
dc = DumbCheckpoint('runfiles/bathy', mode=FILE_READ)
dc.load(bathymetry_2d, name='bathy');
Explanation: Further Practice
a. Instead of the constant bathymetry, load the bathymetry file provided in the runfiles folder, visualise the bathymetry, and use this bathymetry to run the model. Notice any difference?
Hint, to load the bathymetry, use:
End of explanation |
9,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aligning MRS voxels with the anatomy
Several steps in the analysis and interpretation of the MRS data require knowledge of the anatomical location of the volume from which MRS data was acquired. In particular, we would like to know how much of the volume contains gray matter, relative to other tissue components, such as white matter, CSF, etc. In order to infer this, we need to acquire a T1-weighted MRI scan in the same session, and (assuming the subject hasn't moved too much), use the segmentation of the T1w image into different tissue types (e.g. using Freesurfer).
However, in order to do that, we first need to align the MRS voxel with the T1w data, so that we can extract these quantities.
Step1: In order to be able to align the files with regard to each other, they need to both encode an affine transformation relative to the scanner space. For a very thorough introduction to these transformations and their utility, see this tutorial
Step2: If you read the aforementioned tutorial, this will make sense. The diagonal of the top left 3 x 3 matrix encodes the resolution of the voxels used in each of the acquisitions (in mm). The MRS data has a single 2.5 x 2.5 x 2.5 cm isotropic voxel, and the T1 has (approximately) 0.9 x 0.9 x 0.9 mm isotropic voxels. They were both acquired without any rotation relative to the scanner coordinate system, which is why the off-diagonal terms of the top left 3 x 3 matrix are all zeros. The 4th column of each of these matrices encodes the xyz shift (again, in mm) relative to the scanner isocenter.
Composing these two transformations together tells us how to align the two volumes relative to each other. In particular, we might ask where in the T1 coordinate system the center of the MRS voxel is. Since we are multiplying the MRS affine by the inverse of the T1 affine, the composed transform takes points given in MRS voxel coordinates (written as homogeneous 4-vectors, with a 1 appended) into T1 voxel coordinates.
Step3: This allows us to compute the location of the center of the MRS voxel in the T1 volume coordinates, and the locations of the corners of the voxel
Step4: Using this information, we can manually create a volume that only contains the T1-weighted data in the MRS ROI
Step5: To view this, we will create a rather rough orthographic viewer of the T1 data, using IPython's interactive widget system. We add the data in the MRS ROI using a different color map, so that we can see where it is in the context of the anatomy
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import os.path as op
import nibabel as nib
import MRS.data as mrd
import IPython.html.widgets as wdg
import IPython.display as display
mrs_nifti = nib.load(op.join(mrd.data_folder, '12_1_PROBE_MEGA_L_Occ.nii.gz'))
t1_nifti = nib.load(op.join(mrd.data_folder, '5062_2_1.nii.gz'))
Explanation: Aligning MRS voxels with the anatomy
Several steps in the analysis and interpretation of the MRS data require knowledge of the anatomical location of the volume from which the MRS data was acquired. In particular, we would like to know how much of the volume contains gray matter, relative to other tissue components such as white matter, CSF, etc. In order to infer this, we need to acquire a T1-weighted MRI scan in the same session and (assuming the subject hasn't moved too much) use the segmentation of the T1w image into different tissue types (e.g. using Freesurfer).
However, in order to do that, we first need to align the MRS voxel with the T1w data, so that we can extract these quantities.
End of explanation
mrs_aff = mrs_nifti.get_affine()
t1_aff = t1_nifti.get_affine()
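# Note: get_affine() is deprecated in newer nibabel releases; the .affine
# property (e.g. mrs_nifti.affine) is the modern equivalent.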
print("The affine transform for the MRS data is:")
print(mrs_aff)
print("The affine transform for the T1 data is:")
print(t1_aff)
Explanation: In order to be able to align the files with regard to each other, they need to both encode an affine transformation relative to the scanner space. For a very thorough introduction to these transformations and their utility, see this tutorial
End of explanation
composed_affine = np.dot(np.linalg.pinv(t1_aff), mrs_aff)
Explanation: If you read the aforementioned tutorial, this will make sense. The diagonal of the top left 3 x 3 matrix encodes the resolution of the voxels used in each of the acquisitions (in mm). The MRS data has a single 2.5 x 2.5 x 2.5 cm$^3$ isotropic voxel, and the T1 has (approximately) 0.9 x 0.9 x 0.9 mm$^3$ isotropic voxels. They were both acquired without any rotation relative to the scanner coordinate system, which is why the off-diagonal terms of the top left 3 x 3 matrix are all zeros. The 4th column of each of these matrices encodes the xyz shift (again, in mm) relative to the scanner isocenter.
Composing these two transformations together tells us how to align the two volumes relative to each other. In particular, we might ask where in the T1 coordinate system the center of the MRS voxel is. Since we are multiplying the MRS affine (MRS voxel indices to scanner mm) by the inverse of the T1 affine (scanner mm back to T1 voxel indices), the composed transform maps MRS voxel coordinates directly into T1 voxel coordinates.
End of explanation
mrs_center = [0,0,0,1]
t1_center = np.round(np.dot(composed_affine, mrs_center)).astype(int)
mrs_corners = [[-0.5, -0.5, -0.5, 1],
[-0.5, -0.5, 0.5, 1],
[-0.5, 0.5, -0.5, 1],
[-0.5, 0.5, 0.5, 1],
[ 0.5, -0.5, -0.5, 1],
[ 0.5, -0.5, 0.5, 1],
[ 0.5, 0.5, -0.5, 1],
[ 0.5, 0.5, 0.5, 1]]
t1_corners = [np.round(np.dot(composed_affine, c)).astype(int) for c in mrs_corners]
t1_corners
Explanation: This allows us to compute the location of the center of the MRS voxel in the T1 volume coordinates, and the locations of the corners of the voxel:
End of explanation
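As a side note (not in the original notebook), the bounding box implied by these corners can also be derived programmatically; the ranges printed here should roughly match the hard-coded slice indices used in the next cell:
corners = np.array(t1_corners)[:, :3]
x0, y0, z0 = corners.min(axis=0)
x1, y1, z1 = corners.max(axis=0)
print('x: %d-%d, y: %d-%d, z: %d-%d' % (x0, x1, y0, y1, z0, z1))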
t1_data = t1_nifti.get_data().squeeze()
mrs_roi = np.ones_like(t1_data) * np.nan
mrs_roi[144:172, 176:204, 78:106] = t1_data[144:172, 176:204, 78:106]
Explanation: Using this information, we can manually create a volume that only contains the T1-weighted data in the MRS ROI:
End of explanation
def show_voxel(x=t1_center[0], y=t1_center[1], z=t1_center[2]):
fig = plt.figure()
ax = fig.add_subplot(221)
ax.axis('off')
ax.imshow(np.rot90(t1_data[:, :, z]), matplotlib.cm.bone)
ax.imshow(np.rot90(mrs_roi[:, :, z]), matplotlib.cm.jet)
ax.plot([x, x], [0, t1_data.shape[0]], color='w')
ax.plot([0, t1_data.shape[1]], [y, y], color='w')
ax.set_ylim([0, t1_data.shape[0]])
ax.set_xlim([0, t1_data.shape[1]])
ax = fig.add_subplot(222)
ax.axis('off')
ax.imshow(np.rot90(t1_data[:, -y, :]), matplotlib.cm.bone)
ax.imshow(np.rot90(mrs_roi[:, -y, :]), matplotlib.cm.jet)
ax.plot([x, x], [0, t1_data.shape[1]], color='w')
ax.plot([0, t1_data.shape[1]], [z, z], color='w')
ax.set_xlim([0, t1_data.shape[0]])
ax.set_ylim([t1_data.shape[2], 0])
ax = fig.add_subplot(223)
ax.axis('off')
ax.imshow(np.rot90(t1_data[x, :, :]), matplotlib.cm.bone)
ax.imshow(np.rot90(mrs_roi[x, :, :]), matplotlib.cm.jet)
ax.plot([t1_data.shape[1]-y, t1_data.shape[1]-y], [0, t1_data.shape[1]], color='w')
ax.plot([0, t1_data.shape[1]], [z, z], color='w')
ax.set_xlim([0, t1_data.shape[1]])
ax.set_ylim([t1_data.shape[2], 0])
fig.set_size_inches(10, 10)
return fig
def voxel_viewer(t1_data, mrs_roi):
pb_widget = wdg.interactive(show_voxel,
t1_data = wdg.fixed(t1_data),
mrs_roi = wdg.fixed(mrs_roi),
x=wdg.IntSliderWidget(min=0, max=t1_data.shape[0]-1, value=155),
y=wdg.IntSliderWidget(min=0, max=t1_data.shape[1]-1, value=65),
z=wdg.IntSliderWidget(min=0, max=t1_data.shape[2]-1, value=92)
)
display.display(pb_widget)
voxel_viewer(t1_data, mrs_roi)
Explanation: To view this, we will create a rather rough orthographic viewer of the T1 data, using IPython's interactive widget system. We add the data in the MRS ROI using a different color map, so that we can see where it is in the context of the anatomy
End of explanation |
9,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Then we can visualize Lena with the following command (don't forget to switch the BGR
ordering of the color channels to RGB)
Step2: The image itself is stored in a 3D array of size (height, width, depth) containing
blue/green/red contributions as integers from 0 to 255
Step3: Because every color channel has 256 possible values, the number of possible colors is 256 x
256 x 256, or 16,777,216, as mentioned prior to this. One way to visualize the sheer amount
of different colors in the image is to reshape the data to a cloud of points in a 3D color
space. We also scale the colors to lie between 0 and 1
Step4: In this 3D color space, every row of data is a data point. In order to visualize this data, we
will write a function called plot_pixels, which takes as input the data matrix and a figure
title. Optionally, we also let the user specify the colors to use. For the sake of efficiency, we
can also limit the analysis to a subset of N pixels
Step5: We can then call the function with our data matrix (data) and an appropriate title
Step6: Reducing the color palette using k-means
Now let's reduce these 16 million colors to just 16 colors by telling $k$-means to cluster all
color variations into 16 distinct clusters. We use the before mentioned procedure, but now
specify 16 as the number of clusters
Step7: The resulting cluster corresponds to the 16 colors of our reduced color palette. Visual
inspection of the centers array reveals that all colors have three entries—B, G, and R—with
values between 0 and 1
Step8: These 16 colors correspond to the 16 cluster labels contained in the labels vector. So we
want all data points with label 0 to be colored according to row 0 in the centers array; all
data points with label 1 to be colored according to row 1 in the centers array; and so on. In
other words, we want to use labels as an index into the centers array—these are our
new colors
Step9: We can plot the data again, but this time, we will use new_colors to color the data points
accordingly
Step10: In order to see the effect of this recoloring, we have to plot new_colors as an image. In
order to get from the image to the data matrix, we flattened the earlier image. Now we need
to do the inverse to get back to the image, which is to reshape new_colors according to the
shape of the Lena image
Step11: Then we can visualize the recolored Lena image like any other image | Python Code:
import cv2
import numpy as np
lena = cv2.imread('data/lena.jpg', cv2.IMREAD_COLOR)
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rc('axes', **{'grid': False})
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Understanding k-means clustering | Contents | Classifying handwritten digits using k-means >
Compressing Color Spaces Using k-Means
One exciting application of $k$-means is the compression of image color spaces. For example,
a typical true-color image comes with a 24-bit color depth, allowing for a total of 16,777,216
color variations. However, in most images, a large number of the colors will be unused, and
many of the pixels in the image will have similar or identical colors.
Alternatively, we can use k-means to reduce the color palette to, for example, 16 color
variations. The trick here is to think of the cluster centers as the reduced color palette. Then
$k$-means will automatically organize the millions of colors in the original image into the
appropriate number of colors!
Visualizing the true-color palette
Let's have a look at a particular image:
End of explanation
plt.figure(figsize=(10, 6))
plt.imshow(cv2.cvtColor(lena, cv2.COLOR_BGR2RGB))
Explanation: Then we can visualize Lena with the following command (don't forget to switch the BGR
ordering of the color channels to RGB):
End of explanation
lena.shape
Explanation: The image itself is stored in a 3D array of size (height, width, depth) containing
blue/green/red contributions as integers from 0 to 255:
End of explanation
img_data = lena / 255.0 # use 0...1 scale
img_data = img_data.reshape((-1, 3))
img_data.shape
Explanation: Because every color channel has 256 possible values, the number of possible colors is 256 x
256 x 256, or 16,777,216, as mentioned prior to this. One way to visualize the sheer amount
of different colors in the image is to reshape the data to a cloud of points in a 3D color
space. We also scale the colors to lie between 0 and 1:
End of explanation
def plot_pixels(data, title, colors=None, N=10000):
if colors is None:
colors = data
# choose a random subset
rng = np.random.RandomState(0)
i = rng.permutation(data.shape[0])[:N]
colors = colors[i]
pixel = data[i].T
R, G, B = pixel[0], pixel[1], pixel[2]
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
ax[0].scatter(R, G, color=colors, marker='.')
ax[0].set(xlabel='Red', ylabel='Green', xlim=(0, 1), ylim=(0, 1))
ax[1].scatter(R, B, color=colors, marker='.')
ax[1].set(xlabel='Red', ylabel='Blue', xlim=(0, 1), ylim=(0, 1))
fig.suptitle(title, size=20);
Explanation: In this 3D color space, every row of data is a data point. In order to visualize this data, we
will write a function called plot_pixels, which takes as input the data matrix and a figure
title. Optionally, we also let the user specify the colors to use. For the sake of efficiency, we
can also limit the analysis to a subset of N pixels:
End of explanation
plot_pixels(img_data, title='Input color space: 16 million possible colors')
Explanation: We can then call the function with our data matrix (data) and an appropriate title:
End of explanation
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
flags = cv2.KMEANS_RANDOM_CENTERS
compactness, labels, centers = cv2.kmeans(img_data.astype(np.float32),
16, None, criteria, 10, flags)
Explanation: Reducing the color palette using k-means
Now let's reduce these 16 million colors to just 16 colors by telling $k$-means to cluster all
color variations into 16 distinct clusters. We use the before mentioned procedure, but now
specify 16 as the number of clusters:
End of explanation
centers
Explanation: The resulting cluster corresponds to the 16 colors of our reduced color palette. Visual
inspection of the centers array reveals that all colors have three entries—B, G, and R—with
values between 0 and 1:
End of explanation
new_colors = centers[labels].reshape((-1, 3))
Explanation: These 16 colors correspond to the 16 cluster labels contained in the labels vector. So we
want all data points with label 0 to be colored according to row 0 in the centers array; all
data points with label 1 to be colored according to row 1 in the centers array; and so on. In
other words, we want to use labels as an index into the centers array—these are our
new colors:
End of explanation
plot_pixels(img_data, colors=new_colors, title="Reduced color space: 16 colors")
Explanation: We can plot the data again, but this time, we will use new_colors to color the data points
accordingly:
End of explanation
lena_recolored = new_colors.reshape(lena.shape)
Explanation: In order to see the effect of this recoloring, we have to plot new_colors as an image. In
order to get from the image to the data matrix, we flattened the earlier image. Now we need
to do the inverse to get back to the image, which is to reshape new_colors according to the
shape of the Lena image:
End of explanation
plt.figure(figsize=(10, 6))
plt.imshow(cv2.cvtColor(lena_recolored, cv2.COLOR_BGR2RGB));
plt.title('16-color image')
Explanation: Then we can visualize the recolored Lena image like any other image:
End of explanation |
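As a quick sanity check (not part of the book's code), we can count the distinct colors before and after recoloring; the recolored image should contain at most 16:
n_before = len(np.unique(lena.reshape(-1, 3), axis=0))
n_after = len(np.unique((lena_recolored * 255).astype(np.uint8).reshape(-1, 3), axis=0))
print('unique colors before: %d, after: %d' % (n_before, n_after))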
9,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook arguments
measurement_id (int)
Step1: Selecting a data file
Step2: Data load and Burst search
Load and process the data
Step3: Compute background and burst search
Step4: Perform a background plot as a function of the channel
Step5: Let's take a look at the photon waiting times histograms and at the fitted background rates
Step6: Using dplot exactly in the same way as for the single-spot data has now generated 8 subplots, one for each channel.
Let's plot a timetrace for the background to see is there are significat variations during the measurement
Step7: We can look at the timetrace of the photon stream (binning)
Step8: Burst selection and FRET
Step9: Selecting bursts by size
Step10: 3-Gaussian peaks
Step13: Fit
Step14: $$f(x) = \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$
$$\log f(x) = \log \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} = \log{A} -\log{\sigma} - \log\sqrt{2\pi} -\frac{(x - \mu)^2}{2 \sigma^2}$$
$$w_1 \; f_1(x) + w_2 \; f_2(x) + w_3 \; f_3(x)$$
$$\log (w_1 \; f_1(x)) = \log{w_1} + \log{f_1(x)}$$
Step15: Kinetics
Definitions
Step16: Moving-window processing
Step17: Burst-data
Step18: Population fraction | Python Code:
import time
from pathlib import Path
import pandas as pd
from scipy.stats import linregress
from IPython.display import display
from fretbursts import *
sns = init_notebook(fs=14)
import lmfit; lmfit.__version__
import phconvert; phconvert.__version__
Explanation: Notebook arguments
measurement_id (int): Select the measurement. Valid values: 0, 1, 2.
windows (tuple of ints): List of integration window durations (seconds).
8-spot kinetics
<p class="lead">This notebook executes the realtime-kinetics analysis.</p>
The first cell of this notebook selects which measurement is analyzed.
Measurements can be processed one-by-one, by manually running this notebook,
or in batch by using the notebook: "8-spot bubble-bubble kinetics - Run-All".
Loading the software
End of explanation
dir_ = 'data/multispot_'
filenames = [
dir_+'2015-07-31_bubble-bubble-run-off-kinetics-800mW-steer110_12.hdf5',
dir_+'2015-07-29_bubble-bubble-open-complex-run-off-kinetics-600mW-steer110_7.hdf5',
dir_+'2015-07-30_bubble-bubble-run-off-kinetics-800mW-steer110_8.hdf5']
start_times = [900, 600, 900] # time of NTP injection and start of kinetics
filename = filenames[measurement_id]
start_time = start_times[measurement_id]
filename
import os
assert os.path.exists(filename)
Explanation: Selecting a data file
End of explanation
d = loader.photon_hdf5(filename)
d.time_max
Explanation: Data load and Burst search
Load and process the data:
End of explanation
d.calc_bg(bg.exp_fit, time_s=10, tail_min_us='auto', F_bg=1.7)
Explanation: Compute background and burst search:
End of explanation
mch_plot_bg(d)
Explanation: Plot the background as a function of the channel:
End of explanation
dplot(d, hist_bg);
Explanation: Let's take a look at the photon waiting times histograms and at the fitted background rates:
End of explanation
dplot(d, timetrace_bg);
xlim(start_time - 150, start_time + 150)
Explanation: Using dplot exactly in the same way as for the single-spot data has now generated 8 subplots, one for each channel.
Let's plot a timetrace for the background to see if there are significant variations during the measurement
End of explanation
#dplot(d, timetrace)
#xlim(2, 3); ylim(-100, 100);
Explanation: We can look at the timetrace of the photon stream (binning):
End of explanation
d.burst_search(m=10, F=5)
ds = d.select_bursts(select_bursts.size, th1=30)
Explanation: Burst selection and FRET
End of explanation
ds0 = ds.select_bursts(select_bursts.time, time_s1=0, time_s2=start_time-10)
dplot(ds0, hist_fret, pdf=False);
dm0 = ds0.collapse()
dplot(dm0, hist_fret, pdf=False);
weights = 'size'
bext.bursts_fitter(dm0, weights=weights)
dm0.E_fitter.fit_histogram(mfit.factory_three_gaussians(p1_center=0.05, p2_center=0.6, p3_center=0.9), verbose=False)
dplot(dm0, hist_fret, show_model=True, weights=weights);
dm0.E_fitter.params
weights = None
bext.bursts_fitter(dm0, weights=weights)
dm0.E_fitter.fit_histogram(mfit.factory_three_gaussians(p1_center=0.05, p2_center=0.6, p3_center=0.9), verbose=False)
dplot(dm0, hist_fret, show_model=True, weights=weights);
dm0.E_fitter.params
Explanation: Selecting bursts by size
End of explanation
def gauss3(**params0):
peak1 = lmfit.models.GaussianModel(prefix='p1_')
peak3 = lmfit.models.GaussianModel(prefix='p3_')
peak2 = lmfit.models.GaussianModel(prefix='p2_')
model = peak1 + peak2 + peak3
model.set_param_hint('p1_center', **{'value': 0.0, 'min': 0.0, 'max': 0.2, **params0.get('p1_center', {})})
model.set_param_hint('p2_center', **{'value': 0.5, 'min': 0.0, 'max': 1.0, **params0.get('p2_center', {})})
model.set_param_hint('p3_center', **{'value': 0.9, 'min': 0.8, 'max': 1.0, **params0.get('p3_center', {})})
for sigma in ['p%d_sigma' % i for i in (1, 2, 3)]:
model.set_param_hint(sigma, **{'value': 0.02, 'min': 0.01, **params0.get(sigma, {})})
for ampl in ['p%d_amplitude' % i for i in (1, 2, 3)]:
model.set_param_hint(ampl, **{'value': 0.333, 'min': 0.01, **params0.get(ampl, {})})
model.name = '3 gauss peaks'
return model
#%matplotlib notebook
#fig, ax = plt.subplots(figsize=(12, 8))
#dplot(dm0, scatter_fret_size, ax=ax)
bext.bursts_fitter(dm0, weights=None)
dm0.E_fitter.fit_histogram(gauss3(), verbose=False)
mfit.plot_mfit(dm0.E_fitter)
params_3gauss = dm0.E_fitter.params
plt.xlabel('E')
plt.ylabel('PDF')
plt.title('')
#dir_ = r'C:\Data\Antonio\docs\conferences\Seaborg2015\figures/'
#plt.savefig(dir_+'Realtime kinetics FRET hist', dpi=200, bbox_inches='tight')
params_3gauss
dsc = ds.collapse()
dm_final = dsc.select_bursts(select_bursts.time, time_s1=start_time+300, time_s2=ds.time_max + 1)
dm_final.num_bursts
dm_final1 = dsc.select_bursts(select_bursts.time, time_s1=start_time+100, time_s2=start_time+1600)
dm_final1.num_bursts
dm_final2 = dsc.select_bursts(select_bursts.time, time_s1=start_time + 2100, time_s2=ds.time_max + 1)
dm_final2.num_bursts
bext.bursts_fitter(dm_final1, weights=None)
model = gauss3()
model.set_param_hint('p2_center', value=params_3gauss.p2_center[0], vary=False)
dm_final1.E_fitter.fit_histogram(model, verbose=False)
fig, ax = plt.subplots(figsize=(12, 6))
mfit.plot_mfit(dm_final1.E_fitter, ax=ax)
params_3gauss1 = dm_final1.E_fitter.params
params_3gauss1
bext.bursts_fitter(dm_final2, weights=None)
model = gauss3()
model.set_param_hint('p2_center', value=params_3gauss.p2_center[0], vary=False)
dm_final2.E_fitter.fit_histogram(model, verbose=False)
fig, ax = plt.subplots(figsize=(12, 6))
mfit.plot_mfit(dm_final2.E_fitter, ax=ax)
params_3gauss1 = dm_final2.E_fitter.params
params_3gauss1
bext.bursts_fitter(dm_final, weights=None)
model = gauss3()
model.set_param_hint('p2_center', value=params_3gauss.p2_center[0], vary=False)
dm_final.E_fitter.fit_histogram(model, verbose=False)
fig, ax = plt.subplots(figsize=(12, 6))
mfit.plot_mfit(dm_final.E_fitter, ax=ax)
params_3gauss1 = dm_final.E_fitter.params
params_3gauss1
#del params_3gauss0
if 'params_3gauss0' not in locals():
params_3gauss0 = params_3gauss.copy()
params_3gauss0.p3_center = params_3gauss1.p3_center
params_3gauss0.p1_amplitude + params_3gauss0.p2_amplitude + params_3gauss0.p3_amplitude
'params_3gauss0' in locals()
Explanation: 3-Gaussian peaks
End of explanation
from scipy import optimize
params_fixed = dict(
mu1=float(params_3gauss0.p1_center),
mu2=float(params_3gauss0.p2_center),
mu3=float(params_3gauss0.p3_center),
sig1=float(params_3gauss0.p1_sigma),
sig2=float(params_3gauss0.p2_sigma),
sig3=float(params_3gauss0.p3_sigma),
)
def em_weights_3gauss(x, a2, a3, mu1, mu2, mu3, sig1, sig2, sig3):
Responsibility function for a 3-Gaussian model.
Returns 3 arrays of size = x.size: the responsibility of
each Gaussian population.
a1 = 1 - a2 - a3
assert np.abs(a1 + a2 + a3 - 1) < 1e-3
f1 = a1 * gauss_pdf(x, mu1, sig1)
f2 = a2 * gauss_pdf(x, mu2, sig2)
f3 = a3 * gauss_pdf(x, mu3, sig3)
γ1 = f1 / (f1 + f2 + f3)
γ2 = f2 / (f1 + f2 + f3)
γ3 = f3 / (f1 + f2 + f3)
return γ1, γ2, γ3
def em_fit_3gauss(x, a2_0, a3_0, params_fixed, print_every=10, max_iter=100, rtol=1e-3):
Fit amplitude of 3_Gaussian model using Expectation-Maximization.
Only 2 amplitudes are fitted (a2, a3), the first peak is derived imposing
that the PDF sums to 1.
a2_new, a3_new = a2_0, a3_0
rel_change = 1
i = 0
while rel_change > rtol and i < max_iter:
# E-step
γ1, γ2, γ3 = em_weights_3gauss(x, a2_new, a3_new, **params_fixed)
assert np.allclose(γ1.sum() + γ2.sum() + γ3.sum(), x.size)
# M-step
a2_old, a3_old = a2_new, a3_new
a2_new = γ2.sum()/γ2.size
a3_new = γ3.sum()/γ3.size
# Convergence
rel_change = (np.abs((a2_old - a2_new)/a2_new)
+ np.abs((a3_old - a3_new)/a3_new))
i += 1
if (i % print_every) == 0:
print(i, a2_new, a3_new, rel_change)
return a2_new, a3_new, i
from matplotlib.pylab import normpdf as gauss_pdf
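# Note: normpdf was removed from matplotlib in later releases; if this import
# fails, scipy.stats.norm.pdf(x, mu, sig) is a drop-in replacement for gauss_pdf.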
# Model PDF to be maximized
def model_pdf(x, a2, a3, mu1, mu2, mu3, sig1, sig2, sig3):
a1 = 1 - a2 - a3
#assert np.abs(a1 + a2 + a3 - 1) < 1e-3
return (a1 * gauss_pdf(x, mu1, sig1) +
a2 * gauss_pdf(x, mu2, sig2) +
a3 * gauss_pdf(x, mu3, sig3))
# Function to be minimized by lmfit
def func2min_lmfit(params, x):
a2 = params['a2'].value
a3 = params['a3'].value
mu1 = params['mu1'].value
mu2 = params['mu2'].value
mu3 = params['mu3'].value
sig1 = params['sig1'].value
sig2 = params['sig2'].value
sig3 = params['sig3'].value
return -np.sqrt(np.log(model_pdf(x, a2, a3, mu1, mu2, mu3, sig1, sig2, sig3)))
# Function to be minimized by scipy
def func2min_scipy(params_fit, params_fixed, x):
a2, a3 = params_fit
mu1 = params_fixed['mu1']
mu2 = params_fixed['mu2']
mu3 = params_fixed['mu3']
sig1 = params_fixed['sig1']
sig2 = params_fixed['sig2']
sig3 = params_fixed['sig3']
return -np.log(model_pdf(x, a2, a3, mu1, mu2, mu3, sig1, sig2, sig3)).sum()
# create a set of Parameters
params = lmfit.Parameters()
params.add('a2', value=0.33, min=0)
params.add('a3', value=0.33, min=0)
for k, v in params_fixed.items():
params.add(k, value=v, vary=False)
Explanation: Fit
End of explanation
x = dm0.E[0]
x
#result = lmfit.minimize(func2min_lmfit, params, args=(x,), method='nelder')
#lmfit.report_fit(result.params)
#optimize.brute(func2min_scipy, ranges=((0.01, 0.99), (0.01, 0.99)), Ns=101, args=(params, x))
res = optimize.minimize(func2min_scipy, x0=[0.33, 0.33], args=(params_fixed, x), method='Nelder-Mead')
res
res = optimize.minimize(func2min_scipy, x0=[0.33, 0.33], args=(params_fixed, x), bounds=((0,1), (0,1)), method='SLSQP')
res
res = optimize.minimize(func2min_scipy, x0=[0.33, 0.33], args=(params_fixed, x), bounds=((0,1), (0,1)), method='TNC')
res
bins = np.arange(-0.1, 1.1, 0.025)
plt.hist(x, bins, histtype='step', lw=2, normed=True);
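# Note: the `normed` keyword was removed in matplotlib 3.x; density=True is the replacement there.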
xx = np.arange(-0.1, 1.1, 0.005)
#plt.plot(xx, model_pdf(xx, params))
plt.plot(xx, model_pdf(xx, a2=res.x[0], a3=res.x[1], **params_fixed))
Explanation: $$f(x) = \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$
$$\log f(x) = \log \frac{A}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}} = \log{A} -\log{\sigma} - \log\sqrt{2\pi} -\frac{(x - \mu)^2}{2 \sigma^2}$$
$$w_1 \; f_1(x) + w_2 \; f_2(x) + w_3 \; f_3(x)$$
$$\log (w_1 \; f_1(x)) = \log{w_1} + \log{f_1(x)}$$
End of explanation
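A numerical side note (not used in the original analysis): when weights or pdf values become very small, the mixture log-likelihood is evaluated more stably with the log-sum-exp trick, for example:
from scipy.special import logsumexp
from scipy.stats import norm
def mixture_loglike(x, weights, mus, sigmas):
    # log of sum_k w_k * N(x; mu_k, sigma_k), summed over all samples.
    log_terms = np.array([np.log(w) + norm.logpdf(x, mu, sig)
                          for w, mu, sig in zip(weights, mus, sigmas)])
    return logsumexp(log_terms, axis=0).sum()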
def _kinetics_fit_em(dx, a2_0, a3_0, params_fixed, **kwargs):
kwargs = {'max_iter': 200, 'print_every': 201, **kwargs}
a2, a3, i = em_fit_3gauss(dx.E[0], a2_0, a3_0, params_fixed, **kwargs)
return a2, a3, i < kwargs['max_iter']
def _kinetics_fit_ll(dx, a2_0, a3_0, params_fixed, **kwargs):
kwargs = {'method':'Nelder-Mead', **kwargs}
res = optimize.minimize(func2min_scipy, x0=[a2_0, a3_0], args=(params_fixed, dx.E[0]),
**kwargs)
return res.x[0], res.x[1], res.success
def _kinetics_fit_hist(dx, a2_0, a3_0, params_fixed):
E_fitter = bext.bursts_fitter(dx)
model = gauss3(p1_amplitude={'value': 1 - a2_0 - a3_0},
p2_amplitude={'value': a2_0},
p3_amplitude={'value': a3_0})
model.set_param_hint('p1_center', value=params_fixed['mu1'], vary=False)
model.set_param_hint('p2_center', value=params_fixed['mu2'], vary=False)
model.set_param_hint('p3_center', value=params_fixed['mu3'], vary=False)
model.set_param_hint('p1_sigma', value=params_fixed['sig1'], vary=False)
model.set_param_hint('p2_sigma', value=params_fixed['sig2'], vary=False)
model.set_param_hint('p3_sigma', value=params_fixed['sig3'], vary=False)
E_fitter.fit_histogram(model, verbose=False)
return (float(E_fitter.params.p2_amplitude),
float(E_fitter.params.p3_amplitude),
dx.E_fitter.fit_res[0].success)
def kinetics_fit(ds_slices, params_fixed, a2_0=0.33, a3_0=0.33, method='em', **method_kws):
fit_func = {
'em': _kinetics_fit_em,
'll': _kinetics_fit_ll,
'hist': _kinetics_fit_hist}
fit_list = []
for dx in ds_slices:
a2, a3, success = fit_func[method](dx, a2_0, a3_0, params_fixed, **method_kws)
df_i = pd.DataFrame(data=dict(p2_amplitude=a2, p3_amplitude=a3,
p1_center=params_fixed['mu1'], p2_center=params_fixed['mu2'],
p3_center=params_fixed['mu3'], p1_sigma=params_fixed['sig1'],
p2_sigma=params_fixed['sig2'], p3_sigma=params_fixed['sig3'],
tstart=dx.slice_tstart, tstop=dx.slice_tstop,
tmean=0.5*(dx.slice_tstart + dx.slice_tstop)),
index=[0.5*(dx.slice_tstart + dx.slice_tstop)])
if not success:
print('* ', end='', flush=True)
continue
fit_list.append(df_i)
return pd.concat(fit_list)
start_time/60
Explanation: Kinetics
Definitions
End of explanation
def print_slices(moving_window_params):
msg = ' - Slicing measurement:'
for name in ('start', 'stop', 'step', 'window'):
msg += ' %s = %.1fs' % (name, moving_window_params[name])
print(msg, flush=True)
num_slices = len(bext.moving_window_startstop(**moving_window_params))
print(' Number of slices %d' % num_slices, flush=True)
t1 = time.time()
time.ctime()
dsc = ds.collapse()
dsc.calc_max_rate(m=10)
dsc_high = dsc.select_bursts(select_bursts.E, E1=0.88)
step = 1
params = {}
for window in windows:
moving_window_params = dict(start=0, stop=dsc.time_max, step=step, window=window)
print_slices(moving_window_params)
ds_slices = bext.moving_window_chunks(dsc, time_zero=start_time, **moving_window_params)
for meth in ['em', 'll', 'hist']:
print(' >>> Fitting method %s ' % meth, end='', flush=True)
p = kinetics_fit(ds_slices, params_fixed, method=meth)
print(flush=True)
p['kinetics'] = p.p3_amplitude / (p.p2_amplitude + p.p3_amplitude)
p = p.round(dict(p1_center=3, p1_sigma=4, p2_amplitude=4, p2_center=3, p2_sigma=4, kinetics=4,
p3_amplitude=4, p3_center=3, p3_sigma=4))
params[meth, window, step] = p
print('Moving-window processing duration: %d seconds.' % (time.time() - t1))
Explanation: Moving-window processing
End of explanation
moving_window_params['window'] = 30
moving_window_params
ds_slices = bext.moving_window_chunks(dsc, **moving_window_params)
ds_slices_high = bext.moving_window_chunks(dsc_high, **moving_window_params)
df = bext.moving_window_dataframe(**moving_window_params) - start_time
df['size_mean'] = [di.nt_.mean() for di in ds_slices]
df['size_max'] = [di.nt_.max() for di in ds_slices]
df['num_bursts'] = [di.num_bursts[0] for di in ds_slices]
df['burst_width'] = [di.mburst_.width.mean()*di.clk_p*1e3 for di in ds_slices]
df['burst_width_high'] = [di.mburst_.width.mean()*di.clk_p*1e3 for di in ds_slices_high]
df['phrate_mean'] = [di.max_rate_.mean() for di in ds_slices]
df = df.round(dict(tmean=1, tstart=1, tstop=1, size_mean=2, size_max=1,
burst_width=2, burst_width_high=2, phrate_mean=1))
df
labels = ('num_bursts', 'burst_width', 'phrate_mean')
fig, axes = plt.subplots(len(labels), 1, figsize=(12, 3*len(labels)))
for ax, label in zip(axes, labels):
ax.plot(label, data=df)
ax.legend(loc='best')
#ax.set_ylim(0)
# %%timeit -n1 -r1
# meth = 'em'
# print(' >>> Fitting method %s' % meth, flush=True)
# p = kinetics_fit(ds_slices, params_fixed, method=meth)
# %%timeit -n1 -r1
# meth = 'hist'
# print(' >>> Fitting method %s' % meth, flush=True)
# p = kinetics_fit(ds_slices, params_fixed, method=meth)
# %%timeit -n1 -r1
# meth = 'll'
# print(' >>> Fitting method %s' % meth, flush=True)
# p = kinetics_fit(ds_slices, params_fixed, method=meth)
out_fname = 'results/%s_burst_data_vs_time__window%ds_step%ds.csv' % (
Path(filename).stem, moving_window_params['window'], moving_window_params['step'])
out_fname
df.to_csv(out_fname)
Explanation: Burst-data
End of explanation
# np.abs((params['em', 30, 1] - params['ll', 30, 1]).p2_amplitude).max()
methods = ('em', 'll', 'hist')
for meth in methods:
plt.figure(figsize=(14, 3))
plt.plot(params[meth, windows[0], step].index, params[meth, windows[0], step].kinetics, 'h', color='gray', alpha=0.2)
plt.plot(params[meth, windows[1], step].index, params[meth, windows[1], step].kinetics, 'h', alpha=0.3)
# (params['em', 5, 1].kinetics - params['ll', 5, 1].kinetics).plot()
step = 1
for window in windows:
for meth in methods:
out_fname = ('results/' + Path(filename).stem +
'_%sfit_ampl_only__window%ds_step%ds.csv' % (meth, window, step))
print('- Saving: ', out_fname)
params[meth, window, step].to_csv(out_fname)
d
Explanation: Population fraction
End of explanation |
9,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deriving coefficients for the implicit scheme
The ice sheet energy balance model uses an implicit scheme to solve the
heat equation for $N$ layers. It uses the Crank-Nicolson scheme to discretise
the equations. For an equation in one space dimension $x$
$\frac{df}{dt} = F$,
the Crank-Nicolson scheme discretises the equation as
$\frac{f^{i+1}_x - f^{i}_x}{\Delta t} = 0.5\left [ F^{i+1}_x + F^{i}_x \right ]$
where the superscript is time and the subscript is space.
Step1: The coefficients for the $i+1$ temperature (predicted) are
Step2: The coefficients for the $i$ temperature (current) are | Python Code:
from sympy import *
init_printing()
tnew_x = Symbol('T^{i+1}_x')
tnew_xprev = Symbol('T^{i+1}_{x-1}')
tnew_xafter = Symbol('T^{i+1}_{x+1}')
told_x = Symbol('T^{i}_x')
told_xprev = Symbol('T^{i}_{x-1}')
told_xafter = Symbol('T^{i}_{x+1}')
u_x = Symbol(r'\kappa_x')
u_xprev = Symbol(r'\kappa_{x-1}')
u_xafter = Symbol(r'\kappa_{x+1}')
delta_t = Symbol(r'\Delta t')
delta_x = Symbol(r'\Delta x')
told_x, u_xprev, tnew_xafter, delta_x
lhs = (tnew_x - told_x)/delta_t
lhs # The time derivative
rhs_new = 0.5*(u_x*(tnew_xprev - 2*tnew_x + tnew_xafter)/delta_x**2 +
((tnew_x - tnew_xprev)/(delta_x))*((u_x - u_xprev)/(delta_x)))
rhs_old = 0.5*(u_x*(told_xprev - 2*told_x + told_xafter)/delta_x**2 +
((told_x - told_xprev)/(delta_x))*((u_x - u_xprev)/(delta_x)))
rhs_new, rhs_old # The two parts of the Crank-Nicolson RHS.
expr = lhs - rhs_new - rhs_old
expr
poly_form = Poly(expr, tnew_x, tnew_xafter, tnew_xprev, told_x, told_xafter, told_xprev)
poly_form
Explanation: Deriving coefficients for the implicit scheme
The ice sheet energy balance model uses an implicit scheme to solve the
heat equation for $N$ layers. It uses the Crank-Nicolson scheme to discretise
the equations. For an equation in one space dimension $x$
$\frac{df}{dt} = F$,
the Crank-Nicolson scheme discretises the equation as
$\frac{f^{i+1}_x - f^{i}_x}{\Delta t} = 0.5\left [ F^{i+1}_x + F^{i}_x \right ]$
where the superscript is time and the subscript is space.
End of explanation
(poly_form.coeff_monomial(tnew_xprev)*delta_t).simplify(), (poly_form.coeff_monomial(tnew_x)*delta_t).simplify(), (poly_form.coeff_monomial(tnew_xafter)*delta_t).simplify()
Explanation: The coefficients for the $i+1$ temperature (predicted) are
End of explanation
-(poly_form.coeff_monomial(told_xprev)*delta_t).simplify(), (poly_form.coeff_monomial(told_x)*-delta_t).simplify(), -(poly_form.coeff_monomial(told_xafter)*delta_t).simplify()
Explanation: The coefficients for the $i$ temperature (current) are
End of explanation |
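These coefficients are what would sit on the sub-, main and super-diagonals of the tridiagonal system A T^{i+1} = B T^{i}. A purely illustrative numerical sketch (constant kappa, so the mixed dkappa/dx terms vanish; values are made up and boundary rows are ignored):
import numpy as np
N, dt, dx, kappa = 20, 0.1, 0.5, 1.0e-2
r = 0.5 * kappa * dt / dx**2
A = (np.diag((1 + 2*r) * np.ones(N)) +
     np.diag(-r * np.ones(N - 1), 1) +
     np.diag(-r * np.ones(N - 1), -1))
B = (np.diag((1 - 2*r) * np.ones(N)) +
     np.diag(r * np.ones(N - 1), 1) +
     np.diag(r * np.ones(N - 1), -1))
T_old = np.linspace(-10.0, 0.0, N)        # an arbitrary initial temperature profile
T_new = np.linalg.solve(A, B.dot(T_old))  # one implicit Crank-Nicolson step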
9,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
YAML support is provided by PyYAML at http
Step2: The following cell provides an initial example of a note in our system.
A note is nothing more than a YAML document. The idea of notetaking is to keep it simple, so a note should make no assumptions about formatting whatsoever.
In our current thinking, we have the following sections
Step3: This shows how to load just the YAML portion of the document, resulting in a Python dictionary data structure. Observe that the Python dictionary has { key
Step4: Closing the loop, the following shows how to iterate the keys of the data structure.
Step5: And this shows how to get any particular item of interest. In this case, we're extracting the bibtex key so we can do something with the embedded BibTeX (e.g. print it).
Step6: Adapted from http
Step7: Now we are onto some sqlite3 explorations.
Ordinarily, I would use some sort of mapping framework to handle database operations. However, it's not clear the FTS support is part of any ORM (yet). I will continue to research but since there is likely only one table, it might not be worth the trouble.
Next we will actually add the Zettel to the database and do a test query. Almost there. | Python Code:
import yaml
Explanation: YAML support is provided by PyYAML at http://pyyaml.org/. This notebook depends on it.
End of explanation
myFirstZettel=
title: First BIB Note for Castells
tags:
- Castells
- Network Society
- Charles Babbage is Awesome
- Charles Didn't do Everything
mentions:
- gkt
- dbdennis
dates: 2016
cite:
- Castells Rise 2016
- ii-iv
- 23-36
outline:
- Introduction
- - Computers
- People
- Conclusions
- - Great Ideas of Computing
text: |
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam eleifend est sed diam maximus rutrum. Quisque sit amet imperdiet odio, id tristique libero. Aliquam viverra convallis mauris vel tristique. Cras ac dolor non risus porttitor molestie vel at nisi. Donec vitae finibus quam. Phasellus vehicula urna sed nibh condimentum, ultrices interdum velit eleifend. Nam suscipit dolor eu rutrum fringilla. Sed pulvinar purus purus, sit amet venenatis enim convallis a. Duis fringilla nisl sit amet erat lobortis dictum. Nunc fringilla arcu nec ex blandit, a gravida purus commodo. Vivamus lacinia tellus dui, vel maximus lacus ornare id.
Vivamus euismod justo sit amet luctus bibendum. Integer non mi ullamcorper enim fringilla vulputate sit amet in urna. Nullam eu sodales ipsum. Curabitur id convallis ex. Duis a condimentum lorem. Nulla et urna massa. Duis in nibh eu elit lobortis vehicula. Mauris congue mauris mollis metus lacinia, ut suscipit mi egestas. Donec luctus ante ante, eget viverra est mollis vitae.
Vivamus in purus in erat dictum scelerisque. Aliquam dictum quis ligula ac euismod. Mauris elementum metus vel scelerisque feugiat. Vivamus bibendum massa eu pellentesque sodales. Nulla nec lacus dolor. Donec scelerisque, nibh sed placerat gravida, nunc turpis tristique nibh, ac feugiat enim massa ut eros. Nulla finibus, augue egestas hendrerit accumsan, tellus augue tempor eros, in sagittis dolor turpis nec mi. Nunc fringilla mi non malesuada aliquet.
bibkey:
Castells Rise 1996
bibtex: |
@book{castells_rise_1996,
address = {Cambridge, Mass.},
series = {Castells, {Manuel}, 1942- {Information} age . v},
title = {The rise of the network society},
isbn = {978-1-55786-616-5},
language = {eng},
publisher = {Blackwell Publishers},
author = {Castells, Manuel},
year = {1996},
keywords = {Information networks., Information society., Information technology Economic aspects., Information technology Social aspects., Technology and civilization.}
}
note:
George likes this new format.
print(myFirstZettel)
Explanation: The following cell provides an initial example of a note in our system.
A note is nothing more than a YAML document. The idea of notetaking is to keep it simple, so a note should make no assumptions about formatting whatsoever.
In our current thinking, we have the following sections:
title: an optional title (text)
tags: one or more keywords (text, sequence of text, no nesting)
mentions: one or more mentions (text, sequence of text, no nesting)
outline: one or more items (text, sequence of text, nesting is permitted)
dates (numeric text, sequence, must follow established historical ways of representing dates)
text (text from the source as multiline string)
bibtex, ris, or inline (text for the bibliographic item; will be syntax checked)
bibkey (text, a hopefully unique identifier for referring to this source in other Zettels)
cite: Used to cite a bibkey from the same or other notes. In addition, the citation may be represented as a list, where the first item is the bibkey and subsequent items are pages or ranges of page numbers. See below for a good example of how this will work.
note (any additional details that you wish to hide from indexing)
In most situations, freeform text is permitted. If you need to do crazy things, you must put quotes around the text so YAML can process it. However, words separated by whitespace and punctuation seems to work fine in most situations.
These all are intended to be string data, so there are no restrictions on what can be in any field; however, we will likely limit tags, mentions, dates in some way as we go forward. Fields such as bibtex, ris, or inline are also subject to validity checking.
Print the document to the console (nothing special here).
End of explanation
doc = yaml.load(myFirstZettel)
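# Note: newer PyYAML versions warn (or raise) when yaml.load() is called without an
# explicit Loader; yaml.safe_load(myFirstZettel) is the usual replacement here.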
Explanation: This shows how to load just the YAML portion of the document, resulting in a Python dictionary data structure. Observe that the Python dictionary has { key : value, ... }. So we can extract the YAML fields from the Python dictionary data structure.
Notice that when you write a YAML list of mentions, there is a nested Python list ['gkt', 'dbdennis'].
End of explanation
for key in doc.keys():
print(key, "=", doc[key])
Explanation: Closing the loop, the following shows how to iterate the keys of the data structure.
End of explanation
print(doc['bibkey'])
print(doc['bibtex'])
Explanation: And this shows how to get any particular item of interest. In this case, we're extracting the bibtex key so we can do something with the embedded BibTeX (e.g. print it).
End of explanation
def flatten(item):
if type(item) != type([]):
return [str(item)]
if item == []:
return item
if isinstance(item[0], list):
return flatten(item[0]) + flatten(item[1:])
return item[:1] + flatten(item[1:])
flatten("George was here")
flatten(['A', ['B', 'C'], ['D', ['E']]])
Explanation: Adapted from http://stackoverflow.com/questions/12472338/flattening-a-list-recursively. There really must be a nicer way to do stuff like this. I will rewrite this using a walker so we can have custom processing of the list items.
End of explanation
import sqlite3
# This is for showing data structures only.
import pprint
printer = pprint.PrettyPrinter(indent=2)
class SQLiteFTS(object):
def __init__(self, db_name, table_name, field_names):
self.db_name = db_name
self.conn = sqlite3.connect(db_name)
self.cursor = self.conn.cursor()
self.table_name = table_name
self.fts_field_names = field_names
self.fts_field_refs = ['?'] * len(self.fts_field_names) # for sqlite insert template generation
self.fts_field_init = [''] * len(self.fts_field_names)
self.fts_fields = dict(zip(self.fts_field_names, self.fts_field_refs))
self.fts_default_record = dict(zip(self.fts_field_names, self.fts_field_init))
def bind(self, doc):
self.record = self.fts_default_record.copy()
for k in doc.keys():
if k in self.record.keys():
self.record[k] = doc[k]
else:
print("Unknown fts field %s" % k)
self.record.update(doc)
def drop_table(self):
self.conn.execute("DROP TABLE IF EXISTS %s" % self.table_name)
def create_table(self):
sql_fields = ",".join(self.fts_default_record.keys())
print("CREATE VIRTUAL TABLE zettels USING fts4(%s)" % sql_fields)
self.conn.execute("CREATE VIRTUAL TABLE zettels USING fts4(%s)" % sql_fields)
def insert_into_table(self):
sql_params = ",".join(self.fts_fields.values())
#printer.pprint(self.record)
#printer.pprint(self.record.values())
sql_insert_values = [ ",".join(flatten(value)) for value in list(self.record.values())]
print("INSERT INTO zettels VALUES (%s)" % sql_params)
print(self.record.keys())
printer.pprint(sql_insert_values)
self.conn.execute("INSERT INTO zettels VALUES (%s)" % sql_params, sql_insert_values)
def done(self):
self.conn.commit()
self.conn.close()
sql = SQLiteFTS('zettels.db', 'zettels', ['title', 'tags', 'mentions', 'outline', 'cite', 'dates', 'summary', 'text', 'bibkey', 'bibtex', 'ris', 'inline', 'note' ])
#doc_keys = list(doc.keys())
#doc_keys.sort()
#rec_keys = list(sql.record.keys())
#rec_keys.sort()
#print("doc keys %s" % doc_keys)
#print("record keys %s" % rec_keys)
sql.drop_table()
sql.create_table()
printer.pprint(doc)
sql.bind(doc)
sql.insert_into_table()
sql.done()
#sql_insert_values = [ str(field) for field in sql.record.values()]
#print(sql_insert_values)
#print(record)
with open("xyz.txt") as datafile:
text = datafile.read()
print(text)
bibkey = 'blahblahblah'
bibtex = text
import yaml
from collections import OrderedDict
class quoted(str): pass
def quoted_presenter(dumper, data):
return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='"')
yaml.add_representer(quoted, quoted_presenter)
class literal(str): pass
def literal_presenter(dumper, data):
return dumper.represent_scalar('tag:yaml.org,2002:str', data, style='|')
yaml.add_representer(literal, literal_presenter)
def ordered_dict_presenter(dumper, data):
return dumper.represent_dict(data.items())
yaml.add_representer(OrderedDict, ordered_dict_presenter)
d = OrderedDict(bibkey=bibkey, bibtex=literal(bibtex))
print(yaml.dump(d))
Explanation: Now we are onto some sqlite3 explorations.
Ordinarily, I would use some sort of mapping framework to handle database operations. However, it's not clear the FTS support is part of any ORM (yet). I will continue to research but since there is likely only one table, it might not be worth the trouble.
Next we will actually add the Zettel to the database and do a test query. Almost there.
End of explanation |
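A possible follow-up for the promised test query (a sketch, not from the original notebook): reopen the database and use the FTS MATCH operator.
import sqlite3
conn = sqlite3.connect('zettels.db')
cursor = conn.execute("SELECT title, bibkey FROM zettels WHERE zettels MATCH ?", ('Castells',))
for title, bibkey in cursor.fetchall():
    print(title, '->', bibkey)
conn.close()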
9,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuickDraw Data
If machine learning is rocket science then data is your fuel! So before
doing anything we will have a close look at the data available and spend
some time bringing it into the "right" form (i.e.
tf.train.Example).
That's why we start by spending quite a lot of time on this notebook, downloading
the data, understanding it, and transforming it into the right format for
Tensorflow.
The data used in this workshop is taken from Google's quickdraw (click on
the images to see loads of examples)
Step2: Get the data
In this section we download a set of raw data files from the web.
Step5: Create your own group -- the more categories you include the more challenging the classification task will be...
Step6: Inspect the data
Let's find out what the format of the downloaded files is.
First, we are going to enumerate them.
Step7: Let's further explore what the NDJSON file format is.
Step11: As we can see, it's a format that contains one JSON dictionary per line.
Let's parse one single line.
Step14: Rasterize
Idea
Step15: Protobufs and tf.train.Example
Tensorflow's "native" format for data storage is the tf.train.Example
protocol buffer.
In this section we briefly explore the API needed to access the data
inside the tf.train.Example protocol buffer. It's not necessary to read
through the
Protocol Buffer Basics
Step16: Create datasets
Now let's create a "dataset" of tf.train.Example
protocol buffers ("protos").
A single example will contain all the information we want to use for training for a drawing (i.e. rasterized
image, label, and maybe other information).
Step22: Sharding
A dataset consists of non-overlapping sets of examples that will be used for
training and evaluation of the classifier (the "test" set will be used for the
final evaluation). As these files can quickly become very large, we split them into smaller files referred to as shards.
For example, we could split a single dataset into a number of shards, like
* train-00000-of-00005,
* train-00001-of-00005,
* ...,
* train-00004-of-00005 (if we're using 5 shards).
This way we have smaller individual files, and we can also easily access for example only 20% of all data, or have 5 threads which read through all the data
simultaneously.
Generally, with large datasets, a recommendation is to split data into individual shards with a size of ~100 MB each. This workshop might use smaller sharding sizes for simplicity reasons.
Step24: Create IMG dataset
Step25: We will now create a dataset with 80k samples consisting of
Step27: Create STROKE dataset
This section creates another dataset of example protos that contain the raw
stroke data, suitable for usage with a recurrent neural network.
Step28: ----- Optional part -----
Inspect data
Step29: More on protobufs | Python Code:
data_path = '/content/gdrive/My Drive/amld_data'
# Alternatively, you can also store the data in a local directory. This method
# will also work when running the notebook in Jupyter instead of Colab.
# data_path = './amld_data
if data_path.startswith('/content/gdrive/'):
from google.colab import drive
assert data_path.startswith('/content/gdrive/My Drive/'), 'Google Drive paths must start with "/content/gdrive/My Drive/"!'
drive.mount('/content/gdrive')
if data_path.startswith('gs://'):
from google.colab import auth
auth.authenticate_user()
# In Jupyter, you would need to install TF 2 via !pip.
%tensorflow_version 2.x
# Always make sure you are using running the expected version.
# There are considerable differences between versions.
# This Colab was tested with 2.1.0.
import tensorflow as tf
tf.__version__
import base64, collections, io, itertools, functools, json, os, random, re, textwrap, time, urllib, xml
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from PIL import Image, ImageDraw
from IPython import display
Explanation: QuickDraw Data
If machine learning is rocket science then data is your fuel! So before
doing anything we will have a close look at the data available and spend
some time bringing it into the "right" form (i.e.
tf.train.Example).
That's why we start by spending quite a lot of time on this notebook, downloading
the data, understanding it, and transforming it into the right format for
Tensorflow.
The data used in this workshop is taken from Google's quickdraw (click on
the images to see loads of examples):
https://quickdraw.withgoogle.com/data
We will download the data below.
Init
First, we'll choose where our data should be stored.
If you choose a path under "/content/gdrive/My Drive" then data will be stored in your Google drive and persisted across VM starts (preferable).
End of explanation
# Retrieve list of categories.
def list_bucket(bucket, regexp='.*'):
Returns a filtered list of Keys in specified GCS bucket.
keys = []
fh = urllib.request.urlopen('https://storage.googleapis.com/%s' % bucket)
content = xml.dom.minidom.parseString(fh.read())
for e in content.getElementsByTagName('Contents'):
key = e.getElementsByTagName('Key')[0].firstChild.data
if re.match(regexp, key):
keys.append(key)
return keys
all_ndjsons = list_bucket('quickdraw_dataset', '.*ndjson$')
print('available: (%d)' % len(all_ndjsons))
print('\n'.join(textwrap.wrap(
'|'.join([key.split('/')[-1].split('.')[0] for key in all_ndjsons]),
width=100)))
# Mini group of two animals.
pets = ['cat', 'dog']
# Somewhat larger group of zoo animals.
zoo = ['camel', 'crocodile', 'dolphin', 'elephant', 'flamingo', 'giraffe',
'kangaroo', 'lion', 'monkey', 'penguin', 'rhinoceros']
# Even larger group of all animals.
animals = ['ant', 'bat', 'bear', 'bee', 'bird', 'butterfly', 'camel', 'cat',
'cow', 'crab', 'crocodile', 'dog', 'dolphin', 'dragon', 'duck',
'elephant', 'fish', 'flamingo', 'frog', 'giraffe', 'hedgehog',
'horse', 'kangaroo', 'lion', 'lobster', 'monkey', 'mosquito',
'mouse', 'octopus', 'owl', 'panda', 'parrot', 'penguin', 'pig',
'rabbit', 'raccoon', 'rhinoceros', 'scorpion', 'sea turtle', 'shark',
'sheep', 'snail', 'snake', 'spider', 'squirrel', 'swan']
# You could do something like:
# my_objects = ['shoe', 'shorts', 't-shirt']
Explanation: Get the data
In this section we download a set of raw data files from the web.
End of explanation
# YOUR ACTION REQUIRED:
# Choose one of above groups for remainder of workshop.
# Note: This will result in ~100MB of download per class.
# `dataset_name` will be used to construct directories containing the data.
labels, dataset_name = zoo, 'zoo'
# Or use another dataset defined above:
# labels, dataset_name = pets, 'pets'
# labels, dataset_name = animals, 'animals'
# Download above chosen group.
def valid_ndjson(filename):
Checks presence + completeness of .ndjson file.
try:
json.loads(tf.io.gfile.GFile(filename).readlines()[-1])
return True
except (ValueError, IOError):
return False
def retrieve(bucket, key, filename):
Returns a file specified by its Key from a GCS bucket.
url = 'https://storage.googleapis.com/%s/%s' % (
bucket, urllib.parse.quote(key))
print('\n' + url)
if not tf.io.gfile.exists(filename):
with tf.io.gfile.GFile(filename, 'w') as f:
f.write(urllib.request.urlopen(url).read())
while not valid_ndjson(filename):
print('*** Corrupted download (%.2f MB), retrying...' % (
os.path.getsize(filename) / 2.**20))
with tf.io.gfile.GFile(filename, 'w') as f:
f.write(urllib.request.urlopen(url).read())
tf.io.gfile.makedirs(data_path)
print('\n%d labels:' % len(labels))
for name in labels:
print(name, end=' ')
dst = '%s/%s.ndjson' % (data_path, name)
retrieve('quickdraw_dataset', 'full/simplified/%s.ndjson' % name, dst)
print('%.2f MB' % (tf.io.gfile.stat(dst).length / 2.**20))
print('\nDONE :)')
Explanation: Create your own group -- the more categories you include the more challenging the classification task will be...
End of explanation
print('\n'.join([
'%6.1fM : %s' % (tf.io.gfile.stat(path).length/1024**2, path)
for path in tf.io.gfile.glob('{}/*.ndjson'.format(data_path))
]))
Explanation: Inspect the data
Let's find out what the format of the downloaded files is.
First, we are going to enumerate them.
End of explanation
path = sorted(tf.io.gfile.glob(os.path.join(data_path, '*.ndjson')))[0]
print(path)
print(tf.io.gfile.GFile(path).read()[:1000] + '...')
Explanation: Let's further explore what the NDJSON file format is.
End of explanation
data_json = json.loads(tf.io.gfile.GFile(path).readline())
data_json.keys()
# So we have some meta information.
for k, v in data_json.items():
if k != 'drawing':
print('%20s -> %s' % (k, v))
# Extract the actual drawing.
drawing = data_json['drawing']
# The drawing consists of a series of strokes:
print('Shapes:', [np.array(stroke).shape for stroke in drawing])
print('Example stroke:', drawing[0])
# Draw the image -- the strokes all have have shape (2, n)
# so the first index seems to be x/y coordinate:
for stroke in drawing:
# Each array has X coordinates at [0, :] and Y coordinates at [1, :].
plt.plot(np.array(stroke[0]), -np.array(stroke[1]))
# Would YOU recognize this drawing successfully?
# Some more code to load many sketches at once.
# Let's ignore the difficult `unrecognized` sketches for now
# (i.e. unrecognized by the official quickdraw classifier).
def convert(line):
Converts single JSON line and converts 'drawing' to list of np.array.
d = json.loads(line)
d['drawing'] = [np.array(stroke) for stroke in d['drawing']]
return d
def loaditer(name, unrecognized=False):
Returns iterable of drawings in specified file.
Args:
name: Name of the downloaded object (e.g. "elephant").
unrecognized: Whether to include drawings that were not recognized
by Google AI (i.e. the hard ones).
for line in tf.io.gfile.GFile('%s/%s.ndjson' % (data_path, name)):
d = convert(line)
if d['recognized'] or unrecognized:
yield d
def loadn(name, n, unrecognized=False):
Returns list of drawings.
Args:
name: Name of the downloaded object (e.g. "elephant").
n: Number of drawings to load.
unrecognized: Whether to include drawings that were not recognized
by Google AI (i.e. the hard ones).
it = loaditer(name, unrecognized=unrecognized)
return list(itertools.islice(it, 0, n))
n = 100
print('Loading {} instances of "{}"...'.format(n, labels[0]), end='')
sample = loadn(labels[0], 100)
print('done.')
# Some more drawings.
rows, cols = 3, 3
plt.figure(figsize=(3*cols, 3*rows))
for y in range(rows):
for x in range(cols):
i = y * cols + x
plt.subplot(rows, cols, i + 1)
for stroke in sample[i]['drawing']:
plt.plot(np.array(stroke[0]), -np.array(stroke[1]))
Explanation: As we can see, it's a format that contains one JSON dictionary per line.
Let's parse one single line.
End of explanation
def dict_to_img(drawing, img_sz=64, lw=3, maximize=True):
Converts QuickDraw data to quadratic rasterized image.
Args:
drawing: Dictionary instance of QuickDraw dataset.
img_sz: Size output image (in pixels).
lw: Line width (in pixels).
maximize: Whether to maximize drawing within image pixels.
Returns:
A PIL.Image with the rasterized drawing.
img = Image.new('L', (img_sz, img_sz))
draw = ImageDraw.Draw(img)
lines = np.array([
stroke[0:2, i:i+2]
for stroke in drawing['drawing']
for i in range(stroke.shape[1] - 1)
], dtype=np.float32)
if maximize:
for i in range(2):
min_, max_ = lines[:,i,:].min() * 0.95, lines[:,i,:].max() * 1.05
lines[:,i,:] = (lines[:,i,:] - min_) / max(max_ - min_, 1)
else:
lines /= 1024
for line in lines:
draw.line(tuple(line.T.reshape((-1,)) * img_sz), fill='white', width=lw)
return img
# Show some examples.
def showimg(img):
Shows an image with an inline HTML <img> tag.
Args:
img: Can be a PIL.Image or a numpy.ndarray.
if isinstance(img, np.ndarray):
img = Image.fromarray(img, 'L')
b = io.BytesIO()
img.convert('RGB').save(b, format='png')
enc = base64.b64encode(b.getvalue()).decode('utf-8')
display.display(display.HTML(
'<img src="data:image/png;base64,%s">' % enc))
# Fetch some images + shuffle order.
rows, cols = len(labels), 10
n_per_class = rows * cols // len(labels) + 1
drawings_list = [drawing for name in labels
for drawing in loadn(name, cols)]
# Create mosaic of rendered images.
lw = 4
img_sz = 64
tableau = np.zeros((img_sz * rows, img_sz * cols), dtype=np.uint8)
for y in range(rows):
for x in range(cols):
i = y * cols + x
img = dict_to_img(drawings_list[i], img_sz=img_sz, lw=lw, maximize=True)
tableau[y*img_sz:(y+1)*img_sz,
x*img_sz:(x+1)*img_sz] = np.asarray(img)
showimg(tableau)
print('{} samples of : {}'.format(cols, ' '.join(labels)))
Explanation: Rasterize
Idea: After converting the raw drawing data into rasterized images, we can
use MNIST-like
image processing to classify the drawings.
End of explanation
# Create a new (empty) instance.
example = tf.train.Example()
# An empty example will not print anything.
print(example)
# An example contains a map from feature name to "Feature".
# Every "Feature" contains a list of elements of the same
# type, which is one of:
# - bytes_list (similar to Python's "str")
# - float_list (float number)
# - int64_list (integer number)
# These values can be accessed as follows (no need to understand
# details):
# Add float value "3.1416" to feature "magic_numbers"
example.features.feature['magic_numbers'].float_list.value.append(3.1416)
# Add some more values to the float list "magic_numbers".
example.features.feature['magic_numbers'].float_list.value.extend([2.7183, 1.4142, 1.6180])
### YOUR ACTION REQUIRED:
# Create a second feature named "adversaries" and add the elements
# b'Alice' and b'Bob'.
example.features.feature['adversaries'].bytes_list.value.extend([b'Alice', b'Bob'])
# This will now print a serialized representation of our protocol buffer
# with features "magic_numbers" and "adversaries" set...
print(example)
# .. et voila : that's all you need to know about protocol buffers for this
# workshop.
Explanation: Protobufs and tf.train.Example
Tensorflow's "native" format for data storage is the tf.train.Example
protocol buffer.
In this section we briefly explore the API needed to access the data
inside the tf.train.Example protocol buffer. It's not necessary to read
through the
Protocol Buffer Basics: Python - documentation.
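Reading values back out uses the same nested fields (a short sketch based on the features created above):
print(list(example.features.feature['magic_numbers'].float_list.value))  # [3.1416, 2.7183, 1.4142, 1.618]
print(list(example.features.feature['adversaries'].bytes_list.value))    # the bytes added above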
End of explanation
# Let's first check how many [recognized=True] examples we have in each class.
for name in labels:
num_all_samples = len(list(tf.io.gfile.GFile('%s/%s.ndjson' % (data_path, name))))
num_recognized_samples = len(list(loaditer(name)))
print(name, num_all_samples, 'recognized', num_recognized_samples)
Explanation: Create datasets
Now let's create a "dataset" of tf.train.Example
protocol buffers ("protos").
A single example will contain all the information we want to use for training for a single drawing (i.e. the rasterized image, the label, and maybe other information).
End of explanation
#@title `make_sharded_files()` code
#@markdown Helper code to create sharded recordio files.
#@markdown Simply **click "execute"** and continue to the next cell.
#@markdown No need to read through this code to understand the remainder of the Colab.
#@markdown
#@markdown If you want to have a look anyways, you can double-click this cell or click on the three dots
#@markdown and then select "Form" and then "Show Code" (shortcut `<Ctrl-M> <F>`).
# Helper code to create sharded recordio files.
# (No need to read through this.)
# The code in this cell simply takes a list of iterators and then
# randomly distributes the values returned by these iterators into sharded
# datasets (e.g. a train/eval/test split).
def rand_key(counts):
"""Returns a random key from "counts", using values as distribution."""
r = random.randint(0, sum(counts.values()))
for key, count in counts.items():
if r > count or count == 0:
r -= count
else:
counts[key] -= 1
return key
def get_split(i, splits):
"""Returns key from "splits" for iteration "i"."""
i %= sum(splits.values())
for split in sorted(splits):
if i < splits[split]:
return split
i -= splits[split]
def make_counts(labels, total):
"""Generates counts for "labels" totaling "total"."""
counts = {}
for i, name in enumerate(labels):
counts[name] = total // (len(labels) - i)
total -= counts[name]
return counts
def example_to_dict(example):
"""Converts a tf.train.Example to a dictionary."""
example_dict = {}
for name, value in example.features.feature.items():
if value.HasField('bytes_list'):
value = value.bytes_list.value
elif value.HasField('int64_list'):
value = value.int64_list.value
elif value.HasField('float_list'):
value = value.float_list.value
else:
raise 'Unknown *_list type!'
if len(value) == 1:
example_dict[name] = value[0]
else:
example_dict[name] = np.array(value)
return example_dict
def make_sharded_files(make_example, path, labels, iters, counts, splits,
shards=10, overwrite=False, report_dt=10, make_df=False):
"""Create sharded dataset from "iters".
Args:
make_example: Converts object returned by elements of "iters"
to tf.train.Example() proto.
path: Directory that will contain recordio files.
labels: Names of labels, will be written to "labels.txt".
iters: List of iterables returning drawing objects.
counts: Dictionary mapping class to number of examples.
splits: Dictionary mapping split name to a relative weight. For example,
splits=dict(a=2, b=1) will result in two examples being written to "a"
for every example being written to "b".
shards: Number of files to be created per split.
overwrite: Whether a pre-existing directory should be overwritten.
report_dt: Number of seconds between status updates (0=no updates).
make_df: Also write data as pandas.DataFrame - do NOT use this with very
large datasets that don't fit in memory!
Returns:
Total number of examples written to disk per split.
"""
assert len(iters) == len(labels)
# Prepare output.
if not tf.io.gfile.exists(path):
tf.io.gfile.makedirs(path)
paths = {
split: ['%s/%s-%05d-of-%05d' % (path, split, i, shards)
for i in range(shards)]
for split in splits
}
assert overwrite or not tf.io.gfile.exists(list(paths.values())[0][0])
writers = {
split: [tf.io.TFRecordWriter(ps[i]) for i in range(shards)]
for split, ps in paths.items()
}
t0 = time.time()
examples_per_split = collections.defaultdict(int)
i, n = 0, sum(counts.values())
counts = dict(**counts)
rows = []
# Create examples.
while sum(counts.values()):
name = rand_key(counts)
split = get_split(i, splits)
writer = writers[split][examples_per_split[split] % shards]
label = labels.index(name)
example = make_example(label, next(iters[label]))
writer.write(example.SerializeToString())
if make_df:
example.features.feature['split'].bytes_list.value.append(split.encode('utf8'))
rows.append(example_to_dict(example))
examples_per_split[split] += 1
i += 1
if report_dt > 0 and time.time() - t0 > report_dt:
print('processed %d/%d (%.2f%%)' % (i, n, 100. * i / n))
t0 = time.time()
# Store results.
for split in splits:
for writer in writers[split]:
writer.close()
with tf.io.gfile.GFile('%s/labels.txt' % path, 'w') as f:
f.write('\n'.join(labels))
with tf.io.gfile.GFile('%s/counts.json' % path, 'w') as f:
json.dump(examples_per_split, f)
if make_df:
df_path = '%s/dataframe.pkl' % path
print('Writing %s...' % df_path)
pd.DataFrame(rows).to_pickle(df_path)
return dict(**examples_per_split)
Explanation: Sharding
A dataset consists of non-overlapping sets of examples that will be used for
training and evaluation of the classifier (the "test" set will be used for the
final evaluation). As these files can quickly become very large, we split them into smaller files referred to as shards.
For example, we could split a single dataset into a number of shards, like
* train-00000-of-00005,
* train-00001-of-00005,
* ...,
* train-00004-of-00005 (if we're using 5 shards).
This way we have smaller individual files, and we can also easily access, for example, only 20% of all data, or have 5 threads read through all the data simultaneously.
Generally, with large datasets, the recommendation is to split data into individual shards of ~100 MB each. This workshop might use smaller shard sizes for simplicity.
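As an aside, a sketch (not part of the original workshop code) of how such shards are typically read back with a glob pattern, assuming the data_path/dataset_name naming used below:
files = tf.data.Dataset.list_files('%s/%s_img/train-*-of-*' % (data_path, dataset_name))
dataset = files.interleave(tf.data.TFRecordDataset, cycle_length=4)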
End of explanation
# Uses `dict_to_img()` from previous cell to create raster image.
def make_example_img(label, drawing):
"""Converts QuickDraw dictionary to example with rasterized data.
Args:
label: Numerical representation of the label (e.g. '0' for labels[0]).
drawing: Dictionary with QuickDraw data.
Returns:
A tf.train.Example protocol buffer (with 'label', 'img_64', and additional
metadata features).
"""
example = tf.train.Example()
example.features.feature['label'].int64_list.value.append(label)
img_64 = np.asarray(dict_to_img(
drawing, img_sz=64, lw=4, maximize=True)).reshape(-1)
example.features.feature['img_64'].int64_list.value.extend(img_64)
example.features.feature['countrycode'].bytes_list.value.append(
drawing['countrycode'].encode())
example.features.feature['recognized'].int64_list.value.append(
drawing['recognized'])
example.features.feature['word'].bytes_list.value.append(
drawing['word'].encode())
ts = drawing['timestamp']
ts = time.mktime(time.strptime(ts[:ts.index('.')], '%Y-%m-%d %H:%M:%S'))
example.features.feature['timestamp'].int64_list.value.append(int(ts))
example.features.feature['key_id'].int64_list.value.append(
int(drawing['key_id']))
return example
Explanation: Create IMG dataset
End of explanation
# Create the (rasterized) dataset.
path = '%s/%s_img' % (data_path, dataset_name)
t0 = time.time()
examples_per_split = make_sharded_files(
make_example=make_example_img,
path=path,
labels=labels,
iters=[loaditer(name) for name in labels],
# Creating 50k train, 20k eval and 10k test examples.
counts=make_counts(labels, 80000),
splits=dict(train=5, eval=2, test=1),
overwrite=True,
# Note: Set this to False when generating large datasets.
make_df=True,
)
# If you don't see the final output below, it's probably because your VM
# has run out of memory and crashed!
# This can happen when make_df=True.
print('stored data to "%s"' % path)
print('generated %s examples in %d seconds' % (
examples_per_split, time.time() - t0))
Explanation: We will now create a dataset with 80k samples consisting of:
- 50k samples used for training
- 20k samples used for evaluation
- 10k samples used for testing
The generation below will take about 5 minutes.
Note: Larger datasets take longer to generate and to train on, but also lead to better classification results.
End of explanation
# Convert stroke coordinates into normalized relative coordinates,
# one single list, and add a "third dimension" that indicates when
# a new stroke starts.
def dict_to_stroke(d):
norm = lambda x: (x - x.min()) / max(1, (x.max() - x.min()))
xy = np.concatenate([np.array(s, dtype=np.float32) for
s in d['drawing']], axis=1)
z = np.zeros(xy.shape[1])
if len(d['drawing']) > 1:
z[np.cumsum(np.array(list(map(lambda x: x.shape[1],
d['drawing'][:-1]))))] = 1
dxy = np.diff(norm(xy))
return np.concatenate([dxy, z.reshape((1, -1))[:, 1:]])
# Visualize and control output of `dict_to_stroke()`.
stroke = dict_to_stroke(sample[0])
# The first 2 dimensions are normalized dx/dy coordinates, and
# the third dimension indicates a new stroke.
xy = stroke[:2, :].cumsum(axis=1)
plt.plot(xy[0,:], -xy[1,:])
pxy = xy[:, stroke[2] != 0]
# Indicate the new stroke with a red circle.
plt.plot(pxy[0], -pxy[1], 'ro');
# Uses `dict_to_stroke()` from previous cell to create raster image.
def make_example_stroke(label, drawing):
"""Converts QuickDraw dictionary to example with stroke data.
Args:
label: Numerical representation of the label (e.g. '0' for labels[0]).
drawing: Dictionary with QuickDraw data.
Returns:
A tf.train.Example protocol buffer (with 'label', 'stroke_x', 'stroke_y',
'stroke_z', and additional metadata features).
"""
example = tf.train.Example()
example.features.feature['label'].int64_list.value.append(label)
stroke = dict_to_stroke(drawing)
example.features.feature['stroke_x'].float_list.value.extend(stroke[0, :])
example.features.feature['stroke_y'].float_list.value.extend(stroke[1, :])
example.features.feature['stroke_z'].float_list.value.extend(stroke[2, :])
example.features.feature['stroke_len'].int64_list.value.append(
stroke.shape[1])
example.features.feature['countrycode'].bytes_list.value.append(
drawing['countrycode'].encode())
example.features.feature['recognized'].int64_list.value.append(
drawing['recognized'])
example.features.feature['word'].bytes_list.value.append(
drawing['word'].encode())
ts = drawing['timestamp']
ts = time.mktime(time.strptime(ts[:ts.index('.')], '%Y-%m-%d %H:%M:%S'))
example.features.feature['timestamp'].int64_list.value.append(int(ts))
example.features.feature['key_id'].int64_list.value.append(
int(drawing['key_id']))
return example
path = '%s/%s_stroke' % (data_path, dataset_name)
t0 = time.time()
examples_per_split = make_sharded_files(
make_example=make_example_stroke,
path=path,
labels=labels,
iters=[loaditer(name) for name in labels],
# Creating 50k train, 20k eval, 10k test examples. Takes ~2min
counts=make_counts(labels, 80000),
splits=dict(train=5, eval=2, test=1),
overwrite=True,
# Note: Set this to False when generating large datasets...
make_df=True,
)
print('stored data to "%s"' % path)
print('generated %s examples in %d seconds' % (examples_per_split, time.time() - t0))
Explanation: Create STROKE dataset
This section creates another dataset of example protos that contain the raw
stroke data, suitable for usage with a recurrent neural network.
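As a rough sketch (not part of the original code) of how such variable-length examples would later be parsed for an RNN, using the feature names defined in make_example_stroke below:
stroke_spec = {
    'label': tf.io.FixedLenFeature([], tf.int64),
    'stroke_len': tf.io.FixedLenFeature([], tf.int64),
    'stroke_x': tf.io.VarLenFeature(tf.float32),
    'stroke_y': tf.io.VarLenFeature(tf.float32),
    'stroke_z': tf.io.VarLenFeature(tf.float32),
}
# parsed = tf.io.parse_single_example(serialized_example, stroke_spec)  # serialized_example is hypothetical here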
End of explanation
# YOUR ACTION REQUIRED:
# Check out the files generated in $data_path
# Note that you can also inspect the files in http://drive.google.com if you
# used Drive as the destination.
# Let's look at a single file of the sharded dataset.
tf_record_path = '{}/{}_img/eval-00000-of-00010'.format(data_path, dataset_name)
# YOUR ACTION REQUIRED:
# Use `tf.data.TFRecordDataset()` to read a single record from the file and
# assign it to the variable `record`. What data type has this record?
# Hint: dataset is a Python "iterable".
#dataset = ...
#record
# Check out the features. They should correspond to what we generated in
# `make_example_img()` above.
example = tf.train.Example()
# Note: `.numpy()` returns the underlying string from the Tensor.
example.ParseFromString(record.numpy())
print(list(example.features.feature.keys()))
# YOUR ACTION REQUIRED:
# Extract the label and the image data from the example protobuf.
# Use above section "tf.train.Example" for reference.
label_int = example.features.feature['label'].int64_list.value[0]
img_64 = example.features.feature['img_64'].int64_list.value
# Visualize the image:
print(labels[label_int])
plt.matshow(np.array(img_64).reshape((64, 64)))
# YOUR ACTION REQUIRED:
# Check that we have an equal distribution of labels in the training files.
Explanation: ----- Optional part -----
Inspect data
End of explanation
# If we want to create our own protocol buffers, we first need to install
# some programs.
!apt-get -y install protobuf-compiler python-pil python-lxml
# Step 1: Write a proto file that describes our data format.
# YOUR ACTION REQUIRED: Complete the definition of the "Person" message (you
# can use the slide for inspiration).
with open('person.proto', 'w') as f:
f.write('''syntax = "proto3";''')
# Step 2: Compile proto definition to a Python file.
!protoc --python_out=. person.proto
!ls -lh
# Step 3: Import code from generated Python file.
from person_pb2 import Person
# Note: If you change the person_pb2 module, you'll need to restart the kernel
# to see the changes because Python will still remember the previous import.
person = Person()
person.name = 'John Doe'
person.email = '[email protected]'
person.lucky_numbers.extend([13, 99])
person.SerializeToString()
# YOUR ACTION REQUIRED:
# Compare the size of the serialized person structure in proto format
# vs. JSON encoded (you can use Python's json.dumps() and list members
# manually, or import google.protobuf.json_format).
# Which format is more efficient? Why?
# Which format is easier to use?
# Which format is more versatile?
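# One possible way to start that comparison (a sketch; MessageToJson comes from
# the google.protobuf package already installed above):
from google.protobuf import json_format
print('proto bytes:', len(person.SerializeToString()))
print('json chars :', len(json_format.MessageToJson(person)))
# The binary proto is typically the more compact of the two.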
Explanation: More on protobufs
End of explanation |
9,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Bootcamp
Step1: Population by age
We have both "estimates" of the past (1950-2015) and "projections" of the future (out to 2100). Here we focus on the latter, specifically what the UN refers to as the medium variant
Step2: Exercise. What do you see here? What else would you like to know?
Exercise. Adapt the preceding code to do the same thing for China. Or some other country that sparks your interest.
Fertility
Step3: Exercise. What do you see here? What else would you like to know?
Exercise. Add Canada to the figure. How does it compare to the others? What other countries would you be interested in?
Life expectancy
One of the bottom line summary numbers for mortality is life expectancy
Step4: Exercise. What other countries would you like to see? Can you add them? The code below generates a list.
Step5: Exercise. Why do you think the US is falling behind? What would you look at to verify your conjecture?
Mortality
Step6: Comment. At this point, we need to pivot the data. That's not something we've done before, so take it as simply something we can do easily if we have to. We're going to do this twice to produce different graphs
Step7: Exercises.
What country's old people have the lowest mortality?
What do you see here for the US? Why is our life expectancy shorter?
What other countries would you like to see? Can you adapt the code to show them?
Anything else cross your mind? | Python Code:
# import packages
import pandas as pd # data management
import matplotlib.pyplot as plt # graphics
import matplotlib as mpl # graphics parameters
import numpy as np # numerical calculations
# IPython command, puts plots in notebook
%matplotlib inline
# check Python version
import datetime as dt
import sys
print('Today is', dt.date.today())
print('What version of Python are we running? \n', sys.version, sep='')
Explanation: Data Bootcamp: Demography
We love demography, specifically the dynamics of population growth and decline. You can drill down seemingly without end, as this terrific graphic about causes of death suggests.
We take a look here at the UN's population data: the age distribution of the population, life expectancy, fertility (the word we use for births), and mortality (deaths). Explore the website, it's filled with interesting data. There are other sources that cover longer time periods, and for some countries you can get detailed data on specific things (causes of death, for example).
We use a number of countries as examples, but Japan and China are the most striking. The code is written so that the country is easily changed.
This IPython notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp.
Preliminaries
Import statements and a date check for future reference.
End of explanation
url1 = 'http://esa.un.org/unpd/wpp/DVD/Files/'
url2 = '1_Indicators%20(Standard)/EXCEL_FILES/1_Population/'
url3 = 'WPP2015_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.XLS'
url = url1 + url2 + url3
cols = [2, 5] + list(range(6,28))
#est = pd.read_excel(url, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])
prj = pd.read_excel(url, sheetname=1, skiprows=16, parse_cols=cols, na_values=['…'])
prj.head(3)[list(range(6))]
# rename some variables
pop = prj
names = list(pop)
pop = pop.rename(columns={names[0]: 'Country',
names[1]: 'Year'})
# select country and years
country = ['Japan']
years = [2015, 2055, 2095]
pop = pop[pop['Country'].isin(country) & pop['Year'].isin(years)]
pop = pop.drop(['Country'], axis=1)
# set index = Year
# divide by 1000 to convert numbers from thousands to millions
pop = pop.set_index('Year')/1000
pop.head()[list(range(8))]
# transpose (T) so that index = age
pop = pop.T
pop.head(3)
ax = pop.plot(kind='bar',
color='blue',
alpha=0.5, subplots=True, sharey=True, figsize=(8,6))
for axnum in range(len(ax)):
ax[axnum].set_title('')
ax[axnum].set_ylabel('Millions')
ax[0].set_title('Population by age', fontsize=14, loc='left')
Explanation: Population by age
We have both "estimates" of the past (1950-2015) and "projections" of the future (out to 2100). Here we focus on the latter, specifically what the UN refers to as the medium variant: their middle of the road projection. It gives us a sense of how Japan's population might change over the next century.
It takes a few seconds to read the data.
What are the numbers? Thousands of people in various 5-year age categories.
End of explanation
# fertility overall
uft = 'http://esa.un.org/unpd/wpp/DVD/Files/'
uft += '1_Indicators%20(Standard)/EXCEL_FILES/'
uft += '2_Fertility/WPP2015_FERT_F04_TOTAL_FERTILITY.XLS'
cols = [2] + list(range(5,18))
ftot = pd.read_excel(uft, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])
ftot.head(3)[list(range(6))]
# rename some variables
names = list(ftot)
f = ftot.rename(columns={names[0]: 'Country'})
# select countries
countries = ['China', 'Japan', 'Germany', 'United States of America']
f = f[f['Country'].isin(countries)]
# shape
f = f.set_index('Country').T
f = f.rename(columns={'United States of America': 'United States'})
f.tail(3)
fig, ax = plt.subplots()
f.plot(ax=ax, kind='line', alpha=0.5, lw=3, figsize=(6.5, 4))
ax.set_title('Fertility (births per woman, lifetime)', fontsize=14, loc='left')
ax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)
ax.set_ylim(ymin=0)
ax.hlines(2.1, -1, 13, linestyles='dashed')
ax.text(8.5, 2.4, 'Replacement = 2.1')
Explanation: Exercise. What do you see here? What else would you like to know?
Exercise. Adapt the preceding code to do the same thing for China. Or some other country that sparks your interest.
Fertility: aka birth rates
We might wonder, why is the population falling in Japan? Other countries? Well, one reason is that birth rates are falling. Demographers call this fertility. Here we look at the fertility using the same UN source as the previous example. We look at two variables: total fertility and fertility by age of mother. In both cases we explore the numbers to date, but the same files contain projections of future fertility.
End of explanation
# life expectancy at birth, both sexes
ule = 'http://esa.un.org/unpd/wpp/DVD/Files/1_Indicators%20(Standard)/EXCEL_FILES/3_Mortality/'
ule += 'WPP2015_MORT_F07_1_LIFE_EXPECTANCY_0_BOTH_SEXES.XLS'
cols = [2] + list(range(5,34))
le = pd.read_excel(ule, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])
le.head(3)[list(range(10))]
# rename some variables
oldname = list(le)[0]
l = le.rename(columns={oldname: 'Country'})
l.head(3)[list(range(8))]
# select countries
countries = ['China', 'Japan', 'Germany', 'United States of America']
l = l[l['Country'].isin(countries)]
# shape
l = l.set_index('Country').T
l = l.rename(columns={'United States of America': 'United States'})
l.tail()
fig, ax = plt.subplots()
l.plot(ax=ax, kind='line', alpha=0.5, lw=3, figsize=(6, 8), grid=True)
ax.set_title('Life expectancy at birth', fontsize=14, loc='left')
ax.set_ylabel('Life expectancy in years')
ax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)
ax.set_ylim(ymin=0)
Explanation: Exercise. What do you see here? What else would you like to know?
Exercise. Add Canada to the figure. How does it compare to the others? What other countries would you be interested in?
Life expectancy
One of the bottom line summary numbers for mortality is life expectancy: if mortaility rates fall, people live longer, on average. Here we look at life expectancy at birth. There are also numbers for life expectancy given than you live to some specific age; for example, life expectancy given that you survive to age 60.
End of explanation
countries = le.rename(columns={oldname: 'Country'})['Country']
Explanation: Exercise. What other countries would you like to see? Can you add them? The code below generates a list.
End of explanation
# mortality overall
url = 'http://esa.un.org/unpd/wpp/DVD/Files/'
url += '1_Indicators%20(Standard)/EXCEL_FILES/3_Mortality/'
url += 'WPP2015_MORT_F17_1_ABRIDGED_LIFE_TABLE_BOTH_SEXES.XLS'
cols = [2, 5, 6, 7, 9]
mort = pd.read_excel(url, sheetname=0, skiprows=16, parse_cols=cols, na_values=['…'])
mort.tail(3)
# change names
names = list(mort)
m = mort.rename(columns={names[0]: 'Country', names[2]: 'Age', names[3]: 'Interval', names[4]: 'Mortality'})
m.head(3)
Explanation: Exercise. Why do you think the US is falling behind? What would you look at to verify your conjecture?
Mortality: aka death rates
Another thing that affects the age distribution of the population is the mortality rate: if mortality rates fall, people live longer, on average. Here we look at how mortality rates have changed over the past 60+ years. Roughly speaking, people live an extra five years every generation. Which is a lot. Some of you will live to be a hundred. (Look at the 100+ age category over time for Japan.)
The experts look at mortality rates by age. The UN has a whole page devoted to mortality numbers. We take 5-year mortality rates from the Abridged Life Table.
The numbers are percentages of people in a given age group who die over a 5-year period. 0.1 means that 90 percent of an age group is still alive in five years.
End of explanation
# compare countries for most recent period
countries = ['China', 'Japan', 'Germany', 'United States of America']
mt = m[m['Country'].isin(countries) & m['Interval'].isin([5]) & m['Period'].isin(['2010-2015'])]
print('Dimensions:', mt.shape)
mp = mt.pivot(index='Age', columns='Country', values='Mortality')
mp.head(3)
fig, ax = plt.subplots()
mp.plot(ax=ax, kind='line', alpha=0.5, linewidth=3,
# logy=True,
figsize=(6, 4))
ax.set_title('Mortality by age', fontsize=14, loc='left')
ax.set_ylabel('Mortality Rate (log scale)')
ax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)
Explanation: Comment. At this point, we need to pivot the data. That's not something we've done before, so take it as simply something we can do easily if we have to. We're going to do this twice to produce different graphs:
Compare countries for the same period.
Compare different periods for the same country.
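To make the idea concrete, a tiny toy sketch of what pivot does (long-format rows become one column per country):
toy = pd.DataFrame({'Age': [0, 0, 5, 5],
                    'Country': ['China', 'Japan', 'China', 'Japan'],
                    'Mortality': [0.02, 0.01, 0.002, 0.001]})
toy.pivot(index='Age', columns='Country', values='Mortality')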
End of explanation
# compare periods for the one country -- countries[0] is China
mt = m[m['Country'].isin([countries[0]]) & m['Interval'].isin([5])]
print('Dimensions:', mt.shape)
mp = mt.pivot(index='Age', columns='Period', values='Mortality')
mp = mp[[0, 6, 12]]
mp.head(3)
fig, ax = plt.subplots()
mp.plot(ax=ax, kind='line', alpha=0.5, linewidth=3,
# logy=True,
figsize=(6, 4))
ax.set_title('Mortality over time', fontsize=14, loc='left')
ax.set_ylabel('Mortality Rate (log scale)')
ax.legend(loc='best', fontsize=10, handlelength=2, labelspacing=0.15)
Explanation: Exercises.
What country's old people have the lowest mortality?
What do you see here for the US? Why is our life expectancy shorter?
What other countries would you like to see? Can you adapt the code to show them?
Anything else cross your mind?
End of explanation |
9,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sinkhorn Divergence Hessians
Samples two point clouds, computes their sinkhorn_divergence
We show in this colab how OTT and JAX can be used to compute automatically the Hessian of the Sinkhorn divergence w.r.t. input variables, such as weights a or locations x. Don't forget to !pip install ott-jax before running the code below.
Step1: Sample two random point clouds of dimension dim
Step2: As usual in JAX, we define a custom loss that outputs the quantity of interest, and is defined using relevant inputs as arguments, i.e. parameters against which we may want to differentiate. We add to a and x the implicit auxiliary flag which will be used to switch between unrolling and implicit differentiation of the Sinkhorn algorithm (see this excellent tutorial for a deep dive on their differences!)
The loss outputs the Sinkhorn Divergence between two point clouds.
Step3: Let's parse the three lines in the call to sinkhorn_divergence above | Python Code:
import jax
import jax.numpy as jnp
import ott
from ott.tools import sinkhorn_divergence
from ott.geometry import pointcloud
import matplotlib.pyplot as plt
Explanation: Sinkhorn Divergence Hessians
Samples two point clouds, computes their sinkhorn_divergence
We show in this colab how OTT and JAX can be used to compute automatically the Hessian of the Sinkhorn divergence w.r.t. input variables, such as weights a or locations x. Don't forget to !pip install ott-jax before running the code below.
End of explanation
def sample(n, m, dim):
rngs = jax.random.split(jax.random.PRNGKey(0), 6)
x = jax.random.uniform(rngs[0], (n, dim))
y = jax.random.uniform(rngs[1], (m, dim))
a = jax.random.uniform(rngs[2], (n,)) + .1
b = jax.random.uniform(rngs[3], (m,)) + .1
a = a / jnp.sum(a)
b = b / jnp.sum(b)
return a, x, b ,y
a, x, b, y = sample(15, 17, 3)
Explanation: Sample two random point clouds of dimension dim
End of explanation
def loss(a, x, implicit):
return sinkhorn_divergence.sinkhorn_divergence(
pointcloud.PointCloud, x, y, # this part defines geometry
a=a, b=b, # this sets weights
sinkhorn_kwargs={'implicit_differentiation': implicit, 'use_danskin': False} # to be used by Sinkhorn algorithm.
).divergence
Explanation: As usual in JAX, we define a custom loss that outputs the quantity of interest, and is defined using relevant inputs as arguments, i.e. parameters against which we may want to differentiate. Alongside a and x, we add the implicit auxiliary flag, which will be used to switch between unrolling and implicit differentiation of the Sinkhorn algorithm (see this excellent tutorial for a deep dive on their differences!)
The loss outputs the Sinkhorn Divergence between two point clouds.
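A quick usage sketch: the forward value does not depend on the differentiation mode, only the gradients do.
print(loss(a, x, True))   # scalar Sinkhorn divergence
print(loss(a, x, False))  # same value, only the backward behaviour differs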
End of explanation
for arg in [0,1]:
# Compute Hessians using either unrolling or implicit differentiation.
hess_loss_imp = jax.jit(jax.hessian(lambda a, x: loss(a, x, True),
argnums=arg))
print('--- Time: Implicit Hessian w.r.t. ' + ('a' if arg == 0 else 'x'))
%timeit _ = hess_loss_imp(a, x).block_until_ready()
hess_imp = hess_loss_imp(a, x)
hess_loss_back = jax.jit(jax.hessian(lambda a, x: loss(a, x, False),
argnums=arg))
print('--- Time: Unrolled Hessian w.r.t. ' + ('a' if arg == 0 else 'x'))
%timeit _ = hess_loss_back(a, x).block_until_ready()
hess_back = hess_loss_back(a, x)
# Since we are solving balanced OT problems, Hessians w.r.t. weights are
# only defined up to the orthogonal space of 1s.
# For that reason we remove that contribution and check the
# resulting matrices are equal.
if arg == 0:
hess_imp -= jnp.mean(hess_imp,axis=1)[:,None]
hess_back -= jnp.mean(hess_back,axis=1)[:,None]
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
im = ax1.imshow(hess_imp if arg == 0 else hess_imp[0,0,:,:])
ax1.set_title('Implicit Hessian w.r.t. ' + ('a' if arg == 0 else 'x (1st slice)'))
fig.colorbar(im, ax=ax1)
im = ax2.imshow(hess_back if arg == 0 else hess_back[0,0,:,:])
ax2.set_title('Unrolled Hessian w.r.t. ' + ('a' if arg == 0 else 'x (1st slice)'))
fig.colorbar(im, ax=ax2)
Explanation: Let's parse the three lines in the call to sinkhorn_divergence above:
- The first one defines the point cloud geometry between x and y that will define the cost matrix. Here we could have added details on epsilon regularization (or scheduler), as well as alternative definitions of the cost function (here assumed by default to be squared Euclidean distance). We stick to the default setting.
- The second one sets the respective weight vectors a and b. Those are simply two histograms of size n and m, both summing to 1, in the so-called balanced setting.
- The third one passes on arguments to the three sinkhorn solvers that will be called, to compare x with y, x with x and y with y with their respective weights a and b. Rather than focusing on the several numerical options available to parameterize sinkhorn's behavior, we instruct JAX on how it should differentiate the outputs of the sinkhorn algorithm. The use_danskin flag specifies whether the outputted potentials should be frozen when differentiating. Since we aim for 2nd order differentiation here, we must set this to False (if we wanted to compute gradients, True would have resulted in faster yet almost equivalent computations).
Computing Hessians
Let's now plot Hessians of this output w.r.t. either a or x.
The Hessian w.r.t. a will be a $n \times n$ matrix, with the convention that a has size $n$.
Because x is itself a matrix of 3D coordinates, the Hessian w.r.t. x will be a 4D tensor of size $n \times 3 \times n \times 3$.
To plot both Hessians, we loop on arg 0 or 1 of loss, and plot all (or part for x) of those Hessians, to check they match:
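A quick shape check (sketch) before plotting, with n=15 points in 3 dimensions as sampled above:
print(jax.hessian(loss, argnums=0)(a, x, True).shape)  # (15, 15) == (n, n)
print(jax.hessian(loss, argnums=1)(a, x, True).shape)  # (15, 3, 15, 3) == (n, 3, n, 3)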
End of explanation |
9,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>Visualization - Matplotlib</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey. Licensed under CC BY 4.0 Creative Commons
Matplotlib
Matplotlib is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (graphical user interface) toolkits. It is a great package with lots of options.
However, matplotlib is...
The 800-pound gorilla — and like most 800-pound gorillas, this one should probably be avoided unless you genuinely need its power, e.g., to make a custom plot or produce a publication-ready graphic.
(As we’ll see, when it comes to statistical visualization, the preferred tack might be
Step1: - dry stuff - The matplotlib Figure, Axes and Axis
At the heart of every plot is the figure object. The "Figure" object is the top level concept which can be drawn to one of the many output formats, or simply just to screen. Any object which can be drawn in this way is known as an "Artist" in matplotlib.
Lets create our first artist using pyplot, and then show it
Step2: On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
By far the most useful artist in matplotlib is the Axes artist. The Axes artist represents the "data space" of a typical plot, a rectangular axes (the most common, but not always the case, e.g. polar plots) will have 2 (confusingly named) Axis artists with tick labels and tick marks.
There is no limit on the number of Axes artists which can exist on a Figure artist. Let's go ahead and create a figure with a single Axes artist, and show it using pyplot
Step3: Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. For example, we did not need to manually create the Figure artist with plt.figure because it was implicit that we needed a figure when we created the Axes artist.
Under the hood matplotlib still had to create a Figure artist, its just we didn't need to capture it into a variable.
- essential stuff - pyplot versus Object based
Some example data
Step4: Observe the following difference
Step5: 2. object oriented
Step6: Although a little bit more code is involved, the advantage is that we now have full control of where the plot axes are placed, and we can easily add more than one axis to the figure
Step7: And also Matplotlib advices the object oriented style
Step8: An small cheat-sheet reference for some common elements
Step9: Adjusting specific parts of a plot is a matter of accessing the correct element of the plot
Step10: <div class="alert alert-success">
**EXERCISE**
Make a line chart of the `data` using Matplotlib. The figure should be 12 (width) by 4 (height) in inches. Make the line color 'darkgrey' and provide an x-label ('days since start') and a y-label ('measured value').
Use the object oriented approach to create the chart.
<details><summary>Hints</summary>
- When Matplotlib only receives a single input variable, it will interpret this as the variable for the y-axis
- Check the cheat sheet above for the functions.
</details>
</div>
Step11: <div class="alert alert-success">
**EXERCISE**
The data represents each a day starting from Jan 1st 2021. Create an array (variable name `dates`) of the same length as the original data (length 100) with the corresponding dates ('2021-01-01', '2021-01-02',...). Create the same chart as in the previous exercise, but use the `dates` values for the x-axis data.
Mark the region inside `[-5, 5]` with a green color to show that these values are within an acceptable range.
<details><summary>Hints</summary>
- As seen in notebook `pandas_04_time_series_data`, Pandas provides a useful function `pd.date_range` to create a set of datetime values. In this case 100 values with `freq="D"`.
- Make sure to understand the difference between `axhspan` and `fill_between`, which one do you need?
- When adding regions, adding an `alpha` level is mostly a good idea.
</details>
</div>
Step12: <div class="alert alert-success">
**EXERCISE**
Compare the __last ten days__ ('2021-04-01' till '2021-04-10') in a bar chart using darkgrey color. For the data on '2021-04-01', use an orange bar to highlight the measurement on this day.
<details><summary>Hints</summary>
- Select the last 10 days from the `data` and `dates` variable, i.e. slice [-10
Step13: I do not like the style...
...understandable
Matplotlib had a bad reputation in terms of its default styling as figures created with earlier versions of Matplotlib were very Matlab-lookalike and mostly not really catchy.
Since Matplotlib 2.0, this has changed
Step14: We should not start discussing about colors and styles, just pick your favorite style!
Step15: or go all the way and define your own custom style, see the official documentation or this tutorial.
<div class="alert alert-info">
<b>REMEMBER</b>
Step16: A typical issue when plotting multiple elements in the same Figure is the overlap of the subplots. A straight-forward approach is using a larger Figure size, but this is not always possible and does not make the content independent from the Figure size. Matplotlib provides the usage of a constrained-layout to fit plots within your Figure cleanly.
Step18: When more advanced layout configurations are required, the usage of the gridspec module is a good reference. See gridspec demo for more information. A useful shortcut to know about is the string-shorthand to setup subplot layouts in a more intuitive way, e.g.
Step19: Interaction with Pandas
What we have been doing while plotting with Pandas
Step20: Under the hood, it creates an Matplotlib Figure with an Axes object.
Pandas versus matplotlib
Comparison 1
Step21: Making this with matplotlib...
Step22: is still ok!
Comparison 2
Step23: Mimicking this in matplotlib (just as a reference, it is basically what Pandas is doing under the hood)
Step24: Is already a bit harder ;-). Pandas provides as set of default configurations on top of Matplotlib.
Best of both worlds...
Step25: <div class="alert alert-info">
<b>Remember</b>
Step26: <div class="alert alert-success">
**EXERCISE**
Pandas supports different types of charts besides line plots, all available from `.plot.xxx`, e.g. `.plot.scatter`, `.plot.bar`,... Make a bar chart to compare the mean discharge in the three measurement stations L06_347, LS06_347, LS06_348. Add a y-label 'mean discharge'. To do so, prepare a Figure and Axes with Matplotlib and add the chart to the created Axes.
<details><summary>Hints</summary>
* You can either use Pandas `ylabel` parameter to set the label or add it with Matplotlib `ax.set_ylabel()`
* To link an Axes object with Pandas output, pass the Axes created by `fig, ax = plt.subplots()` as parameter to the Pandas plot function.
</details>
</div>
Step27: <div class="alert alert-success">
**EXERCISE**
To compare the stations data, make two subplots next to each other
Step28: <div class="alert alert-success">
**EXERCISE**
Make a line plot of the discharge measurements in station `LS06_347`.
The main event on November 13th caused a flood event. To support the reader in the interpretation of the graph, add the following elements | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: <p><font size="6"><b>Visualization - Matplotlib</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey. Licensed under CC BY 4.0 Creative Commons
Matplotlib
Matplotlib is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (graphical user interface) toolkits. It is a great package with lots of options.
However, matplotlib is...
The 800-pound gorilla — and like most 800-pound gorillas, this one should probably be avoided unless you genuinely need its power, e.g., to make a custom plot or produce a publication-ready graphic.
(As we’ll see, when it comes to statistical visualization, the preferred tack might be: “do as much as you easily can in your convenience layer of choice [ed.: e.g. directly from Pandas, or with seaborn], and then use matplotlib for the rest.”)
(quote used from this blogpost)
And that's what we mostly did: just use the .plot function of Pandas. So, why do we learn matplotlib? Well, for the "...then use matplotlib for the rest" part; at some point, somehow!
Matplotlib comes with a convenience sub-package called pyplot which, for consistency with the wider matplotlib community, should always be imported as plt:
End of explanation
fig = plt.figure()
plt.show()
Explanation: - dry stuff - The matplotlib Figure, Axes and Axis
At the heart of every plot is the figure object. The "Figure" object is the top level concept which can be drawn to one of the many output formats, or simply just to screen. Any object which can be drawn in this way is known as an "Artist" in matplotlib.
Let's create our first artist using pyplot, and then show it:
End of explanation
ax = plt.axes()
Explanation: On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
By far the most useful artist in matplotlib is the Axes artist. The Axes artist represents the "data space" of a typical plot, a rectangular axes (the most common, but not always the case, e.g. polar plots) will have 2 (confusingly named) Axis artists with tick labels and tick marks.
There is no limit on the number of Axes artists which can exist on a Figure artist. Let's go ahead and create a figure with a single Axes artist, and show it using pyplot:
End of explanation
x = np.linspace(0, 5, 10)
y = x ** 2
Explanation: Matplotlib's pyplot module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. For example, we did not need to manually create the Figure artist with plt.figure because it was implicit that we needed a figure when we created the Axes artist.
Under the hood matplotlib still had to create a Figure artist, it's just that we didn't need to capture it into a variable.
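If needed, the implicitly created objects can still be retrieved afterwards (a small sketch):
ax = plt.axes()
fig = plt.gcf()          # the Figure that was created for us
assert plt.gca() is ax   # and the current Axes is the one we just made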
- essential stuff - pyplot versus Object based
Some example data:
End of explanation
ax = plt.plot(x, y, '-')
Explanation: Observe the following difference:
1. pyplot style: plt. (you will see this a lot for code online!)
End of explanation
from matplotlib import ticker
x = np.linspace(0, 5, 10)
y = x ** 10
fig, ax = plt.subplots()
ax.plot(x, y, '-')
ax.set_title("My data")
ax.yaxis.set_major_formatter(ticker.FormatStrFormatter("%.1f"))
Explanation: 2. object oriented
End of explanation
fig, ax1 = plt.subplots()
ax1.plot(x, y, '-')
ax1.set_ylabel('y')
ax2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes
ax2.set_xlabel('x')
ax2.plot(x, y*2, 'r-')
Explanation: Although a little bit more code is involved, the advantage is that we now have full control of where the plot axes are placed, and we can easily add more than one axis to the figure:
End of explanation
fig, ax = plt.subplots()
ax.plot(x, y, '-')
# ...
Explanation: And Matplotlib itself also advises the object-oriented style:
<div class="alert alert-info" style="font-size:18px">
<b>REMEMBER</b>:
<ul>
<li>Use the <b>object oriented</b> power of Matplotlib</li>
<li>Get yourself used to writing <code>fig, ax = plt.subplots()</code></li>
</ul>
</div>
End of explanation
x = np.linspace(-1, 0, 100)
fig, ax = plt.subplots(figsize=(10, 7))
# Adjust the created axes so that its topmost extent is 0.9 of the figure.
fig.subplots_adjust(top=0.9)
ax.plot(x, x**2, color='0.4', label='power 2')
ax.plot(x, x**3, color='0.8', linestyle='--', label='power 3')
ax.vlines(x=-0.75, ymin=0., ymax=0.8, color='0.4', linestyle='-.')
ax.fill_between(x=x, y1=x**2, y2=1.1*x**2, color='0.85')
ax.axhline(y=0.1, color='0.4', linestyle='-.')
ax.axhspan(ymin=0.65, ymax=0.75, color='0.95')
fig.suptitle('Figure title', fontsize=18,
fontweight='bold')
ax.set_title('Axes title', fontsize=16)
ax.set_xlabel('The X axis')
ax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)
ax.set_xlim(-1.0, 1.1)
ax.set_ylim(-0.1, 1.)
ax.text(0.5, 0.2, 'Text centered at (0.5, 0.2)\nin data coordinates.',
horizontalalignment='center', fontsize=14)
ax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\nin relative Axes coordinates.',
horizontalalignment='center', fontsize=14,
transform=ax.transAxes, color='grey')
ax.annotate('Text pointing at (0.0, 0.75)', xy=(0.0, 0.75), xycoords="data",
xytext=(20, 40), textcoords="offset points",
horizontalalignment='left', fontsize=14,
arrowprops=dict(facecolor='black', shrink=0.05, width=1))
ax.legend(loc='lower right', frameon=True, ncol=2, fontsize=14)
Explanation: A small cheat-sheet reference for some common elements
End of explanation
data = np.random.randint(-2, 3, 100).cumsum()
data
Explanation: Adjusting specific parts of a plot is a matter of accessing the correct element of the plot:
For more information on legend positioning, check this post on stackoverflow!
Exercises
For these exercises we will use some randomly generated example data (as a Numpy array), representing daily measured values:
End of explanation
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(data, color='darkgrey')
ax.set_xlabel('days since start');
ax.set_ylabel('measured value');
Explanation: <div class="alert alert-success">
**EXERCISE**
Make a line chart of the `data` using Matplotlib. The figure should be 12 (width) by 4 (height) in inches. Make the line color 'darkgrey' and provide an x-label ('days since start') and a y-label ('measured value').
Use the object oriented approach to create the chart.
<details><summary>Hints</summary>
- When Matplotlib only receives a single input variable, it will interpret this as the variable for the y-axis
- Check the cheat sheet above for the functions.
</details>
</div>
End of explanation
dates = pd.date_range("2021-01-01", periods=100, freq="D")
fig, ax = plt.subplots(figsize=(12, 4))
ax.plot(dates, data, color='darkgrey')
ax.axhspan(ymin=-5, ymax=5, color='green', alpha=0.2)
ax.set_xlabel('days since start');
ax.set_ylabel('measured value');
Explanation: <div class="alert alert-success">
**EXERCISE**
The data represents one value per day, starting from Jan 1st 2021. Create an array (variable name `dates`) of the same length as the original data (length 100) with the corresponding dates ('2021-01-01', '2021-01-02',...). Create the same chart as in the previous exercise, but use the `dates` values for the x-axis data.
Mark the region inside `[-5, 5]` with a green color to show that these values are within an acceptable range.
<details><summary>Hints</summary>
- As seen in notebook `pandas_04_time_series_data`, Pandas provides a useful function `pd.date_range` to create a set of datetime values. In this case 100 values with `freq="D"`.
- Make sure to understand the difference between `axhspan` and `fill_between`, which one do you need?
- When adding regions, adding an `alpha` level is mostly a good idea.
</details>
</div>
End of explanation
fig, ax = plt.subplots(figsize=(12, 4))
ax.bar(dates[-10:], data[-10:], color='darkgrey')
ax.bar(dates[-10], data[-10], color='orange')
Explanation: <div class="alert alert-success">
**EXERCISE**
Compare the __last ten days__ ('2021-04-01' till '2021-04-10') in a bar chart using darkgrey color. For the data on '2021-04-01', use an orange bar to highlight the measurement on this day.
<details><summary>Hints</summary>
- Select the last 10 days from the `data` and `dates` variable, i.e. slice [-10:].
- Similar to a `plot` method, Matplotlib provides a `bar` method.
- By plotting a single orange bar on top of the grey bars with a second bar chart, that one is highlithed.
</details>
</div>
End of explanation
plt.style.available
x = np.linspace(0, 10)
with plt.style.context('seaborn-whitegrid'): # 'seaborn', 'ggplot', 'bmh', 'grayscale', 'seaborn-whitegrid', 'seaborn-muted'
fig, ax = plt.subplots()
ax.plot(x, np.sin(x) + x + np.random.randn(50))
ax.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))
ax.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
Explanation: I do not like the style...
...understandable
Matplotlib had a bad reputation in terms of its default styling as figures created with earlier versions of Matplotlib were very Matlab-lookalike and mostly not really catchy.
Since Matplotlib 2.0, this has changed: https://matplotlib.org/users/dflt_style_changes.html!
However...
Des goûts et des couleurs, on ne discute pas...
(roughly: "there's no accounting for taste"; check this link if you're not French-speaking)
To account for different tastes, Matplotlib provides a number of styles that can be used to quickly change a number of settings:
End of explanation
plt.style.use('seaborn')
Explanation: We should not start discussing about colors and styles, just pick your favorite style!
End of explanation
fig, ax = plt.subplots(2, 3, figsize=(5, 5))
Explanation: or go all the way and define your own custom style, see the official documentation or this tutorial.
<div class="alert alert-info">
<b>REMEMBER</b>:
* If you just want a **good-looking plot quickly**, use one of the available styles (`plt.style.use('...')`)
* Otherwise, creating `Figure` and `Axes` objects makes it possible to change everything!
</div>
Advanced subplot configuration
The function to setup a Matplotlib Figure we have seen up to now, fig, ax = plt.subplots(), supports creating both a single plot and multiple subplots with a regular number of rows/columns:
End of explanation
fig, ax = plt.subplots(2, 3, figsize=(5, 5), constrained_layout=True)
Explanation: A typical issue when plotting multiple elements in the same Figure is the overlap of the subplots. A straight-forward approach is using a larger Figure size, but this is not always possible and does not make the content independent from the Figure size. Matplotlib provides the usage of a constrained-layout to fit plots within your Figure cleanly.
End of explanation
axd = plt.figure(constrained_layout=True).subplot_mosaic(
"""
ABD
CCD
"""
)
axd;
Explanation: When more advanced layout configurations are required, the usage of the gridspec module is a good reference. See gridspec demo for more information. A useful shortcut to know about is the string-shorthand to setup subplot layouts in a more intuitive way, e.g.
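The mosaic call above returns a dict mapping each letter to its Axes, so individual panels can be addressed by name (sketch):
axd['A'].plot([0, 1], [0, 1])
axd['D'].set_title('D spans the right-hand column')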
End of explanation
import pandas as pd
flowdata = pd.read_csv('data/vmm_flowdata.csv',
index_col='Time',
parse_dates=True)
flowdata.plot.line() # remark default plot() is a line plot
Explanation: Interaction with Pandas
What we have been doing while plotting with Pandas:
End of explanation
flowdata.plot(figsize=(16, 6), ylabel="Discharge m3/s") # SHIFT + TAB this!
Explanation: Under the hood, it creates a Matplotlib Figure with an Axes object.
Pandas versus matplotlib
Comparison 1: single plot
End of explanation
fig, ax = plt.subplots(figsize=(16, 6))
ax.plot(flowdata)
ax.legend(["L06_347", "LS06_347", "LS06_348"])
Explanation: Making this with matplotlib...
End of explanation
axs = flowdata.plot(subplots=True, sharex=True,
figsize=(16, 8), colormap='viridis', # Dark2
fontsize=15, rot=0)
axs[0].set_title("EXAMPLE");
Explanation: is still ok!
Comparison 2: with subplots
End of explanation
from matplotlib import cm
import matplotlib.dates as mdates
colors = [cm.viridis(x) for x in np.linspace(0.0, 1.0, len(flowdata.columns))] # list comprehension to set up the colors
fig, axs = plt.subplots(3, 1, figsize=(16, 8))
for ax, col, station in zip(axs, colors, flowdata.columns):
ax.plot(flowdata.index, flowdata[station], label=station, color=col)
ax.legend()
if not ax.get_subplotspec().is_last_row():
ax.xaxis.set_ticklabels([])
ax.xaxis.set_major_locator(mdates.YearLocator())
else:
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
ax.set_xlabel('Time')
ax.tick_params(labelsize=15)
Explanation: Mimicking this in matplotlib (just as a reference, it is basically what Pandas is doing under the hood):
End of explanation
fig, (ax0, ax1) = plt.subplots(2, 1) #prepare a Matplotlib figure
flowdata.plot(ax=ax0) # use Pandas for the plotting
fig, ax = plt.subplots(figsize=(15, 5)) #prepare a matplotlib figure
flowdata.plot(ax=ax) # use pandas for the plotting
# Provide further adaptations with matplotlib:
ax.set_xlabel("")
ax.grid(which="major", linewidth='0.5', color='0.8')
fig.suptitle('Flow station time series', fontsize=15)
fig, (ax0, ax1) = plt.subplots(2, 1, figsize=(16, 6)) #provide with matplotlib 2 axis
flowdata[["L06_347", "LS06_347"]].plot(ax=ax0) # plot the two timeseries of the same location on the first plot
flowdata["LS06_348"].plot(ax=ax1, color='0.7') # plot the other station on the second plot
# further adapt with matplotlib
ax0.set_ylabel("L06_347")
ax1.set_ylabel("LS06_348")
ax1.legend()
Explanation: Is already a bit harder ;-). Pandas provides a set of default configurations on top of Matplotlib.
Best of both worlds...
End of explanation
flowdata = pd.read_csv('data/vmm_flowdata.csv',
index_col='Time',
parse_dates=True)
flowdata.head()
Explanation: <div class="alert alert-info">
<b>Remember</b>:
* You can do anything with matplotlib, but at a cost... <a href="http://stackoverflow.com/questions/tagged/matplotlib">stackoverflow</a>
* The preformatting of Pandas provides mostly enough flexibility for quick analysis and draft reporting. It is not meant for publication-ready ('paper-proof') figures or fine-grained customization
If you take the time to make your perfect/spot-on/greatest-ever matplotlib-figure: Make it a <b>reusable function</b>!
`fig.savefig()` to save your Figure object!
</div>
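A minimal sketch of what such a reusable function could look like (names are illustrative, reusing the flowdata example from this notebook):
def plot_discharge(df, ax=None):
    # Illustrative helper: consistent styling for discharge time series.
    if ax is None:
        fig, ax = plt.subplots(figsize=(15, 5))
    df.plot(ax=ax)
    ax.set_ylabel("Discharge m3/s")
    return ax
ax = plot_discharge(flowdata)
ax.figure.savefig("flowdata_overview.png", dpi=150)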
Exercise
End of explanation
fig, ax = plt.subplots()
flowdata.mean().plot.bar(ylabel="mean discharge", ax=ax)
Explanation: <div class="alert alert-success">
**EXERCISE**
Pandas supports different types of charts besides line plots, all available from `.plot.xxx`, e.g. `.plot.scatter`, `.plot.bar`,... Make a bar chart to compare the mean discharge in the three measurement stations L06_347, LS06_347, LS06_348. Add a y-label 'mean discharge'. To do so, prepare a Figure and Axes with Matplotlib and add the chart to the created Axes.
<details><summary>Hints</summary>
* You can either use Pandas `ylabel` parameter to set the label or add it with Matplotlib `ax.set_ylabel()`
* To link an Axes object with Pandas output, pass the Axes created by `fig, ax = plt.subplots()` as parameter to the Pandas plot function.
</details>
</div>
End of explanation
fig, (ax0, ax1) = plt.subplots(1, 2, constrained_layout=True)
flowdata.min().plot.bar(ylabel="min discharge", ax=ax0)
flowdata.max().plot.bar(ylabel="max discharge", ax=ax1)
fig.suptitle(f"Minimal and maximal discharge from {flowdata.index[0]:%Y-%m-%d} till {flowdata.index[-1]:%Y-%m-%d}");
Explanation: <div class="alert alert-success">
**EXERCISE**
To compare the stations data, make two subplots next to each other:
- In the left subplot, make a bar chart of the minimal measured value for each of the station.
- In the right subplot, make a bar chart of the maximal measured value for each of the station.
Add a title to the Figure containing 'Minimal and maximal discharge from 2009-01-01 till 2013-01-02'. Extract these dates from the data itself instead of hardcoding it.
<details><summary>Hints</summary>
- One can directly unpack the result of multiple axes, e.g. `fig, (ax0, ax1) = plt.subplots(1, 2,..` and link each of them to a Pands plot function.
- Remember the remark about `constrained_layout=True` to overcome overlap with subplots?
- A Figure title is called `suptitle` (which is different from an Axes title)
- f-strings ([_formatted string literals_](https://docs.python.org/3/tutorial/inputoutput.html#formatted-string-literals)) is a powerful Python feature (since Python 3.6) to use variables inside a string, e.g. `f"some text with a {variable:HOWTOFORMAT}"` (with the format being optional).
</details>
</div>
End of explanation
alarm_level = 20
max_datetime, max_value = flowdata["LS06_347"].idxmax(), flowdata["LS06_347"].max()
fig, ax = plt.subplots(figsize=(18, 4))
flowdata["LS06_347"].plot(ax=ax)
ax.axhline(y=alarm_level, color='red', linestyle='-', alpha=0.8)
ax.annotate('Alarm level', xy=(flowdata.index[0], alarm_level),
xycoords="data", xytext=(10, 10), textcoords="offset points",
color="red", fontsize=12)
ax.annotate(f"Flood event on {max_datetime:%Y-%m-%d}",
xy=(max_datetime, max_value), xycoords='data',
xytext=(-30, -30), textcoords='offset points',
arrowprops=dict(facecolor='black', shrink=0.05),
horizontalalignment='right', verticalalignment='bottom',
fontsize=12)
Explanation: <div class="alert alert-success">
**EXERCISE**
Make a line plot of the discharge measurements in station `LS06_347`.
The main event on November 13th caused a flood event. To support the reader in the interpretation of the graph, add the following elements:
- Add a horizontal red line at 20 m3/s to define the alarm level.
- Add the text 'Alarm level' in red just above the alarm level line.
- Add an arrow pointing to the main peak in the data (event on November 13th) with the text 'Flood event on 2020-11-13'
Check the Matplotlib documentation on [annotations](https://matplotlib.org/stable/gallery/text_labels_and_annotations/annotation_demo.html#annotating-plots) for the text annotation
<details><summary>Hints</summary>
- The horizontal line is explained in the cheat sheet in this notebook.
- Whereas `ax.text` would work as well for the 'alarm level' text, the `annotate` method provides easier options to shift the text slightly relative to a data point.
- Extract the main peak event by filtering the data on the maximum value. Different approaches are possible, but the `max()` and `idxmax()` methods are a convenient option in this case.
</details>
</div>
End of explanation |
9,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SemCor
SemCor is a WordNet-annotated subset of the Brown corpus. WordNet has coarse features for nouns and verbs, called "supersenses". These are things like "NOUN.BODY", "VERB.MOTION". Supersenses are coarse semantic features, given by linguists. Once the corpus is downloaded, NLTK provides easy access to it, in particular as a stream of tagged chunks. Chunks are coarse constituents.
Features
In this section, I derive feature representations for words from SemCor. This follows Tsvetkov et al. (2015).
Step1: Put the tagged chunks into a pandas dataframe. Each row holds a chunk, and I will add columns as I go on.
Step6: These functions correspond to the columns I want to add.
Step7: Add the columns.
Step8: Only nouns and verbs have supersenses that we use here (adjectives also have them in WordNet, but they seem to be ignored in the original paper). We only care about words that have supersenses now. Also, there are a lot of junk words in SemCor. First we replace all digits with '0'. Then we'll drop words that are i) not alphabetic or '0' or ii) multiple words. I can't tell how this is done in the original paper, so I may get slightly different results.
Step9: The linguistic features are in fact a distribution over the 41 supersenses.
Step10: Now we restrict to words with a certain frequency in the corpus. The original paper has 4199 words when thresholding at a frequency of 5, I get 4847. I believe this discrepancy comes from how words are preprocessed.
Step11: Save my features
Step12: Comparing with reference implementation
I want to know how different my linguistic features are from the original paper's, available here. Read in the original paper's. Note that their columns start with "semcor.", which will be helpful when I merge the two sets.
Step13: Save Tsvetkov et al.'s features
Step14: Merge the two sets. This dataframe has 84 columns
Step15: So I have two discrete distributions, and I want to know how different/similar they are. I need to ask someone how to do this properly, but I have a few naive ideas.
KL divergence
Step17: Sort the columns so that features in the two lists are in the same position.
Step18: Inf is not good, meaning that the two distributions are very different. Nearly half of the words have very different distributions.
Step19: Otherwise, most words have similar distributions. | Python Code:
import pandas as pd
from nltk.corpus import semcor
from nltk.corpus.reader.wordnet import Lemma
tagged_chunks = semcor.tagged_chunks(tag='both')
tagged_chunks = list(tagged_chunks) # takes ages
Explanation: SemCor
SemCor is a WordNet-annotated subset of the Brown corpus. WordNet has coarse features for nouns and verbs, called "supersenses". These are things like "NOUN.BODY", "VERB.MOTION". Supersenses are coarse semantic features, given by linguists. Once the corpus is downloaded, NLTK provides easy access to it, in particular as a stream of tagged chunks. Chunks are coarse constituents.
Features
In this section, I derive feature representations for words from SemCor. This follows Tsvetkov et al. (2015).
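To get a feel for what a tagged chunk looks like before building features, a quick peek such as the following can help (the exact output is an assumption here; sense-tagged chunks carry a WordNet Lemma as their label, while untagged ones carry a plain string):
example = tagged_chunks[0]
print(type(example.label()), example.label())
print(example.leaves())  # the words in the chunk
print(example.pos())     # (word, POS) pairs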
End of explanation
sc = pd.DataFrame({'chunk': tagged_chunks})
Explanation: Put the tagged chunks into a pandas dataframe. Each row holds a chunk, and I will add columns as I go on.
End of explanation
def extract_words(tc):
    """Return the words of a tagged chunk."""
    words = [w.lower() for w in tc.leaves()]
    return ' '.join(words)

def extract_pos(tc):
    """Return the POS of a tagged chunk.

    This isn't the cleanest way, but it works for now.
    """
    return tc.pos()[0][1]

def extract_supersense(tc):
    """Return the supersense of a tagged chunk, otherwise None.

    Only nouns and verbs have supersenses.
    """
    label = tc.label()
    if isinstance(label, Lemma):
        return label.synset().lexname()
    return None

def extract_rough_pos(supersense):
    """Return coarser POS from supersense information."""
    if supersense:
        return supersense.split('.')[0]
    return None
Explanation: These functions correspond to the columns I want to add.
End of explanation
sc['words'] = sc['chunk'].apply(extract_words)
sc['pos'] = sc['chunk'].apply(extract_pos)
sc['supersense'] = sc['chunk'].apply(extract_supersense)
sc['rough_pos'] = sc['supersense'].apply(extract_rough_pos)
Explanation: Add the columns.
End of explanation
sc = sc[sc['rough_pos'].isin(['noun', 'verb'])]
sc.loc[sc['words'].str.isdigit(), 'words'] = '0'
sc = sc[(sc['words'].str.isalpha()) | (sc['words'].str.match('0'))]
Explanation: Only nouns and verbs have supersenses that we use here (adjectives also have them in WordNet, but they seem to be ignored in the original paper). We only care about words that have supersenses now. Also, there are a lot of junk words in SemCor. First we replace all digits with '0'. Then we'll drop words that are i) not alphabetic or '0' or ii) multiple words. I can't tell how this is done in the original paper, so I may get slightly different results.
End of explanation
grouped = sc.groupby('words')
features = grouped['supersense'].value_counts(normalize=True).unstack()
features['count_in_semcor'] = grouped['supersense'].count()
features.reset_index(inplace=True) # make "words" a column not the index
features.fillna(0, inplace=True)
features.columns.name = ''
Explanation: The linguistic features are in fact a distribution over the 41 supersenses.
End of explanation
threshold = 5
subset = features[features['count_in_semcor'] > threshold]
print('Number of words:', len(subset))
Explanation: Now we restrict to words with a certain frequency in the corpus. The original paper has 4199 words when thresholding at a frequency of 5, I get 4847. I believe this discrepancy comes from how words are preprocessed.
End of explanation
subset.to_csv('my_semcor.csv')
Explanation: Save my features
End of explanation
words, dicts = [], []
with open('tsvetkov.txt', 'r') as f:
    for line in f:
        word, _, d = line.partition('\t')
        words.append(word)
        d = eval(d.strip())
        dicts.append(d)
tsvetkov = pd.DataFrame(dicts)
tsvetkov['words'] = words
tsvetkov.fillna(0, inplace=True)
Explanation: Comparing with reference implementation
I want to know how different my linguistic features are from the original paper's, available here. Read in the original paper's. Note that their columns start with "semcor.", which will be helpful when I merge the two sets.
End of explanation
tsvetkov.to_csv('tsvetkov_semcor.csv')
Explanation: Save Tsvetkov et al.'s features
End of explanation
comparison = pd.merge(features, tsvetkov, on=['words'], how='inner')
comparison.head()
Explanation: Merge the two sets. This dataframe has 84 columns: one for the word, one for my count in SemCor, 41 for my features, 41 for their features.
End of explanation
%matplotlib inline
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
Explanation: So I have two discrete distributions, and I want to know how different/similar they are. I need to ask someone how to do this properly, but I have a few naive ideas.
KL divergence
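As a tiny illustration of how `stats.entropy` behaves on made-up toy distributions: it is small for similar distributions and infinite when the second distribution has zero mass where the first does not, which is exactly the inf case discussed below.
from scipy import stats
p = [0.5, 0.5, 0.0]
q = [0.4, 0.4, 0.2]
print(stats.entropy(p, q))  # small positive value: similar distributions
print(stats.entropy(q, p))  # inf: p has zero mass where q does not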
End of explanation
their_columns = sorted([c for c in comparison.columns if c.startswith('semcor')])
my_columns = [c.partition('.')[2] for c in their_columns]
def KL(row):
    """Helper function for applying KL row-wise to the dataframe.

    I get errors unless I explicitly convert to float, even though they
    appear to be floats anyway.
    """
    theirs = row[their_columns].values.astype(float)
    mine = row[my_columns].values.astype(float)
    return stats.entropy(theirs, mine)
comparison['KL'] = comparison.apply(KL, axis=1)
Explanation: Sort the columns so that features in the two lists are in the same position.
End of explanation
np.isinf(comparison['KL']).sum() / len(comparison)
Explanation: Inf is not good, meaning that the two distributions are very different. Nearly half of the words have very different distributions.
End of explanation
sns.boxplot(comparison[np.isfinite(comparison['KL'])]['KL']);
Explanation: Otherwise, most words have similar distributions.
End of explanation |
9,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Student Admissions with Neural Networks in Keras
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data
Step1: Plotting the data
First let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
Step2: Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, each one for each rank.
Step3: This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.
One-hot encoding the rank
For this, we'll use the get_dummies function in pandas.
Step4: Scaling the data
The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
Step5: Splitting the data into Training and Testing
In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
Step6: Splitting the data into features and targets (labels)
Now, as a final step before the training, we'll split the data into features (X) and targets (y).
Also, in Keras, we need to one-hot encode the output. We'll do this with the to_categorical function.
Step7: Defining the model architecture
Here's where we use Keras to build our neural network.
Step8: Training the model
Step9: Scoring the model | Python Code:
# Importing pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data[:10]
Explanation: Predicting Student Admissions with Neural Networks in Keras
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:
- GRE Scores (Test)
- GPA Scores (Grades)
- Class rank (1-4)
The dataset originally came from here: http://www.ats.ucla.edu/
Loading the data
To load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read on the documentation here:
- https://pandas.pydata.org/pandas-docs/stable/
- https://docs.scipy.org/
End of explanation
# Importing matplotlib
import matplotlib.pyplot as plt
# Function to help us plot
def plot_points(data):
    X = np.array(data[["gre","gpa"]])
    y = np.array(data["admit"])
    admitted = X[np.argwhere(y==1)]
    rejected = X[np.argwhere(y==0)]
    plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
    plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
    plt.xlabel('Test (GRE)')
    plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
Explanation: Plotting the data
First let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
End of explanation
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
Explanation: Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, each one for each rank.
End of explanation
# Make dummy variables for rank
one_hot_data = pd.concat([data, pd.get_dummies(data['rank'], prefix='rank')], axis=1)
# Drop the previous rank column
one_hot_data = one_hot_data.drop('rank', axis=1)
# Print the first 10 rows of our data
one_hot_data[:10]
Explanation: This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it.
One-hot encoding the rank
For this, we'll use the get_dummies function in pandas.
End of explanation
# Copying our data
processed_data = one_hot_data[:]
# Scaling the columns
processed_data['gre'] = processed_data['gre']/800
processed_data['gpa'] = processed_data['gpa']/4.0
processed_data[:10]
Explanation: Scaling the data
The next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our data is skewed, and that makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
End of explanation
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
Explanation: Splitting the data into Training and Testing
In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
End of explanation
import keras
# Separate data and one-hot encode the output
# Note: We're also turning the data into numpy arrays, in order to train the model in Keras
features = np.array(train_data.drop('admit', axis=1))
targets = np.array(keras.utils.to_categorical(train_data['admit'], 2))
features_test = np.array(test_data.drop('admit', axis=1))
targets_test = np.array(keras.utils.to_categorical(test_data['admit'], 2))
print(features[:10])
print(targets[:10])
Explanation: Splitting the data into features and targets (labels)
Now, as a final step before the training, we'll split the data into features (X) and targets (y).
Also, in Keras, we need to one-hot encode the output. We'll do this with the to_categorical function.
End of explanation
# Imports
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
# Building the model
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(6,)))
model.add(Dropout(.2))
model.add(Dense(64, activation='relu'))
model.add(Dropout(.1))
model.add(Dense(2, activation='softmax'))
# Compiling the model
model.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
Explanation: Defining the model architecture
Here's where we use Keras to build our neural network.
End of explanation
# Training the model
model.fit(features, targets, epochs=200, batch_size=100, verbose=0)
Explanation: Training the model
End of explanation
# Evaluating the model on the training and testing set
score = model.evaluate(features, targets)
print("\n Training Accuracy:", score[1])
score = model.evaluate(features_test, targets_test)
print("\n Testing Accuracy:", score[1])
Explanation: Scoring the model
End of explanation |
9,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extra 3.2 - Historical Provenance - Application 3
Step1: Labelling data
Since we are only interested in the instruction messages, we categorise the data entity into two sets
Step2: Balancing data
This section explores the balance of the RRG datasets.
Step3: Since both labels have roughly the same number of data points, we decide not to balance the RRG datasets.
Cross validation
We now run the cross validation tests on the datasets using all the features (combined), only the generic network metrics (generic), and only the provenance-specific network metrics (provenance). Please refer to Cross Validation Code.ipynb for the detailed description of the cross validation code. | Python Code:
import pandas as pd
filepath = "rrg/ancestor-graphs.csv"
df = pd.read_csv(filepath, index_col=0)
df.head()
Explanation: Extra 3.2 - Historical Provenance - Application 3: RRG Chat Messages
Identifying instructions from chat messages in the Radiation Response Game.
In this notebook, we explore the performance of classification using the provenance of a data entity instead of its dependencies (as shown here and in the paper). In order to distinguish between the two, we call the former historical provenance and the latter forward provenance. Apart from using the historical provenance, all other steps are the same as the original experiments.
Goal: To determine if the provenance network analytics method can identify instructions from the provenance of a chat message.
Classification labels: $\mathcal{L} = \left\{ \textit{instruction}, \textit{other} \right\}$.
Training data: 69 chat messages manually categorised by HCI researchers.
Reading data
The RRG dataset based on historical provenance is provided in the rrg/ancestor-graphs.csv file, which contains a table whose rows correspond to individual chat messages in RRG:
* First column: the identifier of the chat message
* label: the manual classification of the message (e.g., instruction, information, requests, etc.)
* The remaining columns provide the provenance network metrics calculated from the historical provenance graph of the message.
Note that in this extra experiment, we use the full (historical) provenance of a message, not limiting how far it goes. Hence, there is no $k$ parameter in this experiment.
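As a quick sanity check (not part of the original analysis), one could inspect the table size and the raw label values before the relabelling in the next step:
print(df.shape)
print(df['label'].value_counts())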
End of explanation
label = lambda l: 'other' if l != 'instruction' else l
df.label = df.label.apply(label).astype('category')
df.head()
Explanation: Labelling data
Since we are only interested in the instruction messages, we categorise the data entity into two sets: instruction and other.
Note: This section is just an example to show the data transformation to be applied on each dataset.
End of explanation
# Examine the balance of the dataset
df.label.value_counts()
Explanation: Balancing data
This section explores the balance of the RRG datasets.
End of explanation
from analytics import test_classification
results, importances = test_classification(df, n_iterations=1000)
Explanation: Since both labels have roughly the same number of data points, we decide not to balance the RRG datasets.
Cross validation
We now run the cross validation tests on the datasets using all the features (combined), only the generic network metrics (generic), and only the provenance-specific network metrics (provenance). Please refer to Cross Validation Code.ipynb for the detailed description of the cross validation code.
End of explanation |
9,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WASP-80b broadband analysis
3a. Gaussian process hyperparameter estimation I
Hannu Parviainen, Instituto de Astrofísica de Canarias<br>
This notebook works as an appendix to Parviainen et al., Ground based transmission spectroscopy of WASP-80b (2017). The paper covers two analyses
Step1: Compute residuals
The GP hyperparameters are fit to the residuals from the ckwn white-noise analysis.
Step2: Plot the light curves
Step3: Fit the Hyperparameters and plot the GP mean with the data
Step4: Create a Pandas dataframe and save the hyperparameters | Python Code:
%pylab inline
%run __init__.py
from exotk.utils.misc import fold
from src.extcore import *
Explanation: WASP-80b broadband analysis
3a. Gaussian process hyperparameter estimation I
Hannu Parviainen, Instituto de Astrofísica de Canarias<br>
This notebook works as an appendix to Parviainen et al., Ground based transmission spectroscopy of WASP-80b (2017). The paper covers two analyses: a broadband analysis using three previously published datasets, and a transmission spectroscopy analysis using two GTC-observed spectroscopic time series; this notebook covers a part of the broadband analysis.
Last (significant) revision: 11.08.2017
Here we estimate the GP hyperparameters (HPs) for the light curves in T13 and M14 datasets. The GP for these two datasets uses a simple kernel with time as the only input parameter.
End of explanation
lpf = LPFTM()
pv0 = pd.read_hdf(RFILE_EXT, 'ckwn/fc').median().values
fluxes_m = lpf.compute_transit(pv0)
residuals = [fo-fm for fo,fm in zip(lpf.fluxes, fluxes_m)]
gps = [GPTime(time, res) for time,res in zip(lpf.times, residuals)]
hps = []
Explanation: Compute residuals
The GP hyperparameters are fit to the residuals from the ckwn white-noise analysis.
End of explanation
phases = list(map(lambda t: fold(t, P, TC, 0.5)-0.5, lpf.times))
fig,axs = subplots(4,3, figsize=(14,14),sharey=True, sharex=True)
for iax,ilc in enumerate(lpf.lcorder):
    a = axs.flat[iax]
    a.plot(phases[ilc], lpf.fluxes[ilc],'.', alpha=0.5)
    a.plot(phases[ilc], fluxes_m[ilc],'k')
    a.plot(phases[ilc], lpf.fluxes[ilc]-fluxes_m[ilc]+0.95,'.', alpha=0.5)
    a.text(0.5, 0.95, lpf.passbands[ilc], ha='center', va='top', size=12, transform=a.transAxes)
setp(axs, ylim=(0.94,1.01), xlim=(-0.035,0.035))
fig.tight_layout()
axs.flat[-1].set_visible(False)
Explanation: Plot the light curves
End of explanation
hps = []
for gp in tqdm(gps, desc='Optimising GP hyperparameters'):
    gp.fit()
    hps.append(gp.hp)
fig,axs = subplots(4,3, figsize=(14,10),sharey=True, sharex=True)
for iax,ilc in enumerate(lpf.lcorder):
    axs.flat[iax].plot(phases[ilc], gps[ilc].flux, '.', alpha=0.5)
    gps[ilc].compute(hps[ilc])
    pr = gps[ilc].predict()
    axs.flat[iax].plot(phases[ilc], pr, 'k')
setp(axs, ylim=(-0.015,.015), xlim=(-0.04,0.04))
fig.tight_layout()
axs.flat[-1].set_visible(False)
fig,axs = subplots(4,3, figsize=(14,10),sharey=True, sharex=True)
for iax,ilc in enumerate(lpf.lcorder):
    axs.flat[iax].plot(phases[ilc], lpf.fluxes[ilc], '.', alpha=0.5)
    gps[ilc].compute(hps[ilc])
    pr = gps[ilc].predict()
    axs.flat[iax].plot(phases[ilc], fluxes_m[ilc]+pr, 'k')
setp(axs, ylim=(0.955,1.015), xlim=(-0.04,0.04))
fig.tight_layout()
axs.flat[-1].set_visible(False)
Explanation: Fit the Hyperparameters and plot the GP mean with the data
End of explanation
with pd.HDFStore(DFILE_EXT) as f:
    ntr = [k[3:] for k in f.keys() if 'lc/triaud' in k]
    nma = [k[3:] for k in f.keys() if 'lc/mancini' in k]
df = pd.DataFrame(hps, columns=gp.names, index=lpf.passbands)
df['lc_name'] = ntr+nma
df
df.ix[:3].to_hdf(RFILE_EXT, 'gphp/triaud2013')
df.ix[3:].to_hdf(RFILE_EXT, 'gphp/mancini2014')
Explanation: Create a Pandas dataframe and save the hyperparameters
End of explanation |
9,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
'rv' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Dataset Parameters
Let's add an RV dataset to the Bundle (see also the rv API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
Step3: For information on the included passband-dependent parameters (not mentioned below), see the section on the lc dataset (these are used only to compute fluxes when rv_method is 'flux-weighted')
times
Step4: rvs
The rvs parameter is only visible if the respective times parameter is not empty.
Step5: sigmas
The sigmas parameter is also only visible if the respective times parameter is not empty.
Step6: compute_times / compute_phases
See the Compute Times & Phases tutorial.
Step7: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to the RV dataset.
Other compute options are covered elsewhere
Step8: rv_method
Step9: If rv_method is set to 'dynamical' then the computed radial velocities are simply the z-velocities of the centers of mass of each component. In this case, only the dynamical options are relevant. For more details on these, see the section on the orb dataset.
If rv_method is set to 'flux-weighted' then radial velocities are determined by the z-velocity of each visible surface element of the mesh, weighted by their respective intensities. Since the stars are placed in their orbits by the dynamic options, the section on the orb dataset is still applicable. So are the meshing options described in mesh dataset and the options for computing fluxes in lc dataset. See also the Rossiter-McLaughlin example.
rv_grav
Step10: See the Gravitational Redshift example for more details on the influence this parameter has on radial velocities.
Synthetics
Step11: Plotting
By default, RV datasets plot as 'rvs' vs 'times'.
Step12: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
Step13: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
Step14: Mesh Fields
By adding a mesh dataset and setting the columns parameter, radial velocities per-element quantities can be exposed and plotted. Since the radial velocities are flux-weighted, the flux-related quantities are also included (except relative intensities/luminosities that would require pblum scaling). For a description of these, see the section on the lc dataset.
Let's add a mesh at the first time of the rv dataset and re-call run_compute
Step15: These new columns are stored with the rv's dataset tag, but with the mesh model-kind.
Step16: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the MESH dataset).
Step17: rvs | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: 'rv' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b.add_dataset('rv')
print(b.get_dataset(kind='rv', check_visible=False))
Explanation: Dataset Parameters
Let's add an RV dataset to the Bundle (see also the rv API docs). Some parameters are only visible based on the values of other parameters, so we'll pass check_visible=False (see the filter API docs for more details). These visibility rules will be explained below.
End of explanation
print(b.get_parameter(qualifier='times', component='primary'))
Explanation: For information on the included passband-dependent parameters (not mentioned below), see the section on the lc dataset (these are used only to compute fluxes when rv_method is 'flux-weighted')
times
End of explanation
b.set_value('times', component='primary', value=[0])
print(b.get_parameter(qualifier='rvs', component='primary'))
Explanation: rvs
The rvs parameter is only visible if the respective times parameter is not empty.
End of explanation
print(b.get_parameter(qualifier='sigmas', component='primary'))
Explanation: sigmas
The sigmas parameter is also only visible if the respective times parameter is not empty.
End of explanation
print(b.get_parameter(qualifier='compute_times'))
print(b.get_parameter(qualifier='compute_phases', context='dataset'))
print(b.get_parameter(qualifier='phases_t0'))
Explanation: compute_times / compute_phases
See the Compute Times & Phases tutorial.
End of explanation
print(b.get_compute())
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to the RV dataset.
Other compute options are covered elsewhere:
* parameters related to dynamics are explained in the section on the orb dataset
* parameters related to meshing, eclipse detection, and subdivision (used if rv_method=='flux-weighted') are explained in the section on the mesh dataset
* parameters related to computing fluxes (used if rv_method=='flux-weighted') are explained in the section on the lc dataset
End of explanation
print(b.get_parameter(qualifier='rv_method', component='primary'))
Explanation: rv_method
End of explanation
print(b.get_parameter(qualifier='rv_grav', component='primary'))
Explanation: If rv_method is set to 'dynamical' then the computed radial velocities are simply the z-velocities of the centers of mass of each component. In this case, only the dynamical options are relevant. For more details on these, see the section on the orb dataset.
If rv_method is set to 'flux-weighted' then radial velocities are determined by the z-velocity of each visible surface element of the mesh, weighted by their respective intensities. Since the stars are placed in their orbits by the dynamic options, the section on the orb dataset is still applicable. So are the meshing options described in mesh dataset and the options for computing fluxes in lc dataset. See also the Rossiter-McLaughlin example.
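As a small sketch (not executed in this tutorial), switching both components to the dynamical treatment would look like:
b.set_value_all('rv_method', 'dynamical')
print(b.filter(qualifier='rv_method'))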
rv_grav
End of explanation
b.set_value_all('times', phoebe.linspace(0,1,101))
b.run_compute(irrad_method='none')
print(b.filter(context='model').twigs)
print(b.get_parameter(qualifier='times', component='primary', kind='rv', context='model'))
print(b.get_parameter(qualifier='rvs', component='primary', kind='rv', context='model'))
Explanation: See the Gravitational Redshift example for more details on the influence this parameter has on radial velocities.
Synthetics
End of explanation
afig, mplfig = b.plot(show=True)
Explanation: Plotting
By default, RV datasets plot as 'rvs' vs 'times'.
End of explanation
afig, mplfig = b.plot(x='phases', show=True)
Explanation: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
End of explanation
print(b.filter(qualifier='period').components)
afig, mplfig = b.plot(x='phases:binary', show=True)
Explanation: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
End of explanation
b.add_dataset('mesh', times=[0], dataset='mesh01')
print(b.get_parameter(qualifier='columns').choices)
b.set_value('columns', value=['rvs@rv01'])
b.run_compute(irrad_method='none')
print(b.get_model().datasets)
Explanation: Mesh Fields
By adding a mesh dataset and setting the columns parameter, radial velocities per-element quantities can be exposed and plotted. Since the radial velocities are flux-weighted, the flux-related quantities are also included (except relative intensities/luminosities that would require pblum scaling). For a description of these, see the section on the lc dataset.
Let's add a mesh at the first time of the rv dataset and re-call run_compute
End of explanation
print(b.filter(dataset='rv01', kind='mesh', context='model').twigs)
Explanation: These new columns are stored with the rv's dataset tag, but with the mesh model-kind.
End of explanation
afig, mplfig = b.filter(kind='mesh').plot(fc='rvs', ec='None', show=True)
Explanation: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the MESH dataset).
End of explanation
print(b.get_parameter(qualifier='rvs',
component='primary',
dataset='rv01',
kind='mesh',
context='model'))
Explanation: rvs
End of explanation |
9,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Heat Transfer Conduction Calculations
This jupyter notebook walks through basic heat transfer calculations.
There are three basic types of heat transfer
Step1: 1. Conduction
Conduction is defined as the transfer of heat through matter without motion. Before getting into a detailed derivation of the heat equation, let's take a simplified look at heat transfer analysis.
Thermal Resistance Circuits
An analogy between conduction heat transfer and electric circuits can be exploited to aid in problem solving. $\dot{Q}$, the rate of heat transfer, is analogous to current, and $R$, thermal resistance, is analogous to electric resistance. Thus we can define $\dot{Q}$.
$$\dot{Q} = \frac{T_1 - T_2}{R}$$.
<!-- resistive model of heat transfer through a composite slab -->
<img src="assets/HeatTransferConduction_CompositeSlab.PNG">
Let's run through an example using this resistive model.
Example
Step2: These values can be used to calculate $\dot{Q}$, the rate of heat transfer. This can be related to a more physically relevant value, $\dot{q}$, the heat flux, where
$$ \dot{q} = \frac{\dot{Q}}{A} = \frac{T_1 - T_4}{RA} $$
$$ RA = \sum_{n}{AR_n} $$
Step3: The temperature in the intermediary steps can be found using the fact that $\dot{Q}$ is constant throughout the slab.
$$ \dot{q} = \frac{T_1 - T_2}{R_1 A} $$ | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: Heat Transfer Conduction Calculations
This jupyter notebook walks through basic heat transfer calculations.
There are three basic types of heat transfer:
1. Conduction
1. Convection
1. Radiation
This tutorial covers conduction calculations
We will be using numpy and matplotlib, which are imported below.
End of explanation
# Temperatures at stations
T1 = 150
T4 = 10 # celsius
# define values for thermal conductivity
k = [0.07, 0.7, 0.07]
# Length of layers
L = [0.03, 0.1, 0.03]
AR = [] # initialize empty array
for i in range(0,len(k)):
    AR.append(L[i]/k[i])
print(AR, "m^2 K/W")
Explanation: 1. Conduction
Conduction is defined as the transfer of heat through matter without motion. Before getting into a detailed derivation of the heat equation, let's take a simplified look at heat transfer analysis.
Thermal Resistance Circuits
An analogy between conduction heat transfer and electric circuits can be exploited to aid in problem solving. $\dot{Q}$, the rate of heat transfer, is analogous to current, and $R$, thermal resistance, is analogous to electric resistance. Thus we can define $\dot{Q}$.
$$\dot{Q} = \frac{T_1 - T_2}{R}$$.
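A minimal numerical sketch of this analogy for a single slab (all values below are chosen only for illustration) is:
# Thermal resistance of one slab: R = L / (k * A); then Qdot = (T1 - T2) / R
k_slab, L_slab, A_slab = 0.7, 0.1, 1.0   # W/m-K, m, m^2 (assumed values)
R_slab = L_slab / (k_slab * A_slab)      # K/W
Q_dot = (150 - 10) / R_slab              # W
print(Q_dot)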
<!-- resistive model of heat transfer through a composite slab -->
<img src="assets/HeatTransferConduction_CompositeSlab.PNG">
Let's run through an example using this resistive model.
Example: 2D Brick Wall
A brick wall with thermal insulation on both sides has temperatures $T_1, T_2, T_3, T_4$ which are defined in the image below. Thus, we have four nodes in our resistive model, and three thermal resistances.
<!-- Example 1: Heat Transfer through brick wall using resistive model -->
<img src="assets/HeatTransferConduction_Example1.PNG">
$k$ is the thermal conductivity.
$$ k_{brick} = k_2 = 0.7 W/m-K $$
$$ k_{insulation} = k_1 = k_3 = 0.07 W/m-K $$
$$ A_1 = A_2 = A_3 = A $$
$$ L_1 = L_3 = 0.03\ \mathrm{m} $$
$$ L_2 = 0.1\ \mathrm{m} $$
The overall resistance is:
$$ R = R_1 + R_2 + R_3 = \frac{L_1}{k_1 A_1} + \frac{L_2}{k_2 A_2} + \frac{L_3}{k_3 A_3} $$
Solving with an arbitrary Area, $A$, we get
$$ A_1 R_1 = \frac{L_1}{k_1}, A_2 R_2 = \frac{L_2}{k_2}, A_3 R_3 = \frac{L_3}{k_3} $$
Lets calculate values of $AR$
End of explanation
q = float(T1 - T4)/np.sum(AR)
print('q = ', q, 'W/m^2') # W/m^2
Explanation: These values can be used to calculate $\dot{Q}$, the rate of heat transfer. This can be related to a more physically relevant value, $\dot{q}$, the heat flux, where
$$ \dot{q} = \frac{\dot{Q}}{A} = \frac{T_1 - T_4}{RA} $$
$$ RA = \sum_{n}{AR_n} $$
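For example, the flux computed above scales to a total heat rate once a wall area is chosen (A = 10 m^2 here is an assumed value, purely for illustration):
A_wall = 10.0                  # m^2, assumed
Q_dot = q * A_wall             # W
R_total = np.sum(AR) / A_wall  # K/W
print(Q_dot, 'W', R_total, 'K/W')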
End of explanation
T2 = -q*AR[0] + T1
T3 = q*AR[2] + T4
T = [T1, T2, T3, T4] # vectorize temps
x = [0, L[0], L[0]+L[1], L[0]+L[1]+L[2]]
# Plot Temperature distribution
plt.title('Temperature Distribution Across Brick Wall')
plt.xlabel('X-location')
plt.ylabel('Temperature (C)')
plt.grid()
plt.plot(x,T)
plt.show()
# Print Temperatures
print('T1 = ', T1, 'C')
print('T2 = ', T2, 'C')
print('T3 = ', T3, 'C')
print('T4 = ', T4, 'C')
Explanation: The temperatures at the intermediate stations can be found using the fact that $\dot{Q}$ (and hence $\dot{q}$) is constant throughout the slab:
$$ \dot{q} = \frac{T_1 - T_2}{R_1 A} $$
so, for example, $T_2 = T_1 - \dot{q}\,(A R_1)$, which is what the code above computes.
End of explanation |
9,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Create TensorFlow DNN model </h1>
This notebook illustrates
Step1: <h2> Create TensorFlow model using TensorFlow's Estimator API </h2>
<p>
First, write an input_fn to read the data.
Step2: Next, define the feature columns
Step3: To predict with the TensorFlow model, we also need a serving input function. We will want all the inputs from our user.
Step4: Finally, train! | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
Explanation: <h1> Create TensorFlow DNN model </h1>
This notebook illustrates:
<ol>
<li> Creating a model using the high-level Estimator API
</ol>
End of explanation
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
TRAIN_STEPS = 1000
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(filename, mode, batch_size = 512):
    def _input_fn():
        def decode_csv(value_column):
            columns = tf.compat.v1.decode_csv(value_column, record_defaults=DEFAULTS)
            features = dict(zip(CSV_COLUMNS, columns))
            label = features.pop(LABEL_COLUMN)
            return features, label

        # Create list of files that match pattern
        file_list = tf.compat.v1.gfile.Glob(filename)

        # Create dataset from file list
        dataset = (tf.compat.v1.data.TextLineDataset(file_list)  # Read text file
                   .map(decode_csv))  # Transform each elem by applying decode_csv fn

        if mode == tf.estimator.ModeKeys.TRAIN:
            num_epochs = None  # indefinitely
            dataset = dataset.shuffle(buffer_size=10*batch_size)
        else:
            num_epochs = 1  # end-of-input after this

        dataset = dataset.repeat(num_epochs).batch(batch_size)
        return dataset
    return _input_fn
Explanation: <h2> Create TensorFlow model using TensorFlow's Estimator API </h2>
<p>
First, write an input_fn to read the data.
End of explanation
# Define feature columns
def get_categorical(name, values):
    return tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(name, values))

def get_cols():
    # Define column types
    return [\
        get_categorical('is_male', ['True', 'False', 'Unknown']),
        tf.feature_column.numeric_column('mother_age'),
        get_categorical('plurality',
                        ['Single(1)', 'Twins(2)', 'Triplets(3)',
                         'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)']),
        tf.feature_column.numeric_column('gestation_weeks')
    ]
Explanation: Next, define the feature columns
End of explanation
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
    feature_placeholders = {
        'is_male': tf.compat.v1.placeholder(tf.string, [None]),
        'mother_age': tf.compat.v1.placeholder(tf.float32, [None]),
        'plurality': tf.compat.v1.placeholder(tf.string, [None]),
        'gestation_weeks': tf.compat.v1.placeholder(tf.float32, [None])
    }
    features = {
        key: tf.expand_dims(tensor, -1)
        for key, tensor in feature_placeholders.items()
    }
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
    EVAL_INTERVAL = 300
    run_config = tf.estimator.RunConfig(save_checkpoints_secs = EVAL_INTERVAL,
                                        keep_checkpoint_max = 3)
    estimator = tf.estimator.DNNRegressor(
        model_dir = output_dir,
        feature_columns = get_cols(),
        hidden_units = [64, 32],
        config = run_config)
    train_spec = tf.estimator.TrainSpec(
        input_fn = read_dataset('train.csv', mode = tf.estimator.ModeKeys.TRAIN),
        max_steps = TRAIN_STEPS)
    exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
    eval_spec = tf.estimator.EvalSpec(
        input_fn = read_dataset('eval.csv', mode = tf.estimator.ModeKeys.EVAL),
        steps = None,
        start_delay_secs = 60, # start evaluating after N seconds
        throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
        exporters = exporter)
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
Explanation: To predict with the TensorFlow model, we also need a serving input function. We will want all the inputs from our user.
End of explanation
# Run the model
shutil.rmtree('babyweight_trained', ignore_errors = True) # start fresh each time
tf.compat.v1.summary.FileWriterCache.clear()
train_and_evaluate('babyweight_trained')
Explanation: Finally, train!
End of explanation |
9,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test for Mohammed
This container was started with
sudo docker run -d -p 433
Step1: Here are the RadarSat-2 quadpol coherency matrix image directories as created from the Sentinel-1 Toolbox
Step2: To combine the matrix bands into a single GeoTiff image, we run the python script ingestrs2quad.py
Step3: Here is an RGB display of the three diagonal matrix elements of the above image (bands 1,6 and 9)
Step4: To estimate the equivalent number of looks, run the python script enlml.py
Step5: So the ENL would appear to be about 5.
To run the sequential change detection on the three images, run the bash script sar_seq_rs2quad.sh. It gathers the three images together and calls the python script sar_seq.py which does the change detection. By choosing a spatial subset (in this case 400x400), the images are clipped and co-registered to the first image. This might be unnecessary if the images are well registered anyway.
If you have a multicore processor you can enable parallel computation by opening a terminal window in the container (new terminal) and running
ipcluster start -n 4
Step6: Here is the change map for the most recent changes | Python Code:
%matplotlib inline
Explanation: Test for Mohammed
This container was started with
sudo docker run -d -p 433:8888 --name=sar -v /home/mort/imagery/mohammed/Data:/home/imagery mort/sardocker
End of explanation
ls /home/imagery
Explanation: Here are the RadarSat-2 quadpol coherency matrix image directories as created from the Sentinel-1 Toolbox:
End of explanation
run /home/ingestrs2quad /home/imagery/RS2_OK82571_PK721079_DK650144_FQ17W_20160403_230258_HH_VV_HV_VH_SLC/
run /home/ingestrs2quad /home/imagery/RS2_OK82571_PK721080_DK650145_FQ17W_20160427_230257_HH_VV_HV_VH_SLC/
run /home/ingestrs2quad /home/imagery/RS2_OK82571_PK721081_DK650146_FQ17W_20160614_230256_HH_VV_HV_VH_SLC/
Explanation: To combine the matrix bands into a single GeoTiff image, we run the python script ingestrs2quad.py:
End of explanation
run /home/dispms -f /home/imagery/RS2_OK82571_PK721081_DK650146_FQ17W_20160614_230256_HH_VV_HV_VH_SLC/polSAR.tif \
-p [1,6,9]
Explanation: Here is an RGB display of the three diagonal matrix elements of the above image (bands 1,6 and 9):
End of explanation
run /home/enlml /home/imagery/RS2_OK82571_PK721081_DK650146_FQ17W_20160614_230256_HH_VV_HV_VH_SLC/polSAR.tif
Explanation: To estimate the equivalent number of looks, run the python script enlml.py:
End of explanation
!/home/sar_seq_rs2quad.sh 20160403 20160427 20160614 [50,50,400,400] 5 0.01
Explanation: So the ENL would appear to be about 5.
To run the sequential change detection on the three images, run the bash script sar_seq_rs2quad.sh. It gathers the three images together and calls the python script sar_seq.py which does the change detection. By choosing a spatial subset (in this case 400x400), the images are clipped and co-registered to the first image. This might be unnecessary if the images are well registered anyway.
If you have a multicore processor you can enable parallel computation by opening a terminal window in the container (new terminal) and running
ipcluster start -n 4
End of explanation
run /home/dispms \
-f /home/imagery/RS2_OK82571_PK721079_DK650144_FQ17W_20160403_230258_HH_VV_HV_VH_SLC/sarseq(20160403-1-20160614)_cmap.tif -c
Explanation: Here is the change map for the most recent changes:
End of explanation |
9,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Epoching and averaging (ERP/ERF)
Step1: In MNE, epochs refers to a collection of single trials or short segments
of time locked raw data. If you haven't already, you might want to check out
tut_epochs_objects. In this tutorial we take a deeper look into
construction of epochs and averaging the epoch data to evoked instances.
First let's read in the raw sample data.
Step2: To create time locked epochs, we first need a set of events that contain the
information about the times. In this tutorial we use the stimulus channel to
define the events. Let's look at the raw data.
Step3: Notice channel STI 014 at the bottom. It is the trigger channel that
was used for combining all the events to a single channel. We can see that it
has several pulses of different amplitude throughout the recording. These
pulses correspond to different stimuli presented to the subject during the
acquisition. The pulses have values of 1, 2, 3, 4, 5 and 32. These are the
events we are going to align the epochs to. To create an event list from raw
data, we simply call a function dedicated just for that. Since the event list
is simply a numpy array, you can also manually create one. If you create one
from an outside source (like a separate file of events), pay special
attention in aligning the events correctly with the raw data.
Step4: The event list contains three columns. The first column corresponds to
sample number. To convert this to seconds, you should divide the sample
number by the used sampling frequency. The second column is reserved for the
old value of the trigger channel at the time of transition, but is currently
not in use. The third column is the trigger id (amplitude of the pulse).
You might wonder why the samples don't seem to align with the plotted data.
For instance, the first event has a sample number of 27977 which should
translate to roughly 46.6 seconds (27977 / 600). However looking at
the pulses we see the first pulse at 3.6 seconds. This is because Neuromag
recordings have an attribute first_samp which refers to the offset
between the system start and the start of the recording. Our data has a
first_samp equal to 25800. This means that the first sample you see with
raw.plot is the sample number 25800. Generally you don't need to worry
about this offset as it is taken into account with MNE functions, but it is
good to be aware of. Just to confirm, let's plot the events together with the
raw data. Notice how the vertical lines (events) align nicely with the pulses
on STI 014.
Step5: In this tutorial we are only interested in triggers 1, 2, 3 and 4. These
triggers correspond to auditory and visual stimuli. The event_id here
can be an int, a list of ints or a dict. With dicts it is possible to assign
these ids to distinct categories. When using ints or lists this information
is lost. First we shall define some parameters to feed to the
Step6: Now we have everything we need to construct the epochs. To get some
meaningful results, we also want to baseline the epochs. Baselining computes
the mean over the baseline period and adjusts the data accordingly. The
epochs constructor uses a baseline period from tmin to 0.0 seconds by
default, but it is wise to be explicit. That way you are less likely to end
up with surprises along the way. None as the first element of the tuple
refers to the start of the time window (-200 ms in this case).
See
Step7: Let's plot the epochs to see the results. The number at the top refers to the
id number. We can see that 128 good epochs out of total of 145 events got
through the rejection process. Visual inspection also reveals that some
epochs containing saccades or blinks got through. You can also reject epochs
by hand by clicking on the epoch in the browser window. The selected epochs
get rejected when you close the epochs browser. How you should reject the
epochs and which thresholds to use is not a trivial question and this
tutorial takes no stand on that matter.
To see all the interactive features of the epochs browser, click 'Help' in
the lower left corner of the browser window.
Step8: To see why the epochs were rejected, we can plot the drop log.
Step9: To get the evoked response you can simply do epochs.average(). It
includes only the data channels by default. For the sake of example, we use
picks to include the EOG channels as well. Notice that we cannot use the
same picks as before as the indices are different. 'Why are they different?'
you might ask. They're different because picks is simply a list of
channel indices and as the epochs were constructed, also a new info structure
is created where the channel indices run from 0 to epochs.info['nchan'].
See tut_info_objects for more information.
Step10: Notice we have used forward slashes ('/') to separate the factors of the
conditions of the experiment. We can use these 'tags' to select for example
all left trials (both visual left and auditory left) ...
Step11: Finally, let's plot the evoked responses. | Python Code:
import os.path as op
import numpy as np
import mne
Explanation: Epoching and averaging (ERP/ERF)
End of explanation
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(fname, add_eeg_ref=False)
raw.set_eeg_reference() # set EEG average reference
Explanation: In MNE, epochs refers to a collection of single trials or short segments
of time locked raw data. If you haven't already, you might want to check out
tut_epochs_objects. In this tutorial we take a deeper look into
construction of epochs and averaging the epoch data to evoked instances.
First let's read in the raw sample data.
End of explanation
order = np.arange(raw.info['nchan'])
order[9] = 312 # We exchange the plotting order of two channels
order[312] = 9 # to show the trigger channel as the 10th channel.
raw.plot(n_channels=10, order=order, block=True)
Explanation: To create time locked epochs, we first need a set of events that contain the
information about the times. In this tutorial we use the stimulus channel to
define the events. Let's look at the raw data.
End of explanation
events = mne.find_events(raw)
print(events)
# Plot the events to get an idea of the paradigm
# Specify colors and an event_id dictionary for the legend.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4,
'smiley': 5, 'button': 32}
color = {1: 'green', 2: 'yellow', 3: 'red', 4: 'c', 5: 'black', 32: 'blue'}
mne.viz.plot_events(events, raw.info['sfreq'], raw.first_samp, color=color,
event_id=event_id)
Explanation: Notice channel STI 014 at the bottom. It is the trigger channel that
was used for combining all the events to a single channel. We can see that it
has several pulses of different amplitude throughout the recording. These
pulses correspond to different stimuli presented to the subject during the
acquisition. The pulses have values of 1, 2, 3, 4, 5 and 32. These are the
events we are going to align the epochs to. To create an event list from raw
data, we simply call a function dedicated just for that. Since the event list
is simply a numpy array, you can also manually create one. If you create one
from an outside source (like a separate file of events), pay special
attention in aligning the events correctly with the raw data.
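A hand-made event array just needs the same three-column layout, for example (the sample numbers below are illustrative only):
manual_events = np.array([[27977, 0, 2],
                          [28345, 0, 3]], dtype=int)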
End of explanation
raw.plot(events=events, n_channels=10, order=order)
Explanation: The event list contains three columns. The first column corresponds to
sample number. To convert this to seconds, you should divide the sample
number by the used sampling frequency. The second column is reserved for the
old value of the trigger channel at the time of transition, but is currently
not in use. The third column is the trigger id (amplitude of the pulse).
You might wonder why the samples don't seem to align with the plotted data.
For instance, the first event has a sample number of 27977 which should
translate to roughly 46.6 seconds (27977 / 600). However looking at
the pulses we see the first pulse at 3.6 seconds. This is because Neuromag
recordings have an attribute first_samp which refers to the offset
between the system start and the start of the recording. Our data has a
first_samp equal to 25800. This means that the first sample you see with
raw.plot is the sample number 25800. Generally you don't need to worry
about this offset as it is taken into account with MNE functions, but it is
good to be aware of. Just to confirm, let's plot the events together with the
raw data. Notice how the vertical lines (events) align nicely with the pulses
on STI 014.
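A quick check with the objects above makes the offset explicit:
print(events[0, 0] / raw.info['sfreq'])                     # ~46.6 s from system start
print((events[0, 0] - raw.first_samp) / raw.info['sfreq'])  # ~3.6 s into the plotted recording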
End of explanation
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
# Only pick MEG and EOG channels.
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True)
Explanation: In this tutorial we are only interested in triggers 1, 2, 3 and 4. These
triggers correspond to auditory and visual stimuli. The event_id here
can be an int, a list of ints or a dict. With dicts it is possible to assign
these ids to distinct categories. When using ints or lists this information
is lost. First we shall define some parameters to feed to the
:class:mne.Epochs constructor. The values tmin and tmax refer to
offsets in relation to the events. Here we make epochs that collect the data
from 200 ms before to 500 ms after the event.
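For reference, the less expressive alternatives mentioned above would simply be:
# event_id = 1             # a single trigger id
# event_id = [1, 2, 3, 4]  # a list of trigger ids (category labels are then lost)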
End of explanation
baseline = (None, 0.0)
reject = {'mag': 4e-12, 'eog': 200e-6}
epochs = mne.Epochs(raw, events=events, event_id=event_id, tmin=tmin,
tmax=tmax, reject=reject, picks=picks, add_eeg_ref=False)
Explanation: Now we have everything we need to construct the epochs. To get some
meaningful results, we also want to baseline the epochs. Baselining computes
the mean over the baseline period and adjusts the data accordingly. The
epochs constructor uses a baseline period from tmin to 0.0 seconds by
default, but it is wise to be explicit. That way you are less likely to end
up with surprises along the way. None as the first element of the tuple
refers to the start of the time window (-200 ms in this case).
See :class:mne.Epochs for more.
We also define rejection thresholds to get rid of noisy epochs. The
rejection thresholds are defined as peak-to-peak values within the epoch time
window. They are defined as T/m for gradiometers, T for magnetometers and V
for EEG and EOG electrodes.
<div class="alert alert-info"><h4>Note</h4><p>In this tutorial, we don't preprocess the data. This is not
something you would normally do. See our `tutorials` on
preprocessing for more.</p></div>
End of explanation
epochs.plot(block=True)
Explanation: Let's plot the epochs to see the results. The number at the top refers to the
id number. We can see that 128 good epochs out of total of 145 events got
through the rejection process. Visual inspection also reveals that some
epochs containing saccades or blinks got through. You can also reject epochs
by hand by clicking on the epoch in the browser window. The selected epochs
get rejected when you close the epochs browser. How you should reject the
epochs and which thresholds to use is not a trivial question and this
tutorial takes no stand on that matter.
To see all the interactive features of the epochs browser, click 'Help' in
the lower left corner of the browser window.
End of explanation
epochs.plot_drop_log()
Explanation: To see why the epochs were rejected, we can plot the drop log.
End of explanation
picks = mne.pick_types(epochs.info, meg=True, eog=True)
evoked_left = epochs['Auditory/Left'].average(picks=picks)
evoked_right = epochs['Auditory/Right'].average(picks=picks)
Explanation: To get the evoked response you can simply do epochs.average(). It
includes only the data channels by default. For the sake of example, we use
picks to include the EOG channels as well. Notice that we cannot use the
same picks as before as the indices are different. 'Why are they different?'
you might ask. They're different because picks is simply a list of
channel indices, and when the epochs were constructed a new info structure was
created in which the channel indices run from 0 to epochs.info['nchan'].
See tut_info_objects for more information.
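One quick way to see the re-indexing (a sketch): the epochs keep only the picked channels, so they carry fewer channels than the raw recording.
print(len(raw.info['ch_names']), len(epochs.info['ch_names']))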
End of explanation
epochs_left = epochs['Left']
# ... or to select a very specific subset. This is the same as above:
evoked_left = epochs['Left/Auditory'].average(picks=picks)
Explanation: Notice we have used forward slashes ('/') to separate the factors of the
conditions of the experiment. We can use these 'tags' to select for example
all left trials (both visual left and auditory left) ...
End of explanation
evoked_left.plot()
evoked_right.plot()
Explanation: Finally, let's plot the evoked responses.
End of explanation |
9,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
Step6: Let's run it
Step7: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy. | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
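A tiny illustration (sketch) of the broadcasting trick used below: with num_labels = 4, a label of 2 becomes
(np.arange(4) == 2).astype(np.float32)   # -> array([0., 0., 1., 0.], dtype=float32)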
End of explanation
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
  # matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run this computation and iterate:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run it:
End of explanation
batch_size = 100
num_hiddens = 200
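# note: the problem statement above asks for 1024 hidden units; 200 is used here
# instead, presumably to keep training fast -- accuracy should still improve over
# plain logistic regression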
graph = tf.Graph()
with graph.as_default():
#input
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size,image_size*image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size,num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
    tf_valid_labels = tf.constant(valid_labels)  # not used below; kept for symmetry only
    tf_test_labels = tf.constant(test_labels)  # not used below; kept for symmetry only
#variables
weights1 = tf.Variable(tf.truncated_normal([image_size*image_size,num_hiddens]))
biases1 = tf.Variable(tf.zeros([num_hiddens]))
weights2 = tf.Variable(tf.truncated_normal([num_hiddens, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
#training computation
hiddens1_input = tf.matmul(tf_train_dataset,weights1)+biases1
hiddens1_output = tf.nn.relu(hiddens1_input)
logits = tf.matmul(hiddens1_output,weights2)+biases2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
#optimizer
optimizer = tf.train.GradientDescentOptimizer(0.3).minimize(loss)
#predictions
tf_train_prediction = tf.nn.softmax(logits)
tf_valid_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset,weights1)+biases1),weights2)+biases2)
tf_test_prediction = tf.nn.softmax(tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset,weights1)+biases1),weights2)+biases2)
# training
num_steps = 4000
with tf.Session(graph=graph) as sess:
# initilze variables
init_graph = tf.initialize_all_variables()
sess.run(init_graph)
print("Initialized!")
#training iterations
for step in range(num_steps):
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = sess.run([optimizer, loss, tf_train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(tf_valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(tf_test_prediction.eval(), test_labels))
print("----------------------------------------")
Explanation: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.
End of explanation |
9,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step4: You should expect to see a slightly better performance than with k = 1.
Test one_loop code
Step6: Test no_loop code
Step7: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
test python code for creating fold lists | Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cifar-10-batches-py'
X_train_im, y_train, X_test_im, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train_im.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test_im.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train_im[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# plt.subplot(1,2,1)
# plt.imshow(X_train_im[index_max10])
# plt.subplot(1,2,2)
# plt.imshow(X_test_im[10])
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train_im[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test_im[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
print X_train.shape, X_test.shape
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
index_min10 = np.argmin(dists, axis=1)[10]
index_max10 = np.argmax(dists, axis=1)[10]
print index_min10 # index of min dist for 10th test example
print index_max10
test = np.argsort(dists, axis=1)[10,:3]
print test.shape
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: a bright row corresponds to a test image that is far from every training image (for example one with an unusual background or contrast), while a bright column corresponds to a training image that is far from every test image.
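A quick way to confirm this (a sketch using the dists matrix computed above):
worst_test_idx = np.argmax(dists.mean(axis=1))   # test image behind the brightest row
worst_train_idx = np.argmax(dists.mean(axis=0))  # training image behind the brightest column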
End of explanation
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:
End of explanation
print np.square(X_train - X_test[0,:]).shape
print np.sqrt(np.sum(np.square(X_train - X_test[0,:]), axis = 1)).shape
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
print dists_one
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the sum of squared differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
Explanation: You should expect to see a slightly better performance than with k = 1.
Test one_loop code
End of explanation
# np.sum(X*X, axis=1).reshape((num_test,1)).dot(np.ones((1,num_test))) + np.ones((num_train, 1)).dot(np.transpose(np.sum(self.X_train*self.X_train, axis=1).reshape((num_train,1)))) - 2*X*np.transpose(X_train)
num_features = X_train.shape[1]
num_train = X_train.shape[0]
num_test = X_test.shape[0]
print num_features
print np.sum(X_test*X_test, axis=1).reshape((num_test,1)).dot(np.ones((1,num_train))).shape
print np.ones((num_test, 1)).dot(np.transpose(np.sum(X_train*X_train, axis=1).reshape((num_train,1)))).shape
print 2*X_test.dot(np.transpose(X_train)).shape
print np.sqrt(np.sum(X_test*X_test, axis=1).reshape((num_test,1)).dot(np.ones((1,num_train))) + np.ones((num_test, 1)).dot(np.transpose(np.sum(X_train*X_train, axis=1).reshape((num_train,1)))) - 2*X_test.dot(np.transpose(X_train)))
print dists
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
Explanation: Test no_loop code
End of explanation
num_folds = 5
folds = np.arange(num_folds)
print folds
for count in folds:
# select fold
a = folds[np.arange(len(folds))!=count]
print a
print folds[count]
num_folds = 5
folds = np.arange(num_folds)
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
# print len(X_train_folds), X_train_folds[0].shape
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
classifier1 = KNearestNeighbor() #re-init classifier to make sure it's there
for k in k_choices:
    print 'testing k = %i' % k
# list of accuracies across all folds for this k
accuracies = []
for fold in folds:
# select fold
a = folds[np.arange(len(folds))!=fold]
        # concatenate the training folds; calling train() once per fold would only
        # keep the last fold, because the classifier simply stores the data it is given
        X_cv_train = np.concatenate([X_train_folds[item] for item in a])
        y_cv_train = np.concatenate([y_train_folds[item] for item in a])
        classifier1.train(X_cv_train, y_cv_train)
dists_two = classifier1.compute_distances_no_loops(X_train_folds[fold])
y_test_pred = classifier1.predict_labels(dists_two, k=k)
num_correct = np.sum(y_test_pred == y_train_folds[fold])
accuracies.append(float(num_correct) / len(y_train_folds[fold]))
        print 'Got %d / %d correct => accuracy: %f for k: %i, fold: %i' % (num_correct, len(y_train_folds[fold]), accuracies[fold], k, fold)
k_to_accuracies[k] = accuracies
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
print 'average accuracy for k %i: %f'% (k,np.mean(k_to_accuracies[k]))
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 7
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
test python code for creating fold lists
End of explanation |
9,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook works out the expected hillslope sediment flux, topography, and soil thickness for steady state on a 4x7 grid. This provides "ground truth" values for tests.
Let the hillslope erosion rate be $E$, the flux coefficient $D$, critical gradient $S_c$, and slope gradient $S$. The regolith thickness is $H$, with bare-bedrock production rate $P_0$ and depth-decay $H_*$. Finally, we set the transport decay scale the same as the production depth-decay scale. Then we have the hillslope flux as a function of distance from ridgetop, $x$, as
$q_s = E x = \left( DS + \frac{D}{S_c^2} S^3 \right) \left(1 - e^{ -H/H_*} \right)$
Parameter values
Step1: With that, calculate the expected equilibrium $H$
Step2: Double check
Step3: Yes, good.
Now, our geometry consists of a hillslope discretized into seven nodes. The two on either end are zero-elevation fixed boundaries, so we have to find the elevations of the five interior ones. But the hillslope should be symmetrical, so we really only have to find 1, 2, and 3 as in
0 --- 1 --- 2 --- 3 --- etc.
where node 3 is the top of the hill.
The slope between nodes 1 and 0 must be positive (uphill to right). It must be just steep enough to carry all the sediment from its own cell plus the sediment from node 2's cell, plus half the sediment from node 3's cell. We'll assume all cells have width $dx = 10 m$. Therefore, we have to transport sediment produced in strip 25 m x 1 m, or 25 m2. Our expected flux is then
Step4: In fact, for each interface between cells, the slope at that interface is given by the following polynomial
Step5: Now, let's calculate the coefficients
Step6: Now let's find the roots of this cubic polynomial
Step7: There's just one real root here
Step8: Great! That's extremely close. Let's try with the slope between nodes 1 and 2. The only difference here is that the flux $qs$ now derives from just $15 m^2$, so $qs = 0.0015
Step9: Once again, let's test
Step10: Finally, the slope between 2 and 3, which needs to carry half a cell's worth of sediment, or $qs = 0.0005$
Step11: And check this
Step12: Fabulous. Now to find the predicted elevations
Step13: So, at equilibrium, our model should create a symmetrical hill with a peak elevation a little under 8 m and a soil thickness of 0.347 m.
What time step size would be reasonable? Start by defining an "effective D" parameter, which is the linearized coefficient in front of the cubic term
Step14: Now, maximum time step size should be $\Delta x^2 / 2 D_{eff}$
Step15: There's also a constraint for the weathering piece. The characteristic time scale is $T = H_* / P_0$, which in this case is
Step16: So, this calculation suggests that weathering is the limiting factor on time-step size. We might choose 250 years for a reasonably smooth solution.
The time it would take for baselevel fall to bring the crest of the hill up to its ten times its equilibrium elevation of 8 m
Step17: So let's say we run for 800,000 years at 250 year time steps | Python Code:
D = 0.01
Sc = 0.8
Hstar = 0.5
E = 0.0001
P0 = 0.0002
Explanation: This notebook works out the expected hillslope sediment flux, topography, and soil thickness for steady state on a 4x7 grid. This provides "ground truth" values for tests.
Let the hillslope erosion rate be $E$, the flux coefficient $D$, critical gradient $S_c$, and slope gradient $S$. The regolith thickness is $H$, with bare-bedrock production rate $P_0$ and depth-decay $H_*$. Finally, we set the transport decay scale the same as the production depth-decay scale. Then we have the hillslope flux as a function of distance from ridgetop, $x$, as
$q_s = E x = \left( DS + \frac{D}{S_c^2} S^3 \right) \left(1 - e^{ -H/H_*} \right)$
Parameter values: let $D = 0.01 m^2 y^{-1}$, $S_c = 0.8$, $H_* = 0.5 m$, $P_0 = 0.0002$, and $E = 0.0001 m y^{-1}$:
End of explanation
import math
H = -Hstar * math.log(E / P0)
H
Explanation: With that, calculate the expected equilibrium $H$:
$E = P_0 e^{-H/H_*}$
$H = -H_* \ln (E/P_0)$
Plugging in the numbers:
End of explanation
P0 * math.exp(-H / Hstar)
Explanation: Double check: if we plug this $H$ back in, do we recover $E$?
End of explanation
qs = 25 * E
qs
Explanation: Yes, good.
Now, our geometry consists of a hillslope discretized into seven nodes. The two on either end are zero-elevation fixed boundaries, so we have to find the elevations of the five interior ones. But the hillslope should be symmetrical, so we really only have to find 1, 2, and 3 as in
0 --- 1 --- 2 --- 3 --- etc.
where node 3 is the top of the hill.
The slope between nodes 1 and 0 must be positive (uphill to right). It must be just steep enough to carry all the sediment from its own cell plus the sediment from node 2's cell, plus half the sediment from node 3's cell. We'll assume all cells have width $dx = 10 m$. Therefore, we have to transport sediment produced in strip 25 m x 1 m, or 25 m2. Our expected flux is then:
End of explanation
f = 1.0 - math.exp(-H / Hstar)
f
Explanation: In fact, for each interface between cells, the slope at that interface is given by the following polynomial:
$f\frac{D}{S_c^2} S^3 + 0 S^2 + fDS - qs = 0$
Here the $f$ is shorthand for $1 - \exp (-H/H_*)$. I've included the zero in front of the $S^2$ term just to make it explicit.
So, for the slope between nodes 0 and 1, we need first to define our polynomial coefficients, $p$. Then we'll invoke numpy's roots function to solve for $S$. To be consistent with roots usage, we'll call the coefficient of the highest (cubic) term $p_0$, the next highest (square) $p_1$, etc. So:
$p_0 S^3 + p_1 S^2 + p_2 S + p_3 = 0$
Clearly, we'll need $f$, so let's calculate that first:
End of explanation
import numpy as np
p = np.zeros(4)
p[0] = (f * D) / (Sc ** 2)
p[1] = 0.0
p[2] = f * D
p[3] = -qs
p
Explanation: Now, let's calculate the coefficients:
$p_0 = f D / S_c^2$
$p_1 = 0$
$p_2 = f D$
$p_3 = -q_s$
Clearly, only $p_3$ will vary from node to node. Here are the numbers:
End of explanation
my_roots = np.roots(p)
my_roots
Explanation: Now let's find the roots of this cubic polynomial:
End of explanation
Spred = 0.4
qspred = (D * Spred + (D / (Sc * Sc)) * (Spred ** 3)) * (1.0 - np.exp(-H / Hstar))
qspred
Explanation: There's just one real root here: $S \approx 0.4$. Let's plug that back in and see if we recapture the correct $qs$:
End of explanation
p[3] = -0.0015
my_roots = np.roots(p)
my_roots
Explanation: Great! That's extremely close. Let's try with the slope between nodes 1 and 2. The only difference here is that the flux $qs$ now derives from just $15 m^2$, so $qs = 0.0015:
End of explanation
Spred = 0.269437
qspred = (D * Spred + (D / (Sc * Sc)) * (Spred ** 3)) * (1.0 - np.exp(-H / Hstar))
qspred
Explanation: Once again, let's test:
End of explanation
p[3] = -0.0005
my_roots = np.roots(p)
my_roots
Explanation: Finally, the slope between 2 and 3, which needs to carry half a cell's worth of sediment, or $qs = 0.0005$:
End of explanation
Spred = 0.0985
qspred = (D * Spred + (D / (Sc * Sc)) * (Spred ** 3)) * (1.0 - np.exp(-H / Hstar))
qspred
Explanation: And check this:
End of explanation
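As a consolidation (a sketch only, reusing the f, D, and Sc values defined above), the root-finding used for each interface can be wrapped in a small helper:
def equilibrium_slope(qs, D=D, Sc=Sc, f=f):
    p = [f * D / Sc**2, 0.0, f * D, -qs]
    # keep the single positive real root of the cubic
    return [r.real for r in np.roots(p) if abs(r.imag) < 1e-12 and r.real > 0][0]

[equilibrium_slope(q) for q in (0.0025, 0.0015, 0.0005)]  # ~0.4, ~0.269, ~0.0985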
elev = np.zeros(7)
elev[1] = 0.4 * 10.0
elev[5] = elev[1]
elev[2] = elev[1] + 0.269437 * 10.0
elev[4] = elev[2]
elev[3] = elev[2] + 0.0985 * 10.0
elev
Explanation: Fabulous. Now to find the predicted elevations: just add up slope x distance for each node, going inward from the boundaries:
End of explanation
S = 0.4
Deff = D * ((S / Sc) ** 2)
Deff
Explanation: So, at equilibrium, our model should create a symmetrical hill with a peak elevation a little under 8 m and a soil thickness of 0.347 m.
What time step size would be reasonable? Start by defining an "effective D" parameter, the coefficient obtained by linearizing the cubic transport term about a slope $S$:
$D_{eff} = D (S / S_c)^2$
Then take the steepest steady state slope:
End of explanation
10.0*10.0/(2.0*Deff)
Explanation: Now, maximum time step size should be $\Delta x^2 / 2 D_{eff}$:
End of explanation
Hstar / P0
Explanation: There's also a constraint for the weathering piece. The characteristic time scale is $T = H_* / P_0$, which in this case is:
End of explanation
80.0 / E
Explanation: So, this calculation suggests that weathering is the limiting factor on time-step size. We might choose 250 years for a reasonably smooth solution.
The time it would take for baselevel fall to bring the crest of the hill up to ten times its equilibrium elevation of 8 m:
End of explanation
8.0e5/250.
Explanation: So let's say we run for 800,000 years at 250 year time steps:
End of explanation |
9,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multimodal multivariate Gaussian
We can create a multimodal multivariate gaussian using MultimodalGaussianLogPDF. By default, this has the distribution
$$ p(\boldsymbol{x}) \propto \mathcal{N}\left(\boldsymbol{x}\;\lvert\;\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1\right) + \mathcal{N}\left(\boldsymbol{x}\;\lvert\;\boldsymbol{\mu}_2, \boldsymbol{\Sigma}_2\right),$$
where, $\boldsymbol{\mu}_1 = (0,0)$ and $\boldsymbol{\mu}_2 = (10,10)$ and $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ are diagonal correlation matrices.
Plotting this pdf
Step1: Use adaptive covariance MCMC to sample from this (un-normalised) pdf.
Step2: Scatter plot of the samples. Adaptive covariance MCMC does ok if we start the chains between the modes.
Step3: And KL divergence by mode is good here.
Step4: But if we start the chains at one of the modes, it can fail to find the other
Step5: Now the KL divergence is for one mode is good but for the other it is terrible.
Step6: Other mode configurations
This distribution can also be used to generate modes in other configurations and other dimensions. For example, we can generate three modes in two dimensions then independently sample from them.
Step7: Or specify different covariance matrices for each mode.
Step8: Or make a multimodal distribution in one dimension.
Step9: Or a multimodal distribution in 5 dimensions (plotting the first two dimensions only).
Step10: Now use Hamiltonian Monte Carlo to sample from this distribution.
Step11: Plot samples in two dimensions from a single chain. Hamiltonian Monte Carlo gets stuck on a mode and remains there! | Python Code:
import pints
import pints.toy
import numpy as np
import matplotlib.pyplot as plt
# Create log pdf
log_pdf = pints.toy.MultimodalGaussianLogPDF()
# Contour plot of pdf
levels = np.linspace(-3,12,20)
num_points = 100
x = np.linspace(-5, 15, num_points)
y = np.linspace(-5, 15, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Multimodal multivariate Gaussian
We can create a multimodal multivariate gaussian using MultimodalGaussianLogPDF. By default, this has the distribution
$$ p(\boldsymbol{x}) \propto \mathcal{N}\left(\boldsymbol{x}\;\lvert\;\boldsymbol{\mu}_1, \boldsymbol{\Sigma}_1\right) + \mathcal{N}\left(\boldsymbol{x}\;\lvert\;\boldsymbol{\mu}_2, \boldsymbol{\Sigma}_2\right),$$
where, $\boldsymbol{\mu}_1 = (0,0)$ and $\boldsymbol{\mu}_2 = (10,10)$ and $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ are diagonal correlation matrices.
Plotting this pdf:
End of explanation
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([2, 2], [8, 8], size=(4, 2))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(4000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
Explanation: Use adaptive covariance MCMC to sample from this (un-normalised) pdf.
End of explanation
stacked = np.vstack(chains)
plt.contour(X, Y, Z, colors='k', alpha=0.5)
plt.scatter(stacked[:,0], stacked[:,1], marker='.', alpha=0.2)
plt.xlim(-5, 15)
plt.ylim(-5, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Scatter plot of the samples. Adaptive covariance MCMC does ok if we start the chains between the modes.
End of explanation
print("KL divergence by mode: " + str(log_pdf.kl_divergence(stacked)))
Explanation: And KL divergence by mode is good here.
End of explanation
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([-0.1, -0.1], [0.1, 0.1], size=(4, 2))
mcmc = pints.MCMCController(log_pdf, 4, x0, method=pints.HaarioBardenetACMC)
# Set maximum number of iterations
mcmc.set_max_iterations(4000)
# Disable logging
mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[1000:] for chain in chains]
stacked = np.vstack(chains)
plt.contour(X, Y, Z, colors ='k', alpha=0.5)
plt.scatter(stacked[:,0], stacked[:,1], marker='.', alpha = 0.2)
plt.xlim(-5, 15)
plt.ylim(-5, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: But if we start the chains at one of the modes, it can fail to find the other:
End of explanation
print("KL divergence by mode: " + str(log_pdf.kl_divergence(stacked)))
Explanation: Now the KL divergence for one mode is good but for the other it is terrible.
End of explanation
log_pdf = pints.toy.MultimodalGaussianLogPDF(modes=[[0, 0], [5, 10], [10, 0]])
samples = log_pdf.sample(100)
# Contour plot of pdf
levels = np.linspace(-3,12,20)
num_points = 100
x = np.linspace(-5, 15, num_points)
y = np.linspace(-5, 15, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.scatter(samples[:,0], samples[:,1])
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Other mode configurations
This distribution can also be used to generate modes in other configurations and other dimensions. For example, we can generate three modes in two dimensions then independently sample from them.
End of explanation
covariances = [[[1, 0], [0, 1]],
[[2, 0.8], [0.8, 3]],
[[1, -0.5], [-0.5, 1]]]
log_pdf = pints.toy.MultimodalGaussianLogPDF(modes=[[0, 0], [5, 10], [10, 0]], covariances=covariances)
samples = log_pdf.sample(100)
# Contour plot of pdf
num_points = 100
x = np.linspace(-5, 15, num_points)
y = np.linspace(-5, 15, num_points)
X, Y = np.meshgrid(x, y)
Z = np.zeros(X.shape)
Z = np.exp([[log_pdf([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.scatter(samples[:,0], samples[:,1])
plt.xlim(-5, 15)
plt.ylim(-5, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Or specify different covariance matrices for each mode.
End of explanation
log_pdf = pints.toy.MultimodalGaussianLogPDF(modes=[[0], [5], [10]])
x = np.linspace(-5, 15, num_points)
prob = np.exp([log_pdf([i]) for i in x])
samples = log_pdf.sample(1000)
plt.plot(x, prob / 3)  # divide by the number of modes so the curve overlays the density histogram
plt.hist(samples, 40, density=True)
plt.xlabel('x')
plt.show()
Explanation: Or make a multimodal distribution in one dimension.
End of explanation
log_pdf = pints.toy.MultimodalGaussianLogPDF(modes=[np.repeat(0, 5), np.repeat(5, 5), np.repeat(10, 5)])
samples = log_pdf.sample(100)
plt.scatter(samples[:, 0], samples[:, 1])
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Or a multimodal distribution in 5 dimensions (plotting the first two dimensions only).
End of explanation
# Create an adaptive covariance MCMC routine
x0 = np.random.uniform([2, 2, 2, 2, 2], [8, 8, 8, 8, 8], size=(4, 5))
sigma0 = [1, 1, 1, 1, 1]
mcmc = pints.MCMCSampling(log_pdf, 4, x0, method=pints.HamiltonianMCMC, sigma0=sigma0)
# Set maximum number of iterations
mcmc.set_max_iterations(500)
# Disable logging
# mcmc.set_log_to_screen(False)
# Number of chains
num_chains = 4
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Discard warm-up
chains = [chain[250:] for chain in chains]
Explanation: Now use Hamiltonian Monte Carlo to sample from this distribution.
End of explanation
plt.plot(chains[0][:, 0], chains[0][:, 1])
plt.xlim(-5, 15)
plt.ylim(-5, 15)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Explanation: Plot samples in two dimensions from a single chain. Hamiltonian Monte Carlo gets stuck on a mode and remains there!
End of explanation |
9,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Construct data and experiment directories from environment variables
Step1: Specify main run parameters
Step2: Load data and normalise inputs
Step3: Specify prior parameters (data dependent so do after data load)
Step4: Assemble run parameters into dictionary for recording with results
Step5: Create necessary run objects
Step6: Run chains, starting from random sample from prior in each and saving results to experiments directory | Python Code:
data_dir = os.path.join(os.environ['DATA_DIR'], 'uci')
exp_dir = os.path.join(os.environ['EXP_DIR'], 'apm_mcmc')
Explanation: Construct data and experiment directories from environment variables
End of explanation
data_set = 'pima'
method = 'apm(mi+mh)'
n_chain = 10
chain_offset = 0
seeds = np.random.random_integers(10000, size=n_chain)
n_imp_sample = 1
adapt_run = dict(
low_acc_thr = 0.15,
upp_acc_thr = 0.30,
batch_size = 100,
n_batch = 20
)
init_log_sigma_prop_scale = 0.5
init_log_tau_prop_scale = 0.5
n_sample_main = 10000
epsilon = 1e-8
Explanation: Specify main run parameters
End of explanation
X = np.genfromtxt(os.path.join(data_dir, data_set + '_X.txt'))
y = np.genfromtxt(os.path.join(data_dir, data_set + '_y.txt'))
X, X_mn, X_sd = utils.normalise_inputs(X)
Explanation: Load data and normalise inputs
End of explanation
prior = dict(
a_tau = 1.,
b_tau = 1. / X.shape[1]**0.5,
a_sigma = 1.1,
b_sigma = 0.1
)
Explanation: Specify prior parameters (data dependent so do after data load)
End of explanation
run_params = dict(
data_set = data_set,
n_data = X.shape[0],
n_feature = X.shape[1],
method = method,
n_imp_sample = n_imp_sample,
epsilon = epsilon,
prior = prior,
adapt_run = adapt_run,
init_log_sigma_prop_scale = init_log_sigma_prop_scale,
init_log_tau_prop_scale = init_log_tau_prop_scale,
n_sample_main = n_sample_main
)
Explanation: Assemble run parameters into dictionary for recording with results
End of explanation
prng = np.random.RandomState()
kernel_func = lambda K, X, theta: (
krn.isotropic_squared_exponential_kernel(K, X, theta, epsilon)
)
ml_estimator = est.LogMarginalLikelihoodApproxPosteriorISEstimator(
X, y, kernel_func, lpa.laplace_approximation)
def log_f_estimator(u, theta=None, cached_res=None):
log_marg_lik_est, new_cached_res = ml_estimator(u, theta, cached_res)
log_prior = (
utils.log_gamma_log_pdf(theta[0], prior['a_sigma'], prior['b_sigma']) +
utils.log_gamma_log_pdf(theta[1], prior['a_tau'], prior['b_tau'])
)
return log_marg_lik_est + log_prior, new_cached_res
prop_sampler = lambda theta, prop_scales: np.r_[
theta[0] + prop_scales[0] * prng.normal(),
theta[1] + prop_scales[1] * prng.normal()
]
log_prop_density = lambda theta_prop, theta_curr, prop_scales: (
-0.5 * (
((theta_prop[0] - theta_curr[0]) / prop_scales[0])**2 +
((theta_prop[1] - theta_curr[1]) / prop_scales[1])**2
)
)
init_prop_scales = np.array([
init_log_sigma_prop_scale,
init_log_tau_prop_scale
])
sampler = smp.APMMetIndPlusMHSampler(
log_f_estimator, log_prop_density, prop_sampler, init_prop_scales,
lambda: prng.normal(size=(y.shape[0], n_imp_sample)), prng)
Explanation: Create necessary run objects
End of explanation
for c in range(n_chain):
try:
print('Starting chain {0}...'.format(c + 1))
prng.seed(seeds[c])
theta_init = np.array([
np.log(prng.gamma(prior['a_sigma'], 1. / prior['b_sigma'])),
np.log(prng.gamma(prior['a_tau'], 1. / prior['b_tau'])),
])
sampler.prop_scales = init_prop_scales
print('Starting initial adaptive run...')
adapt_thetas, adapt_prop_scales, adapt_accept_rates = (
sampler.adaptive_run(
theta_init, adapt_run['batch_size'],
adapt_run['n_batch'], adapt_run['low_acc_thr'],
adapt_run['upp_acc_thr'], utils.adapt_factor_func, True
)
)
print('Final proposal scales: {0}'.format(adapt_prop_scales[-1]))
print('Starting main run...')
ml_estimator.reset_cubic_op_count()
start_time = time.clock()
thetas, n_reject = sampler.get_samples(adapt_thetas[-1], n_sample_main)
comp_time = time.clock() - start_time
n_cubic_ops = ml_estimator.n_cubic_ops
tag = '{0}_{1}_chain_{2}'.format(data_set, method, c + 1 + chain_offset)
print('Main run completed: accept rate (u|theta) {0:.1f}%, '
'accept rate (theta|u) {1:.1f}%, time {2}s, # cubic ops {3}'
.format((1. - n_reject[0] * 1./ n_sample_main) * 100.,
(1. - n_reject[1] * 1./ n_sample_main) * 100.,
comp_time, n_cubic_ops))
utils.save_adaptive_run(exp_dir, tag, adapt_thetas, adapt_prop_scales,
adapt_accept_rates, thetas, n_reject,
n_cubic_ops, comp_time, run_params)
utils.plot_trace(thetas)
plt.show()
except Exception as e:
print('Exception encountered')
print(e.message)
print(traceback.format_exc())
print('Skipping to next chain')
continue
Explanation: Run chains, starting from random sample from prior in each and saving results to experiments directory
End of explanation |
9,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
When analyzing data, I usually use the following three modules. I use pandas for data management, filtering, grouping, and processing. I use numpy for basic array math. I use toyplot for rendering the charts.
Step1: Load in the "auto" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https
Step2: For this analysis I am going to group data by the car maker. The make is not directly stored in the data, but all the names start with the make, so extract the first word in that column.
Step3: The data has some inconsistencies with the make strings (misspellings or alternate spellings). Do some simple fixes.
Step4: Use toyplot to plot the measurements of horsepower vs weight. We should expect a general trend to higher horsepower to weight with some outliers (such as for sports cars).
We are using this to demonstrate coloring. First, do a simple coloring by country origin with the default color map. This should be reasonable colors.
Step5: Repeate the plot colored by the make. This is a crazy amount of colors. Also choose a bad color palette.
To color by make, we actually need to convert the strings to numbers that toyplot can look up in a linear map. Create that map and make a column of make indices.
Step6: I am also going to demonstrate a bad set of colors. Toyplot actually cares about good colors, so I have to jump through a few hoops to load up a bad color map. | Python Code:
import pandas
import numpy
import toyplot
import toyplot.pdf
import toyplot.png
import toyplot.svg
print('Pandas version: ', pandas.__version__)
print('Numpy version: ', numpy.__version__)
print('Toyplot version: ', toyplot.__version__)
Explanation: When analyzing data, I usually use the following three modules. I use pandas for data management, filtering, grouping, and processing. I use numpy for basic array math. I use toyplot for rendering the charts.
End of explanation
column_names = ['MPG',
'Cylinders',
'Displacement',
'Horsepower',
'Weight',
'Acceleration',
'Model Year',
'Origin',
'Car Name']
data = pandas.read_table('auto-mpg.data',
delim_whitespace=True,
names=column_names,
index_col=False)
Explanation: Load in the "auto" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https://archive.ics.uci.edu/ml/datasets/Auto+MPG.
The data are stored in a text file containing columns of data. We use the pandas.read_table() method to parse the data and load it in a pandas DataFrame. The file does not contain a header row, so we need to specify the names of the columns manually.
End of explanation
data['Make'] = data['Car Name'].str.split().str.get(0)
Explanation: For this analysis I am going to group data by the car maker. The make is not directly stored in the data, but all the names start with the make, so extract the first word in that column.
End of explanation
data.ix[data['Make'] == 'chevroelt', 'Make'] = 'chevrolet'
data.ix[data['Make'] == 'chevy', 'Make'] = 'chevrolet'
data.ix[data['Make'] == 'maxda', 'Make'] = 'mazda'
data.ix[data['Make'] == 'mercedes-benz', 'Make'] = 'mercedes'
data.ix[data['Make'] == 'vokswagen', 'Make'] = 'volkswagen'
data.ix[data['Make'] == 'vw', 'Make'] = 'volkswagen'
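# note: DataFrame.ix has since been removed from pandas; with a current version the
# equivalent call is data.loc[data['Make'] == 'vw', 'Make'] = 'volkswagen'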
Explanation: The data has some inconsistencies with the make strings (misspellings or alternate spellings). Do some simple fixes.
End of explanation
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-3,3,-44),
xlabel = 'Weight',
ylabel = 'Horsepower')
colormap = toyplot.color.CategoricalMap()
# Note that this data has some invalid measurements for Horsepower. Thus, we need
# to filter those rows out. That is what the [data['Horsepower'] != '?'] is for
axes.scatterplot(data['Weight'][data['Horsepower'] != '?'],
data['Horsepower'][data['Horsepower'] != '?'],
color=(numpy.array(data['Origin'][data['Horsepower'] != '?'])-1,colormap))
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
# Add some labels
axes.text(4700, 125, 'USA')
axes.text(2100, 28, 'Europe')
axes.text(2820, 145, 'Japan')
toyplot.pdf.render(canvas, 'Colors.pdf')
toyplot.svg.render(canvas, 'Colors.svg')
toyplot.png.render(canvas, 'Colors.png', scale=5)
Explanation: Use toyplot to plot the measurements of horsepower vs weight. We should expect a general trend of horsepower increasing with weight, with some outliers (such as sports cars).
We are using this to demonstrate coloring. First, do a simple coloring by country of origin with the default color map, which should give reasonable colors.
End of explanation
unique_makes = data['Make'].unique()
make_index_map = pandas.Series(index=unique_makes,
data=xrange(0, len(unique_makes)))
data['Make Index'] = numpy.array(make_index_map[data['Make']])
Explanation: Repeat the plot colored by make. This is a crazy number of colors. Also choose a bad color palette.
To color by make, we actually need to convert the strings to numbers that toyplot can look up in a linear map. Create that map and make a column of make indices.
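An equivalent shortcut (a sketch, not used below): pandas can produce the integer codes directly.
make_codes, make_levels = pandas.factorize(data['Make'])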
End of explanation
bad_color_palette = toyplot.color.Palette(
['#FF0000', '#FFFF00', '#00FF00',
'#00FFFF', '#0000FF'])
bad_colormap = toyplot.color.LinearMap(bad_color_palette)
canvas = toyplot.Canvas('4in', '2.6in')
axes = canvas.cartesian(bounds=(41,-3,3,-44),
xlabel = 'Weight (lb)',
ylabel = 'Horsepower')
# Note that this data has some invalid measurements for Horsepower. Thus, we need
# to filter those rows out. That is what the [data['Horsepower'] != '?'] is for
axes.scatterplot(data['Weight'][data['Horsepower'] != '?'],
data['Horsepower'][data['Horsepower'] != '?'],
color=(numpy.array(data['Make Index'][data['Horsepower'] != '?'])-1,
bad_colormap))
# It's usually best to make the y-axis 0-based.
axes.y.domain.min = 0
toyplot.pdf.render(canvas, 'Colors_Bad.pdf')
toyplot.svg.render(canvas, 'Colors_Bad.svg')
toyplot.png.render(canvas, 'Colors_Bad.png', scale=5)
Explanation: I am also going to demonstrate a bad set of colors. Toyplot actually cares about good colors, so I have to jump through a few hoops to load up a bad color map.
End of explanation |
9,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: List Superfund sites
The individual sites are instances of SuperfundSite, and are listed in the corresponding Graph Browser page.
To programmatically list all Superfund sites, use the get_places_in API as follows
Step2: In place of USA, you can specify any US state or county. You can use place search to find the corresponding DCID, as illustrated here.
Get statistics
Superfund sites have associated statistical variables like CrsiScore_SuperfundSite, NaturalHazardRiskScore_SuperfundSite, etc. To see the list of all variables, you can visit a Superfund site's Graph Browser page (example).
To get stats for all variables for all sites, use the build_multivariate_dataframe API, as follows
Step3: Additional statistics for Tar Creek site
Tar Creek is one of the largest Superfund sites, and only for that site we have additional stats on contaminants in the ground water from sampling wells. These stats are attached to instances of SuperfundMeasurementSite contained in Tar Creek.
The measurement sites are listed on the Tar Creek Graph Browser page, and a measurement site's Graph Browser page (example) lists all available statistical variables.
To get these stats, list the measurement sites within Tar Creek, and then provide all associated variables, as follows
Step4: Get non-statistical attributes
The Superfund sites have non-statistical properties like latitude/longitude, ownership status, EPA region code, and so on. These are listed in each site's Graph Browser page (example).
To get these values, use the get_property_labels and get_property_values APIs, and append to the existing dataframe (site_df), as follows | Python Code:
!pip install datacommons_pandas datacommons --upgrade --quiet
# Import Data Commons libraries
import datacommons as dc
import datacommons_pandas as dcpd
Explanation: <a href="https://colab.research.google.com/github/datacommonsorg/api-python/blob/master/notebooks/Accessing_Superfund_data_from_Data_Commons.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2022 Google LLC.
SPDX-License-Identifier: Apache-2.0
Notebook Version - 1.0.0
Accessing Superfund data from Data Commons
Superfund sites are US locations contaminated with hazardous substances that the EPA seeks to investigate and clean up. Data Commons (DC) includes data about Superfund sites, and this notebook illustrates how that can be accessed using the DC python APIs with Pandas extension.
An extended version of this notebook that does some analysis with the extracted Superfund data can be found here.
Set-up
Import the DC python APIs, as follows:
End of explanation
# Gets all Superfund sites within USA
place_dcid = 'country/USA' # DCID of USA
site_list = dc.get_places_in([place_dcid], 'SuperfundSite')[place_dcid]
site_list[:5]
Explanation: List Superfund sites
The individual sites are instances of SuperfundSite, and are listed in the corresponding Graph Browser page.
To programmatically list all Superfund sites, use the get_places_in API as follows:
End of explanation
# Gets stats for the listed variables from all sites in a Pandas table
site_df = dcpd.build_multivariate_dataframe(site_list,
['CrsiScore_SuperfundSite',
'NaturalHazardExposureScore_SuperfundSite',
'NaturalHazardRiskScore_SuperfundSite',
'NaturalHazardRiskScore_SuperfundSite_CoastalFloodEvent',
'NaturalHazardRiskScore_SuperfundSite_DroughtEvent',
'NaturalHazardRiskScore_SuperfundSite_EarthquakeEvent',
'NaturalHazardRiskScore_SuperfundSite_ExcessiveHeatEvent',
'NaturalHazardRiskScore_SuperfundSite_ExtremeColdWindChillEvent',
'NaturalHazardRiskScore_SuperfundSite_FloodEvent',
'NaturalHazardRiskScore_SuperfundSite_HailEvent',
'NaturalHazardRiskScore_SuperfundSite_HighWindEvent',
'NaturalHazardRiskScore_SuperfundSite_HurricaneEvent',
'NaturalHazardRiskScore_SuperfundSite_LandslideEvent',
'NaturalHazardRiskScore_SuperfundSite_TornadoEvent',
'NaturalHazardRiskScore_SuperfundSite_WildfireEvent'])
site_df.head()
Explanation: In place of USA, you can specify any US state or county. You can use place search to find the corresponding DCID, as illustrated here.
Get statistics
Superfund sites have associated statistical variables like CrsiScore_Superfundsite, NaturalHazardRiskScore_Superfundsite, etc. To see the list of all variables, you can visit a Superfund site's Graph Browser page (example).
To get stats for all variables for all sites, use the build_multivariate_dataframe API, as follows:
End of explanation
# Gets all measurement sites contained in Tar Creek
tar_creek_site = 'epaSuperfundSiteId/OKD980629844' # DCID of Tar Creek
measurement_sites = dc.get_places_in([tar_creek_site], 'SuperfundMeasurementSite')[tar_creek_site]
# Gets stats for contaminant variables for said measurement sites
tar_creek_df = dcpd.build_multivariate_dataframe(
measurement_sites,
[
'Concentration_Cadmium_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Cadmium_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Iron_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Lead_BodyOfWater_GroundWater',
'Concentration_DissolvedContaminant_Zinc_BodyOfWater_GroundWater',
'Concentration_Iron_BodyOfWater_GroundWater',
'Concentration_Lead_BodyOfWater_GroundWater',
'Concentration_Sulfate_BodyOfWater_GroundWater',
'DissolvedOxygen_BodyOfWater_GroundWater',
'Concentration_Zinc_BodyOfWater_GroundWater',
'PotentialOfHydrogen_BodyOfWater_GroundWater',
'ElectricalConductivity_BodyOfWater_GroundWater',
'Temperature_BodyOfWater_GroundWater',
'WaterHardness_BodyOfWater_GroundWater'
])
tar_creek_df.head()
Explanation: Additional statistics for Tar Creek site
Tar Creek is one of the largest Superfund sites, and only for that site we have additional stats on contaminants in the ground water from sampling wells. These stats are attached to instances of SuperfundMeasurementSite contained in Tar Creek.
The measurement sites are listed on the Tar Creek Graph Browser page, and a measurement site's Graph Browser page (example) lists all available statistical variables.
To get these stats, list the measurement sites within Tar Creek, and then provide all associated variables, as follows:
End of explanation
# Lists properties for a sample site
site_props = dc.get_property_labels([tar_creek_site], out=True)[tar_creek_site]
for prop in site_props:
# Gets values for a given property. pvs is dict from site-id -> list of values
pvs = dc.get_property_values(site_list, prop)
# Turns the list of values into a comma-separated a single-value
pvs = {p: ', '.join(v) for p, v in pvs.items()}
# Extends the dataframe
site_df[prop] = site_df.index.map(pvs)
site_df.head()
Explanation: Get non-statistical attributes
The Superfund sites have non-statistical properties like latitude/longitude, ownership status, EPA region code, and so on. These are listed in each site's Graph Browser page (example).
To get these values, use the get_property_labels and get_property_values APIs, and append to the existing dataframe (site_df), as follows:
End of explanation |
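# Not part of the original notebook: one optional way to keep the assembled table
# around for the follow-up analysis mentioned above. The file name is an assumption.
site_df.to_csv('superfund_sites_with_properties.csv')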
9,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradio and HuggingFace
In this demo, we show how to build ready to deploy or use deep learning models.
Hugging Face hosts thousands of pre-trained models in Model Hub. They also built high-level APIs so we can easily use and deploy pre-trained models using Pipeline.
gradio provides APIs so we can easily build web applications that use our pre-trained models from Hugging Face. gradio also provides APIs so we can easily incorporate input and output web UIs.
After building the gradio, we can have permanent hosting using Hugging Face Spaces.
Let us first install Hugging Face transformers and gradio.
Note
Step1: Hello world in gradio
As a tradition, let us build the simplest gradio app. It accepts a text input and calls the greet() function to process this input and convert into another text. The output of greet() becomes the output of the gradio app.
To see our application, we call launch() after constructing our gradio Interface.
Step2: Object Recognition using ResNet18
In our discussion about PyTorch, we used a pre-trained ResNet18 model from torchvision. We use jupyter notebook to show the results. The jupyter notebook is not an application that we can deploy and other people use with ease. The same with Google's colab.
In this example, we use gradio to build a simple app that an end user can easily interact with. We reuse the code from our previous example.
Step3: Using HuggingFace and Gradio
Loading a pre-trained model from torchvision, pre-processing the input, and post processing the output are all messy. Sometimes, we just want to load and use a machine learning model. Hugging Face provides a shortcut for all these steps through the use of pipeline. In pipeline, we supply the task name and the pre-trained model that is stored in Hugging Face Model Hub.
In this example, we use a much better model compared to ResNet18. It is called BEIT and can classify objects up to about 22k categories. We construct the gradio app by calling from_pipeline().
Step4: Automatic Speech Recognition (ASR)
Let us shift to audio or speech domain. In this example, we demonstrate an Automatic Speech Recognition (ASR) system. We will use our microphone to record audio which is then converted to text using ASR. In this example, best to open the application in another browser tab by setting inbrowser=True.
Before running the gradio app, this ASR requires sentencepice module. Let us install it first.
Step5: In this ASR, we use Facebook S2T, a transformer-based speech to text model that is trained on librispeech dataset.
Step6: Text to Speech (TTS)
Let us do the reverse of ASR or Text to Speech (TTS). In this example, we supply text and this text is converted to speech using the voice of Linda Johnson. We use a pre-trained model of FastSpeech2 that is provided by Facebook in Model Hub.
In this example, we use load() method to load the pre-trained model.
Step7: Text Generation using GPT2
In this example, we use a large language model (LLM) by OpenAI to generate text. It is called GPT2. Text generation is one of the tasks where a language model can help us. Basically, we provide the initial text made of few words. Then, the model will continue it.
Step8: Using gr.Series() to automatically chain the I/O of multiple models
In this example, we use the previous text generator as input to our TTS. We can for example use to generate a podcast on a certain topic. In this case, we replace GPT2 with a much better model called GPT Neo. | Python Code:
!pip install transformers
!pip install gradio
Explanation: Gradio and HuggingFace
In this demo, we show how to build deep learning models that are ready to deploy and use.
Hugging Face hosts thousands of pre-trained models in Model Hub. They also built high-level APIs so we can easily use and deploy pre-trained models using Pipeline.
gradio provides APIs so we can easily build web applications that use our pre-trained models from Hugging Face. gradio also provides APIs so we can easily incorporate input and output web UIs.
After building the gradio, we can have permanent hosting using Hugging Face Spaces.
Let us first install Hugging Face transformers and gradio.
Note: For some examples, it is best to launch the app in another tab to enable access to the required inputs such as microphone or webcam. Running the app may also lock the python kernel and the notebook becomes unresponsive. In that case, please restart the kernel and clear the output.
End of explanation
import gradio as gr
def greet(name):
return "Hello " + name + "!!"
gr.Interface(fn=greet, inputs="text", outputs="text").launch()
Explanation: Hello world in gradio
As a tradition, let us build the simplest gradio app. It accepts a text input and calls the greet() function to process this input and convert into another text. The output of greet() becomes the output of the gradio app.
To see our application, we call launch() after constructing our gradio Interface.
End of explanation
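# Not from the original demo: a small hedged variation showing that `inputs` can be a
# list of components. The second text box and the greeting format are assumptions.
def greet_two(first_name, last_name):
    return "Hello " + first_name + " " + last_name + "!!"
gr.Interface(fn=greet_two, inputs=["text", "text"], outputs="text").launch()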
import gradio as gr
import torch
import torchvision
import torchvision.transforms as transforms
import requests
from einops import rearrange
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
resnet = torchvision.models.resnet18(pretrained=True)
resnet.eval()
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def classify(img):
# By default, gradio image is numpy
img = torch.from_numpy(img)
# Numpy image is channel last. PyTorch is channel 1st.
img = rearrange(img, 'h w c -> c h w')
# The transforms before prediction
img = torchvision.transforms.Resize(256)(img)
img = torchvision.transforms.CenterCrop(224)(img).float()/255.
img = normalize(img)
# We insert batch size of 1
img = rearrange(img, 'c h w -> 1 c h w')
# The actual prediction
with torch.no_grad():
pred = resnet(img)
# Convert the prediction to probabilities
pred = torch.nn.functional.softmax(pred, dim=1)
# Remove the batch dim. torch.squeeze() can also be used.
pred = rearrange(pred, "1 j->j")
# torch to numpy space
pred = pred.cpu().numpy()
return {labels[i]: float(pred[i]) for i in range(1000)}
gr.Interface(fn=classify,
inputs=gr.inputs.Image(shape=(224, 224)),
outputs=gr.outputs.Label(num_top_classes=5),
title="1k Object Recognition",
examples=['wonder_cat.jpg', 'aki_dog.jpg',],
description="Demonstrates a pre-trained model from torchvision for image classification.",
allow_flagging="never").launch(inbrowser=True)
Explanation: Object Recognition using ResNet18
In our discussion about PyTorch, we used a pre-trained ResNet18 model from torchvision. We use jupyter notebook to show the results. The jupyter notebook is not an application that we can deploy and other people use with ease. The same with Google's colab.
In this example, we use gradio to build a simple app that an end user can easily interact with. We reuse the code from our previous example.
End of explanation
import gradio as gr
from transformers import pipeline
pipe = pipeline(task="image-classification",
# model that can do 22k-category classification
model="microsoft/beit-base-patch16-224-pt22k-ft22k")
gr.Interface.from_pipeline(pipe,
title="22k Image Classification",
description="Object Recognition using Microsoft BEIT",
examples = ['wonder_cat.jpg', 'aki_dog.jpg',],
allow_flagging="never").launch(inbrowser=True)
Explanation: Using HuggingFace and Gradio
Loading a pre-trained model from torchvision, pre-processing the input, and post processing the output are all messy. Sometimes, we just want to load and use a machine learning model. Hugging Face provides a shortcut for all these steps through the use of pipeline. In pipeline, we supply the task name and the pre-trained model that is stored in Hugging Face Model Hub.
In this example, we use a much better model compared to ResNet18. It is called BEIT and can classify objects up to about 22k categories. We construct the gradio app by calling from_pipeline().
End of explanation
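# Optional check, not in the original notebook: calling the pipeline directly on one
# of the example images. For an image-classification pipeline this is expected to
# return a list of {'label': ..., 'score': ...} dictionaries for the top predictions.
preds = pipe('wonder_cat.jpg')
print(preds[:3])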
!pip install sentencepiece
Explanation: Automatic Speech Recognition (ASR)
Let us shift to audio or speech domain. In this example, we demonstrate an Automatic Speech Recognition (ASR) system. We will use our microphone to record audio which is then converted to text using ASR. In this example, best to open the application in another browser tab by setting inbrowser=True.
Before running the gradio app, this ASR requires the sentencepiece module. Let us install it first.
End of explanation
import gradio as gr
from transformers import pipeline
pipe = pipeline(task="automatic-speech-recognition",
model="facebook/s2t-medium-librispeech-asr")
gr.Interface.from_pipeline(pipe,
title="Automatic Speech Recognition (ASR)",
description="Using pipeline with Facebook S2T for ASR.",
examples=['data/ljspeech.wav',],
).launch(inbrowser=True)
Explanation: In this ASR, we use Facebook S2T, a transformer-based speech to text model that is trained on librispeech dataset.
End of explanation
import gradio as gr
gr.Interface.load("huggingface/facebook/fastspeech2-en-ljspeech",
description="TTS using FastSpeech2",
title="Text to Speech (TTS)",
examples=[["The quick brown fox jumps over the lazy dog."]]
).launch()
Explanation: Text to Speech (TTS)
Let us do the reverse of ASR or Text to Speech (TTS). In this example, we supply text and this text is converted to speech using the voice of Linda Johnson. We use a pre-trained model of FastSpeech2 that is provided by Facebook in Model Hub.
In this example, we use load() method to load the pre-trained model.
End of explanation
import gradio as gr
gr.Interface.load("huggingface/gpt2",
title="Text Generation",
description="Using GPT2.",
allow_flagging="never").launch(inbrowser=True)
Explanation: Text Generation using GPT2
In this example, we use a large language model (LLM) by OpenAI to generate text. It is called GPT2. Text generation is one of the tasks where a language model can help us. Basically, we provide the initial text made of few words. Then, the model will continue it.
End of explanation
import gradio as gr
textgen = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B")
tts = gr.Interface.load("huggingface/facebook/fastspeech2-en-ljspeech")
iface = gr.Series(textgen, tts)
iface.title = "Generated Text to Speech"
iface.allow_flagging = "never"
iface.launch(inbrowser=True)
Explanation: Using gr.Series() to automatically chain the I/O of multiple models
In this example, we use the previous text generator as input to our TTS. We can, for example, use it to generate a podcast on a certain topic. In this case, we replace GPT2 with a much better model called GPT Neo.
End of explanation |
9,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-hh', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-HH
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
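# Illustrative only (not an actual model description): each free-text property in this
# notebook is completed by passing a string to DOC.set_value, for example:
# DOC.set_value("One-paragraph overview of the aerosol scheme, written by the modelling group.")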
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
9,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright (c) 2018 Geosoft Inc.
https
Step1: Calculate the depth from the tilt-angle and tilt-derivative
The depth is the reciprocal of the horizontal gradient at the zero-contour of the tilt-angle.
Step2: Plot depths as coloured symbols | Python Code:
import geosoft.gxpy.gx as gx
import geosoft.gxpy.utility as gxu
import geosoft.gxpy.grid as gxgrd
import geosoft.gxpy.grid_utility as gxgrdu
import geosoft.gxpy.map as gxmap
import geosoft.gxpy.view as gxview
import geosoft.gxpy.group as gxgrp
import numpy as np
from IPython.display import Image
gxc = gx.GXpy()
gxu.check_version('9.4.0b0')
# get the sample data from github
url = 'https://github.com/GeosoftInc/gxpy/raw/master/examples/data/'
grid = 'bhn_tmi_250m.grd'
gxu.url_retrieve(url + grid)
gxu.url_retrieve(url + grid + '.gi')
gxu.url_retrieve(url + grid + '.xml')
grd = gxgrd.Grid.open(grid)
Image(grd.image_file(shade=True, pix_width=500))
Explanation: Copyright (c) 2018 Geosoft Inc.
https://github.com/GeosoftInc/gxpy/blob/master/README.md
BSD 2-clause License
Tilt Depth
The depth to magnetic sources from the edges of magnetic features can be determined from the reciprocal of the gradient of the tilt angle at the zero-crossover. The geosoft.gxpy.grid_utility.tilt_depth makes this calculation and returns a set of (x, y, z) locations that represent magnetic depth.
Reference: Salem et al, 2008, Interpretation of magnetic data using tilt-angle derivatives
The procedure implemented in Geosoft follows a process developed by Blakely, 2016.
TMI Grid
Calculate the depth from the tilt-angle and tilt-derivative
Plot depths as coloured symbols
TMI Grid
This is Total Magnetic Intensity (TMI) data from the Black Hills Norite in South Australia.
Reference: https://doi.org/10.1071/ASEG2016ab115
End of explanation
td_pp = gxgrdu.tilt_depth(grd)
Explanation: Calculate the depth from the tilt-angle and tilt-derivative
The depth is the reciprocal of the horizontal gradient at the zero-contour of the tilt-angle.
End of explanation
with gxmap.Map.figure(td_pp.extent_xy,
title='Depth from Tilt-Derivative',
margins=(1, 3.5, 3, 1)) as gmap:
map_file = gmap.file_name
with gxview.View.open(gmap, "data") as v:
cmap = gxgrp.Color_map(title='depth',
unit_of_measure=td_pp.coordinate_system.unit_of_measure)
depths = td_pp.pp[:,2]
depths = depths[~np.isnan(depths)]
cmap.set_linear(np.min(depths), 2000) # np.max(depths))
gxgrp.Color_symbols_group.new(v, 'tilt-depth', td_pp.pp, cmap)
gxgrp.legend_color_bar(v, 'depth_legend', cmap=cmap)
Image(gxmap.Map.open(map_file).image_file(pix_width=800))
Explanation: Plot depths as coloured symbols
End of explanation |
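# Not part of the original notebook: the tilt-depth points could optionally be exported
# for use outside Geosoft, e.g. dumped to CSV with numpy. Column meaning (x, y, depth)
# and the output file name are assumptions.
np.savetxt('tilt_depth_points.csv', td_pp.pp, delimiter=',', header='x,y,depth', comments='')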
9,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 3
Imports
Step1: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
Step7: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even futher and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\theta \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 3
Imports
End of explanation
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
Explanation: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta
$$
When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t)
$$
In this equation:
$a$ governs the strength of the damping.
$b$ governs the strength of the driving force.
$\omega_0$ is the angular frequency of the driving force.
When $a=0$ and $b=0$, the energy/mass is conserved:
$$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$
Basic setup
Here are the basic parameters we are going to use for this exercise:
End of explanation
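odeint integrates first-order systems, so the second-order equation above is rewritten in terms of the state vector $\vec{y}(t) = (\theta(t),\omega(t))$ with $\omega = d\theta/dt$:
$$
\frac{d\theta}{dt} = \omega, \qquad
\frac{d\omega}{dt} = -\frac{g}{\ell}\sin\theta - a\omega - b\sin(\omega_0 t)
$$
These two expressions are exactly what the derivs function below returns.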
# Derivatives function for use with scipy.integrate.odeint.
def derivs(y, t, a, b, omega0):
    """Compute the derivatives of the damped, driven pendulum.

    Parameters
    ----------
    y : ndarray
        The solution vector at the current time t[i]: [theta[i], omega[i]].
    t : float
        The current time t[i].
    a, b, omega0 : float
        The parameters in the differential equation.

    Returns
    -------
    dy : list
        The vector of derivatives at t[i]: [dtheta[i], domega[i]].
    """
    theta, omega = y
    dtheta = omega
    domega = -(g/l)*np.sin(theta) - a*omega - b*np.sin(omega0*t)
    return [dtheta, domega]

assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
    """Compute the energy per mass for the state array y.

    The state array y can have two forms:
    1. An ndim=1 array of np.array([theta, omega]) at a single time.
    2. An ndim=2 array where each row is the [theta, omega] at a single time.

    Parameters
    ----------
    y : ndarray, list, tuple
        A solution vector.

    Returns
    -------
    E/m : float (ndim=1) or ndarray (ndim=2)
        The energy per mass.
    """
    y = np.asarray(y)
    if y.ndim == 1:
        theta, omega = y[0], y[1]
    else:
        theta, omega = y[:, 0], y[:, 1]
    return g*l*(1 - np.cos(theta)) + 0.5*l**2*omega**2

assert np.allclose(energy(np.array([np.pi,0])), g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
End of explanation
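For reference, a minimal sketch of how derivs and energy plug into odeint; the parameter values and tolerances here are arbitrary, not the tuned values the exercise asks for:
y0 = [np.pi - 0.1, 0.0]                                    # initial [theta, omega]
soln = odeint(derivs, y0, t, args=(0.0, 0.0, 0.0), atol=1e-6, rtol=1e-5)
Em = energy(soln)                                          # energy per mass at each time step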
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the two plots and their tuning of atol, rtol.
Explanation: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
End of explanation
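One possible sketch of the integration and plots described above; it is illustrative only, and the tolerances shown are a starting point rather than the final tuned values:
y0 = [np.pi, 0.0]                               # at rest, pointing vertically upwards
soln = odeint(derivs, y0, t, args=(0.0, 0.0, 0.0), atol=1e-10, rtol=1e-9)

plt.figure(figsize=(9, 4))
plt.plot(t, energy(soln))                       # should be a flat line when a=b=0
plt.xlabel('t (s)')
plt.ylabel('E/m (J/kg)')

plt.figure(figsize=(9, 4))
plt.plot(t, soln[:, 0], label=r'$\theta(t)$')
plt.plot(t, soln[:, 1], label=r'$\omega(t)$')
plt.xlabel('t (s)')
plt.legend(loc='best')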
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
    """Integrate the damped, driven pendulum and make a phase plot of the solution."""
    # YOUR CODE HERE
    raise NotImplementedError()
Explanation: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even further and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2\pi, 2\pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
End of explanation
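A hedged sketch of what the body of plot_pendulum might look like; it follows the limits and initial conditions stated above, is named differently so it does not overwrite the graded stub, and leaves styling choices to you:
def plot_pendulum_sketch(a=0.0, b=0.0, omega0=0.0):
    """Illustrative only: integrate the pendulum and draw the phase plot."""
    ic = [-np.pi + 0.1, 0.0]
    soln = odeint(derivs, ic, t, args=(a, b, omega0), atol=1e-11, rtol=1e-10)
    plt.figure(figsize=(9, 6))
    plt.plot(soln[:, 0], soln[:, 1], color='k', lw=1)
    plt.xlim(-2*np.pi, 2*np.pi)
    plt.ylim(-10, 10)
    plt.xlabel(r'$\theta(t)$')
    plt.ylabel(r'$\omega(t)$')
    plt.title(r'Damped, driven pendulum: a=%.1f, b=%.1f, $\omega_0$=%.1f' % (a, b, omega0))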
plot_pendulum(0.5, 0.0, 0.0)
Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Use interact to explore the plot_pendulum function with:
a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.
b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
End of explanation |
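For reference, a minimal way to wire this up once plot_pendulum is implemented; tuples of (min, max, step) passed to interact become float sliders:
interact(plot_pendulum, a=(0.0, 1.0, 0.1), b=(0.0, 10.0, 0.1), omega0=(0.0, 10.0, 0.1));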