Unnamed: 0 (int64, 0-16k) | text_prompt (string, lengths 110-62.1k) | code_prompt (string, lengths 37-152k) |
---|---|---|
9,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Drive-it DQN
Model
\begin{equation}
l_1 = relu( x W_1 + b_1) \\
l_2 = relu( l_1 W_2 + b_2) \\
l_3 = relu( l_2 W_3 + b_3) \\
Q(s,a) = l_3 W_o + b_o
\end{equation}
Step1: Visualization
We use PCA decomposition on memory samples to visualize the $Q(s,a)$ values across the state space
Step2: For a more meaningful plot, just project sample across $x_m$ and $y_m$.
Step3: Training
Step4:
Step5: Exploration - exploitation trade-off
Note that initially $\epsilon$ is set to 1, which implies we are exploring entirely at random, but as steps increase we reduce exploration and start leveraging the learnt space to collect rewards as well (a.k.a. exploitation).
Step6: Discounted Reward
We tune $\gamma$ to look ahead only a short timespan, in which the current action is of significant importance. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
import seaborn as sns
style.use('ggplot')
%matplotlib inline
sns.set()
Explanation: Drive-it DQN
Model
\begin{equation}
l_1 = relu( x W_1 + b_1) \\
l_2 = relu( l_1 W_2 + b_2) \\
l_3 = relu( l_2 W_3 + b_3) \\
Q(s,a) = l_3 W_o + b_o
\end{equation}
End of explanation
def pca_plot(n=10000, alpha=1.0, size=5):
_, samples = agent.memory.sample(n)
states = np.array([ o[0] for o in samples ], dtype=np.float32)
qsa = agent.brain.predict(states)[0]
from sklearn import decomposition
pca = decomposition.PCA(n_components=2)
pca.fit(states)
X = pca.transform(states)
fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)
ax = axes[0,0]; plt.sca(ax);ax.set_title('Action')
plt.scatter(X[:, 0], X[:, 1], c=np.argmax(qsa, 1), alpha=alpha, s=size, cmap="rainbow")
ax = axes[0,1]; plt.sca(ax);ax.set_title('Q(s,no-change)')
plt.scatter(X[:, 0], X[:, 1], c=qsa[:,0], alpha=alpha, s=size, cmap="rainbow")
ax = axes[1,0]; plt.sca(ax);ax.set_title('Q(s,left)')
plt.scatter(X[:, 0], X[:, 1], c=qsa[:,1], alpha=alpha, s=size, cmap="rainbow")
ax = axes[1,1]; plt.sca(ax);ax.set_title('Q(s,right)')
plt.scatter(X[:, 0], X[:, 1], c=qsa[:,2], alpha=alpha, s=size, cmap="rainbow")
Explanation: Visualization
We use PCA decomposition on memory samples to visualize the $Q(s,a)$ values across the state space:
End of explanation
def slice_plot(n=10000, alpha=1.0, size=5):
_, samples = agent.memory.sample(n)
states = np.array([ o[0] for o in samples ], dtype=np.float32)
qsa = agent.brain.predict(states)[0]
fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True)
ax = axes[0,0]; plt.sca(ax);ax.set_title('Action')
plt.scatter(states[:, 0], states[:, 1], c=np.argmax(qsa, 1), alpha=alpha, s=size, cmap="rainbow")
ax = axes[0,1]; plt.sca(ax);ax.set_title('Q(s,no-change)')
plt.scatter(states[:, 0], states[:, 1], c=qsa[:,0], alpha=alpha, s=size, cmap="rainbow")
ax = axes[1,0]; plt.sca(ax);ax.set_title('Q(s,left)')
plt.scatter(states[:, 0], states[:, 1], c=qsa[:,1], alpha=alpha, s=size, cmap="rainbow")
ax = axes[1,1]; plt.sca(ax);ax.set_title('Q(s,right)')
plt.scatter(states[:, 0], states[:, 1], c=qsa[:,2], alpha=alpha, s=size, cmap="rainbow")
axes[0,0].set_ylabel('$y_m$')
axes[1,0].set_ylabel('$y_m$')
axes[1,0].set_xlabel('$x_m$')
axes[1,1].set_xlabel('$x_m$')
Explanation: For a more meaningful plot, just project the samples onto $x_m$ and $y_m$.
End of explanation
def run_episode(agent, render=False):
s = env.reset()
R = 0
while True:
if render: env.render()
a = agent.act(s.astype(np.float32))
s_, r, done, info = env.step(a)
if done:
s_ = None
agent.observe((s, a, r, s_))
s = s_
R += r
if done:
agent.endEpisode()
return R
from DriveItGym import DriveItEnv
from agent import Agent
BATCH_SIZE = 20000
env = DriveItEnv(time_limit=10.0, throttle_limit=1.0)
stateCnt = env.observation_space.shape[0]
actionCnt = env.action_space.n
agent = Agent(stateCnt, actionCnt)
episode_number = 0
last_batch_episode = 0
last_batch_steps = 0
episodes = []
reward_sum = 0
reward_best = 18.0
(stateCnt, actionCnt)
while episode_number < 50000 and agent.steps < 1980000:
episode_number += 1
reward = run_episode(agent, render=False)
reward_sum += reward
if agent.steps >= last_batch_steps + BATCH_SIZE:
reward_avg = reward_sum / (episode_number - last_batch_episode)
last_batch_episode = episode_number
last_batch_steps = int(agent.steps / BATCH_SIZE) * BATCH_SIZE
episodes.append((episode_number, agent.steps, reward_avg))
print('Episode: %d, steps: %d, epsilon: %f, average reward: %f.' \
% (episode_number, agent.steps, agent.epsilon, reward_avg))
if reward_avg > reward_best:
reward_best = reward_avg
agent.brain.model.save_model('best.mod')
reward_sum = 0
agent.brain.model.save_model('last.mod')
print('Done.')
plt.plot([e[1]/1000 for e in episodes], [e[2] for e in episodes])
plt.xlabel('steps x 1000');plt.ylabel('reward')
Explanation: Training
End of explanation
while episode_number < 30000 and agent.steps < 2980000:
episode_number += 1
reward = run_episode(agent, render=False)
reward_sum += reward
if agent.steps >= last_batch_steps + BATCH_SIZE:
reward_avg = reward_sum / (episode_number - last_batch_episode)
last_batch_episode = episode_number
last_batch_steps = int(agent.steps / BATCH_SIZE) * BATCH_SIZE
episodes.append((episode_number, agent.steps, reward_avg))
print('Episode: %d, steps: %d, epsilon: %f, average reward: %f.' \
% (episode_number, agent.steps, agent.epsilon, reward_avg))
if reward_avg > reward_best:
reward_best = reward_avg
agent.brain.model.save_model('best.mod', False)
reward_sum = 0
agent.brain.model.save_model('last.mod', False)
print('Done.')
plt.plot([e[1]/1000 for e in episodes], [e[2] for e in episodes])
plt.xlabel('steps x 1000');plt.ylabel('reward')
plt.savefig('learning.png', dpi=300)
slice_plot(n=20000, size=5, alpha=0.5)
plt.savefig('qslice.png', dpi=300)
pca_plot(n=20000, size=5, alpha=0.5)
plt.savefig('pca.png', dpi=300)
Explanation:
End of explanation
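# NOTE: MIN_EPSILON, MAX_EPSILON, LAMBDA and EXPLORATION_STOP (and GAMMA, used further below)
# are assumed to be defined in, or imported from, the agent module used above.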
def epsilon(steps):
return MIN_EPSILON + (MAX_EPSILON - MIN_EPSILON) * np.exp(-LAMBDA * steps)
r = range(0,EXPLORATION_STOP,int(EXPLORATION_STOP/100))
plt.plot(r, [min(epsilon(x),1) for x in r], 'r')
#plt.plot(r, [min(epsilon(x),1)**EPSILON_TRAIN_FACTOR for x in r], 'b')
plt.xlabel('step');plt.ylabel('$\epsilon$')
Explanation: Exploration - exploitation trade-off
Note that initially $\epsilon$ is set to 1, which implies we are exploring entirely at random, but as steps increase we reduce exploration and start leveraging the learnt space to collect rewards as well (a.k.a. exploitation).
End of explanation
r = range(0,600)
plt.plot([t/60.0 for t in r], [GAMMA ** x for x in r], 'r')
plt.xlabel('time [s]');plt.ylabel('discount')
GAMMA
Explanation: Discounted Reward
We tune $\gamma$ to look ahead only a short timespan, in which the current action is of significant importance.
End of explanation |
9,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NoSQL (MongoDB) (session 4)
This notebook shows how to access MongoDB databases and how to connect the output to Jupyter. You can use MongoDB's own shell in the virtual machine through the mongo program. The difference is that that program expects Javascript code, whereas here we will work with Python.
Step1: We will use the pymongo library for Python. We load it below.
Step2: The connection is started with MongoClient on the host described in the docker-compose.yml file (mongo).
Step3: Format
Step4: Databases are made up of a set of collections. Each collection groups a set of objects (documents) of the same kind, although, as we saw in the theory sessions, each document may have a different set of attributes.
Step7: Importing the CSV files. For now we create a different collection for each one. Later we will study how to optimize access using aggregation.
Step8: The collection API in Python can be found here
Step9: Map-Reduce
MongoDB includes two APIs for processing and searching documents
Step10: A tag can be added to specify which elements we want to work on (query)
Step11: EXERCISE (solved)
Step14: This shows that, in general, the data schema in MongoDB would not look like this from the start.
After the first map/reduce step, we have to build the final collection that associates each Post with its comments. Since we previously built the post_comments collection indexed by the Post Id, we can now use a map/reduce run that merges the data in post_comments with the data in posts.
We will run the second map/reduce over posts, so that the result is complete even for Posts that do not appear in any comment and will therefore have an empty comments attribute.
In this case, we must make the map() function produce an output of documents that are also indexed by the Id attribute, and, since there is only one per Id, the reduce() function will not be executed. It will only be executed to merge both collections, so the reduce() function has to be prepared to merge objects of type "comment" and Posts. In any case, as can be seen, it is also valid even if it is called with only a Post object. Finally, the map() function initially prepares each Post object with an empty list of comments
Step15: Aggregation Framework
Aggregation framework
Step16: Lookup!
Step17: $lookup generates an array with all the results. The $arrayElemAt operator accesses the first element.
Step18: $unwind can also be used. It "unfolds" each row for every element of the array. In this case, since we know the array only contains one element, there will be only one row per original row, but without the array. Finally, the desired field can be projected.
Step19: Example of the RQ4 query
As an example of a complex query with the Aggregation Framework, here is a possible solution to the RQ4 query
Step20: The explanation is as follows
Step23: Example query
Step24: This only computes the minimum time from each question to its answer. Afterwards, what we saw in other examples would have to be applied to compute the mean. With aggregation, shown next, the mean can indeed be computed relatively easily | Python Code:
!pip install --upgrade pymongo
from pprint import pprint as pp
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
Explanation: NoSQL (MongoDB) (session 4)
This notebook shows how to access MongoDB databases and how to connect the output to Jupyter. You can use MongoDB's own shell in the virtual machine through the mongo program. The difference is that that program expects Javascript code, whereas here we will work with Python.
End of explanation
import pymongo
from pymongo import MongoClient
Explanation: We will use the pymongo library for Python. We load it below.
End of explanation
client = MongoClient("mongo",27017)
client
client.list_database_names()
Explanation: The connection is started with MongoClient on the host described in the docker-compose.yml file (mongo).
End of explanation
db = client.stackoverflow
db = client['stackoverflow']
db
Explanation: Format: 7zipped
Files:
badges.xml
UserId, e.g.: "420"
Name, e.g.: "Teacher"
Date, e.g.: "2008-09-15T08:55:03.923"
comments.xml
Id
PostId
Score
Text, e.g.: "@Stu Thompson: Seems possible to me - why not try it?"
CreationDate, e.g.:"2008-09-06T08:07:10.730"
UserId
posts.xml
Id
PostTypeId
1: Question
2: Answer
ParentID (only present if PostTypeId is 2)
AcceptedAnswerId (only present if PostTypeId is 1)
CreationDate
Score
ViewCount
Body
OwnerUserId
LastEditorUserId
LastEditorDisplayName="Jeff Atwood"
LastEditDate="2009-03-05T22:28:34.823"
LastActivityDate="2009-03-11T12:51:01.480"
CommunityOwnedDate="2009-03-11T12:51:01.480"
ClosedDate="2009-03-11T12:51:01.480"
Title=
Tags=
AnswerCount
CommentCount
FavoriteCount
posthistory.xml
Id
PostHistoryTypeId
- 1: Initial Title - The first title a question is asked with.
- 2: Initial Body - The first raw body text a post is submitted with.
- 3: Initial Tags - The first tags a question is asked with.
- 4: Edit Title - A question's title has been changed.
- 5: Edit Body - A post's body has been changed, the raw text is stored here as markdown.
- 6: Edit Tags - A question's tags have been changed.
- 7: Rollback Title - A question's title has reverted to a previous version.
- 8: Rollback Body - A post's body has reverted to a previous version - the raw text is stored here.
- 9: Rollback Tags - A question's tags have reverted to a previous version.
- 10: Post Closed - A post was voted to be closed.
- 11: Post Reopened - A post was voted to be reopened.
- 12: Post Deleted - A post was voted to be removed.
- 13: Post Undeleted - A post was voted to be restored.
- 14: Post Locked - A post was locked by a moderator.
- 15: Post Unlocked - A post was unlocked by a moderator.
- 16: Community Owned - A post has become community owned.
- 17: Post Migrated - A post was migrated.
- 18: Question Merged - A question has had another, deleted question merged into itself.
- 19: Question Protected - A question was protected by a moderator
- 20: Question Unprotected - A question was unprotected by a moderator
- 21: Post Disassociated - An admin removes the OwnerUserId from a post.
- 22: Question Unmerged - A previously merged question has had its answers and votes restored.
PostId
RevisionGUID: At times more than one type of history record can be recorded by a single action. All of these will be grouped using the same RevisionGUID
CreationDate: "2009-03-05T22:28:34.823"
UserId
UserDisplayName: populated if a user has been removed and no longer referenced by user Id
Comment: This field will contain the comment made by the user who edited a post
Text: A raw version of the new value for a given revision
If PostHistoryTypeId = 10, 11, 12, 13, 14, or 15 this column will contain a JSON encoded string with all users who have voted for the PostHistoryTypeId
If PostHistoryTypeId = 17 this column will contain migration details of either "from <url>" or "to <url>"
CloseReasonId
1: Exact Duplicate - This question covers exactly the same ground as earlier questions on this topic; its answers may be merged with another identical question.
2: off-topic
3: subjective
4: not a real question
7: too localized
postlinks.xml
Id
CreationDate
PostId
RelatedPostId
PostLinkTypeId
1: Linked
3: Duplicate
users.xml
Id
Reputation
CreationDate
DisplayName
EmailHash
LastAccessDate
WebsiteUrl
Location
Age
AboutMe
Views
UpVotes
DownVotes
votes.xml
Id
PostId
VoteTypeId
1: AcceptedByOriginator
2: UpMod
3: DownMod
4: Offensive
5: Favorite - if VoteTypeId = 5 UserId will be populated
6: Close
7: Reopen
8: BountyStart
9: BountyClose
10: Deletion
11: Undeletion
12: Spam
13: InformModerator
CreationDate
UserId (only for VoteTypeId 5)
BountyAmount (only for VoteTypeId 9)
Databases are created as soon as they are named. Either the dot notation or the dictionary notation can be used. The same applies to collections.
End of explanation
posts = db.posts
posts
Explanation: Databases are made up of a set of collections. Each collection groups a set of objects (documents) of the same kind, although, as we saw in the theory sessions, each document may have a different set of attributes.
End of explanation
import os
import os.path as path
from urllib.request import urlretrieve
def download_csv_upper_dir(baseurl, filename):
file = path.abspath(path.join(os.getcwd(),os.pardir,filename))
if not os.path.isfile(file):
urlretrieve(baseurl + '/' + filename, file)
baseurl = 'http://neuromancer.inf.um.es:8080/es.stackoverflow/'
download_csv_upper_dir(baseurl, 'Posts.csv')
download_csv_upper_dir(baseurl, 'Users.csv')
download_csv_upper_dir(baseurl, 'Tags.csv')
download_csv_upper_dir(baseurl, 'Comments.csv')
download_csv_upper_dir(baseurl, 'Votes.csv')
import csv
from datetime import datetime
def csv_to_mongo(file, coll):
"""Load a CSV file into MongoDB. `file` is the CSV file and `coll` the target
collection inside the database; columns whose names contain 'date' are
interpreted as dates."""
# Convert every element that can be converted to a number
def to_numeric(d):
try:
return int(d)
except ValueError:
try:
return float(d)
except ValueError:
return d
def to_date(d):
"""To ISO Date. If this cannot be converted, return NULL (None)"""
try:
return datetime.strptime(d, "%Y-%m-%dT%H:%M:%S.%f")
except ValueError:
return None
coll.drop()
with open(file, encoding='utf-8') as f:
# The csv.reader() call creates an iterator over a CSV file
reader = csv.reader(f, dialect='excel')
# Read the columns. Their names will be used to create the different fields of each document
columns = next(reader)
# Columns whose names contain 'Date' are interpreted as dates
func_to_cols = list(map(lambda c: to_date if 'date' in c.lower() else to_numeric, columns))
docs=[]
for row in reader:
row = [func(e) for (func,e) in zip(func_to_cols, row)]
docs.append(dict(zip(columns, row)))
coll.insert_many(docs)
csv_to_mongo('../Posts.csv',db.posts)
csv_to_mongo('../Users.csv',db.users)
csv_to_mongo('../Votes.csv',db.votes)
csv_to_mongo('../Comments.csv',db.comments)
csv_to_mongo('../Tags.csv',db.tags)
posts.count_documents({})
Explanation: Importing the CSV files. For now we create a different collection for each one. Later we will study how to optimize access using aggregation.
End of explanation
(
db.posts.create_index([('Id', pymongo.HASHED)]),
db.comments.create_index([('Id', pymongo.HASHED)]),
db.users.create_index([('Id', pymongo.HASHED)])
)
Explanation: The collection API in Python can be found here: https://api.mongodb.com/python/current/api/pymongo/collection.html. Most books and references show the use of Mongo from Javascript, since the MongoDB shell accepts that language. The syntax changes a little with respect to Python, and can be followed in the link above.
Creating indexes
So that the map-reduce and aggregation processes work better, I am going to create indexes on the attributes that will be used as keys... Careful: if they are not created, the queries can take a very long time.
End of explanation
from bson.code import Code
map = Code(
'''
function () {
emit(this.OwnerUserId, 1);
}
''')
reduce = Code(
'''
function (key, values)
{
return Array.sum(values);
}
''')
results = posts.map_reduce(map, reduce, "posts_by_userid")
posts_by_userid = db.posts_by_userid
list(posts_by_userid.find())
Explanation: Map-Reduce
MongoDB includes two APIs for processing and searching documents: the Map-Reduce API and the aggregation API. We will look at Map-Reduce first. Manual: https://docs.mongodb.com/manual/aggregation/#map-reduce
End of explanation
db.posts.distinct('Score')
Explanation: A tag can be added to specify which elements we want to work on (query):
The map_reduce function can take a number of additional keywords, the same ones specified in the documentation:
query: Restricts the data that is processed
sort: Sorts the input documents by some key
limit: Limits the number of results
out: Specifies the output collection and other options. We will look at it below.
etc.
The out parameter specifies in which collection the result of the map-reduce will be stored. By default, in the source collection. (All the parameters are here: https://docs.mongodb.com/manual/reference/command/mapReduce/#mapreduce-out-cmd). In the map_reduce() operation we can specify the output collection, but we can also add a final out={...} parameter.
There are several possibilities for out:
replace: Replaces the collection, if there was one, with the specified one (e.g.: out={ "replace" : "coll" }).
merge: Merges with the existing collection, replacing any existing documents with the generated ones.
reduce: If a document with the same _id already exists in the collection, the reduce function is applied to merge both documents and produce a new document.
We will see below, when solving the exercise of building post_comments with map-reduce, how these possibilities are used.
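As a minimal illustrative sketch (the score threshold, the limit and the output collection name are arbitrary here), these keywords can be combined in a single call:
# merge the result into posts_by_userid, counting only posts with Score >= 10 (and at most 1000 input documents)
results = posts.map_reduce(map, reduce, out={'merge': 'posts_by_userid'}, query={'Score': {'$gte': 10}}, limit=1000)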
There are also collection-specific operations, such as count(), groupby() and distinct():
End of explanation
from bson.code import Code
comments_map = Code('''
function () {
emit(this.PostId, { type: 'comment', comments: [this]});
}
''')
comments_reduce = Code('''
function (key, values) {
comments = [];
values.forEach(function(v) {
if ('comments' in v)
comments = comments.concat(v.comments)
})
return { type: 'comment', comments: comments };
}
''')
db.comments.map_reduce(comments_map, comments_reduce, "post_comments")
list(db.post_comments.find()[:10])
Explanation: EXERCISE (solved): Build, with the Map-Reduce API, a 'post_comments' collection, where a 'Comments' field is added to each Post with the list of all the comments that refer to that Post.
We will go through the solution of this exercise so that it serves as an example for the next ones to implement. First of all, a map/reduce operation can only be executed over one collection, so it can only contain results from that collection. Therefore, a single map/reduce operation will not be enough to complete the whole exercise.
So, to begin with, it seems interesting to group all the comments that have been made on a particular Post. In each comment, the PostId attribute is a reference to the Post it refers to.
How the map() and reduce() operations are built is important. First, the map() function will be executed for every document (or for every document that satisfies the condition if the query= modifier is used). However, the reduce() function will not be executed unless there is more than one element associated with the same key.
Therefore, the output of the map() function must have the same shape as that of the reduce() function. In our case, it is a JSON object of the form:
{ type: 'comment', comments: [ {comment1, comment2} ] }
In the case where only the map() function is executed, note how the object has the same structure, but with an array of only one element (comment): itself.
End of explanation
posts_map = Code('''
function () {
this.comments = [];
emit(this.Id, this);
}
''')
posts_reduce = Code('''
function (key, values) {
comments = []; // The set of comments
obj = {}; // The object to return
values.forEach(function(v) {
if (v['type'] === 'comment')
comments = comments.concat(v.comments);
else // Object
{
obj = v;
// obj.comments will always be there because of the map() operation
comments = comments.concat(obj.comments);
}
})
// Finalize: Add the comments to the object to return
obj.comments = comments;
return obj;
}
''')
db.posts.map_reduce(posts_map, posts_reduce, out={'reduce' : 'post_comments'})
list(db.post_comments.find()[:10])
Explanation: This shows that, in general, the data schema in MongoDB would not look like this from the start.
After the first map/reduce step, we have to build the final collection that associates each Post with its comments. Since we previously built the post_comments collection indexed by the Post Id, we can now use a map/reduce run that merges the data in post_comments with the data in posts.
We will run the second map/reduce over posts, so that the result is complete even for Posts that do not appear in any comment and will therefore have an empty comments attribute.
In this case, we must make the map() function produce an output of documents that are also indexed by the Id attribute, and, since there is only one per Id, the reduce() function will not be executed. It will only be executed to merge both collections, so the reduce() function has to be prepared to merge objects of type "comment" and Posts. In any case, as can be seen, it is also valid even if it is called with only a Post object. Finally, the map() function initially prepares each Post object with an empty list of comments
End of explanation
respuestas = db['posts'].aggregate( [ {'$project' : { 'Id' : True }}, {'$limit': 20} ])
list(respuestas)
Explanation: Aggregation Framework
Aggregation framework: https://docs.mongodb.com/manual/reference/operator/aggregation/. And here is an interesting presentation on the topic: https://www.mongodb.com/presentations/aggregation-framework-0?jmp=docs&_ga=1.223708571.1466850754.1477658152
Pipeline overview video: https://docs.mongodb.com/manual/_images/agg-pipeline.mp4
Projection:
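As a small additional sketch of a multi-stage pipeline (the stages below are illustrative and reuse the field names of the posts collection), the following counts how many posts each user has written, mirroring the earlier map-reduce example:
# group posts by author, count them, and keep the 10 most active users
top_posters = db.posts.aggregate([
{'$group': {'_id': '$OwnerUserId', 'n_posts': {'$sum': 1}}},
{'$sort': {'n_posts': -1}},
{'$limit': 10}
])
list(top_posters)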
End of explanation
respuestas = posts.aggregate( [
{'$match': { 'Score' : {'$gte': 40}}},
{'$lookup': {
'from': "users",
'localField': "OwnerUserId",
'foreignField': "Id",
'as': "owner"}
}
])
list(respuestas)
Explanation: Lookup!
End of explanation
respuestas = db.posts.aggregate( [
{'$match': { 'Score' : {'$gte': 40}}},
{'$lookup': {
'from': "users",
'localField': "OwnerUserId",
'foreignField': "Id",
'as': "owner"}
},
{ '$project' :
{
'Id' : True,
'Score' : True,
'username' : {'$arrayElemAt' : ['$owner.DisplayName', 0]},
'owner.DisplayName' : True
}}
])
list(respuestas)
Explanation: $lookup generates an array with all the results. The $arrayElemAt operator accesses the first element.
End of explanation
respuestas = db.posts.aggregate( [
{'$match': { 'Score' : {'$gte': 40}}},
{'$lookup': {
'from': "users",
'localField': "OwnerUserId",
'foreignField': "Id",
'as': "owner"}
},
{ '$unwind': '$owner'},
{ '$project' :
{
'username': '$owner.DisplayName'
}
}
])
list(respuestas)
Explanation: $unwind can also be used. It "unfolds" each row for every element of the array. In this case, since we know the array only contains one element, there will be only one row per original row, but without the array. Finally, the desired field can be projected.
End of explanation
RQ4 = db.posts.aggregate( [
{ "$match" : {"PostTypeId": 2}},
{'$lookup': {
'from': "posts",
'localField': "ParentId",
'foreignField': "Id",
'as': "question"
}
},
{
'$unwind' : '$question'
},
{
'$project' : { 'OwnerUserId': True,
'OP' : '$question.OwnerUserId'
}
},
{
'$group' : {'_id' : {'min' : { '$min' : ['$OwnerUserId' , '$OP'] },
'max' : { '$max' : ['$OwnerUserId' , '$OP'] }},
'pairs' : {'$addToSet' : { '0q': '$OP', '1a': '$OwnerUserId'}}
}
},
{
'$project': {
'pairs' : True,
'npairs' : { '$size' : '$pairs'}
}
},
{
'$match' : { 'npairs' : { '$eq' : 2}}
}
])
RQ4 = list(RQ4)
RQ4
Explanation: Example of the RQ4 query
As an example of a complex query with the Aggregation Framework, here is a possible solution to the RQ4 query:
End of explanation
RQ4 = db.posts.aggregate( [
{'$match': { 'PostTypeId' : 2}},
{'$lookup': {
'from': "posts",
'localField': "ParentId",
'foreignField': "Id",
'as': "question"}
},
{
'$unwind' : '$question'
},
{
'$project' : {'OwnerUserId': True,
'QId' : '$question.Id',
'AId' : '$Id',
'OP' : '$question.OwnerUserId'
}
},
{
'$group' : {'_id' : {'min' : { '$min' : ['$OwnerUserId' , '$OP'] },
'max' : { '$max' : ['$OwnerUserId' , '$OP'] }},
'pairs' : {'$addToSet' : { '0q':'$OP', '1a': '$OwnerUserId'}},
'considered_pairs' : { '$push' : {'QId' : '$QId', 'AId' : '$AId'}}
}
},
{
'$project': {
'pairs' : True,
'npairs' : { '$size' : '$pairs'},
'considered_pairs' : True
}
},
{
'$match' : { 'npairs' : { '$eq' : 2}}
}
])
RQ4 = list(RQ4)
RQ4
(db.posts.find_one({'Id': 238}), db.posts.find_one({'Id': 243}),
db.posts.find_one({'Id': 222}), db.posts.find_one({'Id': 223}))
Explanation: The explanation is as follows:
Only the answers are selected
The posts collection is accessed to retrieve the data of the corresponding question
Next, only the asking user and the answering user are projected
The most creative step is the grouping. The goal is that both pairs of users related as asker -> answerer and vice versa fall under the same key. To achieve this, the maximum and the minimum of both user identifiers are taken and a key is built with both numbers always in the same positions. This way, both combinations of asking and answering user will fall under the same key. A set is also used (in pairs), so identical asker/answerer combinations are only added once.
We are only interested in those tuples whose set of question/answer pairs has exactly two elements (in one element one of the two users asked and the other answered, and in the other element the other way around).
The Map-Reduce implementation can be done with the same idea.
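To make the symmetric-key trick concrete, here is a tiny illustration in plain Python (the user ids are made up):
# user 5 asks and user 9 answers, or user 9 asks and user 5 answers:
# both directions produce the same grouping key, so they fall into the same group
key_a = {'min': min(5, 9), 'max': max(5, 9)}  # {'min': 5, 'max': 9}
key_b = {'min': min(9, 5), 'max': max(9, 5)}  # {'min': 5, 'max': 9}
assert key_a == key_b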
If we want to keep a reference to the questions and answers involved in the conversation, one more field can be added that stores all the questions together with the answers that were considered
End of explanation
from bson.code import Code
# The map function will group all the answers, but it also needs the questions
mapcode = Code('''
function () {
if (this.PostTypeId == 2)
emit(this.ParentId, {q: null, a: {Id: this.Id, CreationDate: this.CreationDate}, diff: null})
else if (this.PostTypeId == 1)
emit(this.Id, {q: {Id: this.Id, CreationDate: this.CreationDate}, a: null, diff: null})
}
''')
reducecode = Code('''
function (key, values) {
q = null // Question
a = null // Answer with the creation date closest to the question
values.forEach(function(v) {
if (v.q != null) // Question
q = v.q
if (v.a != null) // Answer
{
if (a == null || v.a.CreationDate < a.CreationDate)
a = v.a
}
})
mindiff = null
if (q != null && a != null)
mindiff = a.CreationDate - q.CreationDate;
return {q: q, a: a, diff: mindiff}
}
''')
db.posts.map_reduce(mapcode, reducecode, "min_response_time")
mrt = list(db.min_response_time.find())
from pandas.io.json import json_normalize
df = json_normalize(mrt)
df.index=df["_id"]
df
df['value.diff'].plot()
Explanation: Example query: Average time from when a question is asked until it receives its first answer
Let's see how to compute the average time from when a question is asked until its first answer is given. In this case the answers can be used to point to the question they correspond to. Questions without any answer are therefore not considered, which is reasonable. However, the map function must also keep the questions in order to compute the shortest time (the first answer).
End of explanation
min_answer_time = db.posts.aggregate([
{"$match" : {"PostTypeId" : 2}},
{
'$group' : {'_id' : '$ParentId',
# 'answers' : { '$push' : {'Id' : "$Id", 'CreationDate' : "$CreationDate"}},
'min' : {'$min' : "$CreationDate"}
}
},
{ "$lookup" : {
'from': "posts",
'localField': "_id",
'foreignField': "Id",
'as': "post"}
},
{ "$unwind" : "$post"},
{"$project" :
{"_id" : True,
"min" : True,
#"post" : True,
"diff" : {"$subtract" : ["$min", "$post.CreationDate"]}}
},
# { "$sort" : {'_id' : 1} }
{
"$group" : {
"_id" : None,
"avg" : { "$avg" : "$diff"}
}
}
])
min_answer_time = list(min_answer_time)
min_answer_time
Explanation: This only computes the minimum time from each question to its answer. Afterwards, what we saw in other examples would have to be applied to compute the mean. With aggregation, shown next, the mean can indeed be computed relatively easily:
End of explanation |
9,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3. ANOVA tables and post-hoc comparisons
<div class="alert alert-info"><h4>Note</h4><p>ANOVAs and post-hoc tests are only available for
Step1: Type III SS inferences will only be valid if data are fully balanced across levels or if contrasts between levels are orthogonally coded and sum to 0. Below we tell
Step2: Marginal estimates and post-hoc comparisons
Step3: Example 1
~~~~~~~~~
Compare each level of IV3 to each other level of IV3, within each level of IV4. Use default Tukey HSD p-values.
Step4: Example 2
~~~~~~~~~
Compare each unique IV3,IV4 "cell mean" to every other IV3,IV4 "cell mean" and use FDR correction for multiple comparisons
Step5: Example 3
~~~~~~~~~
For this example we'll estimate a more complicated ANOVA with 1 continuous IV and 2 categorical IVs with 3 levels each. This is the same model as before but with IV2 thrown into the mix. Now, pairwise comparisons reflect changes in the slope of the continuous IV (IV2) between levels of the categorical IVs (IV3 and IV4).
First let's get the ANOVA table
Step6: Now we can compute the pairwise difference in slopes | Python Code:
# import basic libraries and sample data
import os
import pandas as pd
from pymer4.utils import get_resource_path
from pymer4.models import Lmer
# IV3 is a categorical predictor with 3 levels in the sample data
df = pd.read_csv(os.path.join(get_resource_path(), "sample_data.csv"))
# # We're going to fit a multi-level regression using the
# categorical predictor (IV3) which has 3 levels
model = Lmer("DV ~ IV3 + (1|Group)", data=df)
# Using dummy-coding; suppress summary output
model.fit(factors={"IV3": ["1.0", "0.5", "1.5"]}, summarize=False)
# Get ANOVA table
print(model.anova())
Explanation: 3. ANOVA tables and post-hoc comparisons
<div class="alert alert-info"><h4>Note</h4><p>ANOVAs and post-hoc tests are only available for :code:`Lmer` models estimated using the :code:`factors` argument of :code:`model.fit()` and rely on implementations in R</p></div>
In the previous tutorial where we looked at categorical predictors, behind the scenes :code:`pymer4` was using the :code:`factor` functionality in R. This means the output of :code:`model.fit()` looks a lot like :code:`summary()` in R applied to a model with categorical predictors. But what if we want to compute an F-test across all levels of our categorical predictor?
:code:`pymer4` makes this easy to do, and makes it easy to ensure Type III sums of squares inferences are valid. It also makes it easy to follow up omnibus tests with post-hoc pairwise comparisons.
ANOVA tables and orthogonal contrasts
Because ANOVA is just regression, :code:`pymer4` can estimate ANOVA tables with F-results using the :code:`.anova()` method on a fitted model. This will compute a Type-III SS table given the coding scheme provided when the model was initially fit. Based on the distribution of data across factor levels and the specific coding scheme used, this may produce invalid Type-III SS computations. For this reason the :code:`.anova()` method has a :code:`force_orthogonal=True` argument that will reparameterize and refit the model using orthogonal polynomial contrasts prior to computing an ANOVA table.
Here we first estimate a model with dummy-coded categories and suppress the summary output of :code:`.fit()`. Then we use :code:`.anova()` to examine the F-test results.
End of explanation
# Get ANOVA table, but this time force orthogonality
# for valid SS III inferences
# In this case the data are balanced so nothing changes
print(model.anova(force_orthogonal=True))
# Checkout current contrast scheme (for first contrast)
# Notice how it's simply a linear contrast across levels
print(model.factors)
# Checkout previous contrast scheme
# which was a treatment contrast with 1.0
# as the reference level
print(model.factors_prev_)
Explanation: Type III SS inferences will only be valid if data are fully balanced across levels or if contrasts between levels are orthogonally coded and sum to 0. Below we tell :code:pymer4 to respecify our contrasts to ensure this before estimating the ANOVA. :code:pymer4 also saves the last set of contrasts used priory to forcing orthogonality.
Because the sample data is balanced across factor levels and there are not interaction terms, in this case orthogonal contrast coding doesn't change the results.
End of explanation
# Fix the random number generator
# for reproducibility
import numpy as np
np.random.seed(10)
# Create a new categorical variable with 3 levels
df = df.assign(IV4=np.random.choice(["1", "2", "3"], size=df.shape[0]))
# Estimate model with orthogonal polynomial contrasts
model = Lmer("DV ~ IV4*IV3 + (1|Group)", data=df)
model.fit(
factors={"IV4": ["1", "2", "3"], "IV3": ["1.0", "0.5", "1.5"]},
ordered=True,
summarize=False,
)
# Get ANOVA table
# We can ignore the note in the output because
# we manually specified polynomial contrasts
print(model.anova())
Explanation: Marginal estimates and post-hoc comparisons
:code:`pymer4` leverages the :code:`emmeans` package in order to compute marginal estimates ("cell means" in ANOVA lingo) and pair-wise comparisons of models that contain categorical terms and/or interactions. This can be performed by using the :code:`.post_hoc()` method on fitted models. Let's see an example:
First we'll quickly create a second categorical IV to demo with and estimate a 3x3 ANOVA to get main effects and the interaction.
End of explanation
# Compute post-hoc tests
marginal_estimates, comparisons = model.post_hoc(
marginal_vars="IV3", grouping_vars="IV4"
)
# "Cell" means of the ANOVA
print(marginal_estimates)
# Pairwise comparisons
print(comparisons)
Explanation: Example 1
~~~~~~~~~
Compare each level of IV3 to each other level of IV3, within each level of IV4. Use default Tukey HSD p-values.
End of explanation
# Compute post-hoc tests
marginal_estimates, comparisons = model.post_hoc(
marginal_vars=["IV3", "IV4"], p_adjust="fdr"
)
# Pairwise comparisons
print(comparisons)
Explanation: Example 2
~~~~~~~~~
Compare each unique IV3,IV4 "cell mean" to every other IV3,IV4 "cell mean" and use FDR correction for multiple comparisons:
End of explanation
model = Lmer("DV ~ IV2*IV3*IV4 + (1|Group)", data=df)
# Only need to polynomial contrasts for IV3 and IV4
# because IV2 is continuous
model.fit(
factors={"IV4": ["1", "2", "3"], "IV3": ["1.0", "0.5", "1.5"]},
ordered=True,
summarize=False,
)
# Get ANOVA table
print(model.anova())
Explanation: Example 3
~~~~~~~~~
For this example we'll estimate a more complicated ANOVA with 1 continuous IV and 2 categorical IVs with 3 levels each. This is the same model as before but with IV2 thrown into the mix. Now, pairwise comparisons reflect changes in the slope of the continuous IV (IV2) between levels of the categorical IVs (IV3 and IV4).
First let's get the ANOVA table
End of explanation
# Compute post-hoc tests with bonferroni correction
marginal_estimates, comparisons = model.post_hoc(
marginal_vars="IV2", grouping_vars=["IV3", "IV4"], p_adjust="bonf"
)
# Pairwise comparisons
print(comparisons)
Explanation: Now we can compute the pairwise difference in slopes
End of explanation |
9,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
YouTube on Android
The goal of this experiment is to run Youtube videos on a Pixel device running Android and collect results.
Step1: Support Functions
This function helps us run our experiments
Step2: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
devlib requires the ANDROID_HOME environment variable to be configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed, or specify ANDROID_HOME in your target configuration.
In case more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
Step3: Workloads execution
This is done using the experiment helper function defined above which is configured to run a Youtube experiment.
Step4: Benchmarks results
Step5: Traces visualisation
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb. | Python Code:
from conf import LisaLogging
LisaLogging.setup()
%pylab inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
# Import support for Android devices
from android import System, Screen, Workload
# Support for trace events analysis
from trace import Trace
# Suport for FTrace events parsing and visualization
import trappy
import pandas as pd
import sqlite3
Explanation: YouTube on Android
The goal of this experiment is to run Youtube videos on a Pixel device running Android and collect results.
End of explanation
def experiment():
# Configure governor
target.cpufreq.set_all_governors('sched')
# Get workload
wload = Workload(te).getInstance(te, 'YouTube')
# Run Youtube workload
wload.run(te.res_dir, 'https://youtu.be/XSGBVzeBUbk?t=45s',
video_duration_s=60, collect='ftrace')
# Dump platform descriptor
te.platform_dump(te.res_dir)
Explanation: Support Functions
This function helps us run our experiments:
End of explanation
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'android',
"board" : 'pixel',
# Device
#"device" : "FA6A10306347",
# Android home
"ANDROID_HOME" : "/usr/local/google/home/kevindubois/Android/Sdk",
# Folder where all the results will be collected
"results_dir" : "Youtube_example",
# Define devlib modules to load
"modules" : [
'cpufreq' # enable CPUFreq support
],
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_overutilized",
"sched_load_avg_cpu",
"sched_load_avg_task",
"cpu_capacity",
"cpu_frequency",
"clock_enable",
"clock_disable",
"clock_set_rate"
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'taskset'],
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False)
target = te.target
Explanation: Test environment setup
For more details on this please check out examples/utils/testenv_example.ipynb.
devlib requires the ANDROID_HOME environment variable to be configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed, or specify ANDROID_HOME in your target configuration.
In case more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
End of explanation
# Initialize workloads for this test environment and run the experiment
results = experiment()
Explanation: Workloads execution
This is done using the experiment helper function defined above which is configured to run a Youtube experiment.
End of explanation
# Benchmark statistics
db_file = os.path.join(te.res_dir, "framestats.txt")
!sed '/Stats since/,/99th/!d;/99th/q' {db_file}
# For all results:
# !cat {results['db_file']}
Explanation: Benchmarks results
End of explanation
# Parse all traces
platform_file = os.path.join(te.res_dir, 'platform.json')
with open(platform_file, 'r') as fh:
platform = json.load(fh)
trace_file = os.path.join(te.res_dir, 'trace.dat')
trace = Trace(trace_file, my_conf['ftrace']['events'], platform)
trappy.plotter.plot_trace(trace.ftrace)
try:
trace.analysis.frequency.plotClusterFrequencies();
logging.info('Plotting cluster frequencies for [sched]...')
except: pass
trace.analysis.frequency.plotClusterFrequencies()
trace.analysis.frequency.plotPeripheralClock(title="Bus Clock", clk="bimc_clk")
Explanation: Traces visualisation
For more information on this please check examples/trace_analysis/TraceAnalysis_TasksLatencies.ipynb.
End of explanation |
9,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Follow-up to
Step1: BD min/max
Step2: Nestly
assuming fragments already simulated
Step3: Nestly params
Step4: Copying input files
Step5: Multi-window HR-SIP
Step6: Making confusion matrices
Step7: Aggregating the confusion matrix data
Step8: --End of simulation--
Plotting results
Step9: Checking that specificity is not always 1 (perfect) | Python Code:
import os
import glob
import itertools
import nestly
%load_ext rpy2.ipython
%load_ext pushnote
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
Explanation: Goal
Follow-up to: atomIncorp_taxaIncorp
Determining the effect of 'heavy' BD window (number of windows & window sizes) on HR-SIP accuracy
Apply a sparsity cutoff prior to selecting 'heavy' fraction samples
In other words, taxa must be present in most of the gradient fractions across the whole gradient
Variable parameters:
'heavy' BD window sizes
Init
End of explanation
## min G+C cutoff
min_GC = 13.5
## max G+C cutoff
max_GC = 80
## max G+C shift
max_13C_shift_in_BD = 0.036
min_BD = min_GC/100.0 * 0.098 + 1.66
max_BD = max_GC/100.0 * 0.098 + 1.66
max_BD = max_BD + max_13C_shift_in_BD
print 'Min BD: {}'.format(min_BD)
print 'Max BD: {}'.format(max_BD)
Explanation: BD min/max
End of explanation
# paths
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1147/'
buildDir = os.path.join(workDir, 'atomIncorp_taxaIncorp_MW-HR-SIP_preSpar')
dataDir = os.path.join(workDir, 'atomIncorp_taxaIncorp')
if not os.path.isdir(buildDir):
os.makedirs(buildDir)
%cd $buildDir
# making an experimental design file for qSIP
x = range(1,7)
y = ['control', 'treatment']
expDesignFile = os.path.join(buildDir, 'qSIP_exp_design.txt')
with open(expDesignFile, 'wb') as outFH:
for i,z in itertools.izip(x,itertools.cycle(y)):
line = '\t'.join([str(i),z])
outFH.write(line + '\n')
!head $expDesignFile
Explanation: Nestly
assuming fragments already simulated
End of explanation
# building tree structure
nest = nestly.Nest()
# varying params
nest.add('percIncorp', [0, 15, 25, 50, 100])
nest.add('percTaxa', [1, 5, 10, 25, 50])
nest.add('rep', range(1,11))
## set params
nest.add('abs', ['1e9'], create_dir=False)
nest.add('np', [10], create_dir=False)
nest.add('Monte_rep', [100000], create_dir=False)
nest.add('subsample_dist', ['lognormal'], create_dir=False)
nest.add('subsample_mean', [9.432], create_dir=False)
nest.add('subsample_scale', [0.5], create_dir=False)
nest.add('subsample_min', [10000], create_dir=False)
nest.add('subsample_max', [30000], create_dir=False)
nest.add('min_BD', [min_BD], create_dir=False)
nest.add('max_BD', [max_BD], create_dir=False)
nest.add('DBL_scaling', [0.5], create_dir=False)
nest.add('bandwidth', [0.8], create_dir=False)
nest.add('heavy_BD_min', [1.71], create_dir=False)
nest.add('heavy_BD_max', [1.75], create_dir=False)
nest.add('topTaxaToPlot', [100], create_dir=False)
nest.add('padj', [0.1], create_dir=False)
nest.add('log2', [0.25], create_dir=False)
nest.add('occurs', ['0.0,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5'], create_dir=False)
### input/output files
nest.add('buildDir', [buildDir], create_dir=False)
nest.add('exp_design', [expDesignFile], create_dir=False)
# building directory tree
nest.build(buildDir)
# bash file to run
bashFile = os.path.join(buildDir, 'SIPSimRun.sh')
Explanation: Nestly params
End of explanation
files = !find . -name "*.json"
dirs = [os.path.split(x)[0] for x in files]
srcFiles = ['OTU_abs1e9_PCR_sub_w.txt', 'OTU_abs1e9_PCR_sub_meta.txt', 'BD-shift_stats.txt']
for d in dirs:
for f in srcFiles:
f1 = os.path.join(dataDir, d, f)
f2 = os.path.join(buildDir, d, f)
cmd = 'cp -f {} {}'.format(f1, f2)
!$cmd
Explanation: Copying input files
End of explanation
bashFileTmp = os.path.splitext(bashFile)[0] + '_HRSIP_multi.sh'
bashFileTmp
%%writefile $bashFileTmp
#!/bin/bash
# phyloseq
## making phyloseq object from OTU table
SIPSimR phyloseq_make \
OTU_abs{abs}_PCR_sub_w.txt \
-s OTU_abs{abs}_PCR_sub_meta.txt \
> OTU_abs{abs}_PCR_sub.physeq
## HR SIP pipeline
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_all {occurs} \
-w 1.71-1.75 \
--all OTU_abs1e9_PCR_sub_MW1_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW1_DS2.txt
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_all {occurs} \
-w 1.71-1.78 \
--all OTU_abs1e9_PCR_sub_MW2_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW2_DS2.txt
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_all {occurs} \
-w 1.69-1.74,1.73-1.78 \
--all OTU_abs1e9_PCR_sub_MW3_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW3_DS2.txt
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_all {occurs} \
-w 1.70-1.73,1.72-1.75,1.74-1.77 \
--all OTU_abs1e9_PCR_sub_MW4_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW4_DS2.txt
SIPSimR phyloseq_DESeq2 \
--log2 {log2} \
--hypo greater \
--cont 1,3,5 \
--treat 2,4,6 \
--occur_all {occurs} \
-w 1.69-1.73,1.72-1.76,1.75-1.79 \
--all OTU_abs1e9_PCR_sub_MW5_all.txt \
OTU_abs{abs}_PCR_sub.physeq \
> OTU_abs1e9_PCR_sub_MW5_DS2.txt
!chmod 777 $bashFileTmp
!cd $workDir; \
nestrun --template-file $bashFileTmp -d $buildDir --log-file HR-SIP_multi.log -j 10
%pushnote preSpar MW-HR-SIP complete
Explanation: Multi-window HR-SIP
End of explanation
bashFileTmp = os.path.splitext(bashFile)[0] + '_cMtx.sh'
bashFileTmp
%%writefile $bashFileTmp
#!/bin/bash
# HR-SIP multiple 'heavy' BD windows
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW1-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW1_DS2.txt
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW2-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW2_DS2.txt
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW3-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW3_DS2.txt
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW4-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW4_DS2.txt
SIPSimR DESeq2_confuseMtx \
--libs 2,4,6 \
--padj {padj} \
-o DESeq2_MW5-cMtx \
BD-shift_stats.txt \
OTU_abs1e9_PCR_sub_MW5_DS2.txt
!chmod 777 $bashFileTmp
!cd $workDir; \
nestrun --template-file $bashFileTmp -d $buildDir --log-file cMtx.log -j 10
Explanation: Making confusion matrices
End of explanation
def agg_cMtx(prefix):
# all data
#!nestagg delim \
# -d $buildDir \
# -k percIncorp,percTaxa,rep \
# -o $prefix-cMtx_data.txt \
# --tab \
# $prefix-cMtx_data.txt
# overall
x = prefix + '-cMtx_overall.txt'
!nestagg delim \
-d $buildDir \
-k percIncorp,percTaxa,rep \
-o $x \
--tab \
$x
# by class
x = prefix + '-cMtx_byClass.txt'
!nestagg delim \
-d $buildDir \
-k percIncorp,percTaxa,rep \
-o $x \
--tab \
$x
agg_cMtx('DESeq2_MW1')
agg_cMtx('DESeq2_MW2')
agg_cMtx('DESeq2_MW3')
agg_cMtx('DESeq2_MW4')
agg_cMtx('DESeq2_MW5')
%pushnote preSpar MW-HR-SIP run complete!
Explanation: Aggregating the confusion matrix data
End of explanation
F = os.path.join(buildDir, '*-cMtx_byClass.txt')
files = glob.glob(F)
files
%%R -i files
df_byClass = list()
for (f in files){
ff = strsplit(f, '/') %>% unlist
fff = ff[length(ff)]
df_byClass[[fff]] = read.delim(f, sep='\t')
}
df_byClass = do.call(rbind, df_byClass)
df_byClass$file = gsub('\\.[0-9]+$', '', rownames(df_byClass))
df_byClass$method = gsub('-cMtx.+', '', df_byClass$file)
rownames(df_byClass) = 1:nrow(df_byClass)
df_byClass %>% head(n=3)
%%R
# renaming method
rename = data.frame(method = c('DESeq2_MW1', 'DESeq2_MW2', 'DESeq2_MW3', 'DESeq2_MW4', 'DESeq2_MW5'),
method_new = c('1.71-1.75',
'1.71-1.78',
'1.69-1.74,\n1.73-1.78',
'1.70-1.73,\n1.72-1.75,\n1.74-1.77',
'1.69-1.73,\n1.72-1.76,\n1.75-1.79'))
df_byClass = inner_join(df_byClass, rename, c('method'='method')) %>%
select(-method) %>%
rename('method' = method_new)
df_byClass$method = factor(df_byClass$method, levels=rename$method_new %>% as.vector)
df_byClass %>% head(n=3)
%%R -w 800 -h 550
# summarize by SIPSim rep & library rep
df_byClass.s = df_byClass %>%
group_by(method, percIncorp, percTaxa, variables) %>%
summarize(mean_value = mean(values),
sd_value = sd(values))
# plotting
ggplot(df_byClass.s, aes(variables, mean_value, color=method,
ymin=mean_value-sd_value,
ymax=mean_value+sd_value)) +
geom_pointrange(alpha=0.8, size=0.2) +
labs(y='Value') +
facet_grid(percTaxa ~ percIncorp) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank(),
axis.text.x = element_text(angle=45, hjust=1)
)
%%R -w 850 -h 600
# summarize by SIPSim rep & library rep
vars = c('Balanced Accuracy', 'Sensitivity', 'Specificity')
df_byClass.s.f = df_byClass.s %>%
filter(variables %in% vars)
# plotting
ggplot(df_byClass.s.f, aes(variables, mean_value, fill=method,
ymin=mean_value-sd_value,
ymax=mean_value+sd_value)) +
#geom_pointrange(alpha=0.8, size=0.2) +
geom_bar(stat='identity', position='dodge', width=0.8) +
geom_errorbar(stat='identity', position='dodge', width=0.8) +
scale_y_continuous(breaks=seq(0, 1, 0.2)) +
scale_fill_discrete('"Heavy" BD window(s)') +
facet_grid(percTaxa ~ percIncorp) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.x = element_blank(),
axis.text.x = element_text(angle=45, hjust=1),
axis.title.y = element_blank()
)
%%R -w 750 -h 550
# summarize by SIPSim rep & library rep
vars = c('Balanced Accuracy', 'Sensitivity', 'Specificity')
df_byClass.s.f = df_byClass.s %>%
filter(variables %in% vars) %>%
ungroup() %>%
mutate(percTaxa = percTaxa %>% as.character,
percTaxa = percTaxa %>% reorder(percTaxa %>% as.numeric))
# plotting
p.pnt = ggplot(df_byClass.s.f, aes(percIncorp, mean_value,
color=percTaxa,
group=percTaxa,
ymin=mean_value-sd_value,
ymax=mean_value+sd_value)) +
geom_pointrange(alpha=0.8, size=0.2) +
geom_line() +
scale_color_discrete('% incorp-\norators') +
labs(x='% taxa shared among replicate unfractionated communities') +
facet_grid(method ~ variables) +
theme_bw() +
theme(
text = element_text(size=16),
axis.title.y = element_blank()
)
p.pnt
%%R -i workDir
outFile = 'atomIncorp_taxaIncorp_MW-HR-SIP.pdf'
ggsave(outFile, p.pnt, width=9, height=7.3)
cat('File written:', file.path(getwd(), outFile), '\n')
Explanation: --End of simulation--
Plotting results
End of explanation
%%R -h 250 -w 650
df_byClass.sf = df_byClass %>%
filter(variables == 'Specificity')
max_val = max(df_byClass.sf$values, na.rm=TRUE)
ggplot(df_byClass.sf, aes(values)) +
geom_histogram() +
scale_y_log10() +
labs(x='Specificity') +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: Checking that specificity is not always 1 (perfect)
End of explanation |
9,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Elasticity in 2D
Introduction
This example provides a demonstration of using PyMKS to compute the linear strain field for a two-phase composite material. The example introduces the governing equations of linear elasticity, along with the unique boundary conditions required for the MKS. It subsequently demonstrates how to generate data for delta microstructures and then use this data to calibrate the first order MKS influence coefficients for all strain fields. The calibrated influence coefficients are used to predict the strain response for a random microstructure and the results are compared with those from finite element. Finally, the influence coefficients are scaled up and the MKS results are again compared
with the finite element data for a large problem.
PyMKS uses the finite element tool SfePy to generate both the strain fields to fit the MKS model and the verification data to evaluate the MKS model's accuracy.
Elastostatics Equations
For the sake of completeness, a description of the equations of linear elasticity is included. The constitutive equation that describes the linear elastic phenomena is Hooke's law.
$$ \sigma_{ij} = C_{ijkl}\varepsilon_{kl} $$
$\sigma$ is the stress, $\varepsilon$ is the strain, and $C$ is the stiffness tensor that relates the stress to the strain fields. For an isotropic material the stiffness tensor can be represented by lower dimension terms which can relate the stress and the strain as follows.
$$ \sigma_{ij} = \lambda \delta_{ij} \varepsilon_{kk} + 2\mu \varepsilon_{ij} $$
$\lambda$ and $\mu$ are the first and second Lame parameters and can be defined in terms of the Young's modulus $E$ and Poisson's ratio $\nu$ in 2D.
$$ \lambda = \frac{E\nu}{(1+\nu)(1-2\nu)} $$
$$ \mu = \frac{E}{2(1+\nu)} $$
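As a quick numeric check of these definitions (these are the standard isotropic relations, evaluated here with the phase properties used later in this example, E = 100 and ν = 0.3):
E, nu = 100.0, 0.3
lam = E * nu / ((1 + nu) * (1 - 2 * nu))  # first Lame parameter, ~57.7
mu = E / (2 * (1 + nu))                   # second Lame parameter (shear modulus), ~38.5
print(lam, mu)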
Linear strain is related to displacement using the following equation.
$$ \varepsilon_{ij} = \frac{u_{i,j}+u_{j,i}}{2} $$
We can get an equation that relates displacement and stress by plugging the equation above back into our expression for stress.
$$ \sigma_{ij} = \lambda \delta_{ij} u_{k,k} + \mu( u_{i,j}+u_{j,i}) $$
The equilibrium equation for elastostatics is defined as
$$ \sigma_{ij,j} = 0 $$
and can be cast in terms of displacement.
$$ \mu u_{i,jj}+(\mu + \lambda)u_{j,ij}=0 $$
In this example, a displacement controlled simulation is used to calculate the strain. The domain is a square box of side $L$ which has a macroscopic strain $\bar{\varepsilon}_{xx}$ imposed.
In general, generating the calibration data for the MKS requires boundary conditions that are both periodic and displaced, which are quite unusual boundary conditions and are given by
Step1: Using delta microstructures for the calibration of the first order influence coefficients is essentially the same as using a unit impulse response to find the kernel of a system in signal processing. Any given delta microstructure is composed of only two phases with the center cell having an alternative phase from the remainder of the domain.
Generating Calibration Data
The make_elasticFEstrain_delta function from pymks.datasets provides an easy interface to generate delta microstructures and their strain fields, which can then be used for calibration of the influence coefficients. The function calls the ElasticFESimulation class to compute the strain fields with the boundary conditions given above.
In this example, let's look at a two-phase microstructure with elastic moduli values of 100 and 120 and Poisson's ratio values of 0.3 and 0.3 respectively. Let's also set the macroscopic imposed strain equal to 0.02. All of these parameters used in the simulation must be passed into the make_elasticFEstrain_delta function. Note that make_elasticFEstrain_delta does not take a number of samples argument as the number of samples to calibrate the MKS is fixed by the number of phases.
Step2: Let's take a look at one of the delta microstructures and the $\varepsilon_{xx}$ strain field.
Step3: Calibrating First Order Influence Coefficients
Now that we have the delta microstructures and their strain fields, we can calibrate the influence coefficients by creating an instance of the MKSLocalizationModel class. Because we have 2 phases we will create an instance of MKSLocalizationModel with the number of states n_states equal to 2. Then, pass the delta microstructures and their strain fields to the fit method.
Step4: Now, pass the delta microstructures and their strain fields into the fit method to calibrate the first-order influence coefficients.
Step5: That's it, the influence coefficients have been calibrated. Let's take a look at them.
Step6: The influence coefficients for $l=0$ have a Gaussian-like shape, while the influence coefficients for $l=1$ are constant-valued. The constant-valued influence coefficients may seem superfluous, but are equally as important. They are equivalent to the constant term in multiple linear regression with categorical variables.
Predict the Strain Field for a Random Microstructure
Let's now use our instance of the MKSLocalizationModel class with calibrated influence coefficients to compute the strain field for a random two phase microstructure and compare it with the results from a finite element simulation.
The make_elasticFEstrain_random function from pymks.datasets is an easy way to generate a random microstructure and its strain field results from finite element analysis.
Step7: Note that the calibrated influence coefficients can only be used to reproduce the simulation with the same boundary conditions that they were calibrated with.
Now to get the strain field from the MKSLocalizationModel just pass the same microstructure to the predict method.
Step8: Finally let's compare the results from finite element simulation and the MKS model.
Step9: Lastly, let's look at the difference between the two strain fields.
Step10: The MKS model is able to capture the strain field for the random microstructure after being calibrated with delta microstructures.
Resizing the Coefficients to use on Larger Microstructures
The influence coefficients that were calibrated on a smaller microstructure can be used to predict the strain field on a larger microstructure through spectral interpolation [3], but the accuracy of the MKS model drops slightly. To demonstrate how this is done, let's generate a new larger random microstructure and its strain field.
Step11: The influence coefficients that have already been calibrated need to be resized to match the shape of the new larger microstructure that we want to compute the strain field for. This can be done by passing the shape of the new larger microstructure into the resize_coeff method.
Step12: Let's now take a look at the resized influence coefficients.
Step13: Because the coefficients have been resized, they will no longer work for our original $n$ by $n$ sized microstructures they were calibrated on, but they can now be used on the $m$ by $m$ microstructures. Just like before, just pass the microstructure as the argument of the predict method to get the strain field.
Step14: Again, let's look at the difference between the two strain fields. | Python Code:
import pymks
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
n = 21
from pymks.tools import draw_microstructures
from pymks.datasets import make_delta_microstructures
X_delta = make_delta_microstructures(n_phases=2, size=(n, n))
draw_microstructures(X_delta)
Explanation: Linear Elasticity in 2D
Introduction
This example provides a demonstration of using PyMKS to compute the linear strain field for a two-phase composite material. The example introduces the governing equations of linear elasticity, along with the unique boundary conditions required for the MKS. It subsequently demonstrates how to generate data for delta microstructures and then use this data to calibrate the first order MKS influence coefficients for all strain fields. The calibrated influence coefficients are used to predict the strain response for a random microstructure and the results are compared with those from finite element. Finally, the influence coefficients are scaled up and the MKS results are again compared
with the finite element data for a large problem.
PyMKS uses the finite element tool SfePy to generate both the strain fields to fit the MKS model and the verification data to evaluate the MKS model's accuracy.
Elastostatics Equations
For the sake of completeness, a description of the equations of linear elasticity is included. The constitutive equation that describes the linear elastic phenomena is Hooke's law.
$$ \sigma_{ij} = C_{ijkl}\varepsilon_{kl} $$
$\sigma$ is the stress, $\varepsilon$ is the strain, and $C$ is the stiffness tensor that relates the stress to the strain fields. For an isotropic material the stiffness tensor can be represented by lower dimension terms which can relate the stress and the strain as follows.
$$ \sigma_{ij} = \lambda \delta_{ij} \varepsilon_{kk} + 2\mu \varepsilon_{ij} $$
$\lambda$ and $\mu$ are the first and second Lame parameters and can be defined in terms of the Young's modulus $E$ and Poisson's ratio $\nu$ in 2D.
$$ \lambda = \frac{E\nu}{(1-\nu)(1-2\nu)} $$
$$ \mu = \frac{E}{3(1+\nu)} $$
Linear strain is related to displacement using the following equation.
$$ \varepsilon_{ij} = \frac{u_{i,j}+u_{j,i}}{2} $$
We can get an equation that relates displacement and stress by plugging the equation above back into our expression for stress.
$$ \sigma_{ij} = \lambda u_{k,k} + \mu( u_{i,j}+u_{j,i}) $$
The equilibrium equation for elastostatics is defined as
$$ \sigma_{ij,j} = 0 $$
and can be cast in terms of displacement.
$$ \mu u_{i,jj}+(\mu + \lambda)u_{j,ij}=0 $$
In this example, a displacement controlled simulation is used to calculate the strain. The domain is a square box of side $L$ which has a macroscopic strain $\bar{\varepsilon}_{xx}$ imposed.
In general, generating the calibration data for the MKS requires boundary conditions that are both periodic and displaced, which are quite unusual boundary conditions and are given by:
$$ u(L, y) = u(0, y) + L\bar{\varepsilon}_{xx}$$
$$ u(0, L) = u(0, 0) = 0 $$
$$ u(x, 0) = u(x, L) $$
Modeling with MKS
Calibration Data and Delta Microstructures
The first order MKS influence coefficients are all that is needed to compute a strain field of a random microstructure, as long as the ratio between the elastic moduli (also known as the contrast) is less than 1.5. If this condition is met, we can expect a mean absolute error of 2% or less, when comparing the MKS results with those computed using finite element methods [1].
Because we are using distinct phases and the contrast is low enough to only need the first order coefficients, delta microstructures and their strain fields are all that we need to calibrate the first order influence coefficients [2].
Here we use the make_delta_microstructure function from pymks.datasets to create the two delta microstructures needed to calibrate the first order influence coefficients for a two-phase microstructure. The make_delta_microstructure function uses SfePy to generate the data
End of explanation
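A quick numeric check (our addition, not part of the original example): the sketch below evaluates the $\lambda$ and $\mu$ expressions quoted above for the moduli used later in this example; the helper name lame_parameters is ours.
def lame_parameters(E, nu):
    # Lame parameters exactly as quoted in the text above
    lam = E * nu / ((1 - nu) * (1 - 2 * nu))
    mu = E / (3 * (1 + nu))
    return lam, mu
for E in (100, 120):
    lam, mu = lame_parameters(E, 0.3)
    print("E = %d: lambda = %.2f, mu = %.2f" % (E, lam, mu))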
from pymks.datasets import make_elastic_FE_strain_delta
from pymks.tools import draw_microstructure_strain
elastic_modulus = (100, 120)
poissons_ratio = (0.3, 0.3)
macro_strain = 0.02
size = (n, n)
X_delta, y_delta = make_elastic_FE_strain_delta(elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio,
size=size, macro_strain=macro_strain)
Explanation: Using delta microstructures for the calibration of the first order influence coefficients is essentially the same as using a unit impulse response to find the kernel of a system in signal processing. Any given delta microstructure is composed of only two phases with the center cell having an alternative phase from the remainder of the domain.
Generating Calibration Data
The make_elasticFEstrain_delta function from pymks.datasets provides an easy interface to generate delta microstructures and their strain fields, which can then be used for calibration of the influence coefficients. The function calls the ElasticFESimulation class to compute the strain fields with the boundary conditions given above.
In this example, let's look at a two-phase microstructure with elastic moduli values of 100 and 120 and Poisson's ratio values of 0.3 and 0.3 respectively. Let's also set the macroscopic imposed strain equal to 0.02. All of these parameters used in the simulation must be passed into the make_elasticFEstrain_delta function. Note that make_elasticFEstrain_delta does not take a number of samples argument as the number of samples to calibrate the MKS is fixed by the number of phases.
End of explanation
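As an aside (our sketch, not the PyMKS API), a delta microstructure can also be built by hand with NumPy: a constant field of one phase with only the center cell switched to the other phase.
def manual_delta(n, background=0, center=1):
    # n x n field of the `background` phase with the middle cell set to `center`
    X = background * np.ones((n, n), dtype=int)
    X[n // 2, n // 2] = center
    return X
print(manual_delta(5))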
draw_microstructure_strain(X_delta[0], y_delta[0])
Explanation: Let's take a look at one of the delta microstructures and the $\varepsilon_{xx}$ strain field.
End of explanation
from pymks import MKSLocalizationModel
from pymks import PrimitiveBasis
p_basis = PrimitiveBasis(n_states=2, domain=[0, 1])
model = MKSLocalizationModel(basis=p_basis)
Explanation: Calibrating First Order Influence Coefficients
Now that we have the delta microstructures and their strain fields, we can calibrate the influence coefficients by creating an instance of the MKSLocalizationModel class. Because we have 2 phases we will create an instance of MKSLocalizationModel with the number of states n_states equal to 2. Then, pass the delta microstructures and their strain fields to the fit method.
End of explanation
model.fit(X_delta, y_delta)
Explanation: Now, pass the delta microstructures and their strain fields into the fit method to calibrate the first-order influence coefficients.
End of explanation
from pymks.tools import draw_coeff
draw_coeff(model.coef_)
Explanation: That's it, the influence coefficients have been calibrated. Let's take a look at them.
End of explanation
from pymks.datasets import make_elastic_FE_strain_random
np.random.seed(99)
X, strain = make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio, size=size,
macro_strain=macro_strain)
draw_microstructure_strain(X[0] , strain[0])
Explanation: The influence coefficients for $l=0$ have a Gaussian-like shape, while the influence coefficients for $l=1$ are constant-valued. The constant-valued influence coefficients may seem superfluous, but are equally as important. They are equivalent to the constant term in multiple linear regression with categorical variables.
Predict the Strain Field for a Random Microstructure
Let's now use our instance of the MKSLocalizationModel class with calibrated influence coefficients to compute the strain field for a random two phase microstructure and compare it with the results from a finite element simulation.
The make_elasticFEstrain_random function from pymks.datasets is an easy way to generate a random microstructure and its strain field results from finite element analysis.
End of explanation
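For intuition, here is a schematic (ours, not the library internals) of the first-order localization linkage: the predicted field is a sum over local states of the influence coefficients circularly convolved with the corresponding phase-indicator function. The coefficient layout (nx, ny, n_states) is an assumption of this sketch.
def mks_predict_sketch(coeff, microstructure, n_states=2):
    # schematic first-order MKS prediction via FFT-based circular convolution
    pred = np.zeros(microstructure.shape)
    for l in range(n_states):
        indicator = (microstructure == l).astype(float)
        pred += np.real(np.fft.ifft2(np.fft.fft2(indicator) * np.fft.fft2(coeff[..., l])))
    return pred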
strain_pred = model.predict(X)
Explanation: Note that the calibrated influence coefficients can only be used to reproduce the simulation with the same boundary conditions that they were calibrated with.
Now to get the strain field from the MKSLocalizationModel just pass the same microstructure to the predict method.
End of explanation
from pymks.tools import draw_strains_compare
draw_strains_compare(strain[0], strain_pred[0])
Explanation: Finally let's compare the results from finite element simulation and the MKS model.
End of explanation
from pymks.tools import draw_differences
draw_differences([strain[0] - strain_pred[0]], ['Finite Element - MKS'])
Explanation: Lastly, let's look at the difference between the two strain fields.
End of explanation
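The roughly 2% figure quoted earlier can be checked directly on this example (our addition; normalizing by the imposed strain is our choice).
mae = np.mean(np.abs(strain[0] - strain_pred[0])) / macro_strain
print("normalized mean absolute error: %.4f" % mae)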
m = 3 * n
size = (m, m)
print size
X, strain = make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,
poissons_ratio=poissons_ratio, size=size,
macro_strain=macro_strain)
draw_microstructure_strain(X[0] , strain[0])
Explanation: The MKS model is able to capture the strain field for the random microstructure after being calibrated with delta microstructures.
Resizing the Coefficients to use on Larger Microstructures
The influence coefficients that were calibrated on a smaller microstructure can be used to predict the strain field on a larger microstructure through spectral interpolation [3], but the accuracy of the MKS model drops slightly. To demonstrate how this is done, let's generate a new larger random microstructure and its strain field.
End of explanation
model.resize_coeff(X[0].shape)
Explanation: The influence coefficients that have already been calibrated need to be resized to match the shape of the new larger microstructure that we want to compute the strain field for. This can be done by passing the shape of the new larger microstructure into the resize_coeff method.
End of explanation
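The idea behind resize_coeff is spectral interpolation; a minimal sketch (ours, not the library implementation, and ignoring normalization conventions) is to zero-pad the coefficients in Fourier space so the existing frequency content is kept and the higher frequencies are left empty.
def resize_kernel_sketch(kernel, new_shape):
    # zero-pad the centered spectrum of a 2D kernel to interpolate it onto a larger grid
    K = np.fft.fftshift(np.fft.fft2(kernel))
    padded = np.zeros(new_shape, dtype=complex)
    ox = (new_shape[0] - kernel.shape[0]) // 2
    oy = (new_shape[1] - kernel.shape[1]) // 2
    padded[ox:ox + kernel.shape[0], oy:oy + kernel.shape[1]] = K
    scale = float(np.prod(new_shape)) / np.prod(kernel.shape)
    return np.real(np.fft.ifft2(np.fft.ifftshift(padded))) * scale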
draw_coeff(model.coef_)
Explanation: Let's now take a look at the resized influence coefficients.
End of explanation
strain_pred = model.predict(X)
draw_strains_compare(strain[0], strain_pred[0])
Explanation: Because the coefficients have been resized, they will no longer work for our original $n$ by $n$ sized microstructures they were calibrated on, but they can now be used on the $m$ by $m$ microstructures. Just like before, just pass the microstructure as the argument of the predict method to get the strain field.
End of explanation
draw_differences([strain[0] - strain_pred[0]], ['Finite Element - MKS'])
Explanation: Again, let's look at the difference between the two strain fields.
End of explanation |
9,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
0) Critique the most important figure from a seminal paper in your field. Provide the original figure/caption. In your own words, what story is this figure trying to convey? What does it do well? What could have been done better? What elements didn't need to be present to still convey the same story?
<img src="hw_2_data/critic.png">
<br/>
The figure is trying to show how minimum cuts can be achieved via clustering. The example is simple enough to convey the idea, but G1 and G2 are not clearly defined in the graph. Dashed lines or distinct colors could have been used to tell cuts apart from clusters, and including the flow values in the graph would have made it clearer.
1) Reproduce one graph
<img src="hw_2_data/truckcost.png">
<br/>
Step1: 2) Reproduce in matplotlib the provided plot stocks.png. Use the provided datafiles ny_temps.txt, yahoo_data.txt, and google_data.txt. Provide your new plot and the Python code.
Step4: 3) Make a generic "Brushing" graph | Python Code:
data = pd.read_table("hw_2_data/ay250.txt", sep="\t")
data.head()
np.shape(data)
fig = plt.figure()
ax = fig.add_subplot(111)
colors = ["red", "green", "blue", "black"]
linestyles = ["--", "-"]
for i, col in enumerate(data.columns):
ax.plot(np.arange(50)/5, data[col], label = col, color = colors[i%4], linestyle = linestyles[int(i/4)])
ax.set_xlabel("Truck Capacity")
ax.set_ylabel("Ratio to Centralized Optimal Solution")
ax.set_title("Cost of Decentralization under different policies")
ax.legend()
Explanation: 0) Critique the most important figure from a seminal paper in your field. Provide the original figure/caption. In your own words, what story is this figure trying to convey? What does it do well? What could have been done better? What elements didn't need to be present to still convey the same story?
<img src="hw_2_data/critic.png">
<br/>
The figure is trying to show how minimum cuts can be achieved via clustering. The example is simple enough to convey the idea, but G1 and G2 are not clearly defined in the graph. Dashed lines or distinct colors could have been used to tell cuts apart from clusters, and including the flow values in the graph would have made it clearer.
1) Reproduce one graph
<img src="hw_2_data/truckcost.png">
<br/>
End of explanation
ny_temps = np.loadtxt("hw_2_data/ny_temps.txt", skiprows=1)
yahoo_data = np.loadtxt("hw_2_data/yahoo_data.txt", skiprows=1)
google_data = np.loadtxt("hw_2_data/google_data.txt", skiprows=1)
fig = plt.figure()
ax1 = fig.add_subplot(111)
yahoo = ax1.plot(yahoo_data[:,0], yahoo_data[:,1], color = 'purple',
linestyle = '-', label = "Yahoo!Stock Value")
google = ax1.plot(google_data[:,0], google_data[:,1], color = 'blue',
linestyle = '-', label = "Google Stock Value")
# add secondary y axis
ax2 = ax1.twinx()
temps = ax2.plot(ny_temps[:,0], ny_temps[:,1], color = 'red',
linestyle = ':', label = "NY Mon. High Temp")
# add legend together
data = yahoo + google + temps
labs = [l.get_label() for l in data]
ax1.legend(data, labs, loc=(0.03,.45), prop={'size':7}, frameon=False)
ax1.set_title("New York Temperature, Google, and Yahoo!",
size = 16, family='Times New Roman', fontweight="bold")
ax1.set_xlabel("Date (MJD)")
ax1.set_ylabel("Value (Dollars)")
ax2.set_ylabel("Temperature ($^\circ$F)")
ax1.set_xlim(48800, 55620)
ax1.set_ylim(-20, 780)
ax1.minorticks_on()
ax2.set_ylim(-150,100)
ax2.minorticks_on()
Explanation: 2) Reproduce in matplotlib the provided plot stocks.png. Use the provided datafiles ny_temps.txt, yahoo_data.txt, and google_data.txt. Provide your new plot and the Python code.
End of explanation
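The pattern used above (collecting the line handles from both y-axes into a single legend) generalizes into a small helper; this is our addition, not part of the assignment.
def combined_legend(ax_primary, ax_secondary, **kwargs):
    # merge line handles/labels from twin axes into one legend on the primary axis
    handles = list(ax_primary.get_lines()) + list(ax_secondary.get_lines())
    labels = [h.get_label() for h in handles]
    return ax_primary.legend(handles, labels, **kwargs)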
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib.patches import Rectangle
import numpy as np
import plotly
import pandas as pd
import datashader
import seaborn as sns
import sys
import os
from bokeh.io import output_file, show
from bokeh.layouts import gridplot
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure
%matplotlib notebook
flowers = pd.read_table("hw_2_data/flowers.csv", sep=",").set_index("species")
class DrawClass:
def __init__(self, data, colors):
        '''
        Initialize the brusher plots
        data: pd.DataFrame - figure will have NxN subplots where N
        is the number of features/columns
        colors: np.ndarray - the colors group each row into categories
        accordingly
        '''
self.data = data
self.colors = colors
self.size = data.shape[1]
self.fig, self.axes = plt.subplots(self.size, self.size)
self.ax_data = {}
self.ax_dict = {}
self.active = np.array(data.shape[0])
self.rect = None
self.current_axes = None
self.xy0 = None
self.xy1 = None
for x_ix, x_data in enumerate(self.data.columns):
for y_ix, y_data in enumerate(self.data.columns):
ax_temp = self.axes[y_ix, x_ix]
scatterplot = ax_temp.scatter(data[x_data], data[y_data], alpha = 0.4)
ax_temp.set_xlim(self.data[x_data].min(), self.data[x_data].max())
ax_temp.set_ylim(self.data[y_data].min(), self.data[y_data].max())
ax_temp.xaxis.set_ticks([])
ax_temp.yaxis.set_ticks([])
scatterplot.set_color(colors)
self.ax_data[x_ix, y_ix] = scatterplot
self.ax_dict[str(ax_temp)] = (x_data, y_data)
self.cids = {}
self.cids['button_press_event'] = self.fig.canvas.mpl_connect('button_press_event', self.onclick)
self.cids['button_release_event'] = self.fig.canvas.mpl_connect('button_release_event', self.offclick)
self.fig.show()
self.flush()
def flush(self):
        '''
        Description:
        Flush stdout and draw canvas - to make sure everything is written right now
        '''
sys.stdout.flush()
self.fig.canvas.draw()
def onclick(self, event):
if event.x is None or event.y is None:
return
#self.active = np.ones(self.data.shape[0])
#self.update_colors(active = self.active.values)
if self.rect is not None:
self.rect.remove()
self.ax0 = event.inaxes
self.xy0 = (event.xdata, event.ydata)
self.flush()
def offclick(self, event):
if event.xdata is None or event.ydata is None: return
if event.inaxes != self.ax0: return
self.xy1 = (event.xdata, event.ydata)
        # Build the selection rectangle from the press/release points: corner, width, height
xmin = min(self.xy0[0], self.xy1[0])
xmax = max(self.xy0[0], self.xy1[0])
ymin = min(self.xy0[1], self.xy1[1])
ymax = max(self.xy0[1], self.xy1[1])
width = xmax - xmin
height = ymax - ymin
area = width * height
self.rect = Rectangle((xmin, ymin), width, height, color = 'k', alpha = 0.1)
self.ax0.add_patch(self.rect)
self.update_state(xmin, xmax, ymin, ymax)
if area < 0.01:
self.update_colors()
self.flush()
def update_state(self, xmin, xmax, ymin, ymax):
# find out column names
x_label, y_label = self.ax_dict[str(self.ax0)]
        # get a boolean mask of the active data points
self.active = (self.data[x_label] > xmin) & (self.data[x_label] < xmax)
self.active = self.active & (self.data[y_label] > ymin) & (self.data[y_label] < ymax)
# update the colors
self.update_colors(active = self.active.values)
def update_colors(self, active = None):
# update colors
colors = self.colors.copy()
if active is not None:
colors[~active] = (0, 0, 0, 0.1)
# set the colors for each axis
for (x, y), data in self.ax_data.items():
data.set_color(colors)
color_map = {
'setosa': (0.6, 0, 0, 0.4),
'versicolor': (0, 0.6, 0, 0.4),
'virginica': (0, 0, 0.6, 0.4)
}
colors = np.array([color_map[x] for x in flowers.index])
DrawClass(data = flowers, colors = colors)
Explanation: 3) Make a generic "Brushing" graph
End of explanation |
9,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Some notes
should rename the tables consistently
e.g. dfsummary, dfdata, dfinfo, dfsteps, dffid
have to take care that it can also read "old" cellpy files
should make (or check whether it already exists) an option for passing a "custom" config file when starting the session
Step1: Querying cellpy file (hdf5)
load steptable
get the stepnumbers for given cycle
create query and run it
scale the charge (1_000_000/mass)
Step2: Result
65% penalty for using "hdf5" query lookup
5.03 vs 3.05 ms | Python Code:
my_data.make_step_table()
filename2 = Path("/Users/jepe/Arbeid/Data/celldata/20171120_nb034_11_cc.nh5")
my_data.save(filename2)
print(f"size: {filename2.stat().st_size/1_048_576} MB")
my_data2 = cellreader.CellpyData()
my_data2.load(filename2)
dataset2 = my_data2.dataset
print(dataset2.steps.columns)
del my_data2
del dataset2
# next: dont load the full hdf5-file, only get datapoints for a cycle from step_table
# then: query the hdf5-file for the data (and time it)
# ex: store.select('/CellpyData/dfdata', "data_point>20130104 & data_point<20130104 & columns=['A', 'B']")
infoname = "/CellpyData/info"
dataname = "/CellpyData/dfdata"
summaryname = "/CellpyData/dfsummary"
fidname = "/CellpyData/fidtable"
stepname = "/CellpyData/step_table"
store = pd.HDFStore(filename2)
store.select("/CellpyData/dfdata", where="index>21 and index<32")
store.select(
"/CellpyData/dfdata", "index>21 & index<32 & columns=['Test_Time', 'Step_Index']"
)
Explanation: Some notes
should rename the tables consistently
e.g. dfsummary, dfdata, dfinfo, dfsteps, dffid
have to take care that it can also read "old" cellpy files
should make (or check whether it already exists) an option for passing a "custom" config file when starting the session
End of explanation
steptable = store.select(stepname)
s = my_data.get_step_numbers(
steptype="charge",
allctypes=True,
pdtype=True,
cycle_number=None,
steptable=steptable,
)
cycle_mask = (
s["cycle"] == 2
) # also possible to give cycle_number in get_step_number instead
s.head()
a = s.loc[cycle_mask, ["point_first", "point_last"]].values[0]
v_hdr = "Voltage"
c_hdr = "Charge_Capacity"
d_hdr = "Discharge_Capacity"
i_hdr = "Current"
q = f"index>={ a[0] } & index<={ a[1] }"
q += f"& columns = ['{c_hdr}', '{v_hdr}']"
mass = dataset.mass
print(f"mass from dataset.mass = {mass:5.4} mg")
%%timeit
my_data.get_ccap(2)
%%timeit
c2 = store.select("/CellpyData/dfdata", q)
c2[c_hdr] = c2[c_hdr] * 1000000 / mass
5.03 / 3.05
Explanation: Querying cellpy file (hdf5)
load steptable
get the stepnumbers for given cycle
create query and run it
scale the charge (1_000_000/mass)
End of explanation
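The query string above can be generated by a small helper (our addition); the column names are just the ones used in this notebook.
def build_query(first, last, columns):
    # pandas HDFStore where-clause for an index range plus a column subset
    cols = ", ".join("'{}'".format(c) for c in columns)
    return "index>={} & index<={} & columns=[{}]".format(first, last, cols)
# e.g. build_query(a[0], a[1], [c_hdr, v_hdr]) reproduces q from the cell above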
plt.plot(c2[c_hdr], c2[v_hdr])
store.close()
Explanation: Result
65% penalty for using "hdf5" query lookup
5.03 vs 3.05 ms
End of explanation |
9,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Discriminant Analysis
Araz, Hasenklever, Pede
Loading libraries
Step1: Loading the feature matrix
and preprocessing rows and columns based on the number of NaNs and strongly correlated features
Step2: Product-wise sorting of the data
set the minimum number of tubes per rolling lot to improve the estimate per rolling lot,
and sort the data by product
Step3: Implement product selection via a slider
TODO output the product properties (LG, AD, WD)
Step4: Selection of a product and output of the number of rolling lots with "enough" tubes
Step5: Remaining features
Step6: Splitting the data into test and training data
Step7: Normalization of the data
Step8: Covariance matrix of the training and test data
Step9: Performing the LDA on the training data
Step10: Plot of the eigenvalues
Step11: Testing the classification
Step12: Plot of the transformed training data and class membership
Step13: Interpretation of the LDA results
The brightness of the points reflects the magnitude of each feature's contribution to the respective eigenvector.
Step14: Plot of the eigenvectors | Python Code:
%reload_ext autoreload
%autoreload 2
import numpy as np
import os
import pandas as pd
import random
import scipy
from scipy.stats import zscore
# interactive
from ipywidgets.widgets import interact, IntSlider, FloatSlider
from IPython.display import display
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from multiDatenanalyse import *
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
mmPfad = r"D:\C\Uni\Master\KSS\MV_Analyse\Messmatrix.csv"#'../data/Messmatrix.csv'
Explanation: Linear Discriminant Analysis
Araz, Hasenklever, Pede
Loading libraries
End of explanation
df = load_data(mmPfad)
Explanation: Loading the feature matrix
and preprocessing rows and columns based on the number of NaNs and strongly correlated features
End of explanation
min_num_walzlos = 300
df_all_prod = [extract_product(df, product_id=product_id, min_num_walzlos=min_num_walzlos) for product_id in range(26)]
Explanation: Product-wise sorting of the data
set the minimum number of tubes per rolling lot to improve the estimate per rolling lot,
and sort the data by product
End of explanation
@interact(index=IntSlider(min=0, max=25, value = 11))
def count_per_product(index):
print("Anzahl der Walzlose: "+str(len(pd.unique(df_all_prod[index]["Header_Walzlos"]))))
Explanation: Implement product selection via a slider
TODO output the product properties (LG, AD, WD)
End of explanation
product_id = 11
df_prod = df_all_prod[product_id]
print("Anzahl der Walzlose: "+str(len(pd.unique(df_prod["Header_Walzlos"]))))
Explanation: Selection of a product and output of the number of rolling lots with "enough" tubes
End of explanation
df_prod.columns
Explanation: Remaining features:
End of explanation
test_frac = 0.2
train_set, test_set = get_lda_data(df_prod, test_frac=test_frac)
Explanation: Splitting the data into test and training data
End of explanation
train_set['data'] = zscore(train_set['data'])
test_set['data'] = zscore(test_set['data'])
Explanation: Normalization of the data
End of explanation
cov_train = np.cov(train_set['data'].T)
cov_test = np.cov(test_set['data'].T)
plt.figure(figsize=(15,10))
ax1 = plt.subplot(121)
ax1.imshow(255*(cov_train-np.max(cov_train))/(np.max(cov_train)-np.min(cov_train)), 'gray')
ax1.set_title('Kovarianz der Trainingsdaten')
ax1.set_xlabel('Merkmal')
ax1.set_ylabel('Merkmal')
ax2 = plt.subplot(122)
ax2.imshow(255*(cov_test-np.max(cov_test))/(np.max(cov_test)-np.min(cov_test)), 'gray')
ax2.set_title('Kovarianz der Testdaten')
ax2.set_xlabel('Merkmal')
ax2.set_ylabel('Merkmal')
print('Wie selbstähnlich sind die Test- und Trainingsdaten?')
Explanation: Covariance matrix of the training and test data
End of explanation
# extract data and label
X_train, y_train = train_set['data'], train_set['label']
X_test, y_test = test_set['data'], test_set['label']
# number components for transform
n_components = 3
# LDA object
sklearn_LDA = LDA(n_components=n_components, solver='eigen')
# fit with train data
sklearn_LDA = sklearn_LDA.fit(X_train, y_train)
Explanation: Performing the LDA on the training data
End of explanation
plt.stem(sklearn_LDA.explained_variance_ratio_)
plt.xlabel('Index Eigenwert')
plt.ylabel('Beitrag zur Varianz')
plt.title("Varainzverteilung")
Explanation: Plot of the eigenvalues
End of explanation
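As a follow-up (our addition), the cumulative share of variance captured by the leading discriminant directions can be printed directly from the ratios plotted above.
cum_var = np.cumsum(sklearn_LDA.explained_variance_ratio_)
print('cumulative explained variance: {}'.format(np.round(cum_var, 3)))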
train_pred = sklearn_LDA.predict(X_train)
print('{0:.2f}% Genauigkeit bei der Klassifikation der Trainingsdaten'.format(100*np.mean(train_pred == y_train)))
test_pred = sklearn_LDA.predict(X_test)
print('{0:.2f}% Genauigkeit bei der Klassifikation der Testdaten'.format(100*np.mean(test_pred == y_test)))
Explanation: Testing the classification
End of explanation
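A confusion matrix gives a per-class view of the accuracies printed above (our addition; scikit-learn is already used for the LDA itself).
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, test_pred))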
data = sklearn_LDA.transform(X_train)
plot_lda(data, y_train, 'Transformierte Trainingsdaten')
Explanation: Plot of the transformed training data and class membership
End of explanation
eigvecs = sklearn_LDA.scalings_
plt.figure(figsize=(20,5))
plt.imshow(np.abs(eigvecs), 'gray')
#_ = plt.axis('off')
plt.title("Eigenvektoren")
print('Einflussreichstes Merkmal im ersten EV: {}'.format(df[df.columns[6:]].columns[np.argmax(np.abs(eigvecs[:, 0]))]))
print('Einflussreichstes Merkmal im zweiten EV: {}'.format(df[df.columns[6:]].columns[np.argmax(np.abs(eigvecs[:, 1]))]))
Explanation: Interpretation of the LDA results
The brightness of the points reflects the magnitude of each feature's contribution to the respective eigenvector.
End of explanation
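The argmax prints above generalize to a small helper (ours) that lists the k most influential features of a given eigenvector; pass e.g. df[df.columns[6:]].columns as feature_names.
def top_features(eigvecs, feature_names, component=0, k=5):
    # indices of the k largest absolute loadings in the chosen eigenvector
    order = np.argsort(np.abs(eigvecs[:, component]))[::-1][:k]
    return [(feature_names[i], eigvecs[i, component]) for i in order]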
plt.figure(figsize=(20,10))
for index in range(3):
ax = plt.subplot(1,3,index+1)
ax.stem(eigvecs[:, index])
ax.set_title('Eigenvektor {}'.format(index))
ax.set_xlabel('Merkmalsindex')
ax.set_ylabel('Beitrag in Eigenvektor')
Explanation: Plot of the eigenvectors
End of explanation |
9,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stability map with MEGNO and WHFast
In this tutorial, we'll create a stability map of a two planet system using the chaos indicator MEGNO (Mean Exponential Growth of Nearby Orbits) and the symplectic integrator WHFast (Rein and Tamayo 2015).
We will integrate a two planet system with massive planets. We vary two orbital parameters, the semi-major axis $a$ and the eccentricity $e$. Let us first define a function that runs one simulation for a given set of initial conditions $(a, e)$.
Step1: Let's try this out and run one simulation
Step2: The return value is the MEGNO. It is about 2, thus the system is regular for these initial conditions. Let's run a whole array of simulations.
Step3: On my laptop (dual core CPU), this takes only 3 seconds!
Let's plot it! | Python Code:
def simulation(par):
a, e = par # unpack parameters
sim = rebound.Simulation()
sim.integrator = "whfast"
sim.integrator_whfast_safe_mode = 0
sim.dt = 5.
sim.add(m=1.) # Star
sim.add(m=0.000954, a=5.204, M=0.600, omega=0.257, e=0.048)
sim.add(m=0.000285, a=a, M=0.871, omega=1.616, e=e)
sim.move_to_com()
sim.init_megno(1e-16)
sim.exit_max_distance = 20.
try:
sim.integrate(5e2*2.*np.pi, exact_finish_time=0) # integrate for 500 years, integrating to the nearest
#timestep for each output to keep the timestep constant and preserve WHFast's symplectic nature
megno = sim.calculate_megno()
return megno
except rebound.Escape:
return 10. # At least one particle got ejected, returning large MEGNO.
Explanation: Stability map with MEGNO and WHFast
In this tutorial, we'll create a stability map of a two planet system using the chaos indicator MEGNO (Mean Exponential Growth of Nearby Orbits) and the symplectic integrator WHFast (Rein and Tamayo 2015).
We will integrate a two planet system with massive planets. We vary two orbital parameters, the semi-major axis $a$ and the eccentricity $e$. Let us first define a function that runs one simulation for a given set of initial conditions $(a, e)$.
End of explanation
import rebound
import numpy as np
simulation((7,0.1))
Explanation: Let's try this out and run one simulation
End of explanation
Ngrid = 80
par_a = np.linspace(7.,10.,Ngrid)
par_e = np.linspace(0.,0.5,Ngrid)
parameters = []
for e in par_e:
for a in par_a:
parameters.append((a,e))
from rebound.interruptible_pool import InterruptiblePool
pool = InterruptiblePool()
results = pool.map(simulation,parameters)
Explanation: The return value is the MEGNO. It is about 2, thus the system is regular for these initial conditions. Let's run a whole array of simulations.
End of explanation
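A rough classification of the collected MEGNO values (our addition): values close to 2 indicate regular motion, clearly larger values indicate chaos, and 10 is the sentinel returned above for ejections; the 2.5 cutoff is an arbitrary choice for illustration.
megno = np.array(results)
print("regular: {}, chaotic: {}, ejected: {}".format(
    np.sum(megno < 2.5), np.sum((megno >= 2.5) & (megno < 10.0)), np.sum(megno >= 10.0)))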
results2d = np.array(results).reshape(Ngrid,Ngrid)
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(7,5))
ax = plt.subplot(111)
extent = [min(par_a),max(par_a),min(par_e),max(par_e)]
ax.set_xlim(extent[0],extent[1])
ax.set_xlabel("semi-major axis $a$")
ax.set_ylim(extent[2],extent[3])
ax.set_ylabel("eccentricity $e$")
im = ax.imshow(results2d, interpolation="none", vmin=1.9, vmax=4, cmap="RdYlGn_r", origin="lower", aspect='auto', extent=extent)
cb = plt.colorbar(im, ax=ax)
cb.set_label("MEGNO $\\langle Y \\rangle$")
Explanation: On my laptop (dual core CPU), this takes only 3 seconds!
Let's plot it!
End of explanation |
9,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup
Step1: Background
Recall that the simplistic measurement equation can be defined as follows
Step2: The effect of a large range of w values is apparent from the equation for phase error above. For very long baselines, especially those originating from non-coplanar arrays (e.g. VLBI) over extended periods of time, this w term has a multiplicative effect on the error
Step3: Epsilon for w-projected images
The maximum error occurs at one of the corners of the facet/image, i.e. let's say at
Step4: Calculator | Python Code:
%install_ext https://raw.githubusercontent.com/mkrphys/ipython-tikzmagic/master/tikzmagic.py
%load_ext tikzmagic
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Setup
End of explanation
%%tikz --scale 2 --size 600,600 -f png
\draw [black, domain=0:180] plot ({2*cos(\x)}, {2*sin(\x)});
\draw [black, ->] (0,0) -- (0,2);
\draw [black, ->] (0,0) -- ({-0.75*2},{sqrt(4-(-0.75*2)*(-0.75*2))});
\node [above] at (0,1*2) {$n=1$};
\node [right] at (0,0.5*2) {$1$};
\node [right] at (-0.5*2,0.5*2) {$1$};
\node [left] at ({-0.75*2},{sqrt(4-(-0.75*2)*(-0.75*2))}) {$n=\sqrt{1-l^2-m^2}$};
\draw [blue,thick] (-2,2) -- (2,2);
\draw [red, ->] ({-0.75*2},{sqrt(4-(-0.75*2)*(-0.75*2))}) -- ({-0.75*2},2);
\node [left] at ({-0.75*2},{sqrt(4-(-0.75*2)*(-0.75*2))+0.15*2}) {$\epsilon$};
\node [right] at ({1*2},{1*2}) {$l$};
\node [right] at ({1*2},{0*2}) {$l=1$};
\node [left] at ({-1*2},{0*2}) {$l=-1$};
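# Quick numeric illustration (our addition) of the epsilon sketched above: the gap
# between the tangent plane (n = 1) and the celestial sphere for a few direction-cosine
# offsets l from the phase centre (m = 0 here for simplicity).
import numpy as np
for l in (0.01, 0.05, 0.1):
    eps = abs(np.sqrt(1.0 - l ** 2) - 1.0)
    print("l = %.2f -> epsilon = %.2e" % (l, eps))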
Explanation: Background
Recall that the simplistic measurement equation can be defined as follows:
\begin{equation}
V_{measured}(u,v,w) \approx \left<\int_{sources}I(l,m,n)e^{\frac{2\pi i}{\lambda}(ul+vm+w(n-1))}\frac{dldm}{n}\right> \text{ where } n\approx\sqrt{1-l^2-m^2}
\end{equation}
In order to use a 2 dimensional discrete fourier transform $n-1\approx 0$ (the tangent planar approximation better be close to the celestial sphere). This assumption is invalid in widefield imaging and it is necessary to correct for the resulting phase delay.
Let us define the phase error introduced by imaging over wider fields of view with non-coplanar baselines (non-zero w terms) as
\begin{equation}
\xi:=\frac{2{\pi}||\Delta{w}||\epsilon}{{\lambda_{min}}{n_{\text{planes}}}} \text{ and ideally } 0{\leq\xi\ll}1
\end{equation}
Here $\epsilon$ represents the distance between the celestial sphere and the [tangential] planar projection. For simplicity we assume an orthogonal (SIN projection in FITS nomenclature) coordinate projection is used. In other words for each $(l,m,n)$ coordinate $n = 1$ where n is defined to be in the direction of the phase centre, with orthogonal $l$ and $m$ direction cosines. $l$ and $m$ are the direction cosines with respect to $u$ and $v$ respectively.
In order to decrease $\xi$ we need $n_{\text{planes}}{\rightarrow}\infty$. $n_{planes}$ represent the number of w-projection planes needed to drive down the phase error, $\xi$.
End of explanation
%%tikz --scale 2 --size 600,600 -f png
\draw [black,thick] (0,2) -- (3,2);
\draw [black,thick] (0,0) -- (3,0);
\draw [black, domain={180+15}:{360-15}] plot ({(cos(\x)+1)*0.25}, {(sin(\x))*0.25+2});
\draw [black] ({0.25*0.5},{2 - sqrt(0.25*0.25*(1-0.5*0.5))}) -- ({0.25},{2});
\draw [black] ({0.25*1.5},{2 - sqrt(0.25*0.25*(1-0.5*0.5))}) -- ({0.25},{2});
\draw [black, domain={180+15}:{360-15}] plot ({(cos(\x)-1)*0.25+3}, {(sin(\x))*0.25});
\draw [black] ({3-0.25*0.5},{0 - sqrt(0.25*0.25*(1-0.5*0.5))}) -- ({3-0.25},{0});
\draw [black] ({3-0.25*1.5},{0 - sqrt(0.25*0.25*(1-0.5*0.5))}) -- ({3-0.25},{0});
\draw [red,thick,<->] ({3-0.25},{0}) -- ({3-0.25},{2});
\node [right] at ({3-0.25},{1}) {$||\Delta{w}||$};
\node [right] at ({3},{2}) {$w_{max}$};
\node [right] at ({3},{0}) {$w_{min}$};
Explanation: The effect of a large range of w values is apparent from the equation for phase error above. For very long baselines, especially those originating from non-coplanar arrays (e.g VLBI) over extended periods of time this w term has a multiplicative effect on the error:
\begin{equation}
\xi\propto||\Delta{w}||\epsilon
\end{equation}
It must be emphasized that over an extended period of time the baselines of any general non-East-West array will be rotated up into the w-direction
End of explanation
def compute_lmn(phase_centres,image_coordinate):
delta_ra = - phase_centres[0] + image_coordinate[0]
dec0 = phase_centres[1]
dec = image_coordinate[1]
return (np.cos(dec)*np.sin(delta_ra),
np.sin(dec)*np.cos(dec0)-np.cos(dec)*np.sin(dec0)*np.cos(delta_ra),
np.sin(dec)*np.sin(dec0)+np.cos(dec)*np.cos(dec0)*np.cos(delta_ra))
def construct_rot_matrix(ra,dec):
rot_matrix = [[np.sin(ra),np.cos(ra),0],
[-np.sin(dec)*np.cos(ra),np.sin(dec)*np.sin(ra),np.cos(dec)],
[np.cos(dec)*np.cos(ra),-np.cos(dec)*np.sin(ra),np.sin(dec)]]
return np.matrix(rot_matrix)
def compute_new_uvw(old,new,uvw):
rot_matrix = construct_rot_matrix(old[0],old[1])
rot_matrix_new = construct_rot_matrix(new[0],new[1])
return rot_matrix_new * rot_matrix.T * np.matrix(uvw).T #transpose of a Euler rotation matrix is the inverse rotation
Explanation: Epsilon for w-projected images
The maximum error occurs at one of the corners of the facet/image, i.e. let's say at:
\begin{equation}
\begin{split}
d &= (\theta_l/2,\theta_m/2)\
\end{split}
\end{equation}
Where $\theta_l=n_xcell_x \text{ rads and } \theta_m=n_ycell_y \text{ rads}$
The following identities relate the directions (assumed to be given in right ascension and declination) $\theta_l/2$ and $\theta_m/2$ to points on the celestial sphere.
\begin{equation}
\begin{split}
l &= \cos{\delta}\sin{\Delta\alpha}\
m &= \sin{\delta}\cos{\delta_0} - \cos{\delta}\sin{\delta_0}\cos{\Delta\alpha}\
n &= \sin{\delta}\sin{\delta_0} + \cos{\delta}\cos{\delta_0}\cos{\Delta\alpha}\
\end{split}
\end{equation}
The difference between a point on the orthogonally projected image and the corresponding point on the celestial sphere is given by:
\begin{equation}
\epsilon = ||n - 1|| = ||\sqrt{1-(\Delta{(l/2)})^2-(\Delta{(m/2)})^2} - 1||
\end{equation}
where $\Delta{(l/2)}$ and $\Delta{(m/2)}$ correspond to the direction cosines of the point at $(\alpha + \theta_l/2,\delta + \theta_m/2)$ where $\alpha$ and $\delta$ correspond to the phase centre at the centre of the facet/image.
The corresponding relation between number of planes and epsilon is given as:
\begin{equation}
n_{\text{planes}}=\frac{2{\pi}||\Delta{w}||\epsilon}{{\lambda_{min}}{\xi}} \text{ and ideally } 0{\leq\xi\ll}1
\end{equation}
Epsilon for faceted images
Here the half facet size (in l and m) is given as $\theta_{l_f}/2 = \theta_l/(2n_{facets})$ and $\theta_{m_f}/2 = \theta_m/(2n_{facets})$ respectively. We know the arc subtended by the angle to the corner of the facet has length $\cos{(\theta_{l_f}/2)}\cos{(\theta_{m_f}/2)}$ using the spherical rule of cosines and assuming a unit celestial sphere and orthogonal u and v bases. This angle to the corner of the image is a small number, so we might as well just use the small angle approximation:
\begin{equation}
\epsilon \approx \sin{(\delta_0 + \theta_l/2)}\sin{\delta_0} + \cos{(\delta_0 + \theta_l/2)}\cos{\delta_0}\cos{(\theta_m/2)} - \cos{\left[max(\theta_{l},\theta_{m})/(2n_{facets})\right]}
\end{equation}
This results in the following relation between half the number of linearly spaced facets (along a single diagonal of the facet image) and $\xi$:
\begin{equation}
n_{facets} = \frac{max(\theta_l,\theta_m)}{2\cos^{-1}{\left[\sin{(\delta_0 + \theta_l/2)}\sin{\delta_0} + \cos{(\delta_0 + \theta_l/2)}\cos{\delta_0}\cos{(\theta_m/2)}-\frac{\lambda_{min}\xi}{2{\pi}||\Delta{w}||}\right]}}
\end{equation}
End of explanation
nx = ny = 1024
cellx = celly = 8 / 60.0 / 60.0 * np.pi / 180.0 #8*1024 arcsec in radians
ra = (290 + 25 / 60 + 0 / 60 / 60) * np.pi / 180
dec = (21 + 45 / 60 + 0 / 60 / 60) * np.pi / 180
phase_centre = np.array([ra,dec]) #should be read from MS
max_err = phase_centre + np.array([cellx*nx/2.0,celly*ny/2.0])
lmn_max_err = compute_lmn(phase_centre,max_err)
delta_w = 1031.2111327 #should be read from MS
min_lambda = 0.15762 #should be read from MS
e = np.abs(lmn_max_err[2]-1)
threshold = 0.5
num_planes_needed = np.ceil(2 * np.pi * delta_w * e / (min_lambda * threshold))
(li,mi,ni) = compute_lmn(phase_centre,phase_centre + np.array([cellx*nx/2.0,celly*ny/2.0]))
num_facets_needed = np.ceil(np.sqrt((cellx*nx) / (2*np.arccos(ni - (min_lambda *threshold)/(2*np.pi*delta_w))))*2)
print num_planes_needed, "planes needed to obtain separation"
print num_facets_needed, "facets needed along each dimension of the final image to obtain separation"
Explanation: Calculator
End of explanation |
9,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was used for developing a script to translate MagIC format files from the 2.5 data format to the 3.0 format. This functionality is now implemented in Pmag GUI.
Getting started
Step1: lowest level
Step3: Convert 2.5 measurements file --> 3.0 measurements file
Step4: Convert 2.5 specimens files --> 3.0 specimens file
Step5: Convert 2.5 directory --> 3.0 directory
Step6: Cast all columns to correct dtype | Python Code:
from importlib import reload
import pmagpy.contribution_builder as cb
from pmagpy import ipmag
import os
import json
import numpy as np
import sys
import pandas as pd
import numpy as np
from pandas import DataFrame
from pmagpy import builder2 as builder
from pmagpy import validate_upload2 as validate_upload
from pmagpy import pmag
from pmagpy.mapping import map_magic
from pmagpy import pmag
WD = os.path.realpath(os.path.join("..", "2_5", "McMurdo"))
Explanation: This notebook was used for developing a script to translate MagIC format files from the 2.5 data format to the 3.0 format. This functionality is now implemented in Pmag GUI.
Getting started
End of explanation
# convert magic_measurements to measurements (3.0)
# first unpack lawrence et al., 2009 datafile from MagIC
!download_magic.py -f zmab0100049tmp03.txt -WD ../2_5/McMurdo -ID ../2_5/McMurdo
# read in data model 2.5 measruements file
data2,filetype = pmag.magic_read(WD+'/magic_measurements.txt')
print filetype, len(data2)
NewMeas = []
# step through records
for rec in data2:
NewMeas.append(map_magic.convert_meas('magic3',rec))
pmag.magic_write(WD+'/measurements.txt',NewMeas,'measurements')
Explanation: lowest level: convert 2.5 measurement records --> 3.0 measurement records
End of explanation
reload(nb)
reload(pmag)
WD = os.path.join("..", "2_5", "McMurdo")
#for dtype in ['specimens', 'samples', 'sites', 'locations']:
# filename = os.path.join(WD, '{}.txt'.format(dtype))
# if os.path.exists(filename):
# os.remove(filename)
# convert magic_measurements file only
new_meas, upgraded, no_upgrade = pmag.convert_directory_2_to_3("magic_measurements.txt", WD, WD, meas_only=True)
# create a contribution using the converted measurement data
con = cb.Contribution(WD, read_tables=['measurements'])
# use name data in measurement table to create specimen-location tables
con.propagate_measurement_info()
# show sample table created from measurement info
con.tables['samples'].df.head()
# convert a pandas DataFrame to the standard PmagPy formats:
# either a dict of dicts or a list of dicts, each corresponding to one table row
def convert_to_pmag_data_list(df, lst_or_dict):
dictionary = dict(df.T)
if lst_or_dict == "lst":
return [dict(dictionary[key]) for key in dictionary]
else:
return {key: dict(dictionary[key]) for key in dictionary}
site_df = con.tables['sites'].df.head()
print convert_to_pmag_data_list(site_df, "dict")
print convert_to_pmag_data_list(site_df, "lst")
Explanation: Convert 2.5 measurements file --> 3.0 measurements file
End of explanation
import pmagpy.mapping.map_magic as mm
import pmagpy.contribution_builder as cb
reload(mm)
reload(nb)
reload(pmag)
wdir = os.path.join("..", "2_5", "McMurdo")
# take er_*.txt files and pmag_*.txt files, combine them, then turn them to 3.0. and write them out
dtype = "specimens"
map_dict = mm.spec_magic2_2_magic3_map
pmag.convert_and_combine_2_to_3(dtype, map_dict, input_dir=wdir, output_dir=wdir)
cb.MagicDataFrame(os.path.join(wdir, "{}.txt".format(dtype))).df.head()
Explanation: Convert 2.5 specimens files --> 3.0 specimens file
End of explanation
# converts measurements file and any present specimen, sample, site, or location files to 3.0.
# does not yet handle any other MagIC format files
new_meas, upgraded, not_upgraded = pmag.convert_directory_2_to_3('magic_measurements.txt', wdir, wdir)
print 'upgraded files: {}'.format(', '.join(upgraded))
print 'files that could not be upgraded: {}'.format(', '.join(not_upgraded))
Explanation: Convert 2.5 directory --> 3.0 directory
End of explanation
import pmagpy.contribution_builder as cb
import pmagpy.data_model3 as data_model
con = cb.Contribution('../3_0/Megiddo', dmodel=data_model.DataModel())
site_dm = con.data_model.dm['sites']
site_dm['name'] = site_dm.index
site_dm[['name', 'type']].head()
dtypes = set()
for dm_name in con.data_model.dm:
dtypes = dtypes.union(con.data_model.dm[dm_name]['type'].unique())
print ", ".join(dtypes)
site_df = con.tables['sites'].df
for col_name in site_df.columns:
dtype = site_dm.loc[col_name, 'type']
if dtype == 'Number':
site_df[col_name] = site_df[col_name].astype(float)
elif dtype == 'Integer':
site_df[col_name] = site_df[col_name].fillna(0)
site_df[col_name] = site_df[col_name].astype(int)
#site_df[col_name] = site_df[col_name].replace(-999, np.nan) # can't have dtype of int & np.nan/None values
elif dtype == 'String':
#print "string", col_name
site_df[col_name] = site_df[col_name].astype(str) # can't have dtype of str & np.nan/None values
#site_df[col_name] == site_df[col_name].astype(int)
for col in ['age', 'dir_n_samples', 'criteria']:
print col, ":", site_df[col].dtype
reload(pmag)
#pmag.convert_measfile_2_to_3('magic_measurements.txt', '2_5/McMurdo')
fname = os.path.join("..", '3_0', 'Megiddo', 'sites.txt')
df = cb.MagicDataFrame(os.path.join("..", '3_0', 'Megiddo', 'sites.txt')).df
pmag.magic_read(fname)
df = pd.read_table(fname, skiprows=[0])
df['age'].astype(str).head()
Explanation: Cast all columns to correct dtype
End of explanation |
9,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Introduction
Step1: References
Step2: Conditional statements
Step3: Loops
Step4: Data structures
Step5: Pandas basics
Step6: House Sales in King County, USA
Dataset features are selfexplanatory.
Dataset is taken from Kaggle website
Step7: Adding new columns
Step8: Histograms - data distributions
Step9: Miscellaneous
Step10: A bit about correlation
Step11: House Prices - Data fields description
Here's a brief version of what you'll find in the data description file.
SalePrice - the property's sale price in dollars. This is the target variable that you're trying to predict.
MSSubClass | Python Code:
from IPython.display import Image, display, HTML
Image("images/munich.jpg")
display(HTML("<table><tr><td><p><b>Rain Princess - Leonid Afremov</b></p><img src='images/princess.jpeg'></td><td><b><p>Munich + Rain Princess + Machine Learning</b></p><img src='images/munich-princess-out.jpg'></td></tr></table>"))
display(HTML("<table><tr><td><p><b>The Great Wave off Kanagawa - Katsushika Hokusai</b></p><img src='images/wave.jpg'></td><td><b><p>Munich + The Great Wave + Machine Learning</b></p><img src='images/munich-wave-out.jpg'></td></tr></table>"))
display(HTML("<table><tr><td><p><b>La Muse - Pablo Picaso</b></p><img src='images/muse.jpg'></td><td><b><p>Munich + La Muse + Machine Learning</b></p><img src='images/munich-muse-out.jpg'></td></tr></table>"))
display(HTML("<table><tr><td><p><b>Udnie - Francis Picabia</b></p><img src='images/udnie.jpg'></td><td><b><p>Munich + Udnie + Machine Learning</b></p><img src='images/munich-udnie-out.jpg'></td></tr></table>"))
display(HTML("<table><tr><td><b><p>Scream - Edvard Munch</b></p><img src='images/scream.jpg'></td><td><b><p>Munich + Scream + Machine Learning</b></p><img src='images/munich-scream-out.jpg'></td></tr></table>"))
display(HTML("<table><tr><td><p><b>The Shipwreck of the Minotaur - Joseph Mallord William Turner</b></p><img src='images/wreck.jpg'></td><td><b><p>Munich + Shipwreck + Machine Learning</b></p><img src='images/munich-wreck-out.jpg'></td></tr></table>"))
# A bit about MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=True)
import numpy as np
from scipy.stats import norm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
from pandas.tools.plotting import scatter_matrix
import seaborn as sns
%matplotlib inline
data.test.cls = np.array([label.argmax() for label in data.test.labels])
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
Explanation: Machine Learning Introduction
End of explanation
# String
string = 'Machine learning '
string2 = ' dojo '
string3 = ' part I'
print string + string2 + string3
print 'String variable type is: {}'.format(type(string))
# Integers
number = 10
number2 = 20
number3 = 30
print number + number2 + number3
print 'number variable type is: {}'.format(type(number))
# Booleans
boolean = True
boolean2 = True
boolean3 = False
print boolean and boolean2 or boolean3
print 'bolean variable type is: {}'.format(type(boolean))
# Floating point numbers
floating = 3.14
floating2 = 2.79
floating3 = 10.01
print floating + floating2 + floating3
print 'floating variable type is: {}'.format(type(floating))
Explanation: References:
Fast Style Transfer
Tensorflow Tutorials
Python basics
Variables
End of explanation
if 10 > 8:
print '10 is greater than 8.'
print '10 is greater than 8.'
print '10 is greater than 8.'
a = True
b = 10
c = 20
print 'first if statement...'
if b < c and a:
print 'All fine.'
else:
print 'Not all fine.'
print 'second if statement...'
if b < c and (not a):
print 'All fine.'
else:
print 'Not all fine.'
if 10 > 20:
message = "if only 10 were greater than 20"
elif 10 > 30:
message = "elif means 'else if'"
else:
message = "when all else fails use else "
message
Explanation: Conditional statements
End of explanation
for i in [1, 2, 3, 4, 5]:
print i
for x in range(5):
if x == 3:
continue # go immediately to the next iteration
if x == 5:
break # quit the loop entirely
print x
x = 0
while x < 5:
print x, "is less than 5"
x += 1
a = True
x = 0
while a:
print x, "is less than 10"
x += 1
if x >= 10:
a = False
Explanation: Loops
End of explanation
# Lists
numbers = [1, 4, 9, 16, 25]
numbers[:]
numbers[:2]
numbers[2:]
type(numbers)
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
len(letters)
letters[2]
a = [66.25, 333, 333, 1, 1234.5]
a
a.count(333), a.count(66.25), a.count('x')
a.insert(2, -1)
a
a.append(333)
a
a.index(333)
a.remove(333)
a
a.reverse()
a
a.sort()
a
a.pop()
a
# dictionaires
phones = {'Spiderman': 151984858, 'Me': 151234324}
phones['Superman'] = 15104928
phones
phones['Spiderman']
del phones['Me']
phones
phones['Batman'] = 15123545
phones
phones.keys()
'Ken' in phones
# tuples
tuple = 31213, 123453, 'hi Ml!'
tuple
tuple[0]
tuple[2]
tuple[1] = 1234
tupleTheSecond = tuple, (1, 2, 3, 4, 5)
tupleTheSecond
t1, t2 = tupleTheSecond
t1
t2
for i, j in zip (t1, t2):
print i, j
type(t1)
# sets
basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
fruit = set(basket)
fruit
'orange' in fruit
'plum' in fruit
Explanation: Data structures
End of explanation
# import panda library
import pandas as pd
# Show version of panda library
print pd.__version__
# it is all about describing the data
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
def randrange(n, vmin, vmax):
'''
Helper function to make an array of random numbers having shape (n, )
with each number distributed Uniform(vmin, vmax).
'''
return (vmax - vmin)*np.random.rand(n) + vmin
fig = plt.figure(figsize=(14, 12))
ax = fig.add_subplot(111, projection='3d')
n = 100
# For each set of style and range settings, plot n random points in the box
# defined by x in [23, 32], y in [0, 100], z in [zlow, zhigh].
for c, m, zlow, zhigh in [('r', 'o', -50, -25), ('b', '^', -30, -5)]:
xs = randrange(n, 23, 32)
ys = randrange(n, 0, 100)
zs = randrange(n, zlow, zhigh)
ax.scatter(xs, ys, zs, c=c, marker=m)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
Explanation: Pandas basics
End of explanation
# read csv file
nn = pd.read_csv('kc_house_data.csv')
# top 5 data records
nn.head(10)
# check are there any null values in any of the columns
nn.isnull().any()
len(nn)
# add one record with NaN values
nn = nn.append({'id':'12345', 'price':'12345.23'}, ignore_index=True)
len(nn)
# check number of NaN values in some column
len(nn[nn.bedrooms.isnull()])
# show list of the records where column bedrooms contain NaN values
nn[nn.bedrooms.isnull()]
# drop NaN values
nn = nn.dropna()
# check number of NaN records after droping NaNs
len(nn[nn.bedrooms.isnull()])
len(nn)
nn.describe()
Explanation: House Sales in King County, USA
The dataset features are self-explanatory.
The dataset is taken from the Kaggle website.
End of explanation
foot_to_meter_ratio = 0.092903
nn['sqm2_living']=nn['sqft_living'] * foot_to_meter_ratio
nn['sqm2_living'] = nn['sqm2_living'].round(0)
nn['sqm2_lot']=nn['sqft_lot'] * foot_to_meter_ratio
nn['sqm2_lot'] = nn['sqm2_lot'].round(0)
# show all columns
pd.set_option("display.max_columns",99)
pd.set_option("display.max_rows",999)
nn.head()
nn['sqm2_basement'] = nn['sqft_basement'].map(lambda x: round(x * foot_to_meter_ratio, 0))
nn.head()
nn['price_low'] = 0
condition = nn['price'] < 100000
nn.loc[condition, 'price_low'] = 1
nn.loc[~condition, 'price_low'] = 0
nn['price_low'].value_counts()
new = nn[(nn['price'] < 100000)]
new
nn['bedrooms'].value_counts()
counts = nn.groupby('bedrooms').size()
counts
# check waterfront column values
nn['waterfront'].value_counts()
# select all properties with waterfront
waterfront = nn[(nn['waterfront'] == 1)]
waterfront
waterfront_1_room = nn[(nn['waterfront'] == 1) & (nn['bedrooms'] == 1)]
waterfront_1_room
waterfront.describe()
Explanation: Adding new columns
End of explanation
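A compact alternative (added example, not in the original notebook) for the conditional price_low column created above is numpy's where:
nn['price_low'] = np.where(nn['price'] < 100000, 1, 0)
nn['price_low'].value_counts()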
plt.figure(figsize=(10, 5))
plt.hist(nn['bedrooms'],normed=False)
plt.show()
plt.figure(figsize=(10, 5))
plt.hist(nn['price'],normed=False)
plt.show()
plt.figure(figsize=(10, 5))
plt.hist(nn['sqft_living'],normed=False)
plt.show()
plt.figure(figsize=(10, 5))
plt.hist(nn['sqft_lot'],normed=False)
plt.show()
Explanation: Histograms - data distributions
End of explanation
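As a small addition (not in the original notebook), the number of bins can be set explicitly, which often makes the price distribution easier to read:
plt.figure(figsize=(10, 5))
plt.hist(nn['price'], bins=50)
plt.show()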
def colorFunction(x):
if x == 0:
return 'black'
elif x == 1:
return 'brown'
elif x == 2:
return 'red'
elif x == 3:
return 'blue'
elif x == 4:
return 'green'
elif x == 5:
return 'pink'
elif x == 6:
return 'orange'
elif x ==7:
return 'cyan'
elif x ==8:
return 'yellow'
elif x == 9:
return 'magenta'
else:
return 'pink'
nn['color'] = nn['bedrooms'].apply(colorFunction)
figure = plt.figure()
subplot = figure.add_subplot(111)
scatter = subplot.scatter(nn['long'], nn['lat'], s=10, c=nn['color'])
subplot.set_xlabel('Longitude')
subplot.set_ylabel('Latitude')
figure.set_figheight(10)
figure.set_figwidth(15)
plt.show()
features = nn.drop(['id','price','date','color'], axis = 1)
# Using pyplot
plt.figure(figsize=(20, 55))
# i: index
for i, col in enumerate(features.columns):
# grid of subplots: 10 rows x 3 columns
plt.subplot(10, 3, i+1)
x = nn[col]
y = nn['price']
plt.plot(x, y, 'o')
# Create regression line
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
plt.title(col)
plt.xlabel(col)
plt.ylabel('prices')
plt.show()
# best fit of data
from scipy.stats import norm
import matplotlib.mlab as mlab  # mlab.normpdf exists in older matplotlib; scipy.stats.norm.pdf is the modern equivalent
(mu, sigma) = norm.fit(nn['price'])
# the histogram of the data
n, bins, patches = plt.hist(nn['price'], 60, normed=True, facecolor='green', alpha=0.75)
# add a 'best fit' line
y = mlab.normpdf(bins, mu, sigma)
l = plt.plot(bins, y, 'r--', linewidth=2)
#plot
plt.xlabel('Sales prices')
plt.ylabel('Probability')
plt.title(r'$\mathrm{Histogram\ of\ price:}\ \mu=%.3f,\ \sigma=%.3f$' %(mu, sigma))
plt.grid(True)
plt.show()
Explanation: Miscellaneous
End of explanation
# plot the heatmap
nn = pd.read_csv('kc_house_data.csv')
nn = nn.drop(['id'], axis=1)
plt.figure(figsize=(14, 12))
sns.heatmap(nn.corr())
# showing correlations in the table
cmap = cmap=sns.diverging_palette(5, 250, as_cmap=True)
def magnify():
return [dict(selector="th",
props=[("font-size", "7pt")]),
dict(selector="td",
props=[('padding', "0em 0em")]),
dict(selector="th:hover",
props=[("font-size", "12pt")]),
dict(selector="tr:hover td:hover",
props=[('max-width', '200px'),
('font-size', '12pt')])
]
nn.corr().style.background_gradient(cmap, axis=1)\
.set_properties(**{'max-width': '80px', 'font-size': '10pt'})\
.set_caption("Hover to magify")\
.set_precision(2)\
.set_table_styles(magnify())
Explanation: A bit about correlation
End of explanation
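As a quick supplement (not in the original notebook), the correlation of every column with the price can also be listed directly, which is often easier to scan than the heatmap:
nn.corr()['price'].sort_values(ascending=False)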
# read csv data
data = pd.read_csv('housing_train.csv')
# describe dataset
data.describe()
# show first 5 records in the dataset
data.head()
# show last 5 records in the dataset
data.tail()
# row selection from 10-15 record
dataTemp = data[0:15]
# iteration over rows
for row in dataTemp.iterrows():
print row[1]['SalePrice']
data['Lambda'] = data['SalePrice'].apply(lambda x: x * 1.1)
dataTemp = data[0:15]
dataTemp[::3]
columns = ['SalePrice', 'LotArea', '1stFlrSF', '2ndFlrSF', 'BedroomAbvGr', 'YrSold']
data = data[columns]
plt.figure(figsize=(10, 5))
plt.hist(data['SalePrice'],normed=False)
plt.show()
plt.figure(figsize=(10, 5))
plt.hist(data['LotArea'],normed=False)
plt.show()
plt.figure(figsize=(10, 5))
plt.hist(data['BedroomAbvGr'],normed=False)
plt.show()
len(data['SalePrice'])
# Data filtering
dataFiltering = data[['SalePrice', 'BedroomAbvGr','LotArea']].copy()
dataFiltering.head()
# Handling NaN values
original = pd.read_csv('housing_train.csv')
#original.isnull().any()
original.loc[:, original.isnull().any()]
original.dropna(subset=["LotFrontage"]) # option 1
original.drop("LotFrontage", axis=1) # option 2
median = original["LotFrontage"].median()
original["LotFrontage"].fillna(median) # option 3
# Mention ~ operator
count = original[(original["MSZoning"].str.contains('RL'))]
len(count)
# best fit of data
(mu, sigma) = norm.fit(data['SalePrice'])
# the histogram of the data
n, bins, patches = plt.hist(data['SalePrice'], 60, normed=True, facecolor='green', alpha=0.75)
# add a 'best fit' line
y = mlab.normpdf( bins, mu, sigma)
l = plt.plot(bins, y, 'r--', linewidth=2)
#plot
plt.xlabel('Sales prices')
plt.ylabel('Probability')
plt.title(r'$\mathrm{Histogram\ of\ SalePrice:}\ \mu=%.3f,\ \sigma=%.3f$' %(mu, sigma))
plt.grid(True)
plt.show()
prices = data['SalePrice']
features = data.drop('SalePrice', axis = 1)
# i: index
plt.figure(figsize=(20, 35))
for i, col in enumerate(features.columns):
    # one row of subplots per feature column (5 features -> 5 rows)
    plt.subplot(5, 1, i+1)
x = data[col]
y = prices
plt.plot(x, y, 'o')
# Create regression line
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
plt.title(col)
plt.xlabel(col)
plt.ylabel('prices')
plt.show()
foot_to_meter_ratio = 0.092903
data['LotAream2']=data['LotArea'] * foot_to_meter_ratio
data['LotAream2'] = data['LotAream2'].round(0)
data.head()
x = data['LotAream2']
y = prices
plt.figure(figsize=(20, 10))
plt.plot(x, y, 'o')
# Create regression line
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
plt.title(x.name)
plt.xlabel(x.name)
plt.ylabel('prices')
plt.show()
# Creating smaller data set and filter it
dataM2 = data[['SalePrice', 'LotAream2']].copy()
low = .05
high = .9
quant_df = dataM2.quantile([low, high])
print(quant_df)
dataM2 = dataM2.apply(lambda x: x[(x > quant_df.loc[low, x.name]) & (x < quant_df.loc[high, x.name])], axis=0)
dataM2.head()
len(dataM2['SalePrice'])
dataM2['BedroomAbvGr']=data['BedroomAbvGr'].copy()
dataM2.head()
dataM2 = dataM2.dropna()
x = dataM2['LotAream2']
y = dataM2['SalePrice']
plt.figure(figsize=(20, 15))
plt.plot(x, y, 'o')
# Create regression line
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
plt.title(x.name)
plt.xlabel(x.name)
plt.ylabel('prices')
plt.show()
dataM2["BedroomAbvGr"].value_counts()
# color the points by number of bedrooms; passing the raw values plus a colormap
# keeps the color values in a valid range and makes plt.colorbar below meaningful
figure = plt.figure()
subplot = figure.add_subplot(111)
scatter = subplot.scatter(dataM2['LotAream2'], dataM2['SalePrice'], s=50, c=dataM2["BedroomAbvGr"], cmap='viridis')
subplot.set_xlabel('Lot in m2')
subplot.set_ylabel('Price')
plt.colorbar(scatter)
figure.set_figheight(10)
figure.set_figwidth(15)
plt.show()
# Correlation matrix
corr_matrix = data.corr()
corr_matrix["SalePrice"].sort_values(ascending=False)
attributes = ["SalePrice", "LotAream2", "BedroomAbvGr", "1stFlrSF", "2ndFlrSF"]
from pandas.plotting import scatter_matrix  # in very old pandas: from pandas.tools.plotting import scatter_matrix
scatter_matrix(data[attributes], figsize=(15, 15))
data.plot(kind="scatter", x="LotAream2", y="SalePrice",alpha=0.1)
plt.show()
# a bit more data filtering
df = data[['SalePrice', 'LotAream2', 'BedroomAbvGr']].copy()
df.head()
len(df)
filtered = df.drop(
df.index[(df['LotAream2'] > (df['LotAream2'].mean() + 3 * df['LotAream2'].std()))])
x = filtered['LotAream2']
y = filtered['SalePrice']
plt.figure(figsize=(20, 15))
plt.plot(x, y, 'o')
# Create regression line
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
plt.title(x.name)
plt.xlabel(x.name)
plt.ylabel('prices')
plt.show()
filtered.describe()
data.describe()
counts = filtered.groupby('BedroomAbvGr').size()
counts.head()
Explanation: House Prices - Data fields description
Here's a brief version of what you'll find in the data description file.
SalePrice - the property's sale price in dollars. This is the target variable that you're trying to predict.
MSSubClass: The building class
MSZoning: The general zoning classification
LotFrontage: Linear feet of street connected to property
LotArea: Lot size in square feet
Street: Type of road access
Alley: Type of alley access
LotShape: General shape of property
LandContour: Flatness of the property
Utilities: Type of utilities available
LotConfig: Lot configuration
LandSlope: Slope of property
Neighborhood: Physical locations within Ames city limits
Condition1: Proximity to main road or railroad
Condition2: Proximity to main road or railroad (if a second is present)
BldgType: Type of dwelling
HouseStyle: Style of dwelling
OverallQual: Overall material and finish quality
OverallCond: Overall condition rating
YearBuilt: Original construction date
YearRemodAdd: Remodel date
RoofStyle: Type of roof
RoofMatl: Roof material
Exterior1st: Exterior covering on house
Exterior2nd: Exterior covering on house (if more than one material)
MasVnrType: Masonry veneer type
MasVnrArea: Masonry veneer area in square feet
ExterQual: Exterior material quality
ExterCond: Present condition of the material on the exterior
Foundation: Type of foundation
BsmtQual: Height of the basement
BsmtCond: General condition of the basement
BsmtExposure: Walkout or garden level basement walls
BsmtFinType1: Quality of basement finished area
BsmtFinSF1: Type 1 finished square feet
BsmtFinType2: Quality of second finished area (if present)
BsmtFinSF2: Type 2 finished square feet
BsmtUnfSF: Unfinished square feet of basement area
TotalBsmtSF: Total square feet of basement area
Heating: Type of heating
HeatingQC: Heating quality and condition
CentralAir: Central air conditioning
Electrical: Electrical system
1stFlrSF: First Floor square feet
2ndFlrSF: Second floor square feet
LowQualFinSF: Low quality finished square feet (all floors)
GrLivArea: Above grade (ground) living area square feet
BsmtFullBath: Basement full bathrooms
BsmtHalfBath: Basement half bathrooms
FullBath: Full bathrooms above grade
HalfBath: Half baths above grade
Bedroom: Number of bedrooms above basement level
Kitchen: Number of kitchens
KitchenQual: Kitchen quality
TotRmsAbvGrd: Total rooms above grade (does not include bathrooms)
Functional: Home functionality rating
Fireplaces: Number of fireplaces
FireplaceQu: Fireplace quality
GarageType: Garage location
GarageYrBlt: Year garage was built
GarageFinish: Interior finish of the garage
GarageCars: Size of garage in car capacity
GarageArea: Size of garage in square feet
GarageQual: Garage quality
GarageCond: Garage condition
PavedDrive: Paved driveway
WoodDeckSF: Wood deck area in square feet
OpenPorchSF: Open porch area in square feet
EnclosedPorch: Enclosed porch area in square feet
3SsnPorch: Three season porch area in square feet
ScreenPorch: Screen porch area in square feet
PoolArea: Pool area in square feet
PoolQC: Pool quality
Fence: Fence quality
MiscFeature: Miscellaneous feature not covered in other categories
MiscVal: $Value of miscellaneous feature
MoSold: Month Sold
YrSold: Year Sold
SaleType: Type of sale
SaleCondition: Condition of sale
More about this data set can be found on Kaggle website.
End of explanation |
9,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Set Path Information
Step2: Livneh Domain File
Along with the VIC model parameters, we also need a domain file that describes the spatial extent and active grid cells in the model domain. The domain file must exactly match the parameters and forcings. The Livneh dataset includes a DEM file in netCDF format that we can use to construct the domain file.
Steps
Step3: Livneh Parameters
VIC 5 uses the same parameters as VIC 4. The following steps will read/parse the ASCII formatted parameter files and construct the netCDF formatted parameter file. We'll use the domain file constructed in the previous step to help define the spatial grid.
Steps
Step4: Global 1/2 deg. Parameters
For the global 1/2 deg. parameters, we will follow the same steps as for the Livneh case with one exception. We don't have a domain file this time so we'll use tonic's calc_grid function to make one for us.
Step5: Global Domain File
Since the global soil parameters didn't come with a domain file that we could use, we'll construct one using the output from tonic's calc_grid function.
Step6: Maurer 1/8 deg. Parameters
Finally, we'll repeat the same steps for the Maurer 1/8 deg. parameters.
Step7: 1/8 deg CONUS domain file | Python Code:
%matplotlib inline
import os
import getpass
from datetime import datetime
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
# For more information on tonic, see: https://github.com/UW-Hydro/tonic/
import tonic.models.vic.grid_params as gp
# Metadata to be used later
user = getpass.getuser()
now = datetime.now()
print('python version : %s' % os.sys.version)
print('numpy version : %s' % np.version.full_version)
print('xarray version : %s' % xr.version.version)
print('User : %s' % user)
print('Date/Time : %s' % now)
Explanation: Tutorial: VIC 5 Image Driver Parameter Conversion
Converting parameters from ASCII VIC 4 format to netCDF VIC 5 Image Driver Format
This Jupyter Notebook outlines one approach to converting VIC parameters from ASCII to netCDF format. For this tutorial, we'll convert three datasets from ASCII to netCDF:
Livneh et al. (2015) - 1/16th deg. VIC parameters
Description: http://www.colorado.edu/lab/livneh/data
Data: ftp://livnehpublicstorage.colorado.edu/public/Livneh.2015.NAmer.Dataset/nldas.vic.params/
Citation: Livneh B., E.A. Rosenberg, C. Lin, B. Nijssen, V. Mishra, K.M. Andreadis, E.P. Maurer, and D.P. Lettenmaier, 2013: A Long-term hydrologically based dataset of land surface fluxes and states for the conterminous United States: update and extensions, Journal of Climate, 26, 9384–9392.
Global 1/2 deg. VIC parameters
Description: http://www.hydro.washington.edu/SurfaceWaterGroup/Data/vic_global_0.5deg.html
Data: ftp://ftp.hydro.washington.edu/pub/HYDRO/data/VIC_param/vic_params_global_0.5deg.tgz
Citation: Nijssen, B.N., G.M. O'Donnell, D.P. Lettenmaier and E.F. Wood, 2001: Predicting the discharge of global rivers, J. Clim., 14(15), 3307-3323, doi: 10.1175/1520-0442(2001)014<3307:PTDOGR>2.0.CO;2.
Maurer et al. (2002) - 1/8th deg. VIC parameters
Description: http://www.hydro.washington.edu/SurfaceWaterGroup/Data/VIC_retrospective/index.html
Data: http://www.hydro.washington.edu/SurfaceWaterGroup/Data/VIC_retrospective/index.html
Citation: Maurer, E.P., A.W. Wood, J.C. Adam, D.P. Lettenmaier, and B. Nijssen, 2002: A long-term hydrologically-based data set of land surface fluxes and states for the conterminous United States, J. Climate 15, 3237-3251.
All of these datasets include the following parameter sets:
- Soil Parameter file
- Vegetation Library file
- Vegetation Parameter file
- Snowbands file
Outputs
For each of the parameter sets above, we'll be producing two files:
1. VIC 5 Image Driver Input Parameters (netCDF file defining model parameters)
2. VIC 5 Image Driver Domain File (netCDF file defining spatial extent of model domain)
Python Imports and Setup
End of explanation
# Set the path to the datasets here
dpath = './' # root input data path
opath = './' # output data path
ldpath = os.path.join(dpath, 'Livneh_0.0625_NLDAS') # Path to Livneh Parameters
gdpath = os.path.join(dpath, 'Nijssen_0.5_Global') # Path to Global Parameters
mdpath = os.path.join(dpath, 'Maurer_0.125_NLDAS') # Path to Maurer Parameters
Explanation: Set Path Information
End of explanation
dom_file = os.path.join(opath, 'domain.vic.global0.0625deg.%s.nc' % now.strftime('%Y%m%d'))
dem = xr.open_dataset(os.path.join(ldpath, 'Composite.DEM.NLDAS.mex.0625.nc'))
dom_ds = xr.Dataset()
# Set global attributes
dom_ds.attrs['title'] = 'VIC domain data'
dom_ds.attrs['Conventions'] = 'CF-1.6'
dom_ds.attrs['history'] = 'created by %s, %s' % (user, now)
dom_ds.attrs['user_comment'] = 'VIC domain data'
dom_ds.attrs['source'] = 'generated from VIC North American 1/16 deg. model parameters, see Livneh et al. (2015) for more information'
# since we have it, put the elevation in the domain file
dom_ds['elev'] = dem['Band1']
dom_ds['elev'].attrs['long_name'] = 'gridcell_elevation'
dom_ds['elev'].attrs['units'] = 'm'
# Get the mask variable
dom_ds['mask'] = dem['Band1'].notnull().astype(np.int)
dom_ds['mask'].attrs['long_name'] = 'domain mask'
dom_ds['mask'].attrs['comment'] = '0 indicates cell is not active'
# For now, the frac variable is going to be just like the mask
dom_ds['frac'] = dom_ds['mask'].astype(np.float)
dom_ds['frac'].attrs['long_name'] = 'fraction of grid cell that is active'
dom_ds['frac'].attrs['units'] = '1'
# Save the output domain to a temporary file.
dom_ds.to_netcdf('temp.nc')
dom_ds.close()
# This shell command uses cdo to calculate the grid cell area
!cdo -O gridarea temp.nc area.nc
!rm temp.nc
# This step extracts the area from the temporary area.nc file
area = xr.open_dataset('area.nc')['cell_area']
dom_ds['area'] = area
# Write the final domain file
dom_ds.to_netcdf(dom_file)
dom_ds.close()
# Document the domain and plot
print(dom_ds)
dom_ds['mask'].plot()
Explanation: Livneh Domain File
Along with the VIC model parameters, we also need a domain file that describes the spatial extent and active grid cells in the model domain. The domain file must exactly match the parameters and forcings. The Livneh dataset includes a DEM file in netCDF format that we can use to construct the domain file.
Steps:
Open dem
Set useful global attributes
Create the mask/frac variables using the non-missing dem mask.
Calculate the grid cell area using cdo
Add the grid cell area back into the domain dataset.
Save the domain dataset
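If cdo is not available, a rough pure-Python sketch of the grid-cell-area step is shown below. It is an approximation only (it assumes a regular lat-lon grid, a spherical Earth of radius 6371 km, and coordinates named lat/lon), not part of the original workflow:
earth_radius = 6.371e6  # metres, assumed spherical Earth
lat = np.deg2rad(dom_ds['lat'].values)
lon = np.deg2rad(dom_ds['lon'].values)
dlat = np.abs(np.diff(lat)).mean()
dlon = np.abs(np.diff(lon)).mean()
# area of a lat-lon cell: R^2 * dlon * (sin(lat + dlat/2) - sin(lat - dlat/2))
cell_area_1d = earth_radius**2 * dlon * (np.sin(lat + dlat / 2) - np.sin(lat - dlat / 2))
approx_area = xr.DataArray(np.repeat(cell_area_1d[:, np.newaxis], lon.size, axis=1),
                           coords={'lat': dom_ds['lat'], 'lon': dom_ds['lon']},
                           dims=('lat', 'lon'))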
End of explanation
soil_file = os.path.join(ldpath, 'vic.nldas.mexico.soil.txt')
snow_file = os.path.join(ldpath, 'vic.nldas.mexico.snow.txt.L13')
veg_file = os.path.join(ldpath, 'vic.nldas.mexico.veg.txt')
vegl_file = os.path.join(ldpath, 'LDAS_veg_lib')
out_file = os.path.join(opath, 'livneh_nldas.mexico_vic_5.0.0_parameters.nc')
# Set options that define the shape/type of parameters
cols = gp.Cols(nlayers=3,
snow_bands=5,
organic_fract=False,
spatial_frost=False,
spatial_snow=False,
july_tavg_supplied=False,
veglib_fcan=False,
veglib_photo=False)
n_veg_classes = 11
vegparam_lai = True
lai_src = 'FROM_VEGPARAM'
# ----------------------------------------------------------------- #
# Read the soil parameters
soil_dict = gp.soil(soil_file, c=gp.Cols(nlayers=3))
# Read the snow parameters
snow_dict = gp.snow(snow_file, soil_dict, c=cols)
# Read the veg parameter file
veg_dict = gp.veg(veg_file, soil_dict,
vegparam_lai=vegparam_lai, lai_src=lai_src,
veg_classes=n_veg_classes)
# Read the veg library file
veg_lib, lib_bare_idx = gp.veg_class(vegl_file, c=cols)
# Determine the grid shape
target_grid, target_attrs = gp.read_netcdf(dom_file)
for old, new in [('lon', 'xc'), ('lat', 'yc')]:
target_grid[new] = target_grid.pop(old)
target_attrs[new] = target_attrs.pop(old)
# Grid all the parameters
grid_dict = gp.grid_params(soil_dict,
target_grid,
version_in='4.1.2.c',
vegparam_lai=vegparam_lai,
lib_bare_idx=lib_bare_idx,
lai_src=lai_src,
veg_dict=veg_dict,
veglib_dict=veg_lib,
snow_dict=snow_dict,
lake_dict=None)
# Write a netCDF file with all the parameters
gp.write_netcdf(out_file,
target_attrs,
target_grid=target_grid,
vegparam_lai=vegparam_lai,
lai_src=lai_src,
soil_grid=grid_dict['soil_dict'],
snow_grid=grid_dict['snow_dict'],
veg_grid=grid_dict['veg_dict'])
Explanation: Livneh Parameters
VIC 5 uses the same parameters as VIC 4. The following steps will read/parse the ASCII formatted parameter files and construct the netCDF formatted parameter file. We'll use the domain file constructed in the previous step to help define the spatial grid.
Steps:
Read the soil/snow/veg/veglib files
Read the target grid (domain file)
Map the parameters to the spatial grid defined by the domain file
Write the parameters to a netCDF file.
End of explanation
soil_file = os.path.join(gdpath, 'global_soil_param_new')
snow_file = os.path.join(gdpath, 'global_snowbands_new')
veg_file = os.path.join(gdpath, 'global_veg_param_new')
vegl_file = os.path.join(gdpath, 'world_veg_lib.txt')
out_file = os.path.join(gdpath, 'global_0.5deg.vic_5.0.0_parameters.nc')
# Set options that define the shape/type of parameters
cols = gp.Cols(nlayers=3,
snow_bands=5,
organic_fract=False,
spatial_frost=False,
spatial_snow=False,
july_tavg_supplied=False,
veglib_fcan=False,
veglib_photo=False)
n_veg_classes = 11
root_zones = 2
vegparam_lai = True
lai_src = 'FROM_VEGPARAM'
# ----------------------------------------------------------------- #
# Read the soil parameters
soil_dict = gp.soil(soil_file, c=cols)
# Read the snow parameters
snow_dict = gp.snow(snow_file, soil_dict, c=cols)
# Read the veg parameter file
veg_dict = gp.veg(veg_file, soil_dict,
vegparam_lai=vegparam_lai, lai_src=lai_src,
veg_classes=n_veg_classes, max_roots=root_zones)
# Read the veg library file
veg_lib, lib_bare_idx = gp.veg_class(vegl_file, c=cols)
# Determine the grid shape
target_grid, target_attrs = gp.calc_grid(soil_dict['lats'], soil_dict['lons'])
# Grid all the parameters
grid_dict = gp.grid_params(soil_dict, target_grid, version_in='4',
vegparam_lai=vegparam_lai, lai_src=lai_src,
lib_bare_idx=lib_bare_idx,
veg_dict=veg_dict, veglib_dict=veg_lib,
snow_dict=snow_dict, lake_dict=None)
# Write a netCDF file with all the parameters
gp.write_netcdf(out_file,
target_attrs,
target_grid=target_grid,
vegparam_lai=vegparam_lai,
lai_src=lai_src,
soil_grid=grid_dict['soil_dict'],
snow_grid=grid_dict['snow_dict'],
veg_grid=grid_dict['veg_dict'])
Explanation: Global 1/2 deg. Parameters
For the global 1/2 deg. parameters, we will follow the same steps as for the Livneh case with one exception. We don't have a domain file this time so we'll use tonic's calc_grid function to make one for us.
End of explanation
dom_ds = xr.Dataset()
# Set global attributes
dom_ds.attrs['title'] = 'VIC domain data'
dom_ds.attrs['Conventions'] = 'CF-1.6'
dom_ds.attrs['history'] = 'created by %s, %s' % (user, now)
dom_ds.attrs['user_comment'] = 'VIC domain data'
dom_ds.attrs['source'] = 'generated from VIC Global 0.5 deg. model parameters, see Nijssen et al. (2001) for more information'
dom_file = os.path.join(opath, 'domain.vic.global0.5deg.%s.nc' % now.strftime('%Y%m%d'))
# Get the mask variable
dom_ds['mask'] = xr.DataArray(target_grid['mask'], coords={'lat': target_grid['yc'],
'lon': target_grid['xc']},
dims=('lat', 'lon', ))
# For now, the frac variable is going to be just like the mask
dom_ds['frac'] = dom_ds['mask'].astype(np.float)
dom_ds['frac'].attrs['long_name'] = 'fraction of grid cell that is active'
dom_ds['frac'].attrs['units'] = '1'
# Set variable attributes
for k, v in target_attrs.items():
if k == 'xc':
k = 'lon'
elif k == 'yc':
k = 'lat'
dom_ds[k].attrs = v
# Write temporary file for gridarea calculation
dom_ds.to_netcdf('temp.nc')
# This step calculates the grid cell area
!cdo -O gridarea temp.nc area.nc
!rm temp.nc
# Extract the area variable
area = xr.open_dataset('area.nc').load()['cell_area']
dom_ds['area'] = area
# write the domain file
dom_ds.to_netcdf(dom_file)
dom_ds.close()
# document and plot the domain
print(dom_ds)
dom_ds.mask.plot()
!rm area.nc
Explanation: Global Domain File
Since the global soil parameters didn't come with a domain file that we could use, we'll construct one using the output from tonic's calc_grid function.
End of explanation
soil_file = os.path.join(mdpath, 'soil', 'us_all.soil.wsne')
snow_file = os.path.join(mdpath, 'snow', 'us_all.snowbands.wsne')
veg_file = os.path.join(mdpath, 'veg', 'us_all.veg.wsne')
vegl_file = os.path.join(ldpath, 'LDAS_veg_lib') # from livneh
out_file = os.path.join(mdpath, 'nldas_0.125deg.vic_5.0.0_parameters.nc')
cols = gp.Cols(nlayers=3,
snow_bands=5,
organic_fract=False,
spatial_frost=False,
spatial_snow=False,
july_tavg_supplied=False,
veglib_fcan=False,
veglib_photo=False)
n_veg_classes = 11
root_zones = 2
vegparam_lai = True
lai_src = 'FROM_VEGPARAM'
# ----------------------------------------------------------------- #
# Read the soil parameters
soil_dict = gp.soil(soil_file, c=cols)
# Read the snow parameters
snow_dict = gp.snow(snow_file, soil_dict, c=cols)
# Read the veg parameter file
veg_dict = gp.veg(veg_file, soil_dict,
vegparam_lai=vegparam_lai, lai_src=lai_src,
veg_classes=n_veg_classes, max_roots=root_zones)
# Read the veg library file
veg_lib, lib_bare_idx = gp.veg_class(vegl_file, c=cols)
# Determine the grid shape
target_grid, target_attrs = gp.calc_grid(soil_dict['lats'], soil_dict['lons'])
# Grid all the parameters
grid_dict = gp.grid_params(soil_dict, target_grid, version_in='4',
vegparam_lai=vegparam_lai, lai_src=lai_src,
lib_bare_idx=lib_bare_idx,
veg_dict=veg_dict, veglib_dict=veg_lib,
snow_dict=snow_dict, lake_dict=None)
# Write a netCDF file with all the parameters
gp.write_netcdf(out_file,
target_attrs,
target_grid=target_grid,
vegparam_lai=vegparam_lai,
lai_src=lai_src,
soil_grid=grid_dict['soil_dict'],
snow_grid=grid_dict['snow_dict'],
veg_grid=grid_dict['veg_dict'])
Explanation: Maurer 1/8 deg. Parameters
Finally, we'll repeat the same steps for the Maurer 1/8 deg. parameters.
End of explanation
dom_ds = xr.Dataset()
# Set global attributes
dom_ds.attrs['title'] = 'VIC domain data'
dom_ds.attrs['Conventions'] = 'CF-1.6'
dom_ds.attrs['history'] = 'created by %s, %s' % (user, now)
dom_ds.attrs['user_comment'] = 'VIC domain data'
dom_ds.attrs['source'] = 'generated from VIC CONUS 1/8 deg model parameters, see Maurer et al. (2002) for more information'
dom_file = os.path.join(opath, 'domain.vic.conus0.125deg.%s.nc' % now.strftime('%Y%m%d'))
# Get the mask variable
dom_ds['mask'] = xr.DataArray(target_grid['mask'], coords={'lat': target_grid['yc'],
'lon': target_grid['xc']},
dims=('lat', 'lon', ))
# For now, the frac variable is going to be just like the mask
dom_ds['frac'] = dom_ds['mask'].astype(np.float)
dom_ds['frac'].attrs['long_name'] = 'fraction of grid cell that is active'
dom_ds['frac'].attrs['units'] = '1'
# Set variable attributes
for k, v in target_attrs.items():
if k == 'xc':
k = 'lon'
elif k == 'yc':
k = 'lat'
dom_ds[k].attrs = v
# Write temporary file for gridarea calculation
dom_ds.to_netcdf('temp.nc')
# This step calculates the grid cell area
!cdo -O gridarea temp.nc area.nc
!rm temp.nc
# Extract the area variable
area = xr.open_dataset('area.nc').load()['cell_area']
dom_ds['area'] = area
# write the domain file
dom_ds.to_netcdf(dom_file)
dom_ds.close()
# document and plot the domain
print(dom_ds)
dom_ds.mask.plot()
plt.close('all')
Explanation: 1/8 deg CONUS domain file
End of explanation |
9,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Datapot Usage Examples
Step1: Dataset with timestamp features extraction.
Convert CSV file to JSON lines
Step2: Creating the DataPot object.
Step3: Let's call the fit method. It automatically finds appropriate transformers for the fields of jsonlines file. The parameter 'limit' means how many objects will be used to detect the right transformers.
Step4: Let's remove the SVDOneHotTransformer
Step5: Bag of Words Meets Bags of Popcorn
Usage example for unstructured textual bzip2-compressed data
https
Step6: Load data from datapot.datasets
Step7: Or load directly from file
Step8: Job Salary Prediction
Usage example for unstructured textual bzip2-compressed data | Python Code:
import datapot as dp
from datapot import datasets
import pandas as pd
from __future__ import print_function
import sys
import bz2
import time
import xgboost as xgb
from sklearn.model_selection import cross_val_score
import datapot as dp
from datapot.utils import csv_to_jsonlines
Explanation: Datapot Usage Examples
End of explanation
transactions = pd.read_csv('../data/transactions.csv')
transactions.head()
Explanation: Dataset with timestamp features extraction.
Convert CSV file to JSON lines
End of explanation
datapot = dp.DataPot()
from datapot.utils import csv_to_jsonlines
csv_to_jsonlines('../data/transactions.csv', '../data/transactions.jsonlines')
data_trns = open('../data/transactions.jsonlines')
data_trns.readline()
Explanation: Creating the DataPot object.
End of explanation
datapot.detect(data_trns, limit=100)
t0 = time.time()
datapot.fit(data_trns, verbose=True)
print('fit time:', time.time()-t0)
datapot
Explanation: Let's call the fit method. It automatically finds appropriate transformers for the fields of jsonlines file. The parameter 'limit' means how many objects will be used to detect the right transformers.
End of explanation
datapot.remove_transformer('merchant_id', 0)
t0 = time.time()
df_trns = datapot.transform(data_trns)
print('transform time:', time.time()-t0)
df_trns.head()
Explanation: Let's remove the SVDOneHotTransformer
End of explanation
import datapot as dp
from datapot import datasets
Explanation: Bag of Words Meets Bags of Popcorn
Usage example for unstructured textual bzip2-compressed data
https://www.kaggle.com/c/word2vec-nlp-tutorial/data
datapot.fit method subsamples the data to detect language and choose corresponding stopwords and stemming.
For each review datapot.transform generates an SVD-compressed 12-dimensional tfidf-vector representation.
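For intuition, an equivalent transformation can be sketched with scikit-learn (this is not datapot's actual internal code, and the names below are illustrative only):
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
toy_reviews = ['a great movie', 'a terrible movie', 'great acting, great plot']
tfidf = TfidfVectorizer(stop_words='english').fit_transform(toy_reviews)
compressed = TruncatedSVD(n_components=2).fit_transform(tfidf)  # datapot uses 12 components
compressed.shape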
End of explanation
data_imdb = datasets.load_imdb()
Explanation: Load data from datapot.datasets
End of explanation
data_imdb = bz2.BZ2File('data/imdb.jsonlines.bz2')
datapot_imdb = dp.DataPot()
t0 = time.time()
datapot_imdb.detect(data_imdb)
print('detect time:', time.time()-t0)
datapot_imdb
datapot_imdb.remove_transformer('sentiment', 0)
t0 = time.time()
datapot_imdb.fit(data_imdb, verbose=True)
print('fit time:', time.time()-t0)
t0 = time.time()
df_imdb = datapot_imdb.transform(data_imdb)
print('transform time:', time.time()-t0)
df_imdb.head()
X = df_imdb.drop(['sentiment'], axis=1)
y = df_imdb['sentiment']
model = xgb.XGBClassifier()
cv_score = cross_val_score(model, X, y, cv=5)
assert all(i > 0.5 for i in cv_score), 'Low score!'
print('Cross-val score:', cv_score)
model.fit(X, y)
fi = model.feature_importances_
print('Feature importance:')
print(*(list(zip(X.columns, fi))), sep='\n')
Explanation: Or load directly from file
End of explanation
from datapot import datasets
data_job = datasets.load_job_salary()
# Or load from file:
# data_job = bz2.BZ2File('datapot/data/job.jsonlines.bz2')
datapot_job = dp.DataPot()
t0 = time.time()
datapot_job.detect(data_job)
print('detect time:', time.time()-t0)
datapot_job
t0 = time.time()
datapot_job.fit(data_job, verbose=True)
print('fit time:', time.time()-t0)
t0 = time.time()
df_job = datapot_job.transform(data_job)
print('transform time:', time.time()-t0)
print(df_job.columns)
print(df_job.shape)
df_job.head()
X_job = df_job.drop(['SalaryNormalized', 'Id'], axis=1)
y_job = pd.qcut(df_job['SalaryNormalized'].values, q=2, labels=[0,1]).ravel()
model = xgb.XGBClassifier()
cv_score_job = cross_val_score(model, X_job, y_job, cv=5)
print('Cross-val score:', cv_score_job)
assert all(i > 0.5 for i in cv_score_job), 'Low score!'
model.fit(X_job, y_job)
fi_job = model.feature_importances_
print('Feature importance:')
print(*(list(zip(X_job.columns, fi_job))), sep='\n')
Explanation: Job Salary Prediction
Usage example for unstructured textual bzip2-compressed data
End of explanation |
9,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ubrzavanje Pythona
Step1: Katkada korištenje Numpy-a ne daje dovoljno ubrzanje, ili je teško/nespretno vektorizirati kod. Tada postoji više opcija
spori dio koda (koji dio koda je spor možemo saznati profiliranjem koda, npr. pomoću IPythonove magične funkcije %prun) možemo napisati npr. u C-u. U Pythonu je taj proces prilično bezbolan.
spori dio koda možemo implementirati u Cythonu, koji je proširenje Pythona
možemo ubrzati kod korištenjem specijaliziranih kompajlera koji optimiziraju strojni kod
Pod 3., postoji cijeli niz kompajlera za Python
Step2: %%cython_inline magična funkcija koristi Cython.inline da bi se kompajlirao Cython izraz. return služi za slanje izlaza.
Step3: %%cython_pyximport magična funkcija služi da se unese proizvoljan Cython kod u IPython notebook ćeliju. Taj kod se sprema u .pyx u radnom direktoriju i onda importira koristeći pyximport. Moramo specificirati ime modula (u donjem slučaju foo). Svi objekti modula se automatski importiraju.
Step4: U dosadašnjim primjerima Cython kod se nije razlikovao od Python koda. U sljedećem primjeru ćemo vidjeti neka od proširenja Pythona koja nudi Cython.
Magična funkcija %%cython je slična funkciji %%cython_pyximport, ali ne traži ime modula te sve datoteke sprema u privremeni direktorij.
Jedan od načina računanja broja $\pi$ je korištenjem formule
$$ \frac{\pi}{4} = \int_0^1 \sqrt{1-x^2}\,\mathrm{d}x.$$
A integral možemo aproksimirati pomoću trapezne formule, što nam daje
$$\frac{\pi}{4}\approx \frac{1}{n}\left(\frac{1}{2}+\sum_{i=1}^n\sqrt{1-\left(\frac{i}{n}\right)^2} \right) $$
Krenimo od običnog Pythona.
Step5: Cython verzija
Ovdje je
cimport
Step7: Složeniji primjer
Ovdje je
@
Step8: Cython omogućava linkanje dodatnih biblioteka pri kompajliranju, u ovom slučaju standardne matematičke biblioteke
Step9: Još jedan primjer, gdje koristimo numpy nizove
Računamo kumulativnu sumu niza
Step10: Numba
Numba je just-in-time kompajler koji korisiti LLVM. Korištenje numbe je zapravo dosta jednostavno.
Step11: Funkcija sum2d(arr)će biti optimizirana, a cijela procedura se svela na korištenje jednog dekoratora.
Step12: Kompliciraniji primjer
Step13: Paralelizacija
Drugi način ubrzavanja izvršavanja programa je pomoću paralelizacije. To je opsežna tema sama po sebi te u nju nećemo ulaziti.
Popularni način paralelizacije je npr. pomoću Apache Sparka ili pomoću Dask biblioteke. | Python Code:
from IPython.display import Image
Image("https://raw.github.com/jrjohansson/scientific-python-lectures/master/images/optimizing-what.png")
Explanation: Speeding up Python
End of explanation
# makes it easier to work with Cython in IPython
%load_ext Cython
a=10
b=20
Explanation: Sometimes using Numpy does not give enough of a speedup, or it is difficult/awkward to vectorize the code. In that case there are several options
the slow part of the code (which part of the code is slow can be found by profiling, e.g. with IPython's magic function %prun) can be written in C, for example. In Python that process is fairly painless.
the slow part of the code can be implemented in Cython, which is an extension of Python
the code can be sped up by using specialized compilers that optimize the machine code
For option 3, there is a whole range of compilers for Python
Here we will deal with Cython and Numba. In both cases it is most important to specify the data types (so-called annotation), because the machine code can then be optimized well.
Cython
End of explanation
%%cython_inline
return a+b
Explanation: The %%cython_inline magic function uses Cython.inline to compile a Cython expression. return is used to send back the output.
End of explanation
%%cython_pyximport foo
def f(x):
return 4.0*x
f(10)
Explanation: The %%cython_pyximport magic function is used to put arbitrary Cython code into an IPython notebook cell. That code is saved to a .pyx file in the working directory and then imported using pyximport. We must specify the module name (foo in the case below). All objects of the module are imported automatically.
End of explanation
from math import sqrt
def funkcija(x):
return sqrt(1-x**2)
def integral4pi(n):
korak = 1.0/n
rez = (funkcija(0)+funkcija(1))/2
for i in range(n):
rez += funkcija(i*korak)
return 4*rez*korak
approx=integral4pi(10**7)
print ('pi={}'.format(approx))
%timeit integral4pi(10**7)
Explanation: In the examples so far, the Cython code did not differ from Python code. In the next example we will see some of the extensions of Python that Cython offers.
The %%cython magic function is similar to %%cython_pyximport, but it does not require a module name and saves all files to a temporary directory.
One way of computing the number $\pi$ is by using the formula
$$ \frac{\pi}{4} = \int_0^1 \sqrt{1-x^2}\,\mathrm{d}x.$$
The integral can be approximated with the trapezoidal rule, which gives us
$$\frac{\pi}{4}\approx \frac{1}{n}\left(\frac{1}{2}+\sum_{i=1}^n\sqrt{1-\left(\frac{i}{n}\right)^2} \right) $$
Let us start with plain Python.
End of explanation
%%cython
cimport cython
from libc.math cimport sqrt
cdef double cy_funkcija(double x):
return sqrt(1-x**2)
def cy_integral4pi(int n):
cdef double korak, rez
cdef int i
korak = 1.0/n
rez = (cy_funkcija(0)+cy_funkcija(1))/2
for i in range(n):
rez += cy_funkcija(i*korak)
return 4*rez*korak
cy_approx = cy_integral4pi(10**7)
print ('pi={}'.format(cy_approx))
%timeit cy_integral4pi(10**7)
Explanation: Cython version
Here:
cimport: the equivalent of import, but we can also load from C or C++ libraries
cdef: the equivalent of def, where we define (as in C) the data types; it is also used to declare the types of variables
End of explanation
%%cython
cimport cython
from libc.math cimport exp, sqrt, pow, log, erf
@cython.cdivision(True)
cdef double std_norm_cdf(double x) nogil:
return 0.5*(1+erf(x/sqrt(2.0)))
@cython.cdivision(True)
def black_scholes(double s, double k, double t, double v, double rf, double div, double cp):
    s : initial stock price
    k : strike price
    t : time to maturity
    v : volatility
    rf : risk-free interest rate
    div : dividend yield
    cp : call/put flag (+1 for a call, -1 for a put)
cdef double d1, d2, optprice
with nogil:
d1 = (log(s/k)+(rf-div+0.5*pow(v,2))*t)/(v*sqrt(t))
d2 = d1 - v*sqrt(t)
optprice = cp*s*exp(-div*t)*std_norm_cdf(cp*d1) - cp*k*exp(-rf*t)*std_norm_cdf(cp*d2)
return optprice
black_scholes(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)
%timeit black_scholes(100.0, 100.0, 1.0, 0.3, 0.03, 0.0, -1)
Explanation: A more complex example
Here:
@: Python's syntax for so-called decorators
nogil: GIL is short for global interpreter lock, which prevents simultaneous execution of code in multiple threads; here with nogil we declare that it is safe to call the function without the GIL
with nogil: the corresponding part of the code does not use the GIL
End of explanation
%%cython -lm
from libc.math cimport sin
print ('sin(1)=', sin(1))
Explanation: Cython allows linking additional libraries during compilation, in this case the standard math library
End of explanation
import numpy as np
def py_dcumsum(a):
b = np.empty_like(a)
b[0] = a[0]
for n in range(1,len(a)):
b[n] = b[n-1]+a[n]
return b
a = np.random.rand(100000)
b = np.empty_like(a)
%%cython
cimport numpy
def cy_dcumsum2(numpy.ndarray[numpy.float64_t, ndim=1] a, numpy.ndarray[numpy.float64_t, ndim=1] b):
cdef int i, n = len(a)
b[0] = a[0]
for i from 1 <= i < n:
b[i] = b[i-1] + a[i]
return b
%timeit cy_dcumsum2(a,b)
%timeit py_dcumsum(a)
Explanation: One more example, in which we use numpy arrays
We compute the cumulative sum of an array
End of explanation
# we import the jit compiler from numba
from numba import jit
from numpy import arange
Explanation: Numba
Numba is a just-in-time compiler that uses LLVM. Using numba is actually quite simple.
End of explanation
# @jit is a decorator
@jit
def sum2d(arr):
M, N = arr.shape
result = 0.0
for i in range(M):
for j in range(N):
result += arr[i,j]
return result
def py_sum2d(arr):
M, N = arr.shape
result = 0.0
for i in range(M):
for j in range(N):
result += arr[i,j]
return result
a = arange(9).reshape(3,3)
%timeit sum2d(a)
%timeit py_sum2d(a)
Explanation: The function sum2d(arr) will be optimized, and the whole procedure boils down to using a single decorator.
End of explanation
import numpy
def filter2d(image, filt):
M, N = image.shape
Mf, Nf = filt.shape
Mf2 = Mf // 2
Nf2 = Nf // 2
result = numpy.zeros_like(image)
for i in range(Mf2, M - Mf2):
for j in range(Nf2, N - Nf2):
num = 0.0
for ii in range(Mf):
for jj in range(Nf):
num += (filt[Mf-1-ii, Nf-1-jj] * image[i-Mf2+ii, j-Nf2+jj])
result[i, j] = num
return result
from numba import double
fastfilter_2d = jit(double[:,:](double[:,:], double[:,:]))(filter2d)
image = numpy.random.random((100, 100))
filt = numpy.random.random((10, 10))
%timeit fastfilter_2d(image, filt)
%timeit filter2d(image, filt)
Explanation: A more complicated example
End of explanation
from verzije import *
from IPython.display import HTML
HTML(print_sysinfo()+info_packages('cython,numpy,numba'))
Explanation: Parallelization
Another way of speeding up program execution is through parallelization. That is an extensive topic in its own right and we will not go into it here.
A popular way to parallelize is, for example, with Apache Spark or with the Dask library.
End of explanation |
9,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2.3
Step1: CI for continuous data, Pg 18
Step2: Numpy uses a denominator of N in the standard deviation calculation by
default, instead of N-1. To use N-1, the unbiased estimator-- and to
agree with the R output, we have to give np.std() the argument ddof=1
Step3: CI for proportions, Pg 18
Step4: CI for discrete data, Pg 18
Step5: See the note above about the difference different defaults for standard
deviation in Python and R.
Step6: Plot Figure 2.3, Pg 19
The polls.dat file has an unusual format. The data that we would like to
have in a single row is split across 4 rows
Step7: Using knowledge of the file layout we can read in the file and pre-process into
appropriate rows/columns for passing into a pandas dataframe
Step8: Weighted averages, Pg 19
The example R-code for this part is incomplete, so I will make up N, p and
se loosely related to the text on page 19.
Step9: CI using simulations, Pg 20 | Python Code:
from __future__ import print_function, division
%matplotlib inline
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# use matplotlib style sheet
plt.style.use('ggplot')
Explanation: 2.3: Classical confidence intervals
End of explanation
# import the t-distribution from scipy.stats
from scipy.stats import t
y = np.array([35,34,38,35,37])
y
n = len(y)
n
estimate = np.mean(y)
estimate
Explanation: CI for continuous data, Pg 18
End of explanation
se = np.std(y, ddof=1)/np.sqrt(n)
se
int50 = estimate + t.ppf([0.25, 0.75], n-1)*se
int50
int95 = estimate + t.ppf([0.025, 0.975], n-1)*se
int95
Explanation: Numpy uses a denominator of N in the standard deviation calculation by
default, instead of N-1. To use N-1, the unbiased estimator-- and to
agree with the R output, we have to give np.std() the argument ddof=1:
End of explanation
from scipy.stats import norm
y = 700
y
n = 1000
n
estimate = y/n
estimate
se = np.sqrt(estimate*(1-estimate)/n)
se
int95 = estimate + norm.ppf([.025,0.975])*se
int95
Explanation: CI for proportions, Pg 18
End of explanation
y = np.repeat([0,1,2,3,4], [600,300, 50, 30, 20])
y
n = len(y)
n
estimate = np.mean(y)
estimate
Explanation: CI for discrete data, Pg 18
End of explanation
se = np.std(y, ddof=1)/np.sqrt(n)
se
int50 = estimate + t.ppf([0.25, 0.75], n-1)*se
int50
int95 = estimate + t.ppf([0.025, 0.975], n-1)*se
int95
Explanation: See the note above about the different default settings for standard
deviation in Python and R.
End of explanation
%%bash
head ../../ARM_Data/death.polls/polls.dat
Explanation: Plot Figure 2.3, Pg 19
The polls.dat file has an unusual format. The data that we would like to
have in a single row is split across 4 rows:
year month
percentage support
percentage against
percentage no opinion
The data seems to be a subset of the Gallup data, available here:
http://www.gallup.com/poll/1606/Death-Penalty.aspx
We can see the unusual layout using the bash command head (linux/osx only,
sorry..)
End of explanation
# Data is available in death.polls directory of ARM_Data
data = []
temp = []
ncols = 5
with open("../../ARM_Data/death.polls/polls.dat") as f:
for line in f.readlines():
for d in line.strip().split(' '):
temp.append(float(d))
if (len(temp) == ncols):
data.append(temp)
temp = []
polls = pd.DataFrame(data, columns=[u'year', u'month', u'perc for',
u'perc against', u'perc no opinion'])
polls.head()
# --Note: this give the (percent) support for thise that have an opinion
# --The percentage with no opinion are ignored
# --This results in difference between our plot (below) and the Gallup plot (link above)
polls[u'support'] = polls[u'perc for']/(polls[u'perc for']+polls[u'perc against'])
polls.head()
polls[u'year_float'] = polls[u'year'] + (polls[u'month']-6)/12
polls.head()
# add error column -- symmetric so only add one column
# assumes sample size N=1000
# uses +/- 1 standard error, resulting in 68% confidence
polls[u'support_error'] = np.sqrt(polls[u'support']*(1-polls[u'support'])/1000)
polls.head()
fig, ax = plt.subplots(figsize=(8, 6))
plt.errorbar(polls[u'year_float'], 100*polls[u'support'],
yerr=100*polls[u'support_error'], fmt='ko',
ms=4, capsize=0)
plt.ylabel(u'Percentage support for the death penalty')
plt.xlabel(u'Year')
# you can adjust y-limits with command like below
# I will leave the default behavior
#plt.ylim(np.min(100*polls[u'support'])-2, np.max(100*polls[u'support']+2))
Explanation: Using knowledge of the file layout we can read in the file and pre-process into
appropriate rows/columns for passing into a pandas dataframe:
End of explanation
N = np.array([66030000, 81083600, 60788845])
p = np.array([0.55, 0.61, 0.38])
se = np.array([0.02, 0.03, 0.03])
w_avg = np.sum(N*p)/np.sum(N)
w_avg
se_w_avg = np.sqrt(np.sum((N*se/np.sum(N))**2))
se_w_avg
# this uses +/- 2 std devs
int_95 = w_avg + np.array([-2,2])*se_w_avg
int_95
Explanation: Weighted averages, Pg 19
The example R-code for this part is incomplete, so I will make up N, p and
se loosely related to the text on page 19.
End of explanation
# import the normal from scipy.stats
# repeated to make sure that it is clear that it is needed for this section
from scipy.stats import norm
# also need this for estimating CI from samples
from scipy.stats.mstats import mquantiles
n_men = 500
n_men
p_hat_men = 0.75
p_hat_men
se_men = np.sqrt(p_hat_men*(1.-p_hat_men)/n_men)
se_men
n_women = 500
n_women
p_hat_women = 0.65
p_hat_women
se_women = np.sqrt(p_hat_women*(1.-p_hat_women)/n_women)
se_women
n_sims = 10000
n_sims
p_men = norm.rvs(size=n_sims, loc=p_hat_men, scale=se_men)
p_men[:10] # show first ten
p_women = norm.rvs(size=n_sims, loc=p_hat_women, scale=se_women)
p_women[:10] # show first ten
ratio = p_men/p_women
ratio[:10] # show first ten
# the values of alphap and betap replicate the R default behavior
# see http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.mquantiles.html
int95 = mquantiles(ratio, prob=[0.025,0.975], alphap=1., betap=1.)
int95
Explanation: CI using simulations, Pg 20
End of explanation |
9,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sorting a List of Dictionaries by a Common Key
Problem
Sort the entries according to one or more of the dictionary values.
Solution
Sorting this type of structure is easy using the operator module’s itemgetter function.
Step1: The itemgetter() function can also accept multiple keys.
Step2: The functionality of itemgetter() is sometimes replaced by lambda expressions. | Python Code:
from operator import itemgetter
rows_by_fname = sorted(rows, key=itemgetter('fname'))
rows_by_uid = sorted(rows, key=itemgetter('uid'))
rows_by_fname
rows_by_uid
Explanation: Sorting a List of Dictionaries by a Common Key
Problem
Sort the entries according to one or more of the dictionary values.
Solution
Sorting this type of structure is easy using the operator module’s itemgetter function.
End of explanation
rows_by_lfname = sorted(rows, key=itemgetter('lname','fname'))
rows_by_lfname
Explanation: The itemgetter() function can also accept multiple keys.
End of explanation
rows_by_fname = sorted(rows, key=lambda r: r['fname'])
rows_by_fname
Explanation: The functionality of itemgetter() is sometimes replaced by lambda expressions.
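As a small supplement (not part of the original recipe), the same key functions also work with min() and max(); rows is the list of dictionaries assumed throughout this recipe.
min(rows, key=itemgetter('uid'))
max(rows, key=itemgetter('uid'))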
End of explanation |
9,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions
Step1: 2 - Overview of the Problem set
Problem Statement
Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise
Step4: Expected Output for m_train, m_test and num_px
Step5: Expected Output
Step7: <font color='blue'>
What you need to remember
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Expected Output
Step16: Expected Output
Step17: Run the following cell to train your model.
Step18: Expected Output
Step19: Let's also plot the cost function and the gradients.
Step20: Interpretation
Step21: Interpretation | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
Explanation: Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions:
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
You will learn to:
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
1 - Packages
First, let's run the cell below to import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- matplotlib is a famous library to plot graphs in Python.
- PIL and scipy are used here to test your model with your own picture at the end.
End of explanation
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
Explanation: 2 - Overview of the Problem set
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
End of explanation
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
Explanation: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
End of explanation
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
Explanation: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise: Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0].
End of explanation
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
Explanation: Expected Output for m_train, m_test and num_px:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $$ num_px $$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px $$ num_px $$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$$c$$d, a) is to use:
python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
End of explanation
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
Explanation: Expected Output:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
Let's standardize our dataset.
End of explanation
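For comparison, here is a minimal sketch of the general mean/std standardization mentioned above. It is not used in this assignment (dividing by 255 is enough for images) and simply reuses the train_set_x_flatten array defined earlier.
# Illustration only: center by the mean and scale by the standard deviation of the whole array.
mu = train_set_x_flatten.mean()
sigma = train_set_x_flatten.std()
train_set_x_centered = (train_set_x_flatten - mu) / sigma
print(train_set_x_centered.mean(), train_set_x_centered.std())  # roughly 0 and 1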
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+ np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
Explanation: <font color='blue'>
What you need to remember:
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- "Standardize" the data
3 - General Architecture of the learning algorithm
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network!
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
Mathematical expression of the algorithm:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
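As a quick numerical illustration of equations (1)-(3) for a single made-up example (the numbers below are hypothetical, and this assumes numpy is already imported as np, as elsewhere in this notebook):
w_demo = np.array([[0.5], [-0.25]])   # hypothetical weights
b_demo = 0.1                          # hypothetical bias
x_demo = np.array([[0.2], [0.8]])     # one flattened example
y_demo = 1                            # its label
z_demo = np.dot(w_demo.T, x_demo) + b_demo                                   # equation (1)
a_demo = 1 / (1 + np.exp(-z_demo))                                           # equation (2)
loss_demo = -(y_demo * np.log(a_demo) + (1 - y_demo) * np.log(1 - a_demo))   # equation (3)
print(z_demo, a_demo, loss_demo)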
Key steps:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call model().
4.1 - Helper functions
Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
End of explanation
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
Explanation: Expected Output:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
4.2 - Initializing parameters
Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
End of explanation
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T,X) +b) # compute activation
cost = (-1)* (np.dot(Y, np.log(A).T) + np.dot((1-Y), np.log(1-A).T)) / m # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = (1/m) * np.dot(X, (A-Y).T)
db = (1/m) * np.sum(A-Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
Explanation: Expected Output:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
Exercise: Implement a function propagate() that computes the cost function and its gradient.
Hints:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})\right]$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
End of explanation
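As an optional sanity check (not part of the graded exercise), the analytic gradient db returned by propagate() can be compared against a centered finite-difference approximation of the cost, reusing the small w, b, X, Y defined above; a rough sketch:
eps = 1e-7
_, cost_plus = propagate(w, b + eps, X, Y)    # cost with b nudged up
_, cost_minus = propagate(w, b - eps, X, Y)   # cost with b nudged down
db_approx = (cost_plus - cost_minus) / (2 * eps)
grads_check, _ = propagate(w, b, X, Y)
print(db_approx, grads_check["db"])           # the two values should nearly match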
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99845601]
[ 2.39507239]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.00145557813678 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 5.801545319394553 </td>
</tr>
</table>
d) Optimization
You have initialized your parameters.
You are also able to compute a cost function and its gradient.
Now, you want to update the parameters using gradient descent.
Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
End of explanation
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T,X) +b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0, i] = 1 if A[0, i]>0.5 else 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.19033591]
[ 0.12259159]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.92535983008 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.67752042]
[ 1.41625495]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.219194504541 </td>
</tr>
</table>
Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions:
Calculate $\hat{Y} = A = \sigma(w^T X + b)$
Convert the entries of A into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).
End of explanation
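One possible vectorized alternative to the thresholding loop mentioned above (a sketch only, reusing the small w, b and X from the test cell):
A_demo = sigmoid(np.dot(w.reshape(X.shape[0], 1).T, X) + b)   # probabilities, shape (1, m)
Y_prediction_vectorized = (A_demo > 0.5).astype(float)        # threshold without a Python loop
print(Y_prediction_vectorized)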
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
Explanation: Expected Output:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1. 0.]]
</td>
</tr>
</table>
<font color='blue'>
What to remember:
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
5 - Merge all functions into a model
You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.
Exercise: Implement the model function. Use the following notation:
- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
End of explanation
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
Explanation: Run the following cell to train your model.
End of explanation
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **Cost after iteration 0 ** </td>
<td> 0.693147 </td>
</tr>
<tr>
<td> <center> $\vdots$ </center> </td>
<td> <center> $\vdots$ </center> </td>
</tr>
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is only 70%, which is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set.
End of explanation
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
Explanation: Let's also plot the cost function and the gradients.
End of explanation
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
Explanation: Interpretation:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
6 - Further analysis (optional/ungraded exercise)
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
Choice of learning rate
Reminder:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the learning_rates variable to contain, and see what happens.
End of explanation
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "1.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Explanation: Interpretation:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
7 - Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation |
9,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Introduction-to-Outlier-Mitigation" data-toc-modified-id="Introduction-to-Outlier-Mitigation-1"><span class="toc-item-num">1 </span>Introduction to Outlier Mitigation</a></div><div class="lev2 toc-item"><a href="#Making-the-data" data-toc-modified-id="Making-the-data-1.1"><span class="toc-item-num">1.1 </span>Making the data</a></div><div class="lev1 toc-item"><a href="#Mitigating-outliers" data-toc-modified-id="Mitigating-outliers-2"><span class="toc-item-num">2 </span>Mitigating outliers</a></div><div class="lev2 toc-item"><a href="#Spearman-Regression" data-toc-modified-id="Spearman-Regression-2.1"><span class="toc-item-num">2.1 </span>Spearman Regression</a></div><div class="lev1 toc-item"><a href="#Bayesian-approaches-to-outlier-mitigation" data-toc-modified-id="Bayesian-approaches-to-outlier-mitigation-3"><span class="toc-item-num">3 </span>Bayesian approaches to outlier mitigation</a></div>
# Introduction to Outlier Mitigation
Welcome to our brief tutorial on the Bayesian theorem and why to use it to perform linear regressions. First, we will provide a motivational example with synthetic data, showing how usual least-squares would do, and then we will introduce Bayes to perform a robust regression.
Step1: Making the data
First, we will make some data for us to use.
I will draw 30 points evenly (not randomly) from 0 to 10.
Step2: We will also need some data to plot on the Y-axis. I will draw 30 points between 0 to 10 and I will add just a little bit of noise to most of them. We will replace a random 3 points with outliers
Step3: Let's take a look at our data
Step5: Our data looks pretty good. and we might think that we can calculate a line of best fit. I will use the least-squares algorithm, which is how most lines of best fit are calculated.
Step6: Perform the optimization
Step7: Let's see the fit
Step8: Clearly the fit is not very good. Our eyes can see a better trendline, if only we could ignore the outliers.
Mitigating outliers
One common approach towards mitigating outliers is to rank the points. Let's see what happens when we do this.
Step9: Spearman Regression
Clearly ranking the data mitigates the outliers. This is because rank-transformations are insensitive to the distance from the outliers to the mean trend. They don't care how far away the outliers are, they just care about what their rank order is, and the rank order has to have a more compressed space than the unranked points. In this case, the points can vary from 0 to 60 along the y-axis, but the ranked y-axis can only vary from 0 to 30. Effectively, the distance from the main trend to the outliers is cut in half for this example. Let's go ahead, find the line of best fit for the ranked data and plot it. Doing this is called a Spearman regression
Step10: Great! The spearman correlation can much more accurately tell us about the line of best fit in the realm of ranked data! RNA-seq data is often plagued by terrible outliers that are very far from the mean effect magnitude. For this reason, we often rank-transform the beta values to get a better estimate of the true correlation between points.
Bayesian approaches to outlier mitigation
One of the wonderful facts about this world is that many things that are random in this world follow the same patterns. In particular, random events tend to follow a beautiful distribution known as the Gaussian distribution, or normal distribution
Step11: Clearly, as more samples are drawn, the better the data approximate the real, underlying distribution. This distribution has some interesting aspects, namely, it has 'tails' that decay quite quickly. Another way to put it is that, if the data are normally distributed, then huge outliers should be rare. When we perform line-fitting using the least squares algorithm (as we did above), one of the underlying assumptions is that our data has errors that are normally distributed, and therefore outliers should be very rare. This is why the line of best fit gets strongly skewed by outliers as we saw above
Step13: Do you see how the green curve is above the blue curve around the edges? Those are the tails of the Student-T distribution decaying more slowly than the Normal distribution. Under a Student-T distribution, the outliers will be far more frequent than under a normal model.
If we want to use a different distribution from the Normal distribution, we will need one last equation. It's called Bayes theorem. Here it is
Step14: Now that we have our robust regression, let's go back to our original data and try to fit a line through it. First, we need to put our data into a dictionary, and then we can run the regression. It will take some amount of time, with longer wait times for larger datasets.
Step15: We fit a linear model, $y = a + bx$, so now we need to extract the parameters | Python Code:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import scipy as scipy
from matplotlib import rc
# set to use tex, but make sure it is sans-serif fonts only
rc('text', usetex=True)
rc('text.latex', preamble=r'\usepackage{cmbright}')
rc('font', **{'family': 'sans-serif', 'sans-serif': ['Helvetica']})
# bayes and mcmc
import pymc3 as pm
import theano
# Magic function to make matplotlib inline;
# other style specs must come AFTER
%matplotlib inline
# This enables SVG graphics inline.
# There is a bug, so uncomment if it works.
%config InlineBackend.figure_formats = {'png', 'retina'}
# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 2,
'axes.labelsize': 18,
'axes.titlesize': 18,
'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', rc=rc)
sns.set_style("dark")
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['legend.fontsize'] = 14
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Introduction-to-Outlier-Mitigation" data-toc-modified-id="Introduction-to-Outlier-Mitigation-1"><span class="toc-item-num">1 </span>Introduction to Outlier Mitigation</a></div><div class="lev2 toc-item"><a href="#Making-the-data" data-toc-modified-id="Making-the-data-1.1"><span class="toc-item-num">1.1 </span>Making the data</a></div><div class="lev1 toc-item"><a href="#Mitigating-outliers" data-toc-modified-id="Mitigating-outliers-2"><span class="toc-item-num">2 </span>Mitigating outliers</a></div><div class="lev2 toc-item"><a href="#Spearman-Regression" data-toc-modified-id="Spearman-Regression-2.1"><span class="toc-item-num">2.1 </span>Spearman Regression</a></div><div class="lev1 toc-item"><a href="#Bayesian-approaches-to-outlier-mitigation" data-toc-modified-id="Bayesian-approaches-to-outlier-mitigation-3"><span class="toc-item-num">3 </span>Bayesian approaches to outlier mitigation</a></div>
# Introduction to Outlier Mitigation
Welcome to our brief tutorial on the Bayesian theorem and why to use it to perform linear regressions. First, we will provide a motivational example with synthetic data, showing how usual least-squares would do, and then we will introduce Bayes to perform a robust regression.
End of explanation
x = np.linspace(0, 10, 30)
Explanation: Making the data
First, we will make some data for us to use.
I will draw 30 points evenly (not randomly) from 0 to 10.
End of explanation
y = np.linspace(0, 10, 30)
y = y + np.random.normal(0, 0.5, len(y))
y[np.random.randint(0, 30, 3)] = np.random.normal(50, 5, 3)
Explanation: We will also need some data to plot on the Y-axis. I will draw 30 points between 0 and 10 and add just a little bit of noise to most of them. We will then replace 3 randomly chosen points with outliers.
End of explanation
plt.plot(x, y, 'o')
Explanation: Let's take a look at our data:
End of explanation
def line(x, a, b):
The line of best fit.
# unpack the parameters:
y = a + b*x
return y
Explanation: Our data looks pretty good, and we might think that we can calculate a line of best fit. I will use the least-squares algorithm, which is how most lines of best fit are calculated.
End of explanation
popt, pcov = scipy.optimize.curve_fit(line, x, y)
# unpack the parameters of the line of best fit:
a, b = popt
Explanation: Perform the optimization
End of explanation
plt.plot(x, y, 'o', label='data')
plt.plot(x, line(x, a, b), label='fit')
plt.legend(title='Legend')
Explanation: Let's see the fit:
End of explanation
x_ranked = scipy.stats.rankdata(x)
y_ranked = scipy.stats.rankdata(y)
fig, ax = plt.subplots(ncols=2, sharey=False)
ax[0].plot(x, y, 'o')
ax[0].set_title('Normal Data, Unranked')
ax[1].plot(x_ranked, y_ranked, 'go')
ax[1].set_title('Ranked Data')
ax[0].set_ylabel('Y (Ranked or Unranked)')
fig.text(0.5, 0.04, 'X', ha='center', size=18)
Explanation: Clearly the fit is not very good. Our eyes can see a better trendline, if only we could ignore the outliers.
Mitigating outliers
One common approach towards mitigating outliers is to rank the points. Let's see what happens when we do this.
End of explanation
popt, pcov = scipy.optimize.curve_fit(line, x_ranked, y_ranked)
# unpack the parameters of the line of best fit:
arank, brank = popt
# plot
fig, ax = plt.subplots(ncols=2, sharey=False)
ax[0].plot(x, y, 'o')
ax[0].plot(x, line(x, a, b), 'b', label='Unranked fit')
ax[0].legend()
ax[0].set_title('Raw Data')
ax[1].plot(x_ranked, y_ranked, 'go')
ax[1].plot(x_ranked, line(x_ranked, arank, brank), 'g', label='Ranked fit')
ax[1].legend()
ax[1].set_title('Ranked Data')
ax[0].set_ylabel('Y (Ranked or Unranked)')
fig.text(0.5, 0.04, 'X (Ranked or Unranked)', ha='center', size=18)
Explanation: Spearman Regression
Clearly ranking the data mitigates the outliers. This is because rank-transformations are insensitive to the distance from the outliers to the mean trend. They don't care how far away the outliers are, they just care about what their rank order is, and the rank order has to have a more compressed space than the unranked points. In this case, the points can vary from 0 to 60 along the y-axis, but the ranked y-axis can only vary from 0 to 30. Effectively, the distance from the main trend to the outliers is cut in half for this example. Let's go ahead, find the line of best fit for the ranked data and plot it. Doing this is called a Spearman regression
End of explanation
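As a quick cross-check (a sketch, not part of the original analysis), scipy can compute the Spearman rank correlation directly from the unranked data:
rho, pval = scipy.stats.spearmanr(x, y)   # rank-based correlation, insensitive to how far the outliers sit
print('Spearman rho = {:.3f}, p-value = {:.3g}'.format(rho, pval))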
def normal(x):
return 1/np.sqrt(2*np.pi)*np.exp(-x**2/(2))
x1 = np.random.normal(0, 1, 20)
x2 = np.random.normal(0, 1, 100)
x3 = np.random.normal(0, 1, 1000)
x4 = np.linspace(-3, 3, 1000)
fig, ax = plt.subplots(ncols=4, figsize=(20, 5))
ax[0].hist(x1, normed=True)
ax[0].set_xlim(-3, 3)
ax[0].set_title('20 Observations')
ax[1].hist(x2, normed=True)
ax[1].set_xlim(-3, 3)
ax[1].set_title('100 Observations')
ax[2].hist(x3, normed=True)
ax[2].set_xlim(-3, 3)
ax[2].set_title('1,000 Observations')
ax[3].plot(x4, normal(x4))
ax[3].set_xlim(-3, 3)
ax[3].set_title('Normal Distribution')
ax[0].set_ylabel(r'p(x)')
fig.text(0.5, -0.04, 'Values of X', ha='center', size=18)
Explanation: Great! The spearman correlation can much more accurately tell us about the line of best fit in the realm of ranked data! RNA-seq data is often plagued by terrible outliers that are very far from the mean effect magnitude. For this reason, we often rank-transform the beta values to get a better estimate of the true correlation between points.
Bayesian approaches to outlier mitigation
One of the wonderful facts about this world is that many things that are random in this world follow the same patterns. In particular, random events tend to follow a beautiful distribution known as the Gaussian distribution, or normal distribution:
$$
p(x) \propto exp(-\frac{(x-\mu)^2}{2\sigma^2})
$$
Let's take a look at this.
End of explanation
# Points for the normal dist:
xnorm = np.linspace(scipy.stats.norm.ppf(0.00001),
scipy.stats.norm.ppf(0.99999))
# calculate the points for the student t
df = 2.74335149908
t_nums = np.linspace(scipy.stats.t.ppf(0.01, df),
scipy.stats.t.ppf(0.99, df), 100)
#plot
plt.plot(xnorm, scipy.stats.t.pdf(xnorm, df), 'g-', lw=5, alpha=0.6, label='Student-T Distribution')
plt.plot(xnorm, scipy.stats.norm.pdf(xnorm), 'b-', lw=5, alpha=0.6, label='Normal Distribution')
plt.legend()
plt.title('Difference between Student-T and Normal')
plt.xlabel('Data values, X')
plt.ylabel('$p(X)$')
Explanation: Clearly, as more samples are drawn, the better the data approximate the real, underlying distribution. This distribution has some interesting aspects, namely, it has 'tails' that decay quite quickly. Another way to put it is that, if the data are normally distributed, then huge outliers should be rare. When we perform line-fitting using the least squares algorithm (as we did above), one of the underlying assumptions is that our data has errors that are normally distributed, and therefore outliers should be very rare. This is why the line of best fit gets strongly skewed by outliers as we saw above: It thinks outliers are common and important!
Is there a way to make outliers less important? Absolutely. We could start by selecting a curve that has tails that decay less quickly than the normal distribution. For example, we could pick a Student-T distribution!
End of explanation
def robust_regress(data):
A robust regression using a StudentT distribution.
Params:
data - a dictionary with entries called 'x' and 'y'
Outputs:
trace_robust - the trace of the simulation
# PyMC3 asks you to make an object, called a pm.Model(), so go ahead and make it.
with pm.Model() as model_robust:
# Choose your distribution: StudentT
family = pm.glm.families.StudentT()
# Figure out the model you will fit. In this case, we want y = alpha*x,
# where alpha is to be determined
pm.glm.glm('y ~ x', data, family=family)
# PyMC3 performs what we call a Monte Carlo Markov Chain simulation, but this
# usually only works if we start reasonably close to what alpha should be.
# Fortunately, PyMC3 can estimate a pretty good starting point using something
# called a Maximum A Priori likelihood method, so use it!
start = pm.find_MAP()
# do the simulation and return the results
step = pm.NUTS(scaling=start)
trace_robust = pm.sample(2000, step, progressbar=True)
return trace_robust
Explanation: Do you see how the green curve is above the blue curve around the edges? Those are the tails of the Student-T distribution decaying more slowly than the Normal distribution. Under a Student-T distribution, the outliers will be far more frequent than under a normal model.
If we want to use a different distribution from the Normal distribution, we will need one last equation. It's called Bayes theorem. Here it is:
$$
P(X|Y) \propto P(Y|X)\cdot P(X)
$$
Read out loud it says: The probability of X happening given that Y is true is proportional to the probability that Y happens given that X is true multiplied by the probability that X is true. In other words, what is the probability of rain given that is cloudy? The answer is that it is proportional to the probability of it being cloudy given that it is raining (close to 1) multiplied by the probability of rain (in California, almost 0). Therefore, the probability of rain given that it is cloudy could be very small if you are in California.
We can appropriate Bayes theorem and use it to model our data as coming from a Student T distribution instead of a Normal distribution. We need Bayes because we need to estimate the distribution that the data will be coming from, since the Student-T depends on certain parameters that are hard to estimate otherwise. How do we do this? The details get a bit messy and there's a longer explanation than we can give here, so we won't get into it. The important thing to note is that by knowing that we need to use a Student-T distribution and that we need to do Bayesian regression we have solved most of the problem. Some searching quickly reveals that PyMC3 has a module that enables us to do just what we need. Below, I define a function, with the help of PyMC3 that will allow us to perform linear regression on some of our data.
End of explanation
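To make the heavier-tails point concrete, a small side calculation (a sketch; the degrees of freedom roughly match the value used in the plot above) compares the log-density each distribution assigns to a point five standard deviations from the mean:
print('Normal log-density at x=5: {:.2f}'.format(scipy.stats.norm.logpdf(5)))
print('Student-T (df=2.74) log-density at x=5: {:.2f}'.format(scipy.stats.t.logpdf(5, 2.74)))
# The Student-T value is far less negative, so a big outlier is much less surprising under it.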
data = dict(x=x, y=y)
trace = robust_regress(data)
Explanation: Now that we have our robust regression, let's go back to our original data and try to fit a line through it. First, we need to put our data into a dictionary, and then we can run the regression. It will take some amount of time, with longer wait times for larger datasets.
End of explanation
# extract the posterior means of the intercept and slope from the trace
intercept = trace.Intercept.mean()
slope = trace.x.mean()
smoothx = np.linspace(0, 10, 1000)
# plot the results
plt.plot(x, y, 'o', label='data')
plt.plot(smoothx, line(smoothx, intercept, slope), 'g-', label='robust fit')
plt.legend()
Explanation: We fit a linear model, $y = a + bx$, so now we need to extract the parameters:
End of explanation |
9,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Plots
Step1: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="https
Step2: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}i)=\hat{\sigma}^2_i(1-h{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1 \;\;}\sum_{j}^{n}\;\;\;\forall \;\;\; j \neq i$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
Step3: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots (Duncan)
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
linearity.
Step4: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
Step5: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
Step6: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
Step7: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
Step8: Single Variable Regression Diagnostics
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
Step9: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
Step10: Statewide Crime 2009 Dataset
Compare the following to http
Step11: Partial Regression Plots (Crime Data)
Step12: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
Step13: Influence Plot
Step14: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this examples.
Step15: There is not yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888) | Python Code:
%matplotlib inline
from statsmodels.compat import lzip
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import ols
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)
Explanation: Regression Plots
End of explanation
prestige = sm.datasets.get_rdataset("Duncan", "carData", cache=True).data
prestige.head()
prestige_model = ols("prestige ~ income + education", data=prestige).fit()
print(prestige_model.summary())
Explanation: Duncan's Prestige Dataset
Load the Data
We can use a utility function to load any R dataset available from the great <a href="https://vincentarelbundock.github.io/Rdatasets/">Rdatasets package</a>.
End of explanation
fig = sm.graphics.influence_plot(prestige_model, criterion="cooks")
fig.tight_layout(pad=1.0)
Explanation: Influence plots
Influence plots show the (externally) studentized residuals vs. the leverage of each observation as measured by the hat matrix.
Externally studentized residuals are residuals that are scaled by their standard deviation where
$$var(\hat{\epsilon}_i)=\hat{\sigma}^2_i(1-h_{ii})$$
with
$$\hat{\sigma}^2_i=\frac{1}{n - p - 1}\sum_{\substack{j=1 \\ j \neq i}}^{n}\hat{\epsilon}_j^{\,2}$$
$n$ is the number of observations and $p$ is the number of regressors. $h_{ii}$ is the $i$-th diagonal element of the hat matrix
$$H=X(X^{\;\prime}X)^{-1}X^{\;\prime}$$
The influence of each point can be visualized by the criterion keyword argument. Options are Cook's distance and DFFITS, two measures of influence.
End of explanation
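The quantities behind this plot can also be pulled out numerically; a brief sketch using statsmodels' influence diagnostics on the fitted model:
influence = prestige_model.get_influence()
print(influence.hat_matrix_diag[:5])              # leverage of the first few observations
print(influence.resid_studentized_external[:5])   # externally studentized residuals
print(influence.cooks_distance[0][:5])            # Cook's distance values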
fig = sm.graphics.plot_partregress(
"prestige", "income", ["income", "education"], data=prestige
)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress("prestige", "income", ["education"], data=prestige)
fig.tight_layout(pad=1.0)
Explanation: As you can see there are a few worrisome observations. Both contractor and reporter have low leverage but a large residual. <br />
RR.engineer has small residual and large leverage. Conductor and minister have both high leverage and large residuals, and, <br />
therefore, large influence.
Partial Regression Plots (Duncan)
Since we are doing multivariate regressions, we cannot just look at individual bivariate plots to discern relationships. <br />
Instead, we want to look at the relationship of the dependent variable and independent variables conditional on the other <br />
independent variables. We can do this through using partial regression plots, otherwise known as added variable plots. <br />
In a partial regression plot, to discern the relationship between the response variable and the $k$-th variable, we compute <br />
the residuals by regressing the response variable versus the independent variables excluding $X_k$. We can denote this by <br />
$X_{\sim k}$. We then compute the residuals by regressing $X_k$ on $X_{\sim k}$. The partial regression plot is the plot <br />
of the former versus the latter residuals. <br />
The notable points of this plot are that the fitted line has slope $\beta_k$ and intercept zero. The residuals of this plot <br />
are the same as those of the least squares fit of the original model with full $X$. You can discern the effects of the <br />
individual data values on the estimation of a coefficient easily. If obs_labels is True, then these points are annotated <br />
with their observation label. You can also see the violation of underlying assumptions such as homoskedasticity and <br />
linearity.
End of explanation
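For intuition, the added-variable plot for income can also be reproduced by hand; a sketch using the same prestige data and formula API:
# Residuals of the response and of income, each after regressing out education.
resid_prestige = ols("prestige ~ education", data=prestige).fit().resid
resid_income = ols("income ~ education", data=prestige).fit().resid
plt.scatter(resid_income, resid_prestige)
plt.xlabel("income residuals")
plt.ylabel("prestige residuals")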
subset = ~prestige.index.isin(["conductor", "RR.engineer", "minister"])
prestige_model2 = ols(
"prestige ~ income + education", data=prestige, subset=subset
).fit()
print(prestige_model2.summary())
Explanation: As you can see the partial regression plot confirms the influence of conductor, minister, and RR.engineer on the partial relationship between income and prestige. The cases greatly decrease the effect of income on prestige. Dropping these cases confirms this.
End of explanation
fig = sm.graphics.plot_partregress_grid(prestige_model)
fig.tight_layout(pad=1.0)
Explanation: For a quick check of all the regressors, you can use plot_partregress_grid. These plots will not label the <br />
points, but you can use them to identify problems and then use plot_partregress to get more information.
End of explanation
fig = sm.graphics.plot_ccpr(prestige_model, "education")
fig.tight_layout(pad=1.0)
Explanation: Component-Component plus Residual (CCPR) Plots
The CCPR plot provides a way to judge the effect of one regressor on the <br />
response variable by taking into account the effects of the other <br />
independent variables. The partial residuals plot is defined as <br />
$\text{Residuals} + B_iX_i \text{ }\text{ }$ versus $X_i$. The component adds $B_iX_i$ versus <br />
$X_i$ to show where the fitted line would lie. Care should be taken if $X_i$ <br />
is highly correlated with any of the other independent variables. If this <br />
is the case, the variance evident in the plot will be an underestimate of <br />
the true variance.
End of explanation
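The partial residuals that the CCPR plot is built from can be computed directly; a short sketch:
partial_resid = prestige_model.resid + prestige_model.params["education"] * prestige["education"]
plt.scatter(prestige["education"], partial_resid)
plt.xlabel("education")
plt.ylabel("partial residuals")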
fig = sm.graphics.plot_ccpr_grid(prestige_model)
fig.tight_layout(pad=1.0)
Explanation: As you can see the relationship between the variation in prestige explained by education conditional on income seems to be linear, though you can see there are some observations that are exerting considerable influence on the relationship. We can quickly look at more than one variable by using plot_ccpr_grid.
End of explanation
fig = sm.graphics.plot_regress_exog(prestige_model, "education")
fig.tight_layout(pad=1.0)
Explanation: Single Variable Regression Diagnostics
The plot_regress_exog function is a convenience function that gives a 2x2 plot containing the dependent variable and fitted values with confidence intervals vs. the independent variable chosen, the residuals of the model vs. the chosen independent variable, a partial regression plot, and a CCPR plot. This function can be used for quickly checking modeling assumptions with respect to a single regressor.
End of explanation
fig = sm.graphics.plot_fit(prestige_model, "education")
fig.tight_layout(pad=1.0)
Explanation: Fit Plot
The plot_fit function plots the fitted values versus a chosen independent variable. It includes prediction confidence intervals and optionally plots the true dependent variable.
End of explanation
# dta = pd.read_csv("http://www.stat.ufl.edu/~aa/social/csv_files/statewide-crime-2.csv")
# dta = dta.set_index("State", inplace=True).dropna()
# dta.rename(columns={"VR" : "crime",
# "MR" : "murder",
# "M" : "pctmetro",
# "W" : "pctwhite",
# "H" : "pcths",
# "P" : "poverty",
# "S" : "single"
# }, inplace=True)
#
# crime_model = ols("murder ~ pctmetro + poverty + pcths + single", data=dta).fit()
dta = sm.datasets.statecrime.load_pandas().data
crime_model = ols("murder ~ urban + poverty + hs_grad + single", data=dta).fit()
print(crime_model.summary())
Explanation: Statewide Crime 2009 Dataset
Compare the following to http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter4/statareg_self_assessment_answers4.htm
Though the data here is not the same as in that example. You could run that example by uncommenting the necessary cells below.
End of explanation
fig = sm.graphics.plot_partregress_grid(crime_model)
fig.tight_layout(pad=1.0)
fig = sm.graphics.plot_partregress(
"murder", "hs_grad", ["urban", "poverty", "single"], data=dta
)
fig.tight_layout(pad=1.0)
Explanation: Partial Regression Plots (Crime Data)
End of explanation
fig = sm.graphics.plot_leverage_resid2(crime_model)
fig.tight_layout(pad=1.0)
Explanation: Leverage-Resid<sup>2</sup> Plot
Closely related to the influence_plot is the leverage-resid<sup>2</sup> plot.
End of explanation
fig = sm.graphics.influence_plot(crime_model)
fig.tight_layout(pad=1.0)
Explanation: Influence Plot
End of explanation
from statsmodels.formula.api import rlm
rob_crime_model = rlm(
"murder ~ urban + poverty + hs_grad + single",
data=dta,
M=sm.robust.norms.TukeyBiweight(3),
).fit(conv="weights")
print(rob_crime_model.summary())
# rob_crime_model = rlm("murder ~ pctmetro + poverty + pcths + single", data=dta, M=sm.robust.norms.TukeyBiweight()).fit(conv="weights")
# print(rob_crime_model.summary())
Explanation: Using robust regression to correct for outliers.
Part of the problem here in recreating the Stata results is that M-estimators are not robust to leverage points. MM-estimators should do better with this examples.
End of explanation
weights = rob_crime_model.weights
idx = weights > 0
X = rob_crime_model.model.exog[idx.values]
ww = weights[idx] / weights[idx].mean()
hat_matrix_diag = ww * (X * np.linalg.pinv(X).T).sum(1)
resid = rob_crime_model.resid
resid2 = resid ** 2
resid2 /= resid2.sum()
nobs = int(idx.sum())
hm = hat_matrix_diag.mean()
rm = resid2.mean()
from statsmodels.graphics import utils
fig, ax = plt.subplots(figsize=(16, 8))
ax.plot(resid2[idx], hat_matrix_diag, "o")
ax = utils.annotate_axes(
range(nobs),
labels=rob_crime_model.model.data.row_labels[idx],
points=lzip(resid2[idx], hat_matrix_diag),
offset_points=[(-5, 5)] * nobs,
size="large",
ax=ax,
)
ax.set_xlabel("resid2")
ax.set_ylabel("leverage")
ylim = ax.get_ylim()
ax.vlines(rm, *ylim)
xlim = ax.get_xlim()
ax.hlines(hm, *xlim)
ax.margins(0, 0)
Explanation: There is not yet an influence diagnostics method as part of RLM, but we can recreate them. (This depends on the status of issue #888)
End of explanation |
9,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Auto-caption
Date
Step1: Read file
Step2: Access data of multiIndex dataframe
pandas, how to access multiIndex dataframe?
Step3: Dataframe that i want to match
Step5: string matching function
1-to-1 matching (or mapping)
Github of fuzzywuzzy
Step6: show all stats (Ans) and matching results (algorithm) | Python Code:
# system
import os
import sys
# 3rd party lib
import pandas as pd
from sklearn.cluster import KMeans
from fuzzywuzzy import fuzz # string matching
print('Python version: {}'.format(sys.version))
print('\n############################')
print('Pandas version: {}'.format(pd.show_versions()))
Explanation: Auto-caption
Date: 2018/11/14
Purpose: swarm name matching using the data below
Data source:
auto_caption4.csv
auto_caption5.csv
auto_caption7.csv
auto_caption8.csv
auto_caption9.csv
auto_caption10.csv
auto_caption11.csv
End of explanation
standard_df = pd.read_csv('auto_caption4.csv', names=['cluster_ID','timestamp','event','name'])
print('There are {} clusters in standard_df\n'.format(len(standard_df['cluster_ID'].unique())))
print(standard_df.head(5))
# default is axis=0
standard_df_groupby = standard_df.groupby(['cluster_ID','name']).agg({'name':['count']})
print(standard_df.groupby(['cluster_ID','name']).agg({'name':['count']}))
Explanation: Read file
End of explanation
# get column names
df = standard_df_groupby.loc[0].reset_index()
flat_column_names = []
for level in df.columns:
# tuple to list
flat_column_names.extend(list(level)) # extend(): in-place
# remove duplicate and empty
flat_column_names = filter(None, flat_column_names) # filter empty
flat_column_names = list(set(flat_column_names)) # deduplicate
print('original order: {}'.format(flat_column_names))
# change member order of list due to set is a random order
if flat_column_names[0] == 'count':
myorder = [1,0]
flat_column_names = [flat_column_names[i] for i in myorder]
print('New order: {}'.format(flat_column_names))
standard_df_dict = {}
# Transform multi-index to single index, and update string to dict standard_df_dict
for id_of_cluster in standard_df['cluster_ID'].unique():
print('\n# of cluster: {}'.format(id_of_cluster))
df = standard_df_groupby.loc[id_of_cluster].reset_index()
df.columns = flat_column_names
print(df.sort_values(by=['count'], ascending=False))
standard_df_dict.update({id_of_cluster: df.name.str.cat(sep=' ', na_rep='?')})
print('################################')
print('\nDictionary of swarm data: \n{}'.format(standard_df_dict))
Explanation: Access data of multiIndex dataframe
pandas, how to access multiIndex dataframe?
End of explanation
matching_df1 = pd.read_csv('auto_caption5.csv', names=['cluster_ID','timestamp','event','name'])
print('There are {} clusters in standard_df\n'.format(len(matching_df1['cluster_ID'].unique())))
print(matching_df1.head(5))
# default is axis=0
matching_df1_groupby = matching_df1.groupby(['cluster_ID','name']).agg({'name':['count']})
print(matching_df1.groupby(['cluster_ID','name']).agg({'name':['count']}))
# get column names
df = matching_df1_groupby.loc[0].reset_index()
flat_column_names = []
for level in df.columns:
# tuple to list
flat_column_names.extend(list(level)) # extend(): in-place
# remove duplicate and empty
flat_column_names = filter(None, flat_column_names) # filter empty
flat_column_names = list(set(flat_column_names)) # deduplicate
print(flat_column_names)
# change member order of list due to set is a random order
if flat_column_names[0] == 'count':
myorder = [1,0]
flat_column_names = [flat_column_names[i] for i in myorder]
print('New order: {}'.format(flat_column_names))
matching_df1_dict = {}
# Transform multi-index to single index, and update string to dict standard_df_dict
for id_of_cluster in matching_df1['cluster_ID'].unique():
print('\n# of cluster: {}'.format(id_of_cluster))
df = matching_df1_groupby.loc[id_of_cluster].reset_index()
df.columns = flat_column_names
print(df.sort_values(by=['count'], ascending=False))
matching_df1_dict.update({id_of_cluster: df.name.str.cat(sep=' ', na_rep='?')})
print('################################')
print('\nDictionary of swarm data: \n{}'.format(matching_df1_dict))
Explanation: Dataframe that i want to match
End of explanation
def matching_two_dicts_of_swarm(standard_dict, matching_dict, res_dict):
match two dictoinaries with same amount of key-value pairs
and return matching result, a dict of dict called res_dict.
* standard_dict: The standard of dict
* matching_dict: The dict that i want to match
* res_dict: the result, a dict of dict
key = 0 # key: number, no string
pop_list = [k for k,v in matching_dict.items()]
print(pop_list)
for i in standard_dict.keys(): # control access index of standard_dict. a more pythonic way
threshold = 0
for j in pop_list: # control access index of matching_dict
f_ratio = fuzz.ratio(standard_dict[i], matching_dict[j])
if f_ratio > threshold: # update matching result only when the fuzz ratio is greater
print('New matching fuzz ratio {} is higher than threshold {}'\
.format(f_ratio, threshold))
key = j # update key
threshold = f_ratio # update threshold value
print('Update new threshold {}'\
.format(threshold))
res_dict.update({i: {j: matching_dict[j]}})  # store the best-matching cluster j and its name string
# pop out matched key-value pair of matching dict
if pop_list:
pop_list.remove(key) # remove the matched key; remove() fails when no element remains
print(res_dict)
return res_dict
res_dict = {}
res_dict = matching_two_dicts_of_swarm(standard_df_dict, matching_df1_dict, res_dict)
print(res_dict)
Explanation: string matching function
1-to-1 matching (or mapping)
Github of fuzzywuzzy: link
Search keyword: You can try 'fuzzywuzzy' + 'pandas'
End of explanation
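For reference, a tiny illustration (with made-up strings) of what fuzz.ratio returns: an integer from 0 to 100, where a larger value means the two concatenated name strings are more alike.
print(fuzz.ratio('engine_start engine_stop', 'engine_start engine_stop sensor'))   # high score
print(fuzz.ratio('engine_start engine_stop', 'door_open door_close'))              # low score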
std_dict_to_df = pd.DataFrame.from_dict(standard_df_dict, orient='index', columns=['Before: function_name'])
std_dict_to_df['std_cluster_ID'] = std_dict_to_df.index
std_dict_to_df = std_dict_to_df[['std_cluster_ID', 'Before: function_name']]
std_dict_to_df
mtch_df1_dict_to_df = pd.DataFrame.from_dict(matching_df1_dict, orient='index', columns=['Matching function_name'])
mtch_df1_dict_to_df
res_dict_to_df = pd.DataFrame()
res_dict_to_df
res_list = [k for k,v in res_dict.items()]
for key in res_list:
df = pd.DataFrame.from_dict(res_dict[key], orient='index', columns=['After: function name']) # res_dict[key]: a dict
df['mtch_cluster_ID'] = df.index
#print(df)
res_dict_to_df = res_dict_to_df.append(df, ignore_index=True) # df.append(): not in-place
res_dict_to_df = res_dict_to_df[['mtch_cluster_ID', 'After: function name']]
print(res_dict_to_df.head(5))
final_df = pd.concat([std_dict_to_df, res_dict_to_df], axis=1)
final_df
Explanation: show all stats (Ans) and matching results (algorithm)
End of explanation |
9,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Prediction with BQML and AutoML
Objectives
1. Learn how to use BQML to create a classification time-series model using CREATE MODEL.
2. Learn how to use BQML to create a linear regression time-series model.
3. Learn how to use AutoML Tables to build a time series model from data in BigQuery.
Set up environment variables and load necessary libraries
Step2: Create the dataset
Step3: Review the dataset
In the previous lab we created the data we will use for modeling and saved it as tables in BigQuery. Let's examine that table again to see that everything is as we expect. Then, we will build a model using BigQuery ML using this table.
Step4: Using BQML
Create classification model for direction
To create a model
1. Use CREATE MODEL and provide a destination table for resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete
We'll start with creating a classification model to predict the direction of each stock.
We'll take a random split using the symbol value. With about 500 different values, using ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1 will give 30 distinct symbol values which corresponds to about 171,000 training examples. After taking 70% for training, we will be building a model on about 110,000 training examples.
Step5: Get training statistics and examine training info
After creating our model, we can evaluate the performance using the ML.EVALUATE function. With this command, we can find the precision, recall, accuracy, F1-score, and AUC of our classification model.
Step6: We can also examine the training statistics collected by BigQuery. To view training results, we use the ML.TRAINING_INFO function.
Step7: Compare to simple benchmark
Another way to assess the performance of our model is to compare it with a simple benchmark. We can do this by seeing what kind of accuracy we would get using the naive strategy of always predicting the majority class. For the training dataset, the majority class is 'STAY'. With the following query we can see how this naive strategy would perform on the eval set.
Step8: So, the naive strategy of just guessing the majority class would have an accuracy of 0.5509 on the eval dataset, just below our BQML model's.
Create regression model for normalized change
We can also use BigQuery to train a regression model to predict the normalized change for each stock. To do this in BigQuery we need only change the OPTIONS when calling CREATE OR REPLACE MODEL. This will give us a more precise prediction rather than just predicting if the stock will go up, down, or stay the same. Thus, we can treat this problem as either a regression problem or a classification problem, depending on the business needs.
Step9: Just as before we can examine the evaluation metrics for our regression model and examine the training statistics in BigQuery | Python Code:
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
%env PROJECT = {PROJECT}
%env REGION = "us-central1"
Explanation: Time Series Prediction with BQML and AutoML
Objectives
1. Learn how to use BQML to create a classification time-series model using CREATE MODEL.
2. Learn how to use BQML to create a linear regression time-series model.
3. Learn how to use AutoML Tables to build a time series model from data in BigQuery.
Set up environment variables and load necessary libraries
End of explanation
from google.cloud import bigquery
from IPython import get_ipython
bq = bigquery.Client(project=PROJECT)
def create_dataset():
dataset = bigquery.Dataset(bq.dataset("stock_market"))
try:
bq.create_dataset(dataset) # Will fail if dataset already exists.
print("Dataset created")
except:
print("Dataset already exists")
def create_features_table():
error = None
try:
bq.query(
CREATE TABLE stock_market.eps_percent_change_sp500
AS
SELECT *
FROM `stock_market.eps_percent_change_sp500`
).to_dataframe()
except Exception as e:
error = str(e)
if error is None:
print("Table created")
elif "Already Exists" in error:
print("Table already exists.")
else:
print(error)
raise Exception("Table was not created.")
create_dataset()
create_features_table()
Explanation: Create the dataset
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
stock_market.eps_percent_change_sp500
LIMIT
10
Explanation: Review the dataset
In the previous lab we created the data we will use for modeling and saved it as tables in BigQuery. Let's examine that table again to see that everything is as we expect. Then, we will build a model using BigQuery ML using this table.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
stock_market.direction_model OPTIONS(model_type = "logistic_reg",
input_label_cols = ["direction"]) AS
-- query to fetch training data
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 70
Explanation: Using BQML
Create classification model for direction
To create a model
1. Use CREATE MODEL and provide a destination table for resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.
2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.
3. Provide the query which fetches the training data
Have a look at Step Two of this tutorial to see another example.
The query will take about two minutes to complete
We'll start with creating a classification model to predict the direction of each stock.
We'll take a random split using the symbol value. With about 500 different values, using ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1 will give 30 distinct symbol values which corresponds to about 171,000 training examples. After taking 70% for training, we will be building a model on about 110,000 training examples.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.EVALUATE(MODEL `stock_market.direction_model`,
(
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85))
Explanation: Get training statistics and examine training info
After creating our model, we can evaluate the performance using the ML.EVALUATE function. With this command, we can find the precision, recall, accuracy, F1-score, and AUC of our classification model.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `stock_market.direction_model`)
ORDER BY iteration
Explanation: We can also examine the training statistics collected by BigQuery. To view training results, we use the ML.TRAINING_INFO function.
End of explanation
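If you prefer working with the metrics in pandas rather than the %%bigquery cell magic, the same evaluation can be pulled into a DataFrame with the client created earlier. This is only a sketch; with no evaluation table passed, ML.EVALUATE reports metrics on the training data.
eval_df = bq.query("SELECT * FROM ML.EVALUATE(MODEL `stock_market.direction_model`)").to_dataframe()
print(eval_df.T)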
%%bigquery --project $PROJECT
#standardSQL
WITH
eval_data AS (
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
direction
FROM
`stock_market.eps_percent_change_sp500`
WHERE
tomorrow_close IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85)
SELECT
direction,
(COUNT(direction)* 100 / (
SELECT
COUNT(*)
FROM
eval_data)) AS percentage
FROM
eval_data
GROUP BY
direction
Explanation: Compare to simple benchmark
Another way to assess the performance of our model is to compare it with a simple benchmark. We can do this by seeing what kind of accuracy we would get using the naive strategy of always predicting the majority class. For the training dataset, the majority class is 'STAY'. With the following query we can see how this naive strategy would perform on the eval set.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
CREATE OR REPLACE MODEL
stock_market.price_model OPTIONS(model_type = "linear_reg",
input_label_cols = ["normalized_change"]) AS
-- query to fetch training data
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
normalized_change
FROM
`stock_market.eps_percent_change_sp500`
WHERE
normalized_change IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 70
Explanation: So, the naive strategy of just guessing the majority class would have an accuracy of 0.5509 on the eval dataset, just below our BQML model's.
Create regression model for normalized change
We can also use BigQuery to train a regression model to predict the normalized change for each stock. To do this in BigQuery we need only change the OPTIONS when calling CREATE OR REPLACE MODEL. This will give us a more precise prediction rather than just predicting if the stock will go up, down, or stay the same. Thus, we can treat this problem as either a regression problem or a classification problem, depending on the business needs.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.EVALUATE(MODEL `stock_market.price_model`,
(
SELECT
symbol,
Date,
Open,
close_MIN_prior_5_days,
close_MIN_prior_20_days,
close_MIN_prior_260_days,
close_MAX_prior_5_days,
close_MAX_prior_20_days,
close_MAX_prior_260_days,
close_AVG_prior_5_days,
close_AVG_prior_20_days,
close_AVG_prior_260_days,
close_STDDEV_prior_5_days,
close_STDDEV_prior_20_days,
close_STDDEV_prior_260_days,
normalized_change
FROM
`stock_market.eps_percent_change_sp500`
WHERE
normalized_change IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15)) = 1
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) > 15 * 70
AND ABS(MOD(FARM_FINGERPRINT(symbol), 15 * 100)) <= 15 * 85))
%%bigquery --project $PROJECT
#standardSQL
SELECT
*
FROM
ML.TRAINING_INFO(MODEL `stock_market.price_model`)
ORDER BY iteration
Explanation: Just as before we can examine the evaluation metrics for our regression model and examine the training statistics in BigQuery.
End of explanation |
9,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
'orb' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Dataset Parameters
Let's create an orb dataset and attach it to the Bundle
Step3: times
Step4: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to dynamics and the ORB dataset
Step5: dynamics_method
Step6: The 'dynamics_method' parameter controls how stars and components are placed in the coordinate system as a function of time and has several choices
Step7: The 'ltte' parameter sets whether light travel time effects (Roemer delay) are included. If set to False, the positions and velocities are returned as they actually are for that given object at that given time. If set to True, they are instead returned as they were or will be when their light reaches the origin of the coordinate system.
See the Systemic Velocity Example Script for an example of how 'ltte' and 'vgamma' (systemic velocity) interplay.
Synthetics
Step8: Plotting
By default, orb datasets plot as 'vs' vs. 'us' (plane-of-sky coordinates). Notice the y-scale here with inclination set to 90.
Step9: As always, you have access to any of the arrays for either axes, so if you want to plot 'vus' vs 'times'
Step10: We can also plot the orbit in 3D. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: 'orb' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset('orb')
print b.filter(kind='orb')
Explanation: Dataset Parameters
Let's create an orb dataset and attach it to the Bundle
End of explanation
print b['times']
Explanation: times
End of explanation
print b['compute']
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to dynamics and the ORB dataset
End of explanation
print b['dynamics_method']
Explanation: dynamics_method
End of explanation
print b['ltte']
Explanation: The 'dynamics_method' parameter controls how stars and components are placed in the coordinate system as a function of time and has several choices:
* keplerian (default): Use Kepler's laws to determine positions. If the system has more than two components, then each orbit is treated independently and nested (ie there are no dynamical/tidal effects - the inner orbit is treated as a single point mass in the outer orbit).
* nbody: Use an n-body integrator to determine positions. Here the initial conditions (positions and velocities) are still defined by the orbit's Keplerian parameters at 't0@system'. Closed orbits and orbital stability are not guaranteed and ejections can occur.
ltte
End of explanation
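For example, switching to the n-body integrator before computing the model should look something like the following sketch (it assumes 'dynamics_method' uniquely identifies the compute parameter in this bundle); we revert it so the rest of the notebook uses the default.
b.set_value('dynamics_method', 'nbody')
print b['dynamics_method']
b.set_value('dynamics_method', 'keplerian') # revert to the default used below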
b.set_value_all('times', np.linspace(0,3,201))
b.run_compute()
b['orb@model'].twigs
print b['times@primary@orb01@orb@model']
print b['us@primary@orb01@orb@model']
print b['vus@primary@orb01@orb@model']
Explanation: The 'ltte' parameter sets whether light travel time effects (Roemer delay) are included. If set to False, the positions and velocities are returned as they actually are for that given object at that given time. If set to True, they are instead returned as they were or will be when their light reaches the origin of the coordinate system.
See the Systemic Velocity Example Script for an example of how 'ltte' and 'vgamma' (systemic velocity) interplay.
Synthetics
End of explanation
afig, mplfig = b['orb@model'].plot(show=True)
Explanation: Plotting
By default, orb datasets plot as 'vs' vs. 'us' (plane-of-sky coordinates). Notice the y-scale here with inclination set to 90.
End of explanation
afig, mplfig = b['orb@model'].plot(x='times', y='vus', show=True)
Explanation: As always, you have access to any of the arrays for either axis, so if you want to plot 'vus' vs 'times':
End of explanation
afig, mplfig = b['orb@model'].plot(projection='3d', xlim=(-4,4), ylim=(-4,4), zlim=(-4,4), show=True)
Explanation: We can also plot the orbit in 3D.
End of explanation |
9,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bipartite node layout
By default, nodes are partitioned into two subsets using a two-coloring of the graph.
The median heuristic proposed in Eades & Wormald (1994) is used to reduce edge crossings.
Step1: The partitions can also be made explicit using the subsets argument.
Step2: To change the layout from the left-right orientation to a bottom-up orientation,
call the layout function directly and swap x and y coordinates of the node positions. | Python Code:
import matplotlib.pyplot as plt
from netgraph import Graph
edges = [
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(5, 6)
]
Graph(edges, node_layout='bipartite', node_labels=True)
plt.show()
Explanation: Bipartite node layout
By default, nodes are partitioned into two subsets using a two-coloring of the graph.
The median heuristic proposed in Eades & Wormald (1994) is used to reduce edge crossings.
End of explanation
import matplotlib.pyplot as plt
from netgraph import Graph
edges = [
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(5, 6)
]
Graph(edges, node_layout='bipartite', node_layout_kwargs=dict(subsets=[(0, 2, 4, 6), (1, 3, 5)]), node_labels=True)
plt.show()
Explanation: The partitions can also be made explicit using the subsets argument.
In multi-component bipartite graphs, multiple two-colorings are possible,
such that explicit specification of the subsets may be necessary to achieve the desired partitioning of nodes.
End of explanation
import matplotlib.pyplot as plt
from netgraph import Graph, get_bipartite_layout
edges = [
(0, 1),
(1, 2),
(2, 3),
(3, 4),
(5, 6)
]
node_positions = get_bipartite_layout(edges, subsets=[(0, 2, 4, 6), (1, 3, 5)])
node_positions = {node : (x, y) for node, (y, x) in node_positions.items()}
Graph(edges, node_layout=node_positions, node_labels=True)
plt.show()
Explanation: To change the layout from the left-right orientation to a bottom-up orientation,
call the layout function directly and swap x and y coordinates of the node positions.
End of explanation |
9,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification problems are a broad category of machine learning problems that involve the prediction of values taken from a discrete, finite number of cases.
In this example, we'll build a classifier to predict which species a flower belongs to.
Reading data
Step1: Visualizing data
Step2: Classifying species
We'll use scikit-learn's LogisticRegression to build our classifier. | Python Code:
import pandas as pd
iris = pd.read_csv('../datasets/iris.csv')
# Print some info and statistics about the dataset
iris.info()
iris.Class.unique()
iris.describe()
# Encode the classes to numeric values
class_encodings = {'Iris-setosa': 0, 'Iris-versicolor': 1, 'Iris-virginica': 2}
iris.Class = iris.Class.map(class_encodings)
iris.Class.unique()
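# Check for missing values before modeling (there should be none in this dataset)
iris.isnull().sum()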
Explanation: Classification problems are a broad category of machine learning problems that involve the prediction of values taken from a discrete, finite number of cases.
In this example, we'll build a classifier to predict which species a flower belongs to.
Reading data
End of explanation
# Create a scatterplot for sepal length and sepal width
import matplotlib.pyplot as plt
%matplotlib inline
sl = iris.Sepal_length
sw = iris.Sepal_width
# Create a scatterplot of these two properties using plt.scatter()
# Assign different colors to each data point according to the class it belongs to
plt.scatter(sl[iris.Class == 0], sw[iris.Class == 0], color='red')
plt.scatter(sl[iris.Class == 1], sw[iris.Class == 1], color='green')
plt.scatter(sl[iris.Class == 2], sw[iris.Class == 2], color='blue')
# Specify labels for the X and Y axis
plt.xlabel('Sepal Length')
plt.ylabel('Sepal Width')
# Show graph
plt.show()
# Create a scatterplot for petal length and petal width
pl = iris.Petal_length
pw = iris.Petal_width
# Create a scatterplot of these two properties using plt.scatter()
# Assign different colors to each data point according to the class it belongs to
plt.scatter(pl[iris.Class == 0], pw[iris.Class == 0], color='red')
plt.scatter(pl[iris.Class == 1], pw[iris.Class == 1], color='green')
plt.scatter(pl[iris.Class == 2], pw[iris.Class == 2], color='blue')
# Specify labels for the X and Y axis
plt.xlabel('Petal Length')
plt.ylabel('Petal Width')
# Show graph
plt.show()
Explanation: Visualizing data
End of explanation
X = iris.drop('Class', axis=1)
t = iris.Class.values
# Use sklearn's train_test_split() method to split our data into two sets.
from sklearn.cross_validation import train_test_split
Xtr, Xts, ytr, yts = train_test_split(X, t)
# Use the training set to build a LogisticRegression model
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression().fit(Xtr, ytr) # Fit a logistic regression model
# Use the LogisticRegression's score() method to assess the model accuracy
lr.score(Xtr, ytr)
from sklearn.metrics import confusion_matrix
# Use scikit-learn's confusion_matrix to understand which classes were misclassified.
# See http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html
confusion_matrix(ytr, lr.predict(Xtr))
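# The score and confusion matrix above are computed on the training set.
# A fairer check uses the held-out test split created earlier (Xts, yts).
lr.score(Xts, yts)
confusion_matrix(yts, lr.predict(Xts))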
Explanation: Classifying species
We'll use scikit-learn's LogisticRegression to build our classifier.
End of explanation |
9,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic statistical tests with SciPy
The SciPy Python package provides a variety of test commands, including:
Binomial test
Chi-square test
One-sample z-test
One-sample t-test
Independent two-sample t-test
Paired two-sample t-test
Chi-squared variance test
Equal-variance test
Normality test
Binomial test
The binomial test uses the binomial distribution to test hypotheses about the parameter $\theta$ of a Bernoulli distribution. It uses the binom_test command in the SciPy stats subpackage. The default null hypothesis is $\theta = 0.5$.
scipy.stats.binom_test
http
Step1: The p-value is high at 34%, so we cannot reject the null hypothesis; therefore $\theta=0.5$.
Let's run the binomial test with $N=100$ data points and a true parameter of $\theta_0=0.5$.
Step2: The p-value is high at 92%, so we cannot reject the null hypothesis; therefore $\theta=0.5$.
Let's run the binomial test with $N=100$ data points and a true parameter of $\theta_0=0.35$.
Step3: The p-value is low at 0.018%, so we can reject the null hypothesis; therefore $\theta \neq 0.5$.
Chi-square test
The chi-square test is also called a goodness-of-fit test. It tests hypotheses about the parameter vector $\theta=(\theta_1, \ldots, \theta_K)$ of a categorical distribution, using the chisquare command in the SciPy stats subpackage. The default null hypothesis is $\theta = \left(\frac{1}{K}, \ldots, \frac{1}{K} \right)$.
scipy.stats.chisquare
http
Step4: The p-value is high at 17.8%, so we cannot reject the null hypothesis; therefore $\theta_0=(0.25, 0.25, 0.25, 0.25)$.
Let's run the chi-square test with $N=100$ data points and a true parameter of $\theta_0=(0.35, 0.30, 0.20, 0.15)$.
Step5: The p-value is 0.087%, so we can reject the null hypothesis; therefore $\theta \neq (0.25, 0.25, 0.25, 0.25)$.
One-sample z-test
The one-sample z-test tests the expected value of a sample from a normal distribution whose variance $\sigma^2$ is known exactly. SciPy has no dedicated function for it, so it has to be implemented directly using the cdf (or sf) method of the norm command.
scipy.stats.norm
http
Step6: The p-value is 1.96%, so if the significance level were 5% or higher the null hypothesis would be rejected; therefore $\mu \neq 0$. In this case the test result is wrong, because the sample of only 10 data points is too small.
Among the types of error, rejecting a null hypothesis that is actually true is called a Type 1 Error.
Let's run the one-sample z-test with $N=100$ data points and a true parameter of $\mu_0=0$.
Step7: The p-value is 54.98%, so we cannot reject the null hypothesis; therefore $\mu = 0$.
One-sample t-test
The one-sample t-test tests the expected value of a sample from a normal distribution, using the ttest_1samp command in SciPy's stats subpackage. ttest_1samp has no default parameter, so the popmean argument must be specified explicitly.
scipy.stats.ttest_1samp
http
Step8: The p-value is 4.78%, so if the significance level were 5% or higher the null hypothesis would be rejected; therefore $\mu \neq 0$. In this case the test result is wrong, because the sample of only 10 data points is too small.
Let's run the one-sample t-test with $N=100$ data points and a true parameter of $\mu_0=0$.
Step9: The p-value is 55.62%, so we cannot reject the null hypothesis; therefore $\mu = 0$.
Independent two-sample t-test
The independent two-sample t-test is often simply called the two-sample t-test. It uses two datasets drawn from two independent normal distributions to test whether the two distributions have the same expected value, using the ttest_ind command in the SciPy stats subpackage. Because the test statistic differs depending on whether the two normal distributions have the same variance, this must be specified with the equal_var argument.
scipy.stats.ttest_ind
http
Step10: The p-value is 68.4%, so we cannot reject the null hypothesis even though in fact $\mu_1 \neq \mu_2$; this is an example of an erroneous test result.
Among the types of error, failing to reject a null hypothesis that is actually false is called a Type 2 Error.
As the number of data points increases, the chance of this kind of error decreases.
Step11: After increasing the sample sizes to 50 and 100, the p-value drops to 0.8%, so the null hypothesis that the two distributions have the same expected value can be rejected.
Paired two-sample t-test
The paired two-sample t-test is a modification of the independent two-sample t-test for the case where the samples of the two groups correspond one-to-one. Like the independent t-test, it checks whether two normal distributions have the same expected value.
For example, if the students of a class take an exam before and after attending a special lecture, the two scores of the same student can be paired. Knowing this pairing removes the effect of between-sample differences that can arise in an ordinary independent t-test, so the effect of the lecture can be estimated more precisely.
scipy.stats.ttest_rel
http
Step12: Even with only 5 data points, the test detects that the two means differ, with a p-value of 0.2%.
Chi-Square Test for the Variance
So far we have looked at tests that compare the expected value of a normal distribution. Now let's look at its variance.
The chi-square test for the variance uses the fact that the suitably normalized sample variance of a normal sample follows a chi-square distribution.
SciPy has no dedicated command for this test, so it has to be implemented directly with the chi2 class.
Step13: Equal-variance test
The equal-variance test checks whether the two normal distributions from which two datasets were generated have the same variance parameter. The most basic approach uses the F distribution, but in practice the better-performing bartlett, fligner, and levene methods are usually used. The SciPy stats subpackage provides the bartlett, fligner, and levene commands for this.
scipy.stats.bartlett
http
Step14: Normality test
In regression analysis and elsewhere it is important to check whether a probability distribution follows a Gaussian normal distribution. Such a test is called a normality test. Because of its importance, many normality test methods have been developed; besides SciPy, the statsmodels package also provides a variety of normality test commands.
Normality tests provided by statsmodels
Omnibus Normality test
statsmodels.stats.stattools.omni_normtest
http | Python Code:
N = 10
theta_0 = 0.5
np.random.seed(0)
x = sp.stats.bernoulli(theta_0).rvs(N)
n = np.count_nonzero(x)
n
sp.stats.binom_test(n, N)
Explanation: Basic statistical tests with SciPy
The SciPy Python package provides a variety of test commands, including:
Binomial test
Chi-square test
One-sample z-test
One-sample t-test
Independent two-sample t-test
Paired two-sample t-test
Chi-squared variance test
Equal-variance test
Normality test
Binomial test
The binomial test uses the binomial distribution to test hypotheses about the parameter $\theta$ of a Bernoulli distribution. It uses the binom_test command in the SciPy stats subpackage. The default null hypothesis is $\theta = 0.5$.
scipy.stats.binom_test
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom_test.html
Let's run the binomial test with $N=10$ data points and a true parameter of $\theta_0=0.5$.
End of explanation
N = 100
theta_0 = 0.5
np.random.seed(0)
x = sp.stats.bernoulli(theta_0).rvs(N)
n = np.count_nonzero(x)
n
sp.stats.binom_test(n, N)
Explanation: The p-value is high at 34%, so we cannot reject the null hypothesis; therefore $\theta=0.5$.
Let's run the binomial test with $N=100$ data points and a true parameter of $\theta_0=0.5$.
End of explanation
N = 100
theta_0 = 0.35
np.random.seed(0)
x = sp.stats.bernoulli(theta_0).rvs(N)
n = np.count_nonzero(x)
n
sp.stats.binom_test(n, N)
Explanation: The p-value is high at 92%, so we cannot reject the null hypothesis; therefore $\theta=0.5$.
Let's run the binomial test with $N=100$ data points and a true parameter of $\theta_0=0.35$.
End of explanation
N = 10
K = 4
theta_0 = np.ones(K)/K
np.random.seed(0)
x = np.random.choice(K, N, p=theta_0)
n = np.bincount(x, minlength=K)
n
sp.stats.chisquare(n)
Explanation: The p-value is low at 0.018%, so we can reject the null hypothesis; therefore $\theta \neq 0.5$.
Chi-square test
The chi-square test is also called a goodness-of-fit test. It tests hypotheses about the parameter vector $\theta=(\theta_1, \ldots, \theta_K)$ of a categorical distribution, using the chisquare command in the SciPy stats subpackage. The default null hypothesis is $\theta = \left(\frac{1}{K}, \ldots, \frac{1}{K} \right)$.
scipy.stats.chisquare
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chisquare.html
Let's run the chi-square test with $N=10$ data points and a true parameter of $\theta_0=(0.25, 0.25, 0.25, 0.25)$.
End of explanation
N = 100
K = 4
theta_0 = np.array([0.35, 0.30, 0.20, 0.15])
np.random.seed(0)
x = np.random.choice(K, N, p=theta_0)
n = np.bincount(x, minlength=K)
n
sp.stats.chisquare(n)
Explanation: The p-value is high at 17.8%, so we cannot reject the null hypothesis; therefore $\theta_0=(0.25, 0.25, 0.25, 0.25)$.
Let's run the chi-square test with $N=100$ data points and a true parameter of $\theta_0=(0.35, 0.30, 0.20, 0.15)$.
End of explanation
N = 10
mu_0 = 0
np.random.seed(0)
x = sp.stats.norm(mu_0).rvs(N)
x
def ztest_1samp(x, sigma2=1, mu=0):
z = (x.mean() - mu)/ np.sqrt(sigma2/len(x))
return z, 2 * sp.stats.norm().sf(np.abs(z))
ztest_1samp(x)
Explanation: The p-value is 0.087%, so we can reject the null hypothesis; therefore $\theta \neq (0.25, 0.25, 0.25, 0.25)$.
One-sample z-test
The one-sample z-test tests the expected value of a sample from a normal distribution whose variance $\sigma^2$ is known exactly. SciPy has no dedicated function for it, so it has to be implemented directly using the cdf (or sf) method of the norm command.
scipy.stats.norm
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html
Let's run the one-sample z-test with $N=10$ data points and a true parameter of $\mu_0=0$.
End of explanation
N = 100
mu_0 = 0
np.random.seed(0)
x = sp.stats.norm(mu_0).rvs(N)
ztest_1samp(x)
Explanation: The p-value is 1.96%, so if the significance level were 5% or higher the null hypothesis would be rejected; therefore $\mu \neq 0$. In this case the test result is wrong, because the sample of only 10 data points is too small.
Among the types of error, rejecting a null hypothesis that is actually true is called a Type 1 Error.
Let's run the one-sample z-test with $N=100$ data points and a true parameter of $\mu_0=0$.
End of explanation
N = 10
mu_0 = 0
np.random.seed(0)
x = sp.stats.norm(mu_0).rvs(N)
sns.distplot(x, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_1samp(x, popmean=0)
Explanation: The p-value is 54.98%, so we cannot reject the null hypothesis; therefore $\mu = 0$.
One-sample t-test
The one-sample t-test tests the expected value of a sample from a normal distribution, using the ttest_1samp command in SciPy's stats subpackage. ttest_1samp has no default parameter, so the popmean argument must be specified explicitly.
scipy.stats.ttest_1samp
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_1samp.html
Let's run the one-sample t-test with $N=10$ data points and a true parameter of $\mu_0=0$.
End of explanation
N = 100
mu_0 = 0
np.random.seed(0)
x = sp.stats.norm(mu_0).rvs(N)
sns.distplot(x, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_1samp(x, popmean=0)
Explanation: The p-value is 4.78%, so if the significance level were 5% or higher the null hypothesis would be rejected; therefore $\mu \neq 0$. In this case the test result is wrong, because the sample of only 10 data points is too small.
Let's run the one-sample t-test with $N=100$ data points and a true parameter of $\mu_0=0$.
End of explanation
N_1 = 10; mu_1 = 0; sigma_1 = 1
N_2 = 10; mu_2 = 0.5; sigma_2 = 1
np.random.seed(0)
x1 = sp.stats.norm(mu_1, sigma_1).rvs(N_1)
x2 = sp.stats.norm(mu_2, sigma_2).rvs(N_2)
sns.distplot(x1, kde=False, fit=sp.stats.norm)
sns.distplot(x2, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_ind(x1, x2, equal_var=True)
Explanation: The p-value is 55.62%, so we cannot reject the null hypothesis; therefore $\mu = 0$.
Independent two-sample t-test
The independent two-sample t-test is often simply called the two-sample t-test. It uses two datasets drawn from two independent normal distributions to test whether the two distributions have the same expected value, using the ttest_ind command in the SciPy stats subpackage. Because the test statistic differs depending on whether the two normal distributions have the same variance, this must be specified with the equal_var argument.
scipy.stats.ttest_ind
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html
Let's run the case where the two normal distributions have different expected values $\mu_1 = 0$ and $\mu_2 = 0.5$, the same standard deviation $\sigma_1 = \sigma_2 = 1$, and sample sizes $N_1=N_2=10$.
End of explanation
N_1 = 50; mu_1 = 0; sigma_1 = 1
N_2 = 100; mu_2 = 0.5; sigma_2 = 1
np.random.seed(0)
x1 = sp.stats.norm(mu_1, sigma_1).rvs(N_1)
x2 = sp.stats.norm(mu_2, sigma_2).rvs(N_2)
sns.distplot(x1, kde=False, fit=sp.stats.norm)
sns.distplot(x2, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_ind(x1, x2, equal_var=True)
Explanation: The p-value is 68.4%, so we cannot reject the null hypothesis even though in fact $\mu_1 \neq \mu_2$; this is an example of an erroneous test result.
Among the types of error, failing to reject a null hypothesis that is actually false is called a Type 2 Error.
As the number of data points increases, the chance of this kind of error decreases.
End of explanation
N = 5
mu_1 = 0
mu_2 = 0.5
np.random.seed(1)
x1 = sp.stats.norm(mu_1).rvs(N)
x2 = x1 + sp.stats.norm(mu_2, 0.1).rvs(N)
sns.distplot(x1, kde=False, fit=sp.stats.norm)
sns.distplot(x2, kde=False, fit=sp.stats.norm)
plt.show()
sp.stats.ttest_rel(x1, x2)
Explanation: After increasing the sample sizes to 50 and 100, the p-value drops to 0.8%, so the null hypothesis that the two distributions have the same expected value can be rejected.
Paired two-sample t-test
The paired two-sample t-test is a modification of the independent two-sample t-test for the case where the samples of the two groups correspond one-to-one. Like the independent t-test, it checks whether two normal distributions have the same expected value.
For example, if the students of a class take an exam before and after attending a special lecture, the two scores of the same student can be paired. Knowing this pairing removes the effect of between-sample differences that can arise in an ordinary independent t-test, so the effect of the lecture can be estimated more precisely.
scipy.stats.ttest_rel
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_rel.html
Let's run the paired t-test for the case where the means differ, with $\mu_1 = 0$ and $\mu_2 = 0.5$ and $N=5$ data points.
End of explanation
def chi2var_test(x, sigma2=1):
v = x.var(ddof=1)
t = (len(x) - 1)*v/sigma2
return t, sp.stats.chi2(df=len(x)-1).sf(np.abs(t))
N = 10
mu_0 = 0
sigma_0 = 1.1
np.random.seed(0)
x = sp.stats.norm(mu_0, sigma_0).rvs(N)
sns.distplot(x, kde=False, fit=sp.stats.norm)
plt.show()
x.std()
chi2var_test(x)
Explanation: Even with only 5 data points, the test detects that the two means differ, with a p-value of 0.2%.
Chi-Square Test for the Variance
So far we have looked at tests that compare the expected value of a normal distribution. Now let's look at its variance.
The chi-square test for the variance uses the fact that the suitably normalized sample variance of a normal sample follows a chi-square distribution.
SciPy has no dedicated command for this test, so it has to be implemented directly with the chi2 class.
End of explanation
N1 = 100
N2 = 100
sigma_1 = 1
sigma_2 = 1.2
np.random.seed(0)
x1 = sp.stats.norm(0, sigma_1).rvs(N1)
x2 = sp.stats.norm(0, sigma_2).rvs(N2)
sns.distplot(x1, kde=False, fit=sp.stats.norm)
sns.distplot(x2, kde=False, fit=sp.stats.norm)
plt.show()
x1.std(), x2.std()
sp.stats.bartlett(x1, x2)
sp.stats.fligner(x1, x2)
sp.stats.levene(x1, x2)
Explanation: Equal-variance test
The equal-variance test checks whether the two normal distributions from which two datasets were generated have the same variance parameter. The most basic approach uses the F distribution, but in practice the better-performing bartlett, fligner, and levene methods are usually used. The SciPy stats subpackage provides the bartlett, fligner, and levene commands for this.
scipy.stats.bartlett
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bartlett.html
scipy.stats.fligner
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fligner.html
scipy.stats.levene
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.levene.html
End of explanation
np.random.seed(0)
N1 = 50
N2 = 100
x1 = sp.stats.norm(0, 1).rvs(N1)
x2 = sp.stats.norm(0.5, 1.5).rvs(N2)
sns.distplot(x1)
sns.distplot(x2)
plt.show()
sp.stats.ks_2samp(x1, x2)
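# Other normality tests are available directly in scipy.stats and apply to a single sample,
# e.g. the Shapiro-Wilk test and D'Agostino's K-squared test (each returns a statistic and a p-value):
sp.stats.shapiro(x1)
sp.stats.normaltest(x1)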
Explanation: Normality test
In regression analysis and elsewhere it is important to check whether a probability distribution follows a Gaussian normal distribution. Such a test is called a normality test. Because of its importance, many normality test methods have been developed; besides SciPy, the statsmodels package also provides a variety of normality test commands.
Normality tests provided by statsmodels
Omnibus Normality test
statsmodels.stats.stattools.omni_normtest
http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.stattools.omni_normtest.html
Jarque–Bera test
statsmodels.stats.stattools.jarque_bera
http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.stattools.jarque_bera.html
Kolmogorov-Smirnov test
statsmodels.stats.diagnostic.kstest_normal
http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.diagnostic.kstest_normal.html
Lilliefors test
statsmodels.stats.diagnostic.lillifors
http://statsmodels.sourceforge.net/devel/generated/statsmodels.stats.diagnostic.lillifors.html
Normality tests provided by SciPy
Kolmogorov-Smirnov test
scipy.stats.ks_2samp
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html
Shapiro–Wilk test
scipy.stats.shapiro
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.shapiro.html
Anderson–Darling test
scipy.stats.anderson
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.anderson.html
D'Agostino's K-squared test
scipy.stats.mstats.normaltest
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.normaltest.html
Among these, the Kolmogorov-Smirnov test is in fact not limited to the normal distribution; it can check whether two samples follow the same distribution.
End of explanation |
9,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
To begin with, cobrapy comes with bundled models for Salmonella and E. coli, as well as a "textbook" model of E. coli core metabolism. To load a test model, type
Step1: The reactions, metabolites, and genes attributes of the cobrapy model are a special type of list called a DictList, and each one is made up of Reaction, Metabolite and Gene objects respectively.
Step2: Just like a regular list, objects in the DictList can be retrived by index. For example, to get the 30th reaction in the model (at index 29 because of 0-indexing)
Step3: Addictionally, items can be retrived by their id using the get_by_id() function. For example, to get the cytosolic atp metabolite object (the id is "atp_c"), we can do the following
Step4: As an added bonus, users with an interactive shell such as IPython will be able to tab-complete to list elements inside a list. While this is not recommended behavior for most code because of the possibility for characters like "-" inside ids, this is very useful while in an interactive prompt
Step5: Reactions
We will consider the reaction glucose 6-phosphate isomerase, which interconverts glucose 6-phosphate and fructose 6-phosphate. The reaction id for this reaction in our test model is PGI.
Step6: We can view the full name and reaction catalyzed as strings
Step7: We can also view reaction upper and lower bounds. Because the pgi.lower_bound < 0, and pgi.upper_bound > 0, pgi is reversible
Step8: We can also ensure the reaction is mass balanced. This function will return elements which violate mass balance. If it comes back empty, then the reaction is mass balanced.
Step9: In order to add a metabolite, we pass in a dict with the metabolite object and its coefficient
Step10: The reaction is no longer mass balanced
Step11: We can remove the metabolite, and the reaction will be balanced once again.
Step12: It is also possible to build the reaction from a string. However, care must be taken when doing this to ensure reaction id's match those in the model. The direction of the arrow is also used to update the upper and lower bounds.
Step13: Metabolites
We will consider cytosolic atp as our metabolite, which has the id atp_c in our test model.
Step14: We can print out the metabolite name and compartment (cytosol in this case).
Step15: We can see that ATP is a charged molecule in our model.
Step16: We can see the chemical formula for the metabolite as well.
Step17: The reactions attribute gives a frozenset of all reactions using the given metabolite. We can use this to count the number of reactions which use atp.
Step18: A metabolite like glucose 6-phosphate will participate in fewer reactions.
Step19: Genes
The gene_reaction_rule is a boolean representation of the gene requirements for this reaction to be active as described in Schellenberger et al 2011 Nature Protocols 6(9)
Step20: Corresponding gene objects also exist. These objects are tracked by the reactions itself, as well as by the model
Step21: Each gene keeps track of the reactions it catalyzes
Step22: Altering the gene_reaction_rule will create new gene objects if necessary and update all relationships.
Step23: Newly created genes are also added to the model
Step24: The delete_model_genes function will evaluate the gpr and set the upper and lower bounds to 0 if the reaction is knocked out. This function can preserve existing deletions or reset them using the cumulative_deletions flag.
Step25: The undelete_model_genes can be used to reset a gene deletion | Python Code:
from __future__ import print_function
import cobra.test
# "ecoli" and "salmonella" are also valid arguments
model = cobra.test.create_test_model("textbook")
Explanation: Getting Started
To begin with, cobrapy comes with bundled models for Salmonella and E. coli, as well as a "textbook" model of E. coli core metabolism. To load a test model, type
End of explanation
print(len(model.reactions))
print(len(model.metabolites))
print(len(model.genes))
Explanation: The reactions, metabolites, and genes attributes of the cobrapy model are a special type of list called a DictList, and each one is made up of Reaction, Metabolite and Gene objects respectively.
End of explanation
model.reactions[29]
Explanation: Just like a regular list, objects in the DictList can be retrived by index. For example, to get the 30th reaction in the model (at index 29 because of 0-indexing):
End of explanation
model.metabolites.get_by_id("atp_c")
Explanation: Addictionally, items can be retrived by their id using the get_by_id() function. For example, to get the cytosolic atp metabolite object (the id is "atp_c"), we can do the following:
End of explanation
model.reactions.EX_glc__D_e.lower_bound
Explanation: As an added bonus, users with an interactive shell such as IPython will be able to tab-complete to list elements inside a list. While this is not recommended behavior for most code because of the possibility for characters like "-" inside ids, this is very useful while in an interactive prompt:
End of explanation
pgi = model.reactions.get_by_id("PGI")
pgi
Explanation: Reactions
We will consider the reaction glucose 6-phosphate isomerase, which interconverts glucose 6-phosphate and fructose 6-phosphate. The reaction id for this reaction in our test model is PGI.
End of explanation
print(pgi.name)
print(pgi.reaction)
Explanation: We can view the full name and reaction catalyzed as strings
End of explanation
print(pgi.lower_bound, "< pgi <", pgi.upper_bound)
print(pgi.reversibility)
Explanation: We can also view reaction upper and lower bounds. Because the pgi.lower_bound < 0, and pgi.upper_bound > 0, pgi is reversible
End of explanation
pgi.check_mass_balance()
Explanation: We can also ensure the reaction is mass balanced. This function will return elements which violate mass balance. If it comes back empty, then the reaction is mass balanced.
End of explanation
pgi.add_metabolites({model.metabolites.get_by_id("h_c"): -1})
pgi.reaction
Explanation: In order to add a metabolite, we pass in a dict with the metabolite object and its coefficient
End of explanation
pgi.check_mass_balance()
Explanation: The reaction is no longer mass balanced
End of explanation
pgi.pop(model.metabolites.get_by_id("h_c"))
print(pgi.reaction)
print(pgi.check_mass_balance())
Explanation: We can remove the metabolite, and the reaction will be balanced once again.
End of explanation
pgi.reaction = "g6p_c --> f6p_c + h_c + green_eggs + ham"
pgi.reaction
pgi.reaction = "g6p_c <=> f6p_c"
pgi.reaction
Explanation: It is also possible to build the reaction from a string. However, care must be taken when doing this to ensure reaction id's match those in the model. The direction of the arrow is also used to update the upper and lower bounds.
End of explanation
atp = model.metabolites.get_by_id("atp_c")
atp
Explanation: Metabolites
We will consider cytosolic atp as our metabolite, which has the id atp_c in our test model.
End of explanation
print(atp.name)
print(atp.compartment)
Explanation: We can print out the metabolite name and compartment (cytosol in this case).
End of explanation
atp.charge
Explanation: We can see that ATP is a charged molecule in our model.
End of explanation
print(atp.formula)
Explanation: We can see the chemical formula for the metabolite as well.
End of explanation
len(atp.reactions)
Explanation: The reactions attribute gives a frozenset of all reactions using the given metabolite. We can use this to count the number of reactions which use atp.
End of explanation
model.metabolites.get_by_id("g6p_c").reactions
Explanation: A metabolite like glucose 6-phosphate will participate in fewer reactions.
End of explanation
gpr = pgi.gene_reaction_rule
gpr
Explanation: Genes
The gene_reaction_rule is a boolean representation of the gene requirements for this reaction to be active as described in Schellenberger et al 2011 Nature Protocols 6(9):1290-307.
The GPR is stored as the gene_reaction_rule for a Reaction object as a string.
End of explanation
pgi.genes
pgi_gene = model.genes.get_by_id("b4025")
pgi_gene
Explanation: Corresponding gene objects also exist. These objects are tracked by the reactions itself, as well as by the model
End of explanation
pgi_gene.reactions
Explanation: Each gene keeps track of the reactions it catalyzes
End of explanation
pgi.gene_reaction_rule = "(spam or eggs)"
pgi.genes
pgi_gene.reactions
Explanation: Altering the gene_reaction_rule will create new gene objects if necessary and update all relationships.
End of explanation
model.genes.get_by_id("spam")
Explanation: Newly created genes are also added to the model
End of explanation
len("%3d"% 0)
cobra.manipulation.delete_model_genes(model, ["spam"],
cumulative_deletions=True)
print("after 1 KO: %4d < flux_PGI < %4d" %
(pgi.lower_bound, pgi.upper_bound))
cobra.manipulation.delete_model_genes(model, ["eggs"],
cumulative_deletions=True)
print("after 2 KO: %4d < flux_PGI < %4d" %
(pgi.lower_bound, pgi.upper_bound))
Explanation: The delete_model_genes function will evaluate the gpr and set the upper and lower bounds to 0 if the reaction is knocked out. This function can preserve existing deletions or reset them using the cumulative_deletions flag.
End of explanation
cobra.manipulation.undelete_model_genes(model)
print(pgi.lower_bound, "< pgi <", pgi.upper_bound)
Explanation: The undelete_model_genes can be used to reset a gene deletion
End of explanation |
9,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_sentences = source_text.split('\n')
target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')]
source_ids = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_sentences]
target_ids = [[target_vocab_to_int[word] for word in sentence.split()] for sentence in target_sentences]
return (source_ids, target_ids)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
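# Quick illustration of text_to_ids on a toy vocabulary (made-up ids, not the project data):
toy_src_vocab = {'hello': 0, 'world': 1}
toy_tgt_vocab = {'bonjour': 0, 'monde': 1, '<EOS>': 2}
text_to_ids('hello world', 'bonjour monde', toy_src_vocab, toy_tgt_vocab)
# -> ([[0, 1]], [[0, 1, 2]])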
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
processed_target = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return processed_target
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
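For intuition, here is the same slice-and-prepend on a toy NumPy batch (the ids are made up for illustration; 1 stands for the <EOS> id and 2 for the <GO> id):
import numpy as np
batch = np.array([[11, 12, 13, 1]])
ending = batch[:, :-1]                                  # drop the last id of each row
decoder_input = np.concatenate(
    [np.full((batch.shape[0], 1), 2), ending], axis=1)  # prepend the <GO> id
# decoder_input -> [[ 2, 11, 12, 13]]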
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, state_is_tuple=True)
# Dropout
drop_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob)
# Encoder
enc_cell = tf.contrib.rnn.MultiRNNCell([drop_cell] * num_layers, state_is_tuple=True)
_, rnn_state = tf.nn.dynamic_rnn(cell = enc_cell, inputs = rnn_inputs, dtype=tf.float32)
return rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
# Inference Decoder
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
#Output Function
output_fn= lambda x: tf.contrib.layers.fully_connected(x,vocab_size,None,scope=decoding_scope)
#Train Logits
train_logits=decoding_layer_train(
encoder_state,
dec_cell,
dec_embed_input,
sequence_length,
decoding_scope,output_fn, keep_prob)
decoding_scope.reuse_variables()
#Infer Logits
infer_logits=decoding_layer_infer(
encoder_state,
dec_cell,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
sequence_length-1,
vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
#Apply embedding to the input data for the encoder.
enc_input = tf.contrib.layers.embed_sequence(
input_data,
source_vocab_size,
enc_embedding_size
)
#Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
enc_layer = encoding_layer(
enc_input,
rnn_size,
num_layers,
keep_prob
)
#Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
dec_input = process_decoding_input(
target_data,
target_vocab_to_int,
batch_size
)
#Apply embedding to the target data for the decoder.
dec_embed = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
embed_target = tf.nn.embedding_lookup(dec_embed, dec_input)
#Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
train_logits, inf_logits = decoding_layer(
embed_target,
dec_embed,
enc_layer,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob
)
return train_logits, inf_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Number of Layers
num_layers = None
# Embedding Size
encoding_embedding_size = None
decoding_embedding_size = None
# Learning Rate
learning_rate = None
# Dropout Keep Probability
keep_probability = None
#Number of Epochs
epochs = 5
#Batch Size
batch_size = 256
#RNN Size
rnn_size = 512 #25
#Number of Layers
num_layers = 2
#Embedding Size
encoding_embedding_size = 256 #13
decoding_embedding_size = 256 #13
#Learning Rate
learning_rate = 0.01
#Dropout Keep Probability
keep_probability = 0.5
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
input_sentence = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
return input_sentence
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
9,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC. Licensed under the Apache License, Version 2.0 (the "License");
Open Buildings - spatial analysis examples
This notebook demonstrates some analysis methods with Open Buildings data
Step3: Download buildings data for a region in Africa [takes up to 15 minutes for large countries]
Step4: Visualise the data
First we convert the CSV file into a GeoDataFrame. The CSV files can be quite large because they include the polygon outline of every building. For this example we only need longitude and latitude, so we only process those columns to save memory.
Step5: For some countries there can be tens of millions of buildings, so we also take a random sample for doing plots.
Step6: Prepare the data for mapping building statistics
Set up a grid, which we will use to calculate statistics about buildings.
We also want to select the examples most likely to be buildings, using a threshold on the confidence score.
Step7: To calculate statistics, we need a function to convert between (longitude, latitude) coordinates in the world and (x, y) coordinates in the grid.
Step8: Now we can count how many buildings there are on each cell of the grid.
Step9: Plot the counts of buildings
Knowing the counts of buildings is useful for example in planning service delivery, estimating population or designing census enumeration areas.
Step10: [optional] Export a GeoTIFF file
This can be useful to carry out further analysis with software such as QGIS.
Step11: Generate a map of building sizes
Knowing average building sizes is useful too -- it is linked, for example, to how much economic activity there is in each area.
Step12: Health facility accessibility
We can combine different types of geospatial data to get various insights. If we have information on the locations of clinics and hospitals across Ghana, for example, then one interesting analysis is how accessible health services are in different places.
In this example, we'll look at the average distance to the nearest health facility.
We use this data made available by Global Healthsites Mapping Project.
Step13: We drop all columns not relevant to the computation of mean distance from health facilities. We also exclude all rows with empty or NaN values, select amenities captured as hospitals in the new geodata and choose values within the range of our area of interest.
Step14: Have a look at the locations of health facilities compared to the locations of buildings.
Note
Step15: Next we calculate, for each building, the distance to the nearest health facility. We use the sample of the buildings data that we took earlier, so that the computations don't take too long.
Step16: That has computed the distance in degrees (longitude and latitude), which is not very intuitive. We can convert this approximately to kilometers by multiplying with the distance spanned by one degree at the equator.
Step17: Now we can find the mean distance to the nearest health facility by administrative area. First, we load data on the shapes of administrative areas.
We use this data made available by OCHA ROWCA - United Nations Office for the Coordination of Humanitarian Affairs for West and Central Africa.
Step18: Next, find the average distance to the nearest health facility within each area. | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
Explanation: Copyright 2021 Google LLC. Licensed under the Apache License, Version 2.0 (the "License");
Open Buildings - spatial analysis examples
This notebook demonstrates some analysis methods with Open Buildings data:
Generating heatmaps of building density and size.
A simple analysis of accessibility to health facilities.
End of explanation
#@markdown Select a region from either the [Natural Earth low res](https://www.naturalearthdata.com/downloads/110m-cultural-vectors/110m-admin-0-countries/) (fastest), [Natural Earth high res](https://www.naturalearthdata.com/downloads/10m-cultural-vectors/10m-admin-0-countries/) or [World Bank high res](https://datacatalog.worldbank.org/dataset/world-bank-official-boundaries) shapefiles:
region_border_source = 'Natural Earth (Low Res 110m)' #@param ["Natural Earth (Low Res 110m)", "Natural Earth (High Res 10m)", "World Bank (High Res 10m)"]
region = 'GHA (Ghana)' #@param ["", "AGO (Angola)", "BDI (Burundi)", "BEN (Benin)", "BFA (Burkina Faso)", "BWA (Botswana)", "CAF (Central African Republic)", "CIV (Côte d'Ivoire)", "COD (Democratic Republic of the Congo)", "COG (Republic of the Congo)", "DJI (Djibouti)", "DZA (Algeria)", "EGY (Egypt)", "ERI (Eritrea)", "ETH (Ethiopia)", "GAB (Gabon)", "GHA (Ghana)", "GIN (Guinea)", "GMB (The Gambia)", "GNB (Guinea-Bissau)", "GNQ (Equatorial Guinea)", "KEN (Kenya)", "LBR (Liberia)", "LSO (Lesotho)", "MDG (Madagascar)", "MOZ (Mozambique)", "MRT (Mauritania)", "MWI (Malawi)", "NAM (Namibia)", "NER (Niger)", "NGA (Nigeria)", "RWA (Rwanda)", "SDN (Sudan)", "SEN (Senegal)", "SLE (Sierra Leone)", "SOM (Somalia)", "SWZ (eSwatini)", "TGO (Togo)", "TUN (Tunisia)", "TZA (Tanzania)", "UGA (Uganda)", "ZAF (South Africa)", "ZMB (Zambia)", "ZWE (Zimbabwe)"]
# @markdown Alternatively, specify an area of interest in [WKT format](https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry) (assumes crs='EPSG:4326'); this [tool](https://arthur-e.github.io/Wicket/sandbox-gmaps3.html) might be useful.
your_own_wkt_polygon = '' #@param {type:"string"}
!pip install s2geometry pygeos geopandas
import functools
import glob
import gzip
import multiprocessing
import os
import shutil
import tempfile
from typing import List, Optional, Tuple
import gdal
import geopandas as gpd
from google.colab import files
from IPython import display
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import s2geometry as s2
import shapely
import tensorflow as tf
import tqdm.notebook
BUILDING_DOWNLOAD_PATH = ('gs://open-buildings-data/v1/'
'polygons_s2_level_6_gzip_no_header')
def get_filename_and_region_dataframe(
region_border_source: str, region: str,
your_own_wkt_polygon: str) -> Tuple[str, gpd.geodataframe.GeoDataFrame]:
Returns output filename and a geopandas dataframe with one region row.
if your_own_wkt_polygon:
filename = 'open_buildings_v1_polygons_your_own_wkt_polygon.csv.gz'
region_df = gpd.GeoDataFrame(
geometry=gpd.GeoSeries.from_wkt([your_own_wkt_polygon]),
crs='EPSG:4326')
if not isinstance(region_df.iloc[0].geometry,
shapely.geometry.polygon.Polygon) and not isinstance(
region_df.iloc[0].geometry,
shapely.geometry.multipolygon.MultiPolygon):
raise ValueError("`your_own_wkt_polygon` must be a POLYGON or "
"MULTIPOLYGON.")
print(f'Preparing your_own_wkt_polygon.')
return filename, region_df
if not region:
raise ValueError('Please select a region or set your_own_wkt_polygon.')
if region_border_source == 'Natural Earth (Low Res 110m)':
url = ('https://www.naturalearthdata.com/http//www.naturalearthdata.com/'
'download/110m/cultural/ne_110m_admin_0_countries.zip')
!wget -N {url}
display.clear_output()
region_shapefile_path = os.path.basename(url)
source_name = 'ne_110m'
elif region_border_source == 'Natural Earth (High Res 10m)':
url = ('https://www.naturalearthdata.com/http//www.naturalearthdata.com/'
'download/10m/cultural/ne_10m_admin_0_countries.zip')
!wget -N {url}
display.clear_output()
region_shapefile_path = os.path.basename(url)
source_name = 'ne_10m'
elif region_border_source == 'World Bank (High Res 10m)':
url = ('https://development-data-hub-s3-public.s3.amazonaws.com/ddhfiles/'
'779551/wb_countries_admin0_10m.zip')
!wget -N {url}
!unzip -o {os.path.basename(url)}
display.clear_output()
region_shapefile_path = 'WB_countries_Admin0_10m'
source_name = 'wb_10m'
region_iso_a3 = region.split(' ')[0]
filename = f'open_buildings_v1_polygons_{source_name}_{region_iso_a3}.csv.gz'
region_df = gpd.read_file(region_shapefile_path).query(
f'ISO_A3 == "{region_iso_a3}"').dissolve(by='ISO_A3')[['geometry']]
print(f'Preparing {region} from {region_border_source}.')
return filename, region_df
def get_bounding_box_s2_covering_tokens(
region_geometry: shapely.geometry.base.BaseGeometry) -> List[str]:
region_bounds = region_geometry.bounds
s2_lat_lng_rect = s2.S2LatLngRect_FromPointPair(
s2.S2LatLng_FromDegrees(region_bounds[1], region_bounds[0]),
s2.S2LatLng_FromDegrees(region_bounds[3], region_bounds[2]))
coverer = s2.S2RegionCoverer()
# NOTE: Should be kept in-sync with s2 level in BUILDING_DOWNLOAD_PATH.
coverer.set_fixed_level(6)
coverer.set_max_cells(1000000)
return [cell.ToToken() for cell in coverer.GetCovering(s2_lat_lng_rect)]
def s2_token_to_shapely_polygon(
s2_token: str) -> shapely.geometry.polygon.Polygon:
s2_cell = s2.S2Cell(s2.S2CellId_FromToken(s2_token, len(s2_token)))
coords = []
for i in range(4):
s2_lat_lng = s2.S2LatLng(s2_cell.GetVertex(i))
coords.append((s2_lat_lng.lng().degrees(), s2_lat_lng.lat().degrees()))
return shapely.geometry.Polygon(coords)
def download_s2_token(
s2_token: str, region_df: gpd.geodataframe.GeoDataFrame) -> Optional[str]:
Downloads the matching CSV file with polygons for the `s2_token`.
NOTE: Only polygons inside the region are kept.
NOTE: Passing output via a temporary file to reduce memory usage.
Args:
s2_token: S2 token for which to download the CSV file with building
polygons. The S2 token should be at the same level as the files in
BUILDING_DOWNLOAD_PATH.
region_df: A geopandas dataframe with only one row that contains the region
for which to keep polygons.
Returns:
Either filepath which contains a gzipped CSV without header for the
`s2_token` subfiltered to only contain building polygons inside the region
or None which means that there were no polygons inside the region for this
`s2_token`.
s2_cell_geometry = s2_token_to_shapely_polygon(s2_token)
region_geometry = region_df.iloc[0].geometry
prepared_region_geometry = shapely.prepared.prep(region_geometry)
# If the s2 cell doesn't intersect the country geometry at all then we can
# know that all rows would be dropped so instead we can just return early.
if not prepared_region_geometry.intersects(s2_cell_geometry):
return None
try:
# Using tf.io.gfile.GFile gives better performance than passing the GCS path
# directly to pd.read_csv.
with tf.io.gfile.GFile(
os.path.join(BUILDING_DOWNLOAD_PATH, f'{s2_token}_buildings.csv.gz'),
'rb') as gf:
# If the s2 cell is fully covered by country geometry then can skip
# filtering as we need all rows.
if prepared_region_geometry.covers(s2_cell_geometry):
with tempfile.NamedTemporaryFile(mode='w+b', delete=False) as tmp_f:
shutil.copyfileobj(gf, tmp_f)
return tmp_f.name
# Else take the slow path.
# NOTE: We read in chunks to save memory.
csv_chunks = pd.read_csv(
gf, chunksize=2000000, dtype=object, compression='gzip', header=None)
tmp_f = tempfile.NamedTemporaryFile(mode='w+b', delete=False)
tmp_f.close()
for csv_chunk in csv_chunks:
points = gpd.GeoDataFrame(
geometry=gpd.points_from_xy(csv_chunk[1], csv_chunk[0]),
crs='EPSG:4326')
# sjoin 'within' was faster than using shapely's 'within' directly.
points = gpd.sjoin(points, region_df, predicate='within')
csv_chunk = csv_chunk.iloc[points.index]
csv_chunk.to_csv(
tmp_f.name,
mode='ab',
index=False,
header=False,
compression={
'method': 'gzip',
'compresslevel': 1
})
return tmp_f.name
except tf.errors.NotFoundError:
return None
# Clear output after pip install.
display.clear_output()
filename, region_df = get_filename_and_region_dataframe(region_border_source,
region,
your_own_wkt_polygon)
# Remove any old outputs to not run out of disk.
for f in glob.glob('/tmp/open_buildings_*'):
os.remove(f)
# Write header to the compressed CSV file.
with gzip.open(f'/tmp/{filename}', 'wt') as merged:
merged.write(','.join([
'latitude', 'longitude', 'area_in_meters', 'confidence', 'geometry',
'full_plus_code'
]) + '\n')
download_s2_token_fn = functools.partial(download_s2_token, region_df=region_df)
s2_tokens = get_bounding_box_s2_covering_tokens(region_df.iloc[0].geometry)
# Downloads CSV files for relevant S2 tokens and after filtering appends them
# to the compressed output CSV file. Relies on the fact that concatenating
# gzipped files produces a valid gzip file.
# NOTE: Uses a pool to speed up output preparation.
with open(f'/tmp/{filename}', 'ab') as merged:
with multiprocessing.Pool(4) as e:
for fname in tqdm.notebook.tqdm(
e.imap_unordered(download_s2_token_fn, s2_tokens),
total=len(s2_tokens)):
if fname:
with open(fname, 'rb') as tmp_f:
shutil.copyfileobj(tmp_f, merged)
os.unlink(fname)
Explanation: Download buildings data for a region in Africa [takes up to 15 minutes for large countries]
End of explanation
buildings = pd.read_csv(
f"/tmp/{filename}", engine="c",
usecols=['latitude', 'longitude', 'area_in_meters', 'confidence'])
print(f"Read {len(buildings):,} records.")
Explanation: Visualise the data
First we convert the CSV file into a GeoDataFrame. The CSV files can be quite large because they include the polygon outline of every building. For this example we only need longitude and latitude, so we only process those columns to save memory.
End of explanation
sample_size = 200000 #@param
buildings_sample = (buildings.sample(sample_size)
if len(buildings) > sample_size else buildings)
plt.plot(buildings_sample.longitude, buildings_sample.latitude, 'k.',
alpha=0.25, markersize=0.5)
plt.gcf().set_size_inches(10, 10)
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.axis('equal');
Explanation: For some countries there can be tens of millions of buildings, so we also take a random sample for doing plots.
End of explanation
max_grid_dimension = 1000 #@param
confidence_threshold = 0.75 #@param
buildings = buildings.query(f"confidence > {confidence_threshold}")
# Create a grid covering the dataset bounds
min_lon = buildings.longitude.min()
max_lon = buildings.longitude.max()
min_lat = buildings.latitude.min()
max_lat = buildings.latitude.max()
grid_density_degrees = (max(max_lon - min_lon, max_lat - min_lat)
/ max_grid_dimension)
bounds = [min_lon, min_lat, max_lon, max_lat]
xcoords = np.arange(min_lon, max_lon, grid_density_degrees)
ycoords = np.arange(max_lat, min_lat, -grid_density_degrees)
xv, yv = np.meshgrid(xcoords, ycoords)
xy = np.stack([xv.ravel(), yv.ravel()]).transpose()
print(f"Calculated grid of size {xv.shape[0]} x {xv.shape[1]}.")
Explanation: Prepare the data for mapping building statistics
Set up a grid, which we will use to calculate statistics about buildings.
We also want to select the examples most likely to be buildings, using a threshold on the confidence score.
End of explanation
geotransform = (min_lon, grid_density_degrees, 0,
max_lat, 0, -grid_density_degrees)
def lonlat_to_xy(lon, lat, geotransform):
x = int((lon - geotransform[0])/geotransform[1])
y = int((lat - geotransform[3])/geotransform[5])
return x,y
Explanation: To calculate statistics, we need a function to convert between (longitude, latitude) coordinates in the world and (x, y) coordinates in the grid.
End of explanation
counts = np.zeros(xv.shape)
area_totals = np.zeros(xv.shape)
for lat, lon, area in tqdm.notebook.tqdm(
zip(buildings.latitude, buildings.longitude, buildings.area_in_meters)):
x, y = lonlat_to_xy(lon, lat, geotransform)
if x >= 0 and y >= 0 and x < len(xcoords) and y < len(ycoords):
counts[y, x] += 1
area_totals[y, x] += area
area_totals[counts == 0] = np.nan
counts[counts == 0] = np.nan
mean_area = area_totals / counts
Explanation: Now we can count how many buildings there are on each cell of the grid.
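A vectorized alternative is sketched below (a sketch, not part of the original notebook): it computes every building's grid cell at once and accumulates with np.add.at, which avoids the per-row Python loop.
xs = ((buildings.longitude.values - geotransform[0]) / geotransform[1]).astype(int)
ys = ((buildings.latitude.values - geotransform[3]) / geotransform[5]).astype(int)
ok = (xs >= 0) & (ys >= 0) & (xs < len(xcoords)) & (ys < len(ycoords))
counts_fast = np.zeros(xv.shape)
np.add.at(counts_fast, (ys[ok], xs[ok]), 1)  # same counting rule as the loop above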
End of explanation
plt.imshow(np.log10(np.nan_to_num(counts) + 1.), cmap="viridis")
plt.gcf().set_size_inches(15, 15)
cbar = plt.colorbar(shrink=0.5)
cbar.ax.set_yticklabels([f'{x:.0f}' for x in 10 ** cbar.ax.get_yticks()])
plt.title("Building counts per grid cell");
Explanation: Plot the counts of buildings
Knowing the counts of buildings is useful for example in planning service delivery, estimating population or designing census enumeration areas.
End of explanation
def save_geotiff(filename, values, geotransform):
driver = gdal.GetDriverByName("GTiff")
dataset = driver.Create(filename, values.shape[1], values.shape[0], 1,
gdal.GDT_Float32)
dataset.SetGeoTransform(geotransform)
band = dataset.GetRasterBand(1)
band.WriteArray(values)
band.SetNoDataValue(-1)
dataset.FlushCache()
filename = "building_counts.tiff"
save_geotiff(filename, counts, geotransform)
files.download(filename)
Explanation: [optional] Export a GeoTIFF file
This can be useful to carry out further analysis with software such as QGIS.
End of explanation
# Only calculate the mean building size for grid locations with at
# least a few buildings, so that we get more reliable averages.
mean_area_filtered = mean_area.copy()
mean_area_filtered[counts < 10] = 0
# Set a maximum value for the colour scale, to make the plot brighter.
plt.imshow(np.nan_to_num(mean_area_filtered), vmax=250, cmap="viridis")
plt.title("Mean building size (m$^2$)")
plt.colorbar(shrink=0.5, extend="max")
plt.gcf().set_size_inches(15, 15)
Explanation: Generate a map of building sizes
Knowing average building sizes is useful too -- it is linked, for example, to how much economic activity there is in each area.
End of explanation
health_sites = pd.read_csv("https://data.humdata.org/dataset/364c5aca-7cd7-4248-b394-335113293c7a/"
"resource/b7e55f34-9e3b-417f-b329-841cff6a9554/download/ghana.csv")
health_sites = gpd.GeoDataFrame(
health_sites, geometry=gpd.points_from_xy(health_sites.X, health_sites.Y))
health_sites.head()
Explanation: Health facility accessibility
We can combine different types of geospatial data to get various insights. If we have information on the locations of clinics and hospitals across Ghana, for example, then one interesting analysis is how accessible health services are in different places.
In this example, we'll look at the average distance to the nearest health facility.
We use this data made available by Global Healthsites Mapping Project.
End of explanation
health_sites = health_sites[['X', 'Y', 'amenity', 'name', 'geometry']]
health_sites.dropna(axis=0, inplace=True)
health_sites = health_sites[health_sites['amenity'].isin(['hospital','clinic','health_post', 'doctors'])]
health_sites = health_sites.query(
f'Y > {min_lat} and Y < {max_lat}'
f'and X > {min_lon} and X < {max_lon}')
health_sites.head()
Explanation: We drop all columns not relevant to the computation of mean distance from health facilities. We also exclude all rows with empty or NaN values, select amenities captured as hospitals in the new geodata and choose values within the range of our area of interest.
End of explanation
plt.plot(buildings_sample.longitude,
buildings_sample.latitude,
'k.', alpha=0.25, markersize=0.5)
plt.plot(health_sites.X, health_sites.Y,
marker='$\\oplus$', color= 'red', alpha = 0.8,
markersize=10, linestyle='None')
plt.gcf().set_size_inches(10, 10)
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.legend(['Building', 'Health center'])
plt.axis('equal');
Explanation: Have a look at the locations of health facilities compared to the locations of buildings.
Note: this data may not be complete.
End of explanation
buildings_sample = gpd.GeoDataFrame(buildings_sample,
geometry=gpd.points_from_xy(buildings_sample.longitude,
buildings_sample.latitude))
buildings_sample["distance_to_nearest_health_facility"] = buildings_sample.geometry.apply(
lambda g: health_sites.distance(g).min())
buildings_sample.head()
Explanation: Next we calculate, for each building, the distance to the nearest health facility. We use the sample of the buildings data that we took earlier, so that the computations don't take too long.
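For larger samples, a KD-tree query is much faster than the pairwise .apply above. A sketch (it assumes both layers stay in plain longitude/latitude degrees, as in this notebook):
from scipy.spatial import cKDTree
tree = cKDTree(health_sites[['X', 'Y']].values)
nearest_deg, _ = tree.query(buildings_sample[['longitude', 'latitude']].values, k=1)
# nearest_deg holds the distance in degrees, like the .apply result above.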
End of explanation
buildings_sample["distance_to_nearest_health_facility"] *= 111.32
Explanation: That has computed the distance in degrees (longitude and latitude), which is not very intuitive. We can convert this approximately to kilometers by multiplying with the distance spanned by one degree at the equator.
End of explanation
!wget https://data.humdata.org/dataset/dc4c17cf-59d9-478c-b2b7-acd889241194/resource/4443ddba-eeaf-4367-9457-7820ea482f7f/download/gha_admbnda_gss_20210308_shp.zip
!unzip gha_admbnda_gss_20210308_shp.zip
display.clear_output()
admin_areas = gpd.read_file("gha_admbnda_gss_20210308_SHP/gha_admbnda_adm2_gss_20210308.shp")
Explanation: Now we can find the mean distance to the nearest health facility by administrative area. First, we load data on the shapes of administrative areas.
We use this data made available by OCHA ROWCA - United Nations Office for the Coordination of Humanitarian Affairs for West and Central Africa.
End of explanation
# Both data frames have the same coordinate system.
buildings_sample.crs = admin_areas.crs
# Spatial join to find out which administrative area every building is in.
points_polys = gpd.sjoin(buildings_sample, admin_areas, how="left")
# Aggregate by admin area to get the average distance to nearest health facility.
stats = points_polys.groupby("index_right")["distance_to_nearest_health_facility"].agg(["mean"])
admin_areas_with_distances = gpd.GeoDataFrame(stats.join(admin_areas))
admin_areas_with_distances.plot(
column="mean", legend=True, legend_kwds={"shrink": 0.5})
plt.title("Average distance to the nearest health facility (km)")
plt.gcf().set_size_inches(15, 15)
Explanation: Next, find the average distance to the nearest health facility within each area.
End of explanation |
9,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EEG source localization given electrode locations on an MRI
This tutorial explains how to compute the forward operator from EEG data
when the electrodes are in MRI voxel coordinates.
Step1: Prerequisites
For this we will assume that you have
Step2: Visualizing the MRI
Let's take our MRI-with-eeg-locations and adjust the affine to put the data
in MNI space, and plot using
Step3: Getting our MRI voxel EEG locations to head (and MRI surface RAS) coords
Let's load our
Step4: We can then get our transformation from the MRI coordinate frame (where our
points are defined) to the head coordinate frame from the object.
Step5: Let's apply this digitization to our dataset, and in the process
automatically convert our locations to the head coordinate frame, as
shown by
Step6: Now we can do standard sensor-space operations like make joint plots of
evoked data.
Step7: Getting a source estimate
Now we have all of the components we need to compute a forward solution,
but first we should sanity check that everything is well aligned
Step8: Now we can actually compute the forward
Step9: Finally let's compute the inverse and apply it | Python Code:
# Authors: Eric Larson <[email protected]>
#
# License: BSD Style.
import os.path as op
import nibabel
from nilearn.plotting import plot_glass_brain
import numpy as np
import mne
from mne.channels import compute_native_head_t, read_custom_montage
from mne.viz import plot_alignment
Explanation: EEG source localization given electrode locations on an MRI
This tutorial explains how to compute the forward operator from EEG data
when the electrodes are in MRI voxel coordinates.
:depth: 2
End of explanation
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_raw = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
bem_dir = op.join(subjects_dir, 'sample', 'bem')
fname_bem = op.join(bem_dir, 'sample-5120-5120-5120-bem-sol.fif')
fname_src = op.join(bem_dir, 'sample-oct-6-src.fif')
misc_path = mne.datasets.misc.data_path()
fname_T1_electrodes = op.join(misc_path, 'sample_eeg_mri', 'T1_electrodes.mgz')
fname_mon = op.join(misc_path, 'sample_eeg_mri', 'sample_mri_montage.elc')
Explanation: Prerequisites
For this we will assume that you have:
raw EEG data
your subject's MRI reconstructed using FreeSurfer
an appropriate boundary element model (BEM)
an appropriate source space (src)
your EEG electrodes in Freesurfer surface RAS coordinates, stored
in one of the formats :func:mne.channels.read_custom_montage supports
Let's set the paths to these files for the sample dataset, including
a modified sample MRI showing the electrode locations plus a .elc
file corresponding to the points in MRI coords (these were synthesized
<https://gist.github.com/larsoner/0ac6fad57e31cb2d9caa77350a9ff366>__,
and thus are stored as part of the misc dataset).
End of explanation
img = nibabel.load(fname_T1_electrodes) # original subject MRI w/EEG
ras_mni_t = mne.transforms.read_ras_mni_t('sample', subjects_dir) # from FS
mni_affine = np.dot(ras_mni_t['trans'], img.affine) # vox->ras->MNI
img_mni = nibabel.Nifti1Image(img.dataobj, mni_affine) # now in MNI coords!
plot_glass_brain(img_mni, cmap='hot_black_bone', threshold=0., black_bg=True,
resampling_interpolation='nearest', colorbar=True)
Explanation: Visualizing the MRI
Let's take our MRI-with-eeg-locations and adjust the affine to put the data
in MNI space, and plot using :func:nilearn.plotting.plot_glass_brain,
which does a maximum intensity projection (easy to see the fake electrodes).
This plotting function requires data to be in MNI space.
Because img.affine gives the voxel-to-world (RAS) mapping, if we apply a
RAS-to-MNI transform to it, it becomes the voxel-to-MNI transformation we
need. Thus we create a "new" MRI image in MNI coordinates and plot it as:
End of explanation
dig_montage = read_custom_montage(fname_mon, head_size=None, coord_frame='mri')
dig_montage.plot()
Explanation: Getting our MRI voxel EEG locations to head (and MRI surface RAS) coords
Let's load our :class:~mne.channels.DigMontage using
:func:mne.channels.read_custom_montage, making note of the fact that
we stored our locations in Freesurfer surface RAS (MRI) coordinates.
.. collapse:: |question| What if my electrodes are in MRI voxels?
:class: info
If you have voxel coordinates in MRI voxels, you can transform these to
FreeSurfer surface RAS (called "mri" in MNE) coordinates using the
transformations that FreeSurfer computes during reconstruction.
``nibabel`` calls this transformation the ``vox2ras_tkr`` transform
and operates in millimeters, so we can load it, convert it to meters,
and then apply it::
>>> pos_vox = ... # loaded from a file somehow
>>> img = nibabel.load(fname_T1)
>>> vox2mri_t = img.header.get_vox2ras_tkr() # voxel -> mri trans
>>> pos_mri = mne.transforms.apply_trans(vox2mri_t, pos_vox)
>>> pos_mri /= 1000. # mm -> m
You can also verify that these are correct (or manually convert voxels
to MRI coords) by looking at the points in Freeview or tkmedit.
End of explanation
trans = compute_native_head_t(dig_montage)
print(trans) # should be mri->head, as the "native" space here is MRI
Explanation: We can then get our transformation from the MRI coordinate frame (where our
points are defined) to the head coordinate frame from the object.
End of explanation
raw = mne.io.read_raw_fif(fname_raw)
raw.pick_types(meg=False, eeg=True, stim=True, exclude=()).load_data()
raw.set_montage(dig_montage)
raw.plot_sensors(show_names=True)
Explanation: Let's apply this digitization to our dataset, and in the process
automatically convert our locations to the head coordinate frame, as
shown by :meth:~mne.io.Raw.plot_sensors.
End of explanation
raw.set_eeg_reference(projection=True)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events)
cov = mne.compute_covariance(epochs, tmax=0.)
evoked = epochs['1'].average() # trigger 1 in auditory/left
evoked.plot_joint()
Explanation: Now we can do standard sensor-space operations like make joint plots of
evoked data.
End of explanation
fig = plot_alignment(
evoked.info, trans=trans, show_axes=True, surfaces='head-dense',
subject='sample', subjects_dir=subjects_dir)
Explanation: Getting a source estimate
Now we have all of the components we need to compute a forward solution,
but first we should sanity check that everything is well aligned:
End of explanation
fwd = mne.make_forward_solution(
evoked.info, trans=trans, src=fname_src, bem=fname_bem, verbose=True)
Explanation: Now we can actually compute the forward:
End of explanation
inv = mne.minimum_norm.make_inverse_operator(
evoked.info, fwd, cov, verbose=True)
stc = mne.minimum_norm.apply_inverse(evoked, inv)
brain = stc.plot(subjects_dir=subjects_dir, initial_time=0.1)
Explanation: Finally let's compute the inverse and apply it:
End of explanation |
9,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding Correlations in a CSV of Malware Events via Hypergraph Views
To find patterns and outliers in CSVs and event data, Graphistry provides the hypergraph transform.
As an example, this notebook examines different malware files reported to a security vendor. It reveals phenomena such as
Step1: Default Hypergraph Transform
The hypergraph transform creates
Step2: Configured Hypergraph Transform
We clean up the visualization in a few ways
Step3: Directly connecting metadata
Do not show actual malware instance nodes | Python Code:
import pandas as pd
import graphistry as g
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
df = pd.read_csv('./barncat.1k.csv', encoding = "utf8")
print("# samples", len(df))
eval(df[:10]['value'].tolist()[0])
#avoid double counting
df3 = df[df['value'].str.contains("{")]
df3[:1]
#Unpack 'value' json
import json
df4 = pd.concat([df3.drop('value', axis=1), df3.value.apply(json.loads).apply(pd.Series)])
len(df4)
df4[:1]
Explanation: Finding Correlations in a CSV of Malware Events via Hypergraph Views
To find patterns and outliers in CSVs and event data, Graphistry provides the hypergraph transform.
As an example, this notebook examines different malware files reported to a security vendor. It reveals phenomena such as:
The malware files cluster into several families
The nodes central to a cluster reveal attributes specific to a strain of malware
The nodes bordering a cluster reveal attributes that show up in a strain, but are unique to each instance in that strain
Several families have attributes connecting them, suggesting they had the same authors
Load CSV
End of explanation
g.hypergraph(df4)['graph'].plot()
Explanation: Default Hypergraph Transform
The hypergraph transform creates:
* A node for every row,
* A node for every unique value in a column (so the same value found in different columns yields multiple nodes)
* An edge connecting a row to its values
When multiple rows share similar values, they will cluster together. When a row has unique values, those will form a ring around only that node.
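A tiny toy illustration of this behaviour (not part of the malware analysis itself; the column names are made up):
toy = pd.DataFrame({
    'sample': ['a.exe', 'b.exe'],
    'md5': ['abc123', 'abc123'],   # shared value, so both rows attach to one md5 node
    'ip': ['1.2.3.4', '5.6.7.8'],  # unique values, so each row gets its own ip node
})
g.hypergraph(toy)['graph'].plot()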
End of explanation
g.hypergraph(
df4,
opts={
'CATEGORIES': {
'hash': ['sha1', 'sha256', 'md5'],
'section': [x for x in df4.columns if 'section_' in x]
},
'SKIP': ['event_id', 'InstallFlag', 'type', 'val', 'Date', 'date', 'Port', 'FTPPort', 'Origin', 'category', 'comment', 'to_ids']
})['graph'].plot()
Explanation: Configured Hypergraph Transform
We clean up the visualization in a few ways:
Categorize hash codes as in the same family. This simplifies coloring by the generated 'category' field. If columns share the same value, such as two columns using md5 values, this would also cause them to only create 1 node per hash, instead of per-column instance.
Not show a lot of attributes as nodes, such as numbers and dates
Running help(graphistry.hypergraph) reveals more options.
End of explanation
g.hypergraph(
df4,
direct=True,
opts={
'CATEGORIES': {
'hash': ['sha1', 'sha256', 'md5'],
'section': [x for x in df4.columns if 'section_' in x]
},
'SKIP': ['event_id', 'InstallFlag', 'type', 'val', 'Date', 'date', 'Port', 'FTPPort', 'Origin', 'category', 'comment', 'to_ids']
})['graph'].plot()
Explanation: Directly connecting metadata
Do not show actual malware instance nodes
End of explanation |
9,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Below are examples of the theorems proved in Kleinberg's paper (https
Step1: 1. K-Cluster Stopping Condition - Fails Richness Condition
k-cluster stopping condition
Step2: Given these inputs, no distance function will allow this algorithm to classify the points into any other desired partition. The only partition available is the one given.
Takeaway
Step3: Euclidean Distance * 1
Step4: Euclidean Distance * 1.1
Step5: Takeaway | Python Code:
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
data, labels = make_blobs(n_samples=10, n_features=2, centers=2,cluster_std=3,random_state=5)
plt.scatter(data[:,0], data[:,1], c = labels, cmap='coolwarm');
Explanation: Below are examples of the theorems proved in Kleinberg's paper (https://www.cs.cornell.edu/home/kleinber/nips15.pdf)
i) Note:
Third example is incorrect as it currently stands.
ii) Generate data to cluster
End of explanation
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=1)
kmeans.fit(data)
plt.scatter(data[:,0], data[:,1], c = kmeans.labels_, cmap='coolwarm');
Explanation: 1. K-Cluster Stopping Condition - Fails Richness Condition
k-cluster stopping condition: Stop adding edges when the subgraph first
consists of k connected components.
__Richness condition:__ Every partition of S is a possible output. To state this more compactly,
let Range(f) denote the set of all partitions Γ such that f(d) = Γ for some distance
function d.
Richness. Range(f) is equal to the set of all partitions of S.
In other words, suppose we are given the names of the points only (i.e. the indices
in S) but not the distances between them. Richness requires that for any desired
partition Γ, it should be possible to construct a distance function d on S for which
f(d) = Γ.
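As a concrete illustration (a sketch, not part of the original notebook): for any desired partition one can write down a distance matrix that is tiny inside blocks and huge across blocks, and a function satisfying richness must be able to return that partition for some such d.
import numpy as np
desired = np.array([0, 0, 1, 2, 2])  # an arbitrary target partition of 5 points
d = np.where(desired[:, None] == desired[None, :], 0.1, 10.0)
np.fill_diagonal(d, 0.0)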
End of explanation
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=4,metric='precomputed')
distance = euclidean_distances(data,data)
Explanation: Given these inputs, no distance function will allow this algorithm to classify the points into any other desired partition. The only partition available is the one given.
Takeaway:
This model was unable to provide the various cluster partitions based on different distance functions. It thereby fails the "richness" condition.
In general, models with k-cluster stopping conditions will be similarly limited by being unable to create all possible partitions given various distance functions.
2. Distance-r Stopping Condition - Fails Scale-Invariance Condition
Distance-r stopping condition: Only add edges of weight at most r
Scale-Invariance: For any distance function d and any α > 0,
we have f(d) = f(α · d).
DBSCAN Example
Max r for algorithm = 4
End of explanation
db.fit(distance)
euc = db.labels_
plt.scatter(data[:,0], data[:,1], c = euc, cmap='coolwarm');
Explanation: Euclidean Distance * 1
End of explanation
db.fit(distance*1.1)
euc_alpha = db.labels_
plt.scatter(data[:,0], data[:,1], c = euc_alpha, cmap='coolwarm');
# Nonzero values (-1 or 1) mark points that the two fits classified differently; a single data point changes cluster here.
euc - euc_alpha
Explanation: Euclidean Distance * 1.1
End of explanation
from sklearn.cluster import AgglomerativeClustering
agg_model_euclidean = AgglomerativeClustering(n_clusters=2,affinity='euclidean',linkage='complete')
agg_model_euclidean.fit(data)
plt.scatter(data[:,0], data[:,1], c = agg_model_euclidean.labels_, cmap='coolwarm');
agg_model_manhattan = AgglomerativeClustering(n_clusters=2,affinity='manhattan',linkage='complete')
agg_model_manhattan.fit(data)
plt.scatter(data[:,0], data[:,1], c = agg_model_manhattan.labels_, cmap='coolwarm');
#-1 or 1 values are differently classified by each model. Data points at i=0 and i=3 were differently classified.
agg_model_euclidean.labels_-agg_model_manhattan.labels_
Explanation: Takeaway:
The model was unable to provide the same clusters based on differently scaled data. It thereby does not have the property of scale invariance.
In general, models with distance-r stopping conditions will be similarly sensitive to data which is scaled.
3. Scale-α Stopping Condition - Fails Consistency Condition
Scale-α stopping condition: Let ρ* denote the maximum pairwise distance, i.e. ρ* = max_{i,j} d(i, j). Only add edges of weight at most α·ρ*.
Consistency condition: Let d and d′ be two distance functions. If f(d) = Γ, and d′ is a Γ-transformation of d, then f(d′) = Γ.
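Since the note at the top flags this third example as incorrect, here is a sketch of what an actual Γ-transformation could look like (illustrative only; it reuses data, euclidean_distances and the first agglomerative fit defined in this notebook): shrink within-cluster distances, stretch between-cluster distances, and re-cluster on the precomputed matrix.
import numpy as np
gamma = agg_model_euclidean.labels_
d = euclidean_distances(data, data)
same_cluster = gamma[:, None] == gamma[None, :]
d_prime = np.where(same_cluster, 0.5 * d, 2.0 * d)  # never larger inside a cluster, never smaller across
np.fill_diagonal(d_prime, 0.0)
agg_precomputed = AgglomerativeClustering(n_clusters=2, affinity='precomputed',
                                          linkage='complete')
agg_precomputed.fit(d_prime)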
Agglomerative Clustering Example
End of explanation |
9,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CCSDT theory for a closed-shell reference
In this notebook we will use wicked to generate equations for the CCSDT method
Step2: ```python
def evaluate_residual_0_0(H,T)
Step3: Prepare integrals for Forte
Step4: Define orbital spaces and dimensions
Step5: Build the Fock matrix and the zeroth-order Fock matrix
Step6: Build the MP denominators | Python Code:
import wicked as w
import psi4
import forte
import forte.utils
from forte import forte_options
import numpy as np
import time
w.reset_space()
w.add_space("o", "fermion", "occupied", ["i", "j", "k", "l", "m", "n"])
w.add_space("v", "fermion", "unoccupied", ["a", "b", "c", "d", "e", "f"])
Top = w.op("T", ["v+ o", "v+ v+ o o", "v+ v+ v+ o o o"])
Hop = w.utils.gen_op("H",1,"ov","ov") + w.utils.gen_op("H",2,"ov","ov")
wt = w.WickTheorem()
Hbar = w.bch_series(Hop,Top,4)
expr = wt.contract(w.rational(1), Hbar, 0, 6)
mbeq = expr.to_manybody_equation("R")
def generate_equation(mbeq, nocc, nvir):
res_sym = f"R{'o' * nocc}{'v' * nvir}"
code = [f"def evaluate_residual_{nocc}_{nvir}(H,T):",
" # contributions to the residual"]
if nocc + nvir == 0:
code.append(" R = 0.0")
else:
dims = ','.join(['nocc'] * nocc + ['nvir'] * nvir)
code.append(f" {res_sym} = np.zeros(({dims}))")
for eq in mbeq["o" * nocc + "|" + "v" * nvir]:
contraction = eq.compile("einsum")
code.append(f' {contraction}')
code.append(f' return {res_sym}')
funct = '\n'.join(code)
exec(funct)
print(f'\n\n{funct}\n')
return funct
energy_eq = generate_equation(mbeq, 0,0)
exec(energy_eq)
t1_eq = generate_equation(mbeq, 1,1)
exec(t1_eq)
t2_eq = generate_equation(mbeq, 2,2)
exec(t2_eq)
t3_eq = generate_equation(mbeq, 3,3)
exec(t3_eq)
Explanation: CCSDT theory for a closed-shell reference
In this notebook we will use wicked to generate equations for the CCSDT method
End of explanation
# setup xyz geometry for linear H6
geometry =
H 0.0 0.0 0.0
H 0.0 0.0 1.0
H 0.0 0.0 2.0
H 0.0 0.0 3.0
H 0.0 0.0 4.0
H 0.0 0.0 5.1
symmetry c1
(Escf, psi4_wfn) = forte.utils.psi4_scf(geometry,
basis='sto-3g',
reference='rhf',
options={'E_CONVERGENCE' : 1.e-12})
Explanation: ```python
def evaluate_residual_0_0(H,T):
# contributions to the residual
R = 0.0
R += 1.000000000 * np.einsum("ai,ia->",H["vo"],T["ov"])
R += 0.500000000 * np.einsum("abij,ia,jb->",H["vvoo"],T["ov"],T["ov"])
R += 0.250000000 * np.einsum("abij,ijab->",H["vvoo"],T["oovv"])
return R
def evaluate_residual_1_1(H,T):
# contributions to the residual
Rov = np.zeros((nocc,nvir))
Rov += 1.000000000 * np.einsum("ba,ib->ia",H["vv"],T["ov"])
Rov += 1.000000000 * np.einsum("ia->ia",H["ov"])
Rov += -1.000000000 * np.einsum("bj,ja,ib->ia",H["vo"],T["ov"],T["ov"])
Rov += 1.000000000 * np.einsum("bj,ijab->ia",H["vo"],T["oovv"])
Rov += -1.000000000 * np.einsum("ij,ja->ia",H["oo"],T["ov"])
Rov += 1.000000000 * np.einsum("bcja,ic,jb->ia",H["vvov"],T["ov"],T["ov"])
Rov += -0.500000000 * np.einsum("bcja,ijbc->ia",H["vvov"],T["oovv"])
Rov += -1.000000000 * np.einsum("ibja,jb->ia",H["ovov"],T["ov"])
Rov += 1.000000000 * np.einsum("bcjk,kc,ijab->ia",H["vvoo"],T["ov"],T["oovv"])
Rov += 0.500000000 * np.einsum("bcjk,ic,jkab->ia",H["vvoo"],T["ov"],T["oovv"])
Rov += -1.000000000 * np.einsum("bcjk,ka,ic,jb->ia",H["vvoo"],T["ov"],T["ov"],T["ov"])
Rov += 0.500000000 * np.einsum("bcjk,ka,ijbc->ia",H["vvoo"],T["ov"],T["oovv"])
Rov += 0.250000000 * np.einsum("bcjk,ijkabc->ia",H["vvoo"],T["ooovvv"])
Rov += 1.000000000 * np.einsum("ibjk,ka,jb->ia",H["ovoo"],T["ov"],T["ov"])
Rov += -0.500000000 * np.einsum("ibjk,jkab->ia",H["ovoo"],T["oovv"])
return Rov
def evaluate_residual_2_2(H,T):
# contributions to the residual
Roovv = np.zeros((nocc,nocc,nvir,nvir))
Roovv += -2.000000000 * np.einsum("ca,ijbc->ijab",H["vv"],T["oovv"])
Roovv += 1.000000000 * np.einsum("cdab,ic,jd->ijab",H["vvvv"],T["ov"],T["ov"])
Roovv += 0.500000000 * np.einsum("cdab,ijcd->ijab",H["vvvv"],T["oovv"])
Roovv += 2.000000000 * np.einsum("icab,jc->ijab",H["ovvv"],T["ov"])
Roovv += 1.000000000 * np.einsum("ijab->ijab",H["oovv"])
Roovv += 2.000000000 * np.einsum("ck,ic,jkab->ijab",H["vo"],T["ov"],T["oovv"])
Roovv += 2.000000000 * np.einsum("ck,ka,ijbc->ijab",H["vo"],T["ov"],T["oovv"])
Roovv += 1.000000000 * np.einsum("ck,ijkabc->ijab",H["vo"],T["ooovvv"])
Roovv += 2.000000000 * np.einsum("ik,jkab->ijab",H["oo"],T["oovv"])
Roovv += 2.000000000 * np.einsum("cdka,kd,ijbc->ijab",H["vvov"],T["ov"],T["oovv"])
Roovv += 4.000000000 * np.einsum("cdka,id,jkbc->ijab",H["vvov"],T["ov"],T["oovv"])
Roovv += 2.000000000 * np.einsum("cdka,kb,ic,jd->ijab",H["vvov"],T["ov"],T["ov"],T["ov"])
Roovv += 1.000000000 * np.einsum("cdka,kb,ijcd->ijab",H["vvov"],T["ov"],T["oovv"])
Roovv += 1.000000000 * np.einsum("cdka,ijkbcd->ijab",H["vvov"],T["ooovvv"])
Roovv += 4.000000000 * np.einsum("icka,kb,jc->ijab",H["ovov"],T["ov"],T["ov"])
Roovv += -4.000000000 * np.einsum("icka,jkbc->ijab",H["ovov"],T["oovv"])
Roovv += 2.000000000 * np.einsum("ijka,kb->ijab",H["ooov"],T["ov"])
Roovv += 1.000000000 * np.einsum("cdkl,ld,ijkabc->ijab",H["vvoo"],T["ov"],T["ooovvv"])
Roovv += -2.000000000 * np.einsum("cdkl,id,lc,jkab->ijab",H["vvoo"],T["ov"],T["ov"],T["oovv"])
Roovv += -1.000000000 * np.einsum("cdkl,id,jklabc->ijab",H["vvoo"],T["ov"],T["ooovvv"])
Roovv += 0.500000000 * np.einsum("cdkl,ic,jd,klab->ijab",H["vvoo"],T["ov"],T["ov"],T["oovv"])
Roovv += -2.000000000 * np.einsum("cdkl,la,kd,ijbc->ijab",H["vvoo"],T["ov"],T["ov"],T["oovv"])
Roovv += -4.000000000 * np.einsum("cdkl,la,id,jkbc->ijab",H["vvoo"],T["ov"],T["ov"],T["oovv"])
Roovv += -1.000000000 * np.einsum("cdkl,la,ijkbcd->ijab",H["vvoo"],T["ov"],T["ooovvv"])
Roovv += 1.000000000 * np.einsum("cdkl,ka,lb,ic,jd->ijab",H["vvoo"],T["ov"],T["ov"],T["ov"],T["ov"])
Roovv += 0.500000000 * np.einsum("cdkl,ka,lb,ijcd->ijab",H["vvoo"],T["ov"],T["ov"],T["oovv"])
Roovv += 1.000000000 * np.einsum("cdkl,ijad,klbc->ijab",H["vvoo"],T["oovv"],T["oovv"])
Roovv += 2.000000000 * np.einsum("cdkl,ikac,jlbd->ijab",H["vvoo"],T["oovv"],T["oovv"])
Roovv += 0.250000000 * np.einsum("cdkl,klab,ijcd->ijab",H["vvoo"],T["oovv"],T["oovv"])
Roovv += 1.000000000 * np.einsum("cdkl,ilab,jkcd->ijab",H["vvoo"],T["oovv"],T["oovv"])
Roovv += 2.000000000 * np.einsum("ickl,lc,jkab->ijab",H["ovoo"],T["ov"],T["oovv"])
Roovv += 1.000000000 * np.einsum("ickl,jc,klab->ijab",H["ovoo"],T["ov"],T["oovv"])
Roovv += 4.000000000 * np.einsum("ickl,la,jkbc->ijab",H["ovoo"],T["ov"],T["oovv"])
Roovv += 2.000000000 * np.einsum("ickl,ka,lb,jc->ijab",H["ovoo"],T["ov"],T["ov"],T["ov"])
Roovv += 1.000000000 * np.einsum("ickl,jklabc->ijab",H["ovoo"],T["ooovvv"])
Roovv += 1.000000000 * np.einsum("ijkl,ka,lb->ijab",H["oooo"],T["ov"],T["ov"])
Roovv += 0.500000000 * np.einsum("ijkl,klab->ijab",H["oooo"],T["oovv"])
return Roovv
def evaluate_residual_3_3(H,T):
# contributions to the residual
Rooovvv = np.zeros((nocc,nocc,nocc,nvir,nvir,nvir))
Rooovvv += 3.000000000 * np.einsum("da,ijkbcd->ijkabc",H["vv"],T["ooovvv"])
Rooovvv += 9.000000000 * np.einsum("deab,ie,jkcd->ijkabc",H["vvvv"],T["ov"],T["oovv"])
Rooovvv += 1.500000000 * np.einsum("deab,ijkcde->ijkabc",H["vvvv"],T["ooovvv"])
Rooovvv += -9.000000000 * np.einsum("idab,jkcd->ijkabc",H["ovvv"],T["oovv"])
Rooovvv += -3.000000000 * np.einsum("dl,id,jklabc->ijkabc",H["vo"],T["ov"],T["ooovvv"])
Rooovvv += -3.000000000 * np.einsum("dl,la,ijkbcd->ijkabc",H["vo"],T["ov"],T["ooovvv"])
Rooovvv += 9.000000000 * np.einsum("dl,ilab,jkcd->ijkabc",H["vo"],T["oovv"],T["oovv"])
Rooovvv += -3.000000000 * np.einsum("il,jklabc->ijkabc",H["oo"],T["ooovvv"])
Rooovvv += -3.000000000 * np.einsum("dela,le,ijkbcd->ijkabc",H["vvov"],T["ov"],T["ooovvv"])
Rooovvv += 9.000000000 * np.einsum("dela,ie,jklbcd->ijkabc",H["vvov"],T["ov"],T["ooovvv"])
Rooovvv += -9.000000000 * np.einsum("dela,id,je,klbc->ijkabc",H["vvov"],T["ov"],T["ov"],T["oovv"])
Rooovvv += 18.000000000 * np.einsum("dela,lb,ie,jkcd->ijkabc",H["vvov"],T["ov"],T["ov"],T["oovv"])
Rooovvv += 3.000000000 * np.einsum("dela,lb,ijkcde->ijkabc",H["vvov"],T["ov"],T["ooovvv"])
Rooovvv += -18.000000000 * np.einsum("dela,ijbe,klcd->ijkabc",H["vvov"],T["oovv"],T["oovv"])
Rooovvv += -4.500000000 * np.einsum("dela,ilbc,jkde->ijkabc",H["vvov"],T["oovv"],T["oovv"])
Rooovvv += -18.000000000 * np.einsum("idla,jd,klbc->ijkabc",H["ovov"],T["ov"],T["oovv"])
Rooovvv += -18.000000000 * np.einsum("idla,lb,jkcd->ijkabc",H["ovov"],T["ov"],T["oovv"])
Rooovvv += -9.000000000 * np.einsum("idla,jklbcd->ijkabc",H["ovov"],T["ooovvv"])
Rooovvv += -9.000000000 * np.einsum("ijla,klbc->ijkabc",H["ooov"],T["oovv"])
Rooovvv += 9.000000000 * np.einsum("delm,me,ilab,jkcd->ijkabc",H["vvoo"],T["ov"],T["oovv"],T["oovv"])
Rooovvv += 3.000000000 * np.einsum("delm,ie,md,jklabc->ijkabc",H["vvoo"],T["ov"],T["ov"],T["ooovvv"])
Rooovvv += 4.500000000 * np.einsum("delm,ie,lmab,jkcd->ijkabc",H["vvoo"],T["ov"],T["oovv"],T["oovv"])
Rooovvv += 18.000000000 * np.einsum("delm,ie,jmab,klcd->ijkabc",H["vvoo"],T["ov"],T["oovv"],T["oovv"])
Rooovvv += 1.500000000 * np.einsum("delm,id,je,klmabc->ijkabc",H["vvoo"],T["ov"],T["ov"],T["ooovvv"])
Rooovvv += -1.500000000 * np.einsum("delm,imde,jklabc->ijkabc",H["vvoo"],T["oovv"],T["ooovvv"])
Rooovvv += 0.750000000 * np.einsum("delm,ijde,klmabc->ijkabc",H["vvoo"],T["oovv"],T["ooovvv"])
Rooovvv += 3.000000000 * np.einsum("delm,ma,le,ijkbcd->ijkabc",H["vvoo"],T["ov"],T["ov"],T["ooovvv"])
Rooovvv += -9.000000000 * np.einsum("delm,ma,ie,jklbcd->ijkabc",H["vvoo"],T["ov"],T["ov"],T["ooovvv"])
Rooovvv += 9.000000000 * np.einsum("delm,ma,id,je,klbc->ijkabc",H["vvoo"],T["ov"],T["ov"],T["ov"],T["oovv"])
Rooovvv += 18.000000000 * np.einsum("delm,ma,ijbe,klcd->ijkabc",H["vvoo"],T["ov"],T["oovv"],T["oovv"])
Rooovvv += 4.500000000 * np.einsum("delm,ma,ilbc,jkde->ijkabc",H["vvoo"],T["ov"],T["oovv"],T["oovv"])
Rooovvv += 9.000000000 * np.einsum("delm,la,mb,ie,jkcd->ijkabc",H["vvoo"],T["ov"],T["ov"],T["ov"],T["oovv"])
Rooovvv += 1.500000000 * np.einsum("delm,la,mb,ijkcde->ijkabc",H["vvoo"],T["ov"],T["ov"],T["ooovvv"])
Rooovvv += -1.500000000 * np.einsum("delm,lmae,ijkbcd->ijkabc",H["vvoo"],T["oovv"],T["ooovvv"])
Rooovvv += 9.000000000 * np.einsum("delm,imae,jklbcd->ijkabc",H["vvoo"],T["oovv"],T["ooovvv"])
Rooovvv += -4.500000000 * np.einsum("delm,ijae,klmbcd->ijkabc",H["vvoo"],T["oovv"],T["ooovvv"])
Rooovvv += 0.750000000 * np.einsum("delm,lmab,ijkcde->ijkabc",H["vvoo"],T["oovv"],T["ooovvv"])
Rooovvv += -4.500000000 * np.einsum("delm,imab,jklcde->ijkabc",H["vvoo"],T["oovv"],T["ooovvv"])
Rooovvv += -3.000000000 * np.einsum("idlm,md,jklabc->ijkabc",H["ovoo"],T["ov"],T["ooovvv"])
Rooovvv += 3.000000000 * np.einsum("idlm,jd,klmabc->ijkabc",H["ovoo"],T["ov"],T["ooovvv"])
Rooovvv += 18.000000000 * np.einsum("idlm,ma,jd,klbc->ijkabc",H["ovoo"],T["ov"],T["ov"],T["oovv"])
Rooovvv += 9.000000000 * np.einsum("idlm,ma,jklbcd->ijkabc",H["ovoo"],T["ov"],T["ooovvv"])
Rooovvv += -9.000000000 * np.einsum("idlm,la,mb,jkcd->ijkabc",H["ovoo"],T["ov"],T["ov"],T["oovv"])
Rooovvv += -4.500000000 * np.einsum("idlm,lmab,jkcd->ijkabc",H["ovoo"],T["oovv"],T["oovv"])
Rooovvv += -18.000000000 * np.einsum("idlm,jmab,klcd->ijkabc",H["ovoo"],T["oovv"],T["oovv"])
Rooovvv += 9.000000000 * np.einsum("ijlm,ma,klbc->ijkabc",H["oooo"],T["ov"],T["oovv"])
Rooovvv += 1.500000000 * np.einsum("ijlm,klmabc->ijkabc",H["oooo"],T["ooovvv"])
return Rooovvv
```
Compute the Hartree–Fock and MP2 energy
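The MP2 correlation energy evaluated later in the notebook (the quadruple loop over `H["oovv"]`) is

\begin{equation}
E_{\mathrm{MP2}} = \frac{1}{4} \sum_{ijab} \frac{|\langle ij \| ab \rangle|^{2}}{f_{ii} + f_{jj} - f_{aa} - f_{bb}},
\end{equation}

with antisymmetrized two-electron integrals and canonical Hartree–Fock orbital energies; it serves as a quick sanity check before the CCSDT iterations.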
End of explanation
# Define the orbital spaces
mo_spaces = {'RESTRICTED_DOCC': [3],'RESTRICTED_UOCC': [3]}
# pass Psi4 options to Forte
options = psi4.core.get_options()
options.set_current_module('FORTE')
forte_options.get_options_from_psi4(options)
# Grab the number of MOs per irrep
nmopi = psi4_wfn.nmopi()
# Grab the point group symbol (e.g. "C2V")
point_group = psi4_wfn.molecule().point_group().symbol()
# create a MOSpaceInfo object
mo_space_info = forte.make_mo_space_info_from_map(nmopi, point_group,mo_spaces, [])
# make a ForteIntegral object
ints = forte.make_ints_from_psi4(psi4_wfn, forte_options, mo_space_info)
Explanation: Prepare integrals for Forte
End of explanation
occmos = mo_space_info.corr_absolute_mo('RESTRICTED_DOCC')
virmos = mo_space_info.corr_absolute_mo('RESTRICTED_UOCC')
allmos = mo_space_info.corr_absolute_mo('CORRELATED')
nocc = 2 * len(occmos)
nvir = 2 * len(virmos)
Explanation: Define orbital spaces and dimensions
End of explanation
H = {'oo': forte.spinorbital_fock(ints,occmos, occmos,occmos),
'vv': forte.spinorbital_fock(ints,virmos, virmos,occmos),
'ov': forte.spinorbital_fock(ints,occmos, virmos,occmos),
     'vo': forte.spinorbital_fock(ints,virmos, occmos,occmos),
'oovv' : forte.spinorbital_tei(ints,occmos,occmos,virmos,virmos),
'ooov' : forte.spinorbital_tei(ints,occmos,occmos,occmos,virmos),
'vvvv' : forte.spinorbital_tei(ints,virmos,virmos,virmos,virmos),
'vvoo' : forte.spinorbital_tei(ints,virmos,virmos,occmos,occmos),
'ovov' : forte.spinorbital_tei(ints,occmos,virmos,occmos,virmos),
'ovvv' : forte.spinorbital_tei(ints,occmos,virmos,virmos,virmos),
'vvov' : forte.spinorbital_tei(ints,virmos,virmos,occmos,virmos),
'ovoo' : forte.spinorbital_tei(ints,occmos,virmos,occmos,occmos),
'oooo' : forte.spinorbital_tei(ints,occmos,occmos,occmos,occmos)}
Explanation: Build the Fock matrix and the zeroth-order Fock matrix
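In the spin-orbital basis the blocks assembled here are the Fock matrix

\begin{equation}
f_{pq} = h_{pq} + \sum_{i \in \mathrm{occ}} \langle p i \| q i \rangle
\end{equation}

(via `forte.spinorbital_fock`) and the antisymmetrized two-electron integrals $\langle pq \| rs \rangle$ (via `forte.spinorbital_tei`), stored in the dictionary `H` keyed by the occupied/virtual character of each index.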
End of explanation
fo = np.diag(H['oo'])
fv = np.diag(H['vv'])
D = {}
d1 = np.zeros((nocc,nvir))
for i in range(nocc):
for a in range(nvir):
si = i % 2
sa = a % 2
if si == sa:
d1[i][a] = 1.0 / (fo[i] - fv[a])
D['ov'] = d1
d2 = np.zeros((nocc,nocc,nvir,nvir))
for i in range(nocc):
for j in range(nocc):
for a in range(nvir):
for b in range(nvir):
si = i % 2
sj = j % 2
sa = a % 2
sb = b % 2
if si == sj == sa == sb:
d2[i][j][a][b] = 1.0 / (fo[i] + fo[j] - fv[a] - fv[b])
if si == sa and sj == sb and si != sj:
d2[i][j][a][b] = 1.0 / (fo[i] + fo[j] - fv[a] - fv[b])
if si == sb and sj == sa and si != sj:
d2[i][j][a][b] = 1.0 / (fo[i] + fo[j] - fv[a] - fv[b])
D['oovv'] = d2
d3 = np.zeros((nocc,nocc,nocc,nvir,nvir,nvir))
for i in range(nocc):
for j in range(nocc):
for k in range(nocc):
for a in range(nvir):
for b in range(nvir):
for c in range(nvir):
si = i % 2
sj = j % 2
sk = k % 2
sa = a % 2
sb = b % 2
sc = c % 2
d3[i][j][k][a][b][c] = 1.0 / (fo[i] + fo[j] + fo[k]- fv[a] - fv[b] - fv[c])
D['ooovvv'] = d3
# Compute the MP2 correlation energy
Emp2 = 0.0
for i in range(nocc):
for j in range(nocc):
for a in range(nvir):
for b in range(nvir):
Emp2 += 0.25 * H["oovv"][i][j][a][b] ** 2 / (fo[i] + fo[j] - fv[a] - fv[b])
print(f"MP2 corr. energy: {Emp2:.12f} Eh")
def antisymmetrize_residual_2_2(Roovv):
# antisymmetrize the residual
Roovv_anti = np.zeros((nocc,nocc,nvir,nvir))
Roovv_anti += np.einsum("ijab->ijab",Roovv)
Roovv_anti -= np.einsum("ijab->jiab",Roovv)
Roovv_anti -= np.einsum("ijab->ijba",Roovv)
Roovv_anti += np.einsum("ijab->jiba",Roovv)
return Roovv_anti
def antisymmetrize_residual_3_3(Rooovvv):
# antisymmetrize the residual
Rooovvv_anti = np.zeros((nocc,nocc,nocc,nvir,nvir,nvir))
Rooovvv_anti += +1 * np.einsum("ijkabc->ijkabc",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->ijkacb",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->ijkbac",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->ijkbca",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->ijkcab",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->ijkcba",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->ikjabc",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->ikjacb",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->ikjbac",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->ikjbca",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->ikjcab",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->ikjcba",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->jikabc",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->jikacb",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->jikbac",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->jikbca",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->jikcab",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->jikcba",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->jkiabc",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->jkiacb",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->jkibac",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->jkibca",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->jkicab",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->jkicba",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->kijabc",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->kijacb",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->kijbac",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->kijbca",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->kijcab",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->kijcba",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->kjiabc",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->kjiacb",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->kjibac",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->kjibca",Rooovvv)
Rooovvv_anti += -1 * np.einsum("ijkabc->kjicab",Rooovvv)
Rooovvv_anti += +1 * np.einsum("ijkabc->kjicba",Rooovvv)
return Rooovvv_anti
def update_amplitudes(T, R, d):
    # elementwise update T <- T + R * D using the MP denominators passed in as d
    T['ov'] += np.einsum("ia,ia->ia", R['ov'], d['ov'])
    T['oovv'] += np.einsum("ijab,ijab->ijab", R['oovv'], d['oovv'])
    T['ooovvv'] += np.einsum("ijkabc,ijkabc->ijkabc", R['ooovvv'], d['ooovvv'])
ref_CCSDT = -0.108354659115 # from forte sparse implementation
T = {}
T["ov"] = np.zeros((nocc,nvir))
T["oovv"] = np.zeros((nocc,nocc,nvir,nvir))
T["ooovvv"] = np.zeros((nocc,nocc,nocc,nvir,nvir,nvir))
header = "Iter. Corr. energy |R| "
print("-" * len(header))
print(header)
print("-" * len(header))
start = time.perf_counter()
maxiter = 100
for i in range(maxiter):
R = {}
Ewicked = float(evaluate_residual_0_0(H,T))
R['ov'] = evaluate_residual_1_1(H,T)
Roovv = evaluate_residual_2_2(H,T)
R['oovv'] = antisymmetrize_residual_2_2(Roovv)
Rooovvv = evaluate_residual_3_3(H,T)
R['ooovvv'] = antisymmetrize_residual_3_3(Rooovvv)
update_amplitudes(T,R,D)
# check for convergence
norm_R = np.sqrt(np.linalg.norm(R['ov'])**2 + np.linalg.norm(R['oovv'])**2 + np.linalg.norm(R['ooovvv'])**2)
print(f"{i:3d} {Ewicked:+.12f} {norm_R:e}")
if norm_R < 1.0e-9:
break
end = time.perf_counter()
t = end - start
print("-" * len(header))
print(f"CCSDT correlation energy: {Ewicked:+.12f} [Eh]")
print(f"Error: {Ewicked - ref_CCSDT:+.12e} [Eh]")
print(f"Timing: {t:+.12e} [s]")
Explanation: Build the MP denominators
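These are the standard Møller–Plesset energy denominators used to condition the amplitude updates,

\begin{equation}
D_{i}^{a} = \frac{1}{f_{ii} - f_{aa}}, \qquad
D_{ij}^{ab} = \frac{1}{f_{ii} + f_{jj} - f_{aa} - f_{bb}}, \qquad
D_{ijk}^{abc} = \frac{1}{f_{ii} + f_{jj} + f_{kk} - f_{aa} - f_{bb} - f_{cc}},
\end{equation}

with the singles and doubles blocks zeroed for spin-forbidden index combinations. Each iteration then updates the amplitudes elementwise, $T \leftarrow T + R \circ D$, as implemented in `update_amplitudes`, until the residual norm drops below $10^{-9}$.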
End of explanation |
9,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step 1
Step1: Step 2 | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('/data/mnist', one_hot=True)
Explanation: Step 1: Read in data<br>
using TF Learn's built in function to load MNIST data to the folder data/mnist
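The returned `mnist` object exposes `train`, `validation`, and `test` splits, each with a `next_batch` helper that the loops below rely on. A quick sketch (shapes assume the flattened 28x28 images and one-hot labels described in Step 2):

```python
X_batch, Y_batch = mnist.train.next_batch(128)
# X_batch: (128, 784) float32 images, Y_batch: (128, 10) one-hot labels
```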
End of explanation
import time
import tensorflow as tf

with tf.Session() as sess:
start_time = time.time()
sess.run(tf.global_variables_initializer())
n_batches = int(mnist.train.num_examples/batch_size)
for i in range(n_epochs): # train the model n_epochs times
total_loss = 0
for _ in range(n_batches):
X_batch, Y_batch = mnist.train.next_batch(batch_size)
# TO-DO: run optimizer + fetch loss_batch
#
#
total_loss += loss_batch
print('Average loss epoch {0}: {1}'.format(i, total_loss/n_batches))
print('Total time: {0} seconds'.format(time.time() - start_time))
print('Optimization Finished!') # should be around 0.35 after 25 epochs
# test the model
preds = tf.nn.softmax(logits)
correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32)) # need numpy.count_nonzero(boolarr) :(
n_batches = int(mnist.test.num_examples/batch_size)
total_correct_preds = 0
for i in range(n_batches):
X_batch, Y_batch = mnist.test.next_batch(batch_size)
        accuracy_batch = sess.run(accuracy, feed_dict={X: X_batch, Y: Y_batch})
total_correct_preds += accuracy_batch
print('Accuracy {0}'.format(total_correct_preds/mnist.test.num_examples))
Explanation: Step 2: create placeholders for features and labels<br>
each image in the MNIST data is of shape 28*28 = 784<br>
therefore, each image is represented with a 1x784 tensor<br>
there are 10 classes for each image, corresponding to digits 0 - 9.<br>
Features are of the type float, and labels are of the type int<br>
Step 3: create weights and bias<br>
weights and biases are initialized to 0<br>
shape of w depends on the dimension of X and Y so that Y = X * w + b<br>
shape of b depends on Y<br>
Step 4: build model<br>
the model that returns the logits.<br>
this logits will be later passed through softmax layer<br>
to get the probability distribution of possible label of the image<br>
DO NOT DO SOFTMAX HERE<br>
Step 5: define loss function<br>
use cross entropy loss of the real labels with the softmax of logits<br>
use the method:<br>
tf.nn.softmax_cross_entropy_with_logits(logits, Y)<br>
then use tf.reduce_mean to get the mean loss of the batch<br>
Step 6: define training op<br>
using gradient descent to minimize loss
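A minimal sketch of Steps 2-6, assuming the TF 1.x API used in the loop above; the hyperparameter values are placeholders to tune, and the names (`X`, `Y`, `logits`, `loss`, `optimizer`, `batch_size`, `n_epochs`) match those the training loop expects:

```python
batch_size = 128       # assumed value
learning_rate = 0.01   # assumed value
n_epochs = 25          # assumed value

# Step 2: placeholders for a batch of flattened 28x28 images and their one-hot labels
X = tf.placeholder(tf.float32, [batch_size, 784], name='image')
Y = tf.placeholder(tf.float32, [batch_size, 10], name='label')

# Step 3: weights and bias initialized to zero
w = tf.Variable(tf.zeros([784, 10]), name='weights')
b = tf.Variable(tf.zeros([1, 10]), name='bias')

# Step 4: the model returns logits only (no softmax here)
logits = tf.matmul(X, w) + b

# Step 5: mean cross-entropy loss over the batch
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y)
loss = tf.reduce_mean(entropy)

# Step 6: gradient descent training op
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
```

With these definitions the TO-DO inside the training loop reduces to something like `_, loss_batch = sess.run([optimizer, loss], feed_dict={X: X_batch, Y: Y_batch})`.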
End of explanation |
9,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-3', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
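Every documentation cell below follows the same pattern: `DOC.set_id(...)` selects a property and `DOC.set_value(...)` records the answer, free text for STRING properties or one of the listed valid choices for ENUM properties. A purely illustrative example (placeholder text, not an actual value for this model):

```python
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
DOC.set_value("<one-paragraph overview of the atmosphere component>")  # placeholder
```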
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
9,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dimension Reduction and Visualization (single cell as bulk & bulk)
Step1: Prior smushing
Step2: For single cell and bulk combined wide mtx and metadata | Python Code:
# Import required modules
# Python plotting library
import matplotlib.pyplot as plt
# Numerical python library
import numpy as np
# Dataframes in Python
import pandas as pd
# Statistical plotting library we'll use
import seaborn as sns
# Import packages for dimension reduction
from sklearn.decomposition import PCA, FastICA, NMF
from sklearn.manifold import TSNE, MDS
# This is necessary to show the plotted figures inside the notebook -- "inline" with the notebook cells
%matplotlib inline
# Import interactive modules
from ipywidgets import interact,interactive,fixed
from IPython.display import display
# Import stats for Fisher exact p-value calculation
import scipy.stats as stats
# Read in the DGE data sheet
inFile = './AGGallSamp_wBulkNorm_logged_widemtx.csv'
subset = pd.read_table(inFile, sep=',',
# Sets the first (Python starts counting from 0 not 1) column as the row names
index_col=0,
# Tells pandas to decompress the gzipped file if required
compression='gzip')
# Print out the shape and the top 5 rows of the newly assigned dataframe. Should be cells as indexes, genes as column names
# Transpose by subset = subset.T if required.
print(subset.shape)
subset.head()
# The index dtype was not fixed when the matrix was read in, so convert all indexes to strings before joining on them
subset.index = [str(index) for index in subset.index]
# Read in metadata
metadata = pd.read_table("./AGGallSamp_wBulk_metadata.csv", sep = ',', index_col=0)
print(metadata.shape)
metadata.head()
## Transpose matrix
# subset = subset.T
# subset.head()
Explanation: Dimension Reduction and Visualization (single cell as bulk & bulk)
End of explanation
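Note (my addition, not in the original notebook): the cast above only touches subset.index. If the metadata index was read in as integers, the join performed later will silently NaN-fill the unmatched rows, so the same cast can be applied to metadata as a precaution; the overlap check below assumes both tables are keyed by the same cell barcodes.
# Hypothetical guard: align the metadata index dtype with the expression matrix before joining
metadata.index = [str(index) for index in metadata.index]
print(len(set(subset.index) & set(metadata.index)), "indexes shared between matrix and metadata")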
# Is the data highly skewed?
## Can be roughly estimated by:
for i in range(1,2500,250):
sns.distplot(subset.iloc[i][subset.iloc[i] > 1], )
#if yes:
## whole dataframe log transformation
# loggedsubset = np.log2(subset + 1)
# loggedsubset.head()
# Distribution after log transformation
# for i in range(1,2500,250):
# sns.distplot(loggedsubset.iloc[i][loggedsubset.iloc[i] > 1], )
Explanation: Prior smushing: Log transformation?
End of explanation
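A quick numeric check (my addition, using the scipy.stats import from the first cell) can back up the visual impression before deciding whether a log transform is needed:
# Per-gene skewness on the current values; strongly right-skewed genes argue for log-transforming first
gene_skew = subset.apply(stats.skew, axis=0)
print("median per-gene skewness:", gene_skew.median())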
# Fit metadata to mtx shape
## Indexes had different data types, so rows were joined incorrectly...
## Reloading with low_memory option kills the kernel.
## Instead, try turning all index to string before combining (joining)
# Join data subset with metadata, keep the intersected metadata
subsetWmeta = subset.join(metadata)
fitMeta = subsetWmeta[list(metadata.columns)]
print("Metadata with fitted shape of: ", fitMeta.shape)
# Sample ID to meta extractor
dict_sampIDtoMeta = {
"1": ["0dpa","1"],
"2": ["0dpa","2"],
"3": ["1dpa","1"],
"4": ["1dpa","2"],
"5": ["2dpa","1"],
"6": ["2dpa","2"],
"7": ["4dpa","1"],
"8": ["4dpa","2"],
}
# Extract bulk data subset and save as a new dataframe
bulkMask = fitMeta["Type"] == "Bulk"
asBulkDF = subset[bulkMask]
# Loop through all possible samples (1-8)
for i in range(1,9):
# Make mask
sampID = str(i)
sampGPMask1 = fitMeta["SampleID"] == i
sampGPMask2 = fitMeta["GFP"] == True
sampGPMask = [all(tup) for tup in zip(sampGPMask1,sampGPMask2)]
sampGNMask1 = fitMeta["SampleID"] == i
sampGNMask2 = fitMeta["GFP"] == False
sampGNMask = [all(tup) for tup in zip(sampGNMask1,sampGNMask2)]
# Extract sample subset
sampGPSubset = subset[sampGPMask]
sampGNSubset = subset[sampGNMask]
# Calculate mean for all cells within sample
sampGPMean = sampGPSubset.mean()
sampGNMean = sampGNSubset.mean()
# Rename the mean array
sampGPName = dict_sampIDtoMeta[sampID][0]+dict_sampIDtoMeta[sampID][1]+"GFPpo"
sampGPMean = sampGPMean.rename(sampGPName)
sampGNName = dict_sampIDtoMeta[sampID][0]+dict_sampIDtoMeta[sampID][1]+"GFPne"
sampGNMean = sampGNMean.rename(sampGNName)
# Append calculated mean to bulk matrix
asBulkDF = asBulkDF.append(sampGPMean)
asBulkDF = asBulkDF.append(sampGNMean)
# report the shape of appended matrix
print("Current matrix shape: ", asBulkDF.shape)
# mean only dataframe?
asBulkDF.head()
# Adjust all sample total count to 3000?
## Add one column to save the sum of each sample
asBulkDF["SUM"] = asBulkDF.sum(axis=1)
## Transpose for easier calculation
asBulkD = asBulkDF.T
## For every value in frame
normAsBulkDF = asBulkD.divide(asBulkD.loc["SUM"] / 3000)  # select the SUM row by label rather than by a hard-coded position
normAsBulkDF
# Transpose back and drop SUM column
DFforPCA = normAsBulkDF.T.drop(["SUM"],axis=1)
print("Shape of matrix for PCA: ",DFforPCA.shape)
DFforPCA.head()
# Estimate percentage of variance can be explained by principal components:
sns.set()
pca = PCA().fit(DFforPCA)
plt.figure(figsize=(8,6),dpi=300)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlim(0,12)
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance')
# Use the number of PCs determined to transform data
pca_tsfed = PCA(n_components=4).fit_transform(DFforPCA)  # avoid rebinding `pca`, which still holds the full fit above
print(pca_tsfed.shape)
# Plot samples by transformed components
fig, ax = plt.subplots(figsize=(8,6),dpi=300)
ax.scatter(pca_tsfed[:, 1], pca_tsfed[:, 2],
c=["b","g","r","c","b","g","r","c","m","y","m","y","k","b","k","b","g","r","g","r","c","m","c","m"], edgecolor='none', alpha=0.5)
name = ["B4P","B4N","B0P","B0N","B4P","B4N","B0P","B0N","SC0P","SC0N","SC0P","SC0N","SC1P","SC1N","SC1P","SC1N","SC2P","SC2N","SC2P","SC2N","SC4P","SC4N","SC4P","SC4N"]
#name = DFforPCA.index
for i, txt in enumerate(name):
ax.annotate(txt, (pca_tsfed[:, 1][i], pca_tsfed[:, 2][i]))
ax.set(xlabel='component 2', ylabel='component 3')
Explanation: For single cell and bulk combined wide mtx and metadata
End of explanation |
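The TSNE import from the first cell is never used above; the sketch below (my addition, with an arbitrarily chosen perplexity for the ~24 pseudo-bulk and bulk samples) shows how the same matrix could be embedded for comparison with the PCA plot:
tsne_coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(DFforPCA)
fig, ax = plt.subplots(figsize=(8,6), dpi=300)
ax.scatter(tsne_coords[:, 0], tsne_coords[:, 1], c="k", alpha=0.5)
for i, txt in enumerate(name):
    ax.annotate(txt, (tsne_coords[i, 0], tsne_coords[i, 1]))
ax.set(xlabel='t-SNE 1', ylabel='t-SNE 2')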
9,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wayne H Nixalo - 09 Aug 2017
FADL2 L9
Step1: Content Recreation
Step2: In this implementation, need to define an object that'll allow us to separately access the loss function and gradients of a function, | Python Code:
%matplotlib inline
import importlib
import os, sys
sys.path.insert(1, os.path.join('../utils'))
from utils2 import *
from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imsave
from keras import metrics
from vgg16_avg import VGG16_Avg
limit_mem()
path = '../data/nst/'
# names = os.listdir(path)
# pkl_out = open('fnames.pkl','wb')
# pickle.dump(names, pkl_out)
# pkl_out.close()
fnames = pickle.load(open(path + 'fnames.pkl', 'rb'))
fnames = glob.glob(path+'**/*.JPG', recursive=True)
fn = fnames[0]
fn
img = Image.open(fn); img
# Subtracting mean and reversing color-channel order:
rn_mean = np.array([123.68,116.779,103.939], dtype=np.float32)
preproc = lambda x: (x - rn_mean)[:,:,:,::-1]
# later undoing preprocessing for image generation
deproc = lambda x,s: np.clip(x.reshape(s)[:,:,:,::-1] + rn_mean, 0, 255)
img_arr = preproc(np.expand_dims(np.array(img), 0))
shp = img_arr.shape
Explanation: Wayne H Nixalo - 09 Aug 2017
FADL2 L9: Generative Models
neural-style-GPU.ipynb
End of explanation
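As a quick sanity check (my addition, not in the original notebook), deproc should invert preproc up to clipping, so round-tripping the content image reproduces its pixels:
roundtrip = deproc(preproc(np.expand_dims(np.array(img), 0)), shp)
plt.imshow(roundtrip[0].astype('uint8'))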
# had to fix some compatibility issues w/ Keras 1 -> Keras 2
import vgg16_avg
importlib.reload(vgg16_avg)
from vgg16_avg import VGG16_Avg
model = VGG16_Avg(include_top=False)
# grabbing activations from near the end of the CNN model
layer = model.get_layer('block5_conv1').output
# calculating layer's target activations
layer_model = Model(model.input, layer)
targ = K.variable(layer_model.predict(img_arr))
Explanation: Content Recreation
End of explanation
class Evaluator(object):
def __init__(self, f, shp): self.f, self.shp = f, shp
def loss(self, x):
loss_, self.grad_values = self.f([x.reshape(self.shp)])
return loss_.astype(np.float64)
def grads(self, x): return self.grad_values.flatten().astype(np.float64)
# Define loss function to calc MSE betwn the 2 outputs at specfd Conv layer
loss = metrics.mse(layer, targ)
grads = K.gradients(loss, model.input)
fn = K.function([model.input], [loss]+grads)
evaluator = Evaluator(fn, shp)
# optimize loss fn w/ deterministic approach using Line Search
def solve_image(eval_obj, niter, x):
for i in range(niter):
x, min_val, info = fmin_l_bfgs_b(eval_obj.loss, x.flatten(),
fprime=eval_obj.grads, maxfun=20)
x = np.clip(x, -127,127)
print('Current loss value:', min_val)
imsave(f'{path}/results/res_at_iteration_{i}.png', deproc(x.copy(), shp)[0])
return x
# generating a random image:
rand_img = lambda shape: np.random.uniform(-2.5,2.5,shape)/100
x = rand_img(shp)
plt.imshow(x[0])
iterations = 10
x = solve_image(evaluator, iterations, x)
Image.open(path + 'results/res_at_iteration_1.png')
# Looking at result for earlier Conv block (4):
layer = model.get_layer('block4_conv1').output
layer_model = Model(model.input, layer)
targ = K.variable(layer_model.predict(img_arr))
loss = metrics.mse(layer, targ)
grads = K.gradients(loss, model.input)
fn = K.function([model.input], [loss]+grads)
evaluator = Evaluator(fn, shp)
x = solve_image(evaluator, iterations, x)
Image.open(path + 'results/res_at_iteration_9.png')
Explanation: In this implementation, we need to define an object that allows us to separately access the loss function and the gradients of a function, since scipy's fmin_l_bfgs_b expects the loss (func) and its gradient (fprime) as separate callables.
End of explanation |
9,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Install Twitter Sentiment with Watson scala library from Github
Step1: Run the Twitter sentiment application using the JavaWrapper
Step2: Run the Twitter sentiment application using Scala
Step3: Variables can be passed back to Python from Scala if they are prefixed with __
In the cell below, we declare 2 variables to be passed back to Python
Step4: You can now use the __df variable as a regular Python dataframe | Python Code:
import pixiedust
pixiedust.installPackage("https://github.com/ibm-cds-labs/spark.samples/raw/master/dist/streaming-twitter-assembly-1.6.jar")
Explanation: Install Twitter Sentiment with Watson scala library from Github
End of explanation
from pixiedust.utils.javaBridge import *
demo = JavaWrapper("com.ibm.cds.spark.samples.StreamingTwitter$", True)
duration = JavaWrapper("org.apache.spark.streaming.Durations$")
demo.setConfig("twitter4j.oauth.consumerKey","XXXX")
demo.setConfig("twitter4j.oauth.consumerSecret","XXXX")
demo.setConfig("twitter4j.oauth.accessToken","XXXX")
demo.setConfig("twitter4j.oauth.accessTokenSecret","XXXX")
demo.setConfig("watson.tone.url","https://gateway.watsonplatform.net/tone-analyzer/api")
demo.setConfig("watson.tone.password","XXXX")
demo.setConfig("watson.tone.username","XXXX")
demo.startTwitterStreaming(pd_getJavaSparkContext(), duration.seconds(10) )
Explanation: Run the Twitter sentiment application using the JavaWrapper
End of explanation
%%scala
val demo = com.ibm.cds.spark.samples.StreamingTwitter
demo.setConfig("twitter4j.oauth.consumerKey","XXXX")
demo.setConfig("twitter4j.oauth.consumerSecret","XXXX")
demo.setConfig("twitter4j.oauth.accessToken","XXXX")
demo.setConfig("twitter4j.oauth.accessTokenSecret","XXXX")
demo.setConfig("watson.tone.url","https://gateway.watsonplatform.net/tone-analyzer/api")
demo.setConfig("watson.tone.password","XXXX")
demo.setConfig("watson.tone.username","XXXX")
import org.apache.spark.streaming._
demo.startTwitterStreaming(sc, Seconds(10))
Explanation: Run the Twitter sentiment application using Scala
End of explanation
%%scala
val demo = com.ibm.cds.spark.samples.StreamingTwitter
val (__sqlContext, __df) = demo.createTwitterDataFrames(sc)
Explanation: Variables can be passed back to Python from Scala if they are prefixed with __
In the cell below, we declare 2 variables to be passed back to Python: __sqlContext and __df
End of explanation
tweets=__df
tweets.count()
#create an array that will hold the count for each sentiment
sentimentDistribution=[0] * 13
#For each sentiment, run a sql query that counts the number of tweets for which the sentiment score is greater than 60%
#Store the data in the array
for i, sentiment in enumerate(tweets.columns[-13:]):
sentimentDistribution[i]=__sqlContext.sql("SELECT count(*) as sentCount FROM tweets where " + sentiment + " > 60")\
.collect()[0].sentCount
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
ind=np.arange(13)
width = 0.35
bar = plt.bar(ind, sentimentDistribution, width, color='g', label = "distributions")
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2.5, plSize[1]*2) )
plt.ylabel('Tweet count')
plt.xlabel('Tone')
plt.title('Distribution of tweets by sentiments > 60%')
plt.xticks(ind+width, tweets.columns[-13:])
plt.legend()
plt.show()
from operator import add
import re
tagsRDD = tweets.flatMap( lambda t: re.split("\s", t.text))\
.filter( lambda word: word.startswith("#") )\
.map( lambda word : (word, 1 ))\
.reduceByKey(add, 10).map(lambda (a,b): (b,a)).sortByKey(False).map(lambda (a,b):(b,a))
top10tags = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2, plSize[1]*2) )
labels = [i[0] for i in top10tags]
sizes = [int(i[1]) for i in top10tags]
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', "beige", "paleturquoise", "pink", "lightyellow", "coral"]
plt.pie(sizes, labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90)
plt.axis('equal')
plt.show()
cols = tweets.columns[-13:]
def expand( t ):
ret = []
for s in [i[0] for i in top10tags]:
if ( s in t.text ):
for tone in cols:
ret += [s.replace(':','').replace('-','') + u"-" + unicode(tone) + ":" + unicode(getattr(t, tone))]
return ret
def makeList(l):
return l if isinstance(l, list) else [l]
#Create RDD from tweets dataframe
tagsRDD = tweets.map(lambda t: t )
#Filter to only keep the entries that are in top10tags
tagsRDD = tagsRDD.filter( lambda t: any(s in t.text for s in [i[0] for i in top10tags] ) )
#Create a flatMap using the expand function defined above, this will be used to collect all the scores
#for a particular tag with the following format: Tag-Tone-ToneScore
tagsRDD = tagsRDD.flatMap( expand )
#Create a map indexed by Tag-Tone keys
tagsRDD = tagsRDD.map( lambda fullTag : (fullTag.split(":")[0], float( fullTag.split(":")[1]) ))
#Call combineByKey to format the data as follow
#Key=Tag-Tone
#Value=(count, sum_of_all_score_for_this_tone)
tagsRDD = tagsRDD.combineByKey((lambda x: (x,1)),
(lambda x, y: (x[0] + y, x[1] + 1)),
(lambda x, y: (x[0] + y[0], x[1] + y[1])))
#ReIndex the map to have the key be the Tag and value be (Tone, Average_score) tuple
#Key=Tag
#Value=(Tone, average_score)
tagsRDD = tagsRDD.map(lambda (key, ab): (key.split("-")[0], (key.split("-")[1], round(ab[0]/ab[1], 2))))
#Reduce the map on the Tag key, value becomes a list of (Tone,average_score) tuples
tagsRDD = tagsRDD.reduceByKey( lambda x, y : makeList(x) + makeList(y) )
#Sort the (Tone,average_score) tuples alphabetically by Tone
tagsRDD = tagsRDD.mapValues( lambda x : sorted(x) )
#Format the data as expected by the plotting code in the next cell.
#map the Values to a tuple as follow: ([list of tone], [list of average score])
#e.g. #someTag:([u'Agreeableness', u'Analytical', u'Anger', u'Cheerfulness', u'Confident', u'Conscientiousness', u'Negative', u'Openness', u'Tentative'], [1.0, 0.0, 0.0, 1.0, 0.0, 0.48, 0.0, 0.02, 0.0])
tagsRDD = tagsRDD.mapValues( lambda x : ([elt[0] for elt in x],[elt[1] for elt in x]) )
#Use custom sort function to sort the entries by order of appearance in top10tags
def customCompare( key ):
for (k,v) in top10tags:
if k == key:
return v
return 0
tagsRDD = tagsRDD.sortByKey(ascending=False, numPartitions=None, keyfunc = customCompare)
#Take the mean tone scores for the top 10 tags
top10tagsMeanScores = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*3, plSize[1]*2) )
top5tagsMeanScores = top10tagsMeanScores[:5]
width = 0
ind=np.arange(13)
(a,b) = top5tagsMeanScores[0]
labels=b[0]
colors = ["beige", "paleturquoise", "pink", "lightyellow", "coral", "lightgreen", "gainsboro", "aquamarine","c"]
idx=0
for key, value in top5tagsMeanScores:
plt.bar(ind + width, value[1], 0.15, color=colors[idx], label=key)
width += 0.15
idx += 1
plt.xticks(ind+0.3, labels)
plt.ylabel('AVERAGE SCORE')
plt.xlabel('TONES')
plt.title('Breakdown of top hashtags by sentiment tones')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='center',ncol=5, mode="expand", borderaxespad=0.)
plt.show()
Explanation: You can now use the __df variable as a regular Python dataframe
End of explanation |
9,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Access a Database with Python - Iris Dataset
The Iris dataset is a popular dataset, especially in the Machine Learning community; it is a set of features of 150 Iris flowers (50 per species) and their classification into 3 species.
It is often used to introduce classification Machine Learning algorithms.
First let's download the dataset in SQLite format from Kaggle
Step1: Access the Database with the sqlite3 Package
We can use the sqlite3 package from the Python standard library to connect to the sqlite database
Step2: A sqlite3.Cursor object is our interface to the database, mostly through the execute method that allows us to run any SQL query on our database.
First of all we can get a list of all the tables saved into the database, this is done by reading the column name from the sqlite_master metadata table with
Step3: a shortcut to directly execute the query and gather the results is the fetchall method
Step4: Notice
Step5: It is evident that the interface provided by sqlite3 is low-level; for data exploration purposes we would like to directly import data into a more user-friendly library like pandas.
Import data from a database to pandas
Step6: pandas.read_sql_query takes a SQL query and a connection object and imports the data into a DataFrame, also keeping the same data types of the database columns. pandas provides a lot of the same functionality of SQL with a more user-friendly interface.
However, sqlite3 is extremely useful for downselecting data before importing them in pandas.
For example you might have 1 TB of data in a table stored in a database on a server machine. If you are interested in working on a subset of the data based on some criterion, it would be impractical to first load all the data into pandas and then filter it; instead, we should tell the database to perform the filtering and load only the downsized dataset into pandas. | Python Code:
import os
data_iris_folder_content = os.listdir("data/iris")
error_message = "Error: sqlite file not available, check instructions above to download it"
assert "database.sqlite" in data_iris_folder_content, error_message
Explanation: Access a Database with Python - Iris Dataset
The Iris dataset is a popular dataset, especially in the Machine Learning community; it is a set of features of 150 Iris flowers (50 per species) and their classification into 3 species.
It is often used to introduce classification Machine Learning algorithms.
First let's download the dataset in SQLite format from Kaggle:
https://www.kaggle.com/uciml/iris/
Download database.sqlite and save it in the data/iris folder.
<p><img src="https://upload.wikimedia.org/wikipedia/commons/4/49/Iris_germanica_%28Purple_bearded_Iris%29%2C_Wakehurst_Place%2C_UK_-_Diliff.jpg" alt="Iris germanica (Purple bearded Iris), Wakehurst Place, UK - Diliff.jpg" height="145" width="114"></p>
<p><br> From <a href="https://commons.wikimedia.org/wiki/File:Iris_germanica_(Purple_bearded_Iris),_Wakehurst_Place,_UK_-_Diliff.jpg#/media/File:Iris_germanica_(Purple_bearded_Iris),_Wakehurst_Place,_UK_-_Diliff.jpg">Wikimedia</a>, by <a href="//commons.wikimedia.org/wiki/User:Diliff" title="User:Diliff">Diliff</a> - <span class="int-own-work" lang="en">Own work</span>, <a href="http://creativecommons.org/licenses/by-sa/3.0" title="Creative Commons Attribution-Share Alike 3.0">CC BY-SA 3.0</a>, <a href="https://commons.wikimedia.org/w/index.php?curid=33037509">Link</a></p>
First let's check that the sqlite database is available and display an error message if the file is not available (assert checks if the expression is True, otherwise throws AssertionError with the error message string provided):
End of explanation
import sqlite3
conn = sqlite3.connect('data/iris/database.sqlite')
cursor = conn.cursor()
type(cursor)
Explanation: Access the Database with the sqlite3 Package
We can use the sqlite3 package from the Python standard library to connect to the sqlite database:
End of explanation
for row in cursor.execute("SELECT name FROM sqlite_master"):
print(row)
Explanation: A sqlite3.Cursor object is our interface to the database, mostly through the execute method that allows us to run any SQL query on our database.
First of all we can get a list of all the tables saved into the database, this is done by reading the column name from the sqlite_master metadata table with:
SELECT name FROM sqlite_master
The output of the execute method is an iterator that can be used in a for loop to print the value of each row.
End of explanation
cursor.execute("SELECT name FROM sqlite_master").fetchall()
Explanation: a shortcut to directly execute the query and gather the results is the fetchall method:
End of explanation
sample_data = cursor.execute("SELECT * FROM Iris LIMIT 20").fetchall()
print(type(sample_data))
sample_data
[row[0] for row in cursor.description]
Explanation: Notice: this way of finding the available tables in a database is specific to sqlite, other databases like MySQL or PostgreSQL have different syntax.
Then we can execute standard SQL query on the database, SQL is a language designed to interact with data stored in a relational database. It has a standard specification, therefore the commands below work on any database.
If you need to connect to another database, you would use another package instead of sqlite3, for example:
MySQL Connector for MySQL
Psycopg for PostgreSQL
pymssql for Microsoft MS SQL
then you would connect to the database using its specific host, port and authentication credentials, but you could then execute the exact same SQL statements (see the short sketch after this explanation).
Let's take a look, for example, at the first 20 rows in the Iris table:
End of explanation
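As a minimal, hedged sketch of that portability: the same query pattern could run through a different driver, for example psycopg2 against a PostgreSQL server. The host, database name and credentials below are invented for illustration and assume a PostgreSQL database that already holds the same Iris table.
import psycopg2  # hypothetical alternative driver, not used elsewhere in this notebook
pg_conn = psycopg2.connect(host="localhost", dbname="iris", user="demo", password="demo")
pg_cursor = pg_conn.cursor()
pg_cursor.execute("SELECT * FROM Iris LIMIT 20")
print(pg_cursor.fetchall())
pg_conn.close()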
import pandas as pd
iris_data = pd.read_sql_query("SELECT * FROM Iris", conn)
iris_data.head()
iris_data.dtypes
Explanation: It is evident that the interface provided by sqlite3 is low-level; for data exploration purposes we would like to directly import data into a more user-friendly library like pandas.
Import data from a database to pandas
End of explanation
iris_setosa_data = pd.read_sql_query("SELECT * FROM Iris WHERE Species == 'Iris-setosa'", conn)
iris_setosa_data
print(iris_setosa_data.shape)
print(iris_data.shape)
Explanation: pandas.read_sql_query takes a SQL query and a connection object and imports the data into a DataFrame, also keeping the same data types of the database columns. pandas provides a lot of the same functionality of SQL with a more user-friendly interface.
However, sqlite3 is extremely useful for downselecting data before importing them in pandas.
For example you might have 1 TB of data in a table stored in a database on a server machine. If you are interested in working on a subset of the data based on some criterion, it would be impractical to first load all the data into pandas and then filter it; instead, we should tell the database to perform the filtering and load only the downsized dataset into pandas.
End of explanation |
9,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Advanced automatic differentiation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Controlling gradient recording
In the automatic differentiation guide you saw how to control which variables and tensors are watched by the tape while building the gradient calculation.
The tape also has methods to manipulate the recording.
Stop recording
If you wish to stop recording gradients, you can use tf.GradientTape.stop_recording to temporarily suspend recording.
This may be useful to reduce overhead if you do not wish to differentiate a complicated operation in the middle of your model. This could include calculating a metric or an intermediate result
Step3: Reset/start recording from scratch
If you wish to start over entirely, use tf.GradientTape.reset. Simply exiting the gradient tape block and restarting is usually easier to read, but you can use the reset method when exiting the tape block is difficult or impossible.
Step4: Stop gradient flow with precision
In contrast to the global tape controls above, the tf.stop_gradient function is much more precise. It can be used to stop gradients from flowing along a particular path, without needing access to the tape itself
Step5: Custom gradients
In some cases, you may want to control exactly how gradients are calculated rather than using the default. These situations include
Step6: Refer to the tf.custom_gradient decorator API docs for more details.
Custom gradients in SavedModel
Note
Step7: A note about the above example
Step8: Higher-order gradients
Operations inside of the tf.GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well.
For example
Step9: While that does give you the second derivative of a scalar function, this pattern does not generalize to produce a Hessian matrix, since tf.GradientTape.gradient only computes the gradient of a scalar. To construct a Hessian matrix, go to the Hessian example under the Jacobian section.
"Nested calls to tf.GradientTape.gradient" is a good pattern when you are calculating a scalar from a gradient, and then the resulting scalar acts as a source for a second gradient calculation, as in the following example.
Example
Step10: Jacobians
All the previous examples took the gradients of a scalar target with respect to some source tensor(s).
The Jacobian matrix represents the gradients of a vector valued function. Each row contains the gradient of one of the vector's elements.
The tf.GradientTape.jacobian method allows you to efficiently calculate a Jacobian matrix.
Note that
Step11: When you take the Jacobian with respect to a scalar, the result has the shape of the target and gives the gradient of each element with respect to the source
Step12: Tensor source
Whether the input is scalar or tensor, tf.GradientTape.jacobian efficiently calculates the gradient of each element of the source with respect to each element of the target(s).
For example, the output of this layer has a shape of (7, 10)
Step13: And the layer's kernel's shape is (5, 10)
Step14: The shape of the Jacobian of the output with respect to the kernel is those two shapes concatenated together
Step15: If you sum over the target's dimensions, you're left with the gradient of the sum that would have been calculated by tf.GradientTape.gradient
Step16: <a id="hessian"></a>
Example
Step17: To use this Hessian for a Newton's method step, you would first flatten out its axes into a matrix, and flatten out the gradient into a vector
Step18: The Hessian matrix should be symmetric
Step19: The Newton's method update step is shown below
Step20: Note
Step21: While this is relatively simple for a single tf.Variable, applying this to a non-trivial model would require careful concatenation and slicing to produce a full Hessian across multiple variables.
Batch Jacobian
In some cases, you want to take the Jacobian of each of a stack of targets with respect to a stack of sources, where the Jacobians for each target-source pair are independent.
For example, here the input x is shaped (batch, ins) and the output y is shaped (batch, outs)
Step22: The full Jacobian of y with respect to x has a shape of (batch, ins, batch, outs), even if you only want (batch, ins, outs)
Step23: If the gradients of each item in the stack are independent, then every (batch, batch) slice of this tensor is a diagonal matrix
Step24: To get the desired result, you can sum over the duplicate batch dimension, or else select the diagonals using tf.einsum
Step25: It would be much more efficient to do the calculation without the extra dimension in the first place. The tf.GradientTape.batch_jacobian method does exactly that
Step26: Caution
Step27: In this case, batch_jacobian still runs and returns something with the expected shape, but its contents have an unclear meaning | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (8, 6)
Explanation: Advanced automatic differentiation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/advanced_autodiff"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/advanced_autodiff.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/advanced_autodiff.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/advanced_autodiff.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
The Introduction to gradients and automatic differentiation guide includes everything required to calculate gradients in TensorFlow. This guide focuses on deeper, less common features of the tf.GradientTape API.
Setup
End of explanation
x = tf.Variable(2.0)
y = tf.Variable(3.0)
with tf.GradientTape() as t:
x_sq = x * x
with t.stop_recording():
y_sq = y * y
z = x_sq + y_sq
grad = t.gradient(z, {'x': x, 'y': y})
print('dz/dx:', grad['x']) # 2*x => 4
print('dz/dy:', grad['y'])
Explanation: Controlling gradient recording
In the automatic differentiation guide you saw how to control which variables and tensors are watched by the tape while building the gradient calculation.
The tape also has methods to manipulate the recording.
Stop recording
If you wish to stop recording gradients, you can use tf.GradientTape.stop_recording to temporarily suspend recording.
This may be useful to reduce overhead if you do not wish to differentiate a complicated operation in the middle of your model. This could include calculating a metric or an intermediate result:
End of explanation
x = tf.Variable(2.0)
y = tf.Variable(3.0)
reset = True
with tf.GradientTape() as t:
y_sq = y * y
if reset:
# Throw out all the tape recorded so far.
t.reset()
z = x * x + y_sq
grad = t.gradient(z, {'x': x, 'y': y})
print('dz/dx:', grad['x']) # 2*x => 4
print('dz/dy:', grad['y'])
Explanation: Reset/start recording from scratch
If you wish to start over entirely, use tf.GradientTape.reset. Simply exiting the gradient tape block and restarting is usually easier to read, but you can use the reset method when exiting the tape block is difficult or impossible.
End of explanation
x = tf.Variable(2.0)
y = tf.Variable(3.0)
with tf.GradientTape() as t:
y_sq = y**2
z = x**2 + tf.stop_gradient(y_sq)
grad = t.gradient(z, {'x': x, 'y': y})
print('dz/dx:', grad['x']) # 2*x => 4
print('dz/dy:', grad['y'])
Explanation: Stop gradient flow with precision
In contrast to the global tape controls above, the tf.stop_gradient function is much more precise. It can be used to stop gradients from flowing along a particular path, without needing access to the tape itself:
End of explanation
# Establish an identity operation, but clip during the gradient pass.
@tf.custom_gradient
def clip_gradients(y):
def backward(dy):
return tf.clip_by_norm(dy, 0.5)
return y, backward
v = tf.Variable(2.0)
with tf.GradientTape() as t:
output = clip_gradients(v * v)
print(t.gradient(output, v)) # calls "backward", which clips 4 to 2
Explanation: Custom gradients
In some cases, you may want to control exactly how gradients are calculated rather than using the default. These situations include:
There is no defined gradient for a new op you are writing.
The default calculations are numerically unstable (a short sketch of this case follows below).
You wish to cache an expensive computation from the forward pass.
You want to modify a value (for example, using tf.clip_by_value or tf.math.round) without modifying the gradient.
For the first case, to write a new op you can use tf.RegisterGradient to set up your own (refer to the API docs for details). (Note that the gradient registry is global, so change it with caution.)
For the latter three cases, you can use tf.custom_gradient.
Here is an example that applies tf.clip_by_norm to the intermediate gradient:
End of explanation
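As a small, hedged illustration of the "numerically unstable" case listed above (it mirrors the log1pexp example from the tf.custom_gradient API docs), a custom gradient can substitute an algebraically equivalent but stable expression:
@tf.custom_gradient
def log1pexp(x):
  e = tf.exp(x)
  def grad(upstream):
    # Equivalent to e / (1 + e), but avoids inf / inf for large x.
    return upstream * (1 - 1 / (1 + e))
  return tf.math.log(1 + e), grad

x = tf.constant(100.0)
with tf.GradientTape() as tape:
  tape.watch(x)
  y = log1pexp(x)
print(tape.gradient(y, x).numpy())  # 1.0, where the naive gradient would be nan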
class MyModule(tf.Module):
@tf.function(input_signature=[tf.TensorSpec(None)])
def call_custom_grad(self, x):
return clip_gradients(x)
model = MyModule()
tf.saved_model.save(
model,
'saved_model',
options=tf.saved_model.SaveOptions(experimental_custom_gradients=True))
# The loaded gradients will be the same as the above example.
v = tf.Variable(2.0)
loaded = tf.saved_model.load('saved_model')
with tf.GradientTape() as t:
output = loaded.call_custom_grad(v * v)
print(t.gradient(output, v))
Explanation: Refer to the tf.custom_gradient decorator API docs for more details.
Custom gradients in SavedModel
Note: This feature is available from TensorFlow 2.6.
Custom gradients can be saved to SavedModel by using the option tf.saved_model.SaveOptions(experimental_custom_gradients=True).
To be saved into the SavedModel, the gradient function must be traceable (to learn more, check out the Better performance with tf.function guide).
End of explanation
x0 = tf.constant(0.0)
x1 = tf.constant(0.0)
with tf.GradientTape() as tape0, tf.GradientTape() as tape1:
tape0.watch(x0)
tape1.watch(x1)
y0 = tf.math.sin(x0)
y1 = tf.nn.sigmoid(x1)
y = y0 + y1
ys = tf.reduce_sum(y)
tape0.gradient(ys, x0).numpy() # cos(x) => 1.0
tape1.gradient(ys, x1).numpy() # sigmoid(x1)*(1-sigmoid(x1)) => 0.25
Explanation: A note about the above example: If you try replacing the above code with tf.saved_model.SaveOptions(experimental_custom_gradients=False), the gradient will still produce the same result on loading. The reason is that the gradient registry still contains the custom gradient used in the function call_custom_op. However, if you restart the runtime after saving without custom gradients, running the loaded model under the tf.GradientTape will throw the error: LookupError: No gradient defined for operation 'IdentityN' (op type: IdentityN).
Multiple tapes
Multiple tapes interact seamlessly.
For example, here each tape watches a different set of tensors:
End of explanation
x = tf.Variable(1.0) # Create a Tensorflow variable initialized to 1.0
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
y = x * x * x
# Compute the gradient inside the outer `t2` context manager
# which means the gradient computation is differentiable as well.
dy_dx = t1.gradient(y, x)
d2y_dx2 = t2.gradient(dy_dx, x)
print('dy_dx:', dy_dx.numpy()) # 3 * x**2 => 3.0
print('d2y_dx2:', d2y_dx2.numpy()) # 6 * x => 6.0
Explanation: Higher-order gradients
Operations inside of the tf.GradientTape context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well.
For example:
End of explanation
x = tf.random.normal([7, 5])
layer = tf.keras.layers.Dense(10, activation=tf.nn.relu)
with tf.GradientTape() as t2:
# The inner tape only takes the gradient with respect to the input,
# not the variables.
with tf.GradientTape(watch_accessed_variables=False) as t1:
t1.watch(x)
y = layer(x)
out = tf.reduce_sum(layer(x)**2)
# 1. Calculate the input gradient.
g1 = t1.gradient(out, x)
# 2. Calculate the magnitude of the input gradient.
g1_mag = tf.norm(g1)
# 3. Calculate the gradient of the magnitude with respect to the model.
dg1_mag = t2.gradient(g1_mag, layer.trainable_variables)
[var.shape for var in dg1_mag]
Explanation: While that does give you the second derivative of a scalar function, this pattern does not generalize to produce a Hessian matrix, since tf.GradientTape.gradient only computes the gradient of a scalar. To construct a Hessian matrix, go to the Hessian example under the Jacobian section.
"Nested calls to tf.GradientTape.gradient" is a good pattern when you are calculating a scalar from a gradient, and then the resulting scalar acts as a source for a second gradient calculation, as in the following example.
Example: Input gradient regularization
Many models are susceptible to "adversarial examples". This collection of techniques modifies the model's input to confuse the model's output. The simplest implementation—such as the Adversarial example using the Fast Gradient Signed Method attack—takes a single step along the gradient of the output with respect to the input; the "input gradient".
One technique to increase robustness to adversarial examples is input gradient regularization (Finlay & Oberman, 2019), which attempts to minimize the magnitude of the input gradient. If the input gradient is small, then the change in the output should be small too.
Below is a naive implementation of input gradient regularization. The implementation is:
Calculate the gradient of the output with respect to the input using an inner tape.
Calculate the magnitude of that input gradient.
Calculate the gradient of that magnitude with respect to the model.
End of explanation
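For completeness, the single gradient-sign step described above can be sketched directly from the quantities in the previous cell; epsilon is an arbitrary illustrative value, not a tuned one.
epsilon = 0.1  # illustrative step size
x_adv = x + epsilon * tf.sign(g1)
print(x_adv.shape)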
x = tf.linspace(-10.0, 10.0, 200+1)
delta = tf.Variable(0.0)
with tf.GradientTape() as tape:
y = tf.nn.sigmoid(x+delta)
dy_dx = tape.jacobian(y, delta)
Explanation: Jacobians
All the previous examples took the gradients of a scalar target with respect to some source tensor(s).
The Jacobian matrix represents the gradients of a vector valued function. Each row contains the gradient of one of the vector's elements.
The tf.GradientTape.jacobian method allows you to efficiently calculate a Jacobian matrix.
Note that:
Like gradient: The sources argument can be a tensor or a container of tensors.
Unlike gradient: The target tensor must be a single tensor.
Scalar source
As a first example, here is the Jacobian of a vector-target with respect to a scalar-source.
End of explanation
print(y.shape)
print(dy_dx.shape)
plt.plot(x.numpy(), y, label='y')
plt.plot(x.numpy(), dy_dx, label='dy/dx')
plt.legend()
_ = plt.xlabel('x')
Explanation: When you take the Jacobian with respect to a scalar, the result has the shape of the target and gives the gradient of each element with respect to the source:
End of explanation
x = tf.random.normal([7, 5])
layer = tf.keras.layers.Dense(10, activation=tf.nn.relu)
with tf.GradientTape(persistent=True) as tape:
y = layer(x)
y.shape
Explanation: Tensor source
Whether the input is scalar or tensor, tf.GradientTape.jacobian efficiently calculates the gradient of each element of the source with respect to each element of the target(s).
For example, the output of this layer has a shape of (7, 10):
End of explanation
layer.kernel.shape
Explanation: And the layer's kernel's shape is (5, 10):
End of explanation
j = tape.jacobian(y, layer.kernel)
j.shape
Explanation: The shape of the Jacobian of the output with respect to the kernel is those two shapes concatenated together:
End of explanation
g = tape.gradient(y, layer.kernel)
print('g.shape:', g.shape)
j_sum = tf.reduce_sum(j, axis=[0, 1])
delta = tf.reduce_max(abs(g - j_sum)).numpy()
assert delta < 1e-3
print('delta:', delta)
Explanation: If you sum over the target's dimensions, you're left with the gradient of the sum that would have been calculated by tf.GradientTape.gradient:
End of explanation
x = tf.random.normal([7, 5])
layer1 = tf.keras.layers.Dense(8, activation=tf.nn.relu)
layer2 = tf.keras.layers.Dense(6, activation=tf.nn.relu)
with tf.GradientTape() as t2:
with tf.GradientTape() as t1:
x = layer1(x)
x = layer2(x)
loss = tf.reduce_mean(x**2)
g = t1.gradient(loss, layer1.kernel)
h = t2.jacobian(g, layer1.kernel)
print(f'layer.kernel.shape: {layer1.kernel.shape}')
print(f'h.shape: {h.shape}')
Explanation: <a id="hessian"></a>
Example: Hessian
While tf.GradientTape doesn't give an explicit method for constructing a Hessian matrix it's possible to build one using the tf.GradientTape.jacobian method.
Note: The Hessian matrix contains N**2 parameters. For this and other reasons it is not practical for most models. This example is included more as a demonstration of how to use the tf.GradientTape.jacobian method, and is not an endorsement of direct Hessian-based optimization. A Hessian-vector product can be calculated efficiently with nested tapes, and is a much more efficient approach to second-order optimization.
End of explanation
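As a hedged sketch of the Hessian-vector product mentioned in the note above (the variable, loss and shapes here are invented purely for illustration), nested tapes can compute H @ v without ever materializing H:
w = tf.Variable(tf.random.normal([5]))
vector = tf.random.normal([5])  # the v in H @ v
with tf.GradientTape() as outer_tape:
  with tf.GradientTape() as inner_tape:
    loss = tf.reduce_sum(tf.sin(w) ** 2)
  grads = inner_tape.gradient(loss, w)
  grads_dot_v = tf.reduce_sum(grads * vector)
hvp = outer_tape.gradient(grads_dot_v, w)  # same shape as w; equals H @ v
print(hvp.shape)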
n_params = tf.reduce_prod(layer1.kernel.shape)
g_vec = tf.reshape(g, [n_params, 1])
h_mat = tf.reshape(h, [n_params, n_params])
Explanation: To use this Hessian for a Newton's method step, you would first flatten out its axes into a matrix, and flatten out the gradient into a vector:
End of explanation
def imshow_zero_center(image, **kwargs):
lim = tf.reduce_max(abs(image))
plt.imshow(image, vmin=-lim, vmax=lim, cmap='seismic', **kwargs)
plt.colorbar()
imshow_zero_center(h_mat)
Explanation: The Hessian matrix should be symmetric:
End of explanation
eps = 1e-3
eye_eps = tf.eye(h_mat.shape[0])*eps
Explanation: The Newton's method update step is shown below:
End of explanation
# X(k+1) = X(k) - (∇²f(X(k)))^-1 @ ∇f(X(k))
# h_mat = ∇²f(X(k))
# g_vec = ∇f(X(k))
update = tf.linalg.solve(h_mat + eye_eps, g_vec)
# Reshape the update and apply it to the variable.
_ = layer1.kernel.assign_sub(tf.reshape(update, layer1.kernel.shape))
Explanation: Note: Don't actually invert the matrix.
End of explanation
x = tf.random.normal([7, 5])
layer1 = tf.keras.layers.Dense(8, activation=tf.nn.elu)
layer2 = tf.keras.layers.Dense(6, activation=tf.nn.elu)
with tf.GradientTape(persistent=True, watch_accessed_variables=False) as tape:
tape.watch(x)
y = layer1(x)
y = layer2(y)
y.shape
Explanation: While this is relatively simple for a single tf.Variable, applying this to a non-trivial model would require careful concatenation and slicing to produce a full Hessian across multiple variables.
Batch Jacobian
In some cases, you want to take the Jacobian of each of a stack of targets with respect to a stack of sources, where the Jacobians for each target-source pair are independent.
For example, here the input x is shaped (batch, ins) and the output y is shaped (batch, outs):
End of explanation
j = tape.jacobian(y, x)
j.shape
Explanation: The full Jacobian of y with respect to x has a shape of (batch, ins, batch, outs), even if you only want (batch, ins, outs):
End of explanation
imshow_zero_center(j[:, 0, :, 0])
_ = plt.title('A (batch, batch) slice')
def plot_as_patches(j):
# Reorder axes so the diagonals will each form a contiguous patch.
j = tf.transpose(j, [1, 0, 3, 2])
# Pad in between each patch.
lim = tf.reduce_max(abs(j))
j = tf.pad(j, [[0, 0], [1, 1], [0, 0], [1, 1]],
constant_values=-lim)
# Reshape to form a single image.
s = j.shape
j = tf.reshape(j, [s[0]*s[1], s[2]*s[3]])
imshow_zero_center(j, extent=[-0.5, s[2]-0.5, s[0]-0.5, -0.5])
plot_as_patches(j)
_ = plt.title('All (batch, batch) slices are diagonal')
Explanation: If the gradients of each item in the stack are independent, then every (batch, batch) slice of this tensor is a diagonal matrix:
End of explanation
j_sum = tf.reduce_sum(j, axis=2)
print(j_sum.shape)
j_select = tf.einsum('bxby->bxy', j)
print(j_select.shape)
Explanation: To get the desired result, you can sum over the duplicate batch dimension, or else select the diagonals using tf.einsum:
End of explanation
jb = tape.batch_jacobian(y, x)
jb.shape
error = tf.reduce_max(abs(jb - j_sum))
assert error < 1e-3
print(error.numpy())
Explanation: It would be much more efficient to do the calculation without the extra dimension in the first place. The tf.GradientTape.batch_jacobian method does exactly that:
End of explanation
x = tf.random.normal([7, 5])
layer1 = tf.keras.layers.Dense(8, activation=tf.nn.elu)
bn = tf.keras.layers.BatchNormalization()
layer2 = tf.keras.layers.Dense(6, activation=tf.nn.elu)
with tf.GradientTape(persistent=True, watch_accessed_variables=False) as tape:
tape.watch(x)
y = layer1(x)
y = bn(y, training=True)
y = layer2(y)
j = tape.jacobian(y, x)
print(f'j.shape: {j.shape}')
plot_as_patches(j)
_ = plt.title('These slices are not diagonal')
_ = plt.xlabel("Don't use `batch_jacobian`")
Explanation: Caution: tf.GradientTape.batch_jacobian only verifies that the first dimension of the source and target match. It doesn't check that the gradients are actually independent. It's up to you to make sure you only use batch_jacobian where it makes sense. For example, adding a tf.keras.layers.BatchNormalization destroys the independence, since it normalizes across the batch dimension:
End of explanation
jb = tape.batch_jacobian(y, x)
print(f'jb.shape: {jb.shape}')
Explanation: In this case, batch_jacobian still runs and returns something with the expected shape, but its contents have an unclear meaning:
End of explanation |
9,341 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enter State Farm
Step1: Setup batches
Step2: Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)
Step3: Re-run sample experiments on full dataset
We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models.
Single conv layer
Step4: Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results.
Data augmentation
Step5: I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation.
Four conv/pooling pairs + dropout
Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help.
Step6: This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however...
Imagenet conv features
Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)
Step7: Batchnorm dense layers on pretrained conv layers
Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers.
Step8: Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model.
Pre-computed data augmentation + dropout
We'll use our usual data augmentation parameters
Step9: We use those to create a dataset of convolutional features 5x bigger than the training set.
Step10: Let's include the real training data as well in its non-augmented form.
Step11: Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.
Step12: Based on some experiments the previous model works well, with bigger dense layers.
Step13: Now we can train the model as usual, with pre-computed augmented data.
Step14: Looks good - let's save those weights.
Step15: Pseudo labeling
We're going to try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. At a later date we'll try using the test set.
To do this, we simply calculate the predictions of our model...
Step16: ...concatenate them with our training labels...
Step17: ...and fine-tune our model using that data.
Step18: That's a distinct improvement - even though the validation set isn't very big. This looks encouraging for when we try this on the test set.
Step19: Submit
We'll find a good clipping amount using the validation set, prior to submitting.
Step20: This gets 0.534 on the leaderboard.
The "things that didn't really work" section
You can safely ignore everything from here on, because they didn't really help.
Finetune some conv layers too
Step21: Ensembling | Python Code:
from theano.sandbox import cuda
cuda.use('gpu0')
%matplotlib inline
from __future__ import print_function, division
path = "data/state/"
#path = "data/state/sample/"
import utils; reload(utils)
from utils import *
from IPython.display import FileLink
batch_size=64
Explanation: Enter State Farm
End of explanation
batches = get_batches(path+'train', batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
Explanation: Setup batches
End of explanation
trn = get_data(path+'train')
val = get_data(path+'valid')
save_array(path+'results/val.dat', val)
save_array(path+'results/trn.dat', trn)
val = load_array(path+'results/val.dat')
trn = load_array(path+'results/trn.dat')
Explanation: Rather than using batches, we could just import all the data into an array to save some processing time. (In most examples I'm using the batches, however - just because that's how I happened to start out.)
End of explanation
def conv1(batches):
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
return model
model = conv1(batches)
Explanation: Re-run sample experiments on full dataset
We should find that everything that worked on the sample (see statefarm-sample.ipynb), works on the full dataset too. Only better! Because now we have more data. So let's see how they go - the models in this section are exact copies of the sample notebook models.
Single conv layer
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = conv1(batches)
model.optimizer.lr = 0.0001
model.fit_generator(batches, batches.nb_sample, nb_epoch=15, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: Interestingly, with no regularization or augmentation we're getting some reasonable results from our simple convolutional model. So with augmentation, we hopefully will see some very good results.
Data augmentation
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Convolution2D(128,3,3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(200, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr=0.00001
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: I'm shocked by how good these results are! We're regularly seeing 75-80% accuracy on the validation set, which puts us into the top third or better of the competition. With such a simple model and no dropout or semi-supervised learning, this really speaks to the power of this approach to data augmentation.
Four conv/pooling pairs + dropout
Unfortunately, the results are still very unstable - the validation accuracy jumps from epoch to epoch. Perhaps a deeper model with some dropout would help.
End of explanation
vgg = Vgg16()
model=vgg.model
last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# batches shuffle must be set to False when pre-computing features
batches = get_batches(path+'train', batch_size=batch_size, shuffle=False)
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
conv_feat = conv_model.predict_generator(batches, batches.nb_sample)
conv_val_feat = conv_model.predict_generator(val_batches, val_batches.nb_sample)
conv_test_feat = conv_model.predict_generator(test_batches, test_batches.nb_sample)
save_array(path+'results/conv_val_feat.dat', conv_val_feat)
save_array(path+'results/conv_test_feat.dat', conv_test_feat)
save_array(path+'results/conv_feat.dat', conv_feat)
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
conv_val_feat.shape
Explanation: This is looking quite a bit better - the accuracy is similar, but the stability is higher. There's still some way to go however...
Imagenet conv features
Since we have so little data, and it is similar to imagenet images (full color photos), using pre-trained VGG weights is likely to be helpful - in fact it seems likely that we won't need to fine-tune the convolutional layer weights much, if at all. So we can pre-compute the output of the last convolutional layer, as we did in lesson 3 when we experimented with dropout. (However this means that we can't use full data augmentation, since we can't pre-compute something that changes every image.)
End of explanation
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p=0.8
bn_model = Sequential(get_bn_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(conv_feat, trn_labels, batch_size=batch_size, nb_epoch=2,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path+'models/conv8.h5')
Explanation: Batchnorm dense layers on pretrained conv layers
Since we've pre-computed the output of the last convolutional layer, we need to create a network that takes that as input, and predicts our 10 classes. Let's try using a simplified version of VGG's dense layers.
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
da_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)
Explanation: Looking good! Let's try pre-computing 5 epochs worth of augmented data, so we can experiment with combining dropout and augmentation on the pre-trained model.
Pre-computed data augmentation + dropout
We'll use our usual data augmentation parameters:
End of explanation
da_conv_feat = conv_model.predict_generator(da_batches, da_batches.nb_sample*5)
save_array(path+'results/da_conv_feat2.dat', da_conv_feat)
da_conv_feat = load_array(path+'results/da_conv_feat2.dat')
Explanation: We use those to create a dataset of convolutional features 5x bigger than the training set.
End of explanation
da_conv_feat = np.concatenate([da_conv_feat, conv_feat])
Explanation: Let's include the real training data as well in its non-augmented form.
End of explanation
da_trn_labels = np.concatenate([trn_labels]*6)
Explanation: Since we've now got a dataset 6x bigger than before, we'll need to copy our labels 6 times too.
End of explanation
def get_bn_da_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(10, activation='softmax')
]
p=0.8
bn_model = Sequential(get_bn_da_layers(p))
bn_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
Explanation: Based on some experiments the previous model works well, with bigger dense layers.
End of explanation
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.0001
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
Explanation: Now we can train the model as usual, with pre-computed augmented data.
End of explanation
bn_model.save_weights(path+'models/da_conv8_1.h5')
Explanation: Looks good - let's save those weights.
End of explanation
val_pseudo = bn_model.predict(conv_val_feat, batch_size=batch_size)
Explanation: Pseudo labeling
We're going to try using a combination of pseudo labeling and knowledge distillation to allow us to use unlabeled data (i.e. do semi-supervised learning). For our initial experiment we'll use the validation set as the unlabeled data, so that we can see that it is working without using the test set. At a later date we'll try using the test set.
To do this, we simply calculate the predictions of our model...
End of explanation
comb_pseudo = np.concatenate([da_trn_labels, val_pseudo])
comb_feat = np.concatenate([da_conv_feat, conv_val_feat])
Explanation: ...concatenate them with our training labels...
End of explanation
bn_model.load_weights(path+'models/da_conv8_1.h5')
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.00001
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
Explanation: ...and fine-tune our model using that data.
End of explanation
bn_model.save_weights(path+'models/bn-ps8.h5')
Explanation: That's a distinct improvement - even though the validation set isn't very big. This looks encouraging for when we try this on the test set.
End of explanation
def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)
keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval()
conv_test_feat = load_array(path+'results/conv_test_feat.dat')
preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)
subm = do_clip(preds,0.93)
subm_name = path+'results/subm.gz'
classes = sorted(batches.class_indices, key=batches.class_indices.get)
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [a[4:] for a in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
FileLink(subm_name)
Explanation: Submit
We'll find a good clipping amount using the validation set, prior to submitting.
End of explanation
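A hedged sketch of that search - assuming val_preds holds the model's validation-set predictions used in the cell above - simply scans a few candidate thresholds and keeps the one with the lowest loss:
for mx in [0.90, 0.93, 0.95, 0.98]:
    clipped_loss = keras.metrics.categorical_crossentropy(
        val_labels, do_clip(val_preds, mx)).eval()
    print(mx, np.array(clipped_loss).mean())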
for l in get_bn_layers(p): conv_model.add(l)
for l1,l2 in zip(bn_model.layers, conv_model.layers[last_conv_idx+1:]):
l2.set_weights(l1.get_weights())
for l in conv_model.layers: l.trainable =False
for l in conv_model.layers[last_conv_idx+1:]: l.trainable =True
comb = np.concatenate([trn, val])
gen_t = image.ImageDataGenerator(rotation_range=8, height_shift_range=0.04,
shear_range=0.03, channel_shift_range=10, width_shift_range=0.08)
batches = gen_t.flow(comb, comb_pseudo, batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)
conv_model.compile(Adam(lr=0.00001), loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, batches.N, nb_epoch=1, validation_data=val_batches,
nb_val_samples=val_batches.N)
conv_model.optimizer.lr = 0.0001
conv_model.fit_generator(batches, batches.N, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.N)
for l in conv_model.layers[16:]: l.trainable =True
conv_model.optimizer.lr = 0.00001
conv_model.fit_generator(batches, batches.N, nb_epoch=8, validation_data=val_batches,
nb_val_samples=val_batches.N)
conv_model.save_weights(path+'models/conv8_ps.h5')
conv_model.load_weights(path+'models/conv8_da.h5')
val_pseudo = conv_model.predict(val, batch_size=batch_size*2)
save_array(path+'models/pseudo8_da.dat', val_pseudo)
Explanation: This gets 0.534 on the leaderboard.
The "things that didn't really work" section
You can safely ignore everything from here on, because they didn't really help.
Finetune some conv layers too
End of explanation
drivers_ds = pd.read_csv(path+'driver_imgs_list.csv')
drivers_ds.head()
img2driver = drivers_ds.set_index('img')['subject'].to_dict()
driver2imgs = {k: g["img"].tolist()
for k,g in drivers_ds[['subject', 'img']].groupby("subject")}
def get_idx(driver_list):
return [i for i,f in enumerate(filenames) if img2driver[f[3:]] in driver_list]
drivers = driver2imgs.keys()
rnd_drivers = np.random.permutation(drivers)
ds1 = rnd_drivers[:len(rnd_drivers)//2]
ds2 = rnd_drivers[len(rnd_drivers)//2:]
models=[fit_conv([d]) for d in drivers]
models=[m for m in models if m is not None]
all_preds = np.stack([m.predict(conv_test_feat, batch_size=128) for m in models])
avg_preds = all_preds.mean(axis=0)
avg_preds = avg_preds/np.expand_dims(avg_preds.sum(axis=1), 1)
keras.metrics.categorical_crossentropy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()
keras.metrics.categorical_accuracy(val_labels, np.clip(avg_val_preds,0.01,0.99)).eval()
Explanation: Ensembling
End of explanation |
9,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Exploring the TF-Hub CORD-19 Swivel embeddings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Analyze the embeddings
First, we analyze the embeddings by computing and plotting a correlation matrix between different terms. If the embeddings have successfully captured the meanings of different words, the embedding vectors of semantically similar words should be close to each other. Let's look at some terms related to COVID-19.
Step5: We can see that the embeddings successfully captured the meaning of the different terms. Each word is similar to the other words in its cluster (i.e. "coronavirus" is highly correlated with "SARS" and "MERS"), but different from terms in other clusters (i.e. the similarity between "SARS" and "Spain" is close to 0).
Now let's see how these embeddings can be used to solve a specific task.
SciCite: Citation Intent Classification
This section shows how to use the embeddings for a downstream task such as text classification. We will use the SciCite dataset from TensorFlow Datasets to classify citation intents in academic papers. Given a sentence with a citation from an academic paper, classify the main intent of the citation as background information, use of a method, or comparison of results.
Step6: Train a citation intent classifier
We will train a classifier on the SciCite dataset using an Estimator. Let's set up the input_fns to read the dataset into the model.
Step7: We build a model that uses the CORD-19 embeddings with a classification layer on top.
Step8: Train and evaluate the model
Let's train and evaluate the model to see how it performs on the SciCite task.
Step9: We can see that the loss decreases quickly while the accuracy rises quickly. Let's plot some examples to check how the predictions relate to the true labels: | Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import functools
import itertools
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
tf.logging.set_verbosity('ERROR')
import tensorflow_datasets as tfds
import tensorflow_hub as hub
try:
from google.colab import data_table
def display_df(df):
return data_table.DataTable(df, include_index=False)
except ModuleNotFoundError:
# If google-colab is not available, just display the raw DataFrame
def display_df(df):
return df
Explanation: Exploring the TF-Hub CORD-19 Swivel Embeddings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/cord_19_embeddings"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cord_19_embeddings.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/cord_19_embeddings.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/cord_19_embeddings.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a>
</td>
<td><a href="https://tfhub.dev/tensorflow/cord-19/swivel-128d/1"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">See the TF Hub model</a></td>
</table>
The CORD-19 Swivel text embedding module on TF-Hub (https://tfhub.dev/tensorflow/cord-19/swivel-128d/1) was built to support researchers analyzing natural-language text related to COVID-19. The embeddings were trained on the titles, authors, abstracts, body texts, and reference titles of articles in the CORD-19 dataset.
In this Colab we will:
Analyze semantically similar words in the embedding space
Train a classifier on the SciCite dataset using the CORD-19 embeddings
Setup
End of explanation
# Use the inner product between two embedding vectors as the similarity measure
def plot_correlation(labels, features):
corr = np.inner(features, features)
corr /= np.max(corr)
sns.heatmap(corr, xticklabels=labels, yticklabels=labels)
with tf.Graph().as_default():
# Load the module
query_input = tf.placeholder(tf.string)
module = hub.Module('https://tfhub.dev/tensorflow/cord-19/swivel-128d/1')
embeddings = module(query_input)
with tf.train.MonitoredTrainingSession() as sess:
# Generate embeddings for some terms
queries = [
# Related viruses
"coronavirus", "SARS", "MERS",
# Regions
"Italy", "Spain", "Europe",
# Symptoms
"cough", "fever", "throat"
]
features = sess.run(embeddings, feed_dict={query_input: queries})
plot_correlation(queries, features)
Explanation: Analyze the embeddings
First, we analyze the embeddings by computing and plotting a correlation matrix between different terms. If the embeddings have successfully captured the meanings of different words, the embedding vectors of semantically similar words should be close to each other. Let's look at some terms related to COVID-19.
End of explanation
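Note that plot_correlation uses a raw inner product scaled by its maximum. A common alternative, sketched here with NumPy only, is cosine similarity, which normalises each embedding to unit length first (assuming queries and features are the lists computed in the cell above):
# sketch: cosine similarity instead of a max-scaled inner product
unit_features = features / np.linalg.norm(features, axis=1, keepdims=True)
sns.heatmap(np.inner(unit_features, unit_features), xticklabels=queries, yticklabels=queries)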
#@title Set up the dataset from TFDS
class Dataset:
Build a dataset from a TFDS dataset.
def __init__(self, tfds_name, feature_name, label_name):
self.dataset_builder = tfds.builder(tfds_name)
self.dataset_builder.download_and_prepare()
self.feature_name = feature_name
self.label_name = label_name
def get_data(self, for_eval):
splits = THE_DATASET.dataset_builder.info.splits
if tfds.Split.TEST in splits:
split = tfds.Split.TEST if for_eval else tfds.Split.TRAIN
else:
SPLIT_PERCENT = 80
split = "train[{}%:]".format(SPLIT_PERCENT) if for_eval else "train[:{}%]".format(SPLIT_PERCENT)
return self.dataset_builder.as_dataset(split=split)
def num_classes(self):
return self.dataset_builder.info.features[self.label_name].num_classes
def class_names(self):
return self.dataset_builder.info.features[self.label_name].names
def preprocess_fn(self, data):
return data[self.feature_name], data[self.label_name]
def example_fn(self, data):
feature, label = self.preprocess_fn(data)
return {'feature': feature, 'label': label}, label
def get_example_data(dataset, num_examples, **data_kw):
Show example data
with tf.Session() as sess:
batched_ds = dataset.get_data(**data_kw).take(num_examples).map(dataset.preprocess_fn).batch(num_examples)
it = tf.data.make_one_shot_iterator(batched_ds).get_next()
data = sess.run(it)
return data
TFDS_NAME = 'scicite' #@param {type: "string"}
TEXT_FEATURE_NAME = 'string' #@param {type: "string"}
LABEL_NAME = 'label' #@param {type: "string"}
THE_DATASET = Dataset(TFDS_NAME, TEXT_FEATURE_NAME, LABEL_NAME)
#@title Let's take a look at a few labeled examples from the training set
NUM_EXAMPLES = 20 #@param {type:"integer"}
data = get_example_data(THE_DATASET, NUM_EXAMPLES, for_eval=False)
display_df(
pd.DataFrame({
TEXT_FEATURE_NAME: [ex.decode('utf8') for ex in data[0]],
LABEL_NAME: [THE_DATASET.class_names()[x] for x in data[1]]
}))
Explanation: We can see that the embeddings successfully captured the meaning of the different terms. Each word is similar to the other words in its cluster (i.e. "coronavirus" is highly correlated with "SARS" and "MERS"), but different from terms in other clusters (i.e. the similarity between "SARS" and "Spain" is close to 0).
Now let's see how these embeddings can be used to solve a specific task.
SciCite: Citation Intent Classification
This section shows how to use the embeddings for a downstream task such as text classification. We will use the SciCite dataset from TensorFlow Datasets to classify citation intents in academic papers. Given a sentence with a citation from an academic paper, classify the main intent of the citation as background information, use of a method, or comparison of results.
End of explanation
def preprocessed_input_fn(for_eval):
data = THE_DATASET.get_data(for_eval=for_eval)
data = data.map(THE_DATASET.example_fn, num_parallel_calls=1)
return data
def input_fn_train(params):
data = preprocessed_input_fn(for_eval=False)
data = data.repeat(None)
data = data.shuffle(1024)
data = data.batch(batch_size=params['batch_size'])
return data
def input_fn_eval(params):
data = preprocessed_input_fn(for_eval=True)
data = data.repeat(1)
data = data.batch(batch_size=params['batch_size'])
return data
def input_fn_predict(params):
data = preprocessed_input_fn(for_eval=True)
data = data.batch(batch_size=params['batch_size'])
return data
Explanation: Train a citation intent classifier
We will train a classifier on the SciCite dataset using an Estimator. Let's set up the input_fns to read the dataset into the model.
End of explanation
def model_fn(features, labels, mode, params):
# Embed the text
embed = hub.Module(params['module_name'], trainable=params['trainable_module'])
embeddings = embed(features['feature'])
# Add a linear layer on top
logits = tf.layers.dense(
embeddings, units=THE_DATASET.num_classes(), activation=None)
predictions = tf.argmax(input=logits, axis=1)
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={
'logits': logits,
'predictions': predictions,
'features': features['feature'],
'labels': features['label']
})
# Set up a multi-class classification head
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits)
loss = tf.reduce_mean(loss)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=params['learning_rate'])
train_op = optimizer.minimize(loss, global_step=tf.train.get_or_create_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
elif mode == tf.estimator.ModeKeys.EVAL:
accuracy = tf.metrics.accuracy(labels=labels, predictions=predictions)
precision = tf.metrics.precision(labels=labels, predictions=predictions)
recall = tf.metrics.recall(labels=labels, predictions=predictions)
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
eval_metric_ops={
'accuracy': accuracy,
'precision': precision,
'recall': recall,
})
#@title Hyperparmeters { run: "auto" }
EMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/1' #@param {type: "string"}
TRAINABLE_MODULE = False #@param {type: "boolean"}
STEPS = 8000#@param {type: "integer"}
EVAL_EVERY = 200 #@param {type: "integer"}
BATCH_SIZE = 10 #@param {type: "integer"}
LEARNING_RATE = 0.01 #@param {type: "number"}
params = {
'batch_size': BATCH_SIZE,
'learning_rate': LEARNING_RATE,
'module_name': EMBEDDING,
'trainable_module': TRAINABLE_MODULE
}
Explanation: We build a model that uses the CORD-19 embeddings with a classification layer on top.
End of explanation
estimator = tf.estimator.Estimator(functools.partial(model_fn, params=params))
metrics = []
for step in range(0, STEPS, EVAL_EVERY):
estimator.train(input_fn=functools.partial(input_fn_train, params=params), steps=EVAL_EVERY)
step_metrics = estimator.evaluate(input_fn=functools.partial(input_fn_eval, params=params))
print('Global step {}: loss {:.3f}, accuracy {:.3f}'.format(step, step_metrics['loss'], step_metrics['accuracy']))
metrics.append(step_metrics)
global_steps = [x['global_step'] for x in metrics]
fig, axes = plt.subplots(ncols=2, figsize=(20,8))
for axes_index, metric_names in enumerate([['accuracy', 'precision', 'recall'],
['loss']]):
for metric_name in metric_names:
axes[axes_index].plot(global_steps, [x[metric_name] for x in metrics], label=metric_name)
axes[axes_index].legend()
axes[axes_index].set_xlabel("Global Step")
Explanation: Train and evaluate the model
Let's train and evaluate the model to see how it performs on the SciCite task.
End of explanation
predictions = estimator.predict(functools.partial(input_fn_predict, params))
first_10_predictions = list(itertools.islice(predictions, 10))
display_df(
pd.DataFrame({
TEXT_FEATURE_NAME: [pred['features'].decode('utf8') for pred in first_10_predictions],
LABEL_NAME: [THE_DATASET.class_names()[pred['labels']] for pred in first_10_predictions],
'prediction': [THE_DATASET.class_names()[pred['predictions']] for pred in first_10_predictions]
}))
Explanation: We can see that the loss decreases quickly while the accuracy rises quickly. Let's plot some examples to check how the predictions relate to the true labels:
End of explanation |
9,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This workbook shows an example derived from the EDA exercise in Chapter 2 of Doing Data Science, by O'Neil and Schutt
Step1: Well. Half a million rows. That would be painful in Excel.
Add a column of 1's, so that a sum will count people.
Step2: Now we can group the table by Age Range and count how many clicks come from each range.
Step3: Now we can do some other interesting summaries of these categories
Step4: We might want to do the click rate calculation a little more carefully. We don't care about clicks where there are zero impressions or missing age/gender information. So let's filter those out of our data set.
Step5: Group returns a new table. If we want to specify the formats of columns in this table, we assign it to a name. | Python Code:
clicks = Table.read_table("http://stat.columbia.edu/~rachel/datasets/nyt1.csv")
clicks
Explanation: This workbook shows an example derived from the EDA exercise in Chapter 2 of Doing Data Science, by O'Neil and Schutt
End of explanation
age_upper_bounds = [18, 25, 35, 45, 55, 65]
def age_range(n):
if n == 0:
return '0'
lower = 1
for upper in age_upper_bounds:
if lower <= n < upper:
return str(lower) + '-' + str(upper-1)
lower = upper
return str(lower) + '+'
# a little test
np.unique([age_range(n) for n in range(100)])
clicks["Age Range"] = clicks.apply(age_range, 'Age')
clicks["Person"] = 1
clicks
Explanation: Well. Half a million rows. That would be painful in Excel.
Add a column of 1's, so that a sum will count people.
End of explanation
clicks_by_age = clicks.group('Age Range', sum)
clicks_by_age
clicks_by_age.select(['Age Range', 'Clicks sum', 'Impressions sum', 'Person sum']).barh('Age Range')
Explanation: Now we can group the table by Age Range and count how many clicks come from each range.
End of explanation
clicks_by_age['Gender Mix'] = clicks_by_age['Gender sum'] / clicks_by_age['Person sum']
clicks_by_age["CTR"] = clicks_by_age['Clicks sum'] / clicks_by_age['Impressions sum']
clicks_by_age.select(['Age Range', 'Person sum', 'Gender Mix', 'CTR'])
# Format some columns as percent with limited precision
clicks_by_age.set_format('Gender Mix', PercentFormatter(1))
clicks_by_age.set_format('CTR', PercentFormatter(2))
clicks_by_age
Explanation: Now we can do some other interesting summaries of these categories
End of explanation
impressed = clicks.where(clicks['Age'] > 0).where('Impressions')
impressed
# Impressions by age and gender
impressed.pivot(rows='Gender', columns='Age Range', values='Impressions', collect=sum)
impressed.pivot("Age Range", "Gender", "Clicks",sum)
impressed.pivot_hist('Age Range','Impressions')
distributions = impressed.pivot_bin('Age Range','Impressions')
distributions
impressed['Gen'] = [['Male','Female'][i] for i in impressed['Gender']]
impressed
Explanation: We might want to do the click rate calculation a little more carefully. We don't care about clicks where there are zero impressions or missing age/gender information. So let's filter those out of our data set.
End of explanation
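For readers more used to boolean masks, roughly the same filter in pandas would look like the sketch below (this loads the CSV into a hypothetical DataFrame df rather than reusing the Table above):
# sketch: equivalent row filter with a pandas DataFrame
import pandas as pd
df = pd.read_csv("http://stat.columbia.edu/~rachel/datasets/nyt1.csv")
impressed_df = df[(df["Age"] > 0) & (df["Impressions"] > 0)]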
# How does gender and clicks vary with age?
gi = impressed.group('Age Range', np.mean).select(['Age Range', 'Gender mean', 'Clicks mean'])
gi.set_format(['Gender mean', 'Clicks mean'], PercentFormatter)
gi
Explanation: Group returns a new table. If we want to specify the formats of columns in this table, we assign it to a name.
End of explanation |
9,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SVD Applied to a Word-Document Matrix
This notebook applies the SVD to a simple word-document matrix. The aim is to see what the reconstructed reduced dimension matrix looks like.
Step1: A Slightly Bigger Word-Document Matrix
The example word-document matrix is taken from http
Step2: Word-Document Matrix is A
Step3: Now Take the SVD
Step4: We can see above that the values in the diagonal S matrix are ordered by magnitude. There is a significant difference between the biggest value 1.1, and the smallest 0.05. The halfway value of 0.28 is still much smaller than the largest.
Check U, S and V Do Actually Reconstruct A
Step5: Yes, that worked .. the reconstructed A2 is the same as the original A (within the bounds of small floating point accuracy)
Now Reduce Dimensions, Extract Topics
Here we use only the top 3 values of the S singular value matrix, pretty brutal reduction in dimensions!
Why 3, and not 2?
We'll only plot 2 dimensions for the document cluster view, and later we'll use 3 dimensions for the topic word view
Step6: New View Of Documents
Step7: The above shows that there are indeed 3 clusters of documents. That matches our expectations as we constructed the example data set that way.
Topics from New View of Words | Python Code:
# import pandas for conveniently labelled arrays
import pandas
# import numpy for SVD function
import numpy
# import matplotlib.pyplot for visualising arrays
import matplotlib.pyplot as plt
Explanation: SVD Applied to a Word-Document Matrix
This notebook applies the SVD to a simple word-document matrix. The aim is to see what the reconstructed reduced dimension matrix looks like.
End of explanation
# create a simple word-document matrix as a pandas dataframe, the content values have been normalised
words = ['wheel', ' seat', ' engine', ' slice', ' oven', ' boil', 'door', 'kitchen', 'roof']
print(words)
documents = ['doc1', 'doc2', 'doc3', 'doc4', 'doc5', 'doc6', 'doc7', 'doc8', 'doc9']
word_doc = pandas.DataFrame([[0.5,0.3333, 0.25, 0, 0, 0, 0, 0, 0],
[0.25, 0.3333, 0, 0, 0, 0, 0, 0.25, 0],
[0.25, 0.3333, 0.75, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0.5, 0.5, 0.6, 0, 0, 0],
[0, 0, 0, 0.3333, 0.1667, 0, 0.5, 0, 0],
[0, 0, 0, 0.1667, 0.3333, 0.4, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0.25, 0.25],
[0, 0, 0, 0, 0, 0, 0.5, 0.25, 0.25],
[0, 0, 0, 0, 0, 0, 0, 0.25, 0.5]], index=words, columns=documents)
# and show it
word_doc
Explanation: A Slightly Bigger Word-Document Matrix
The example word-document matrix is taken from http://makeyourowntextminingtoolkit.blogspot.co.uk/2016/11/so-many-dimensions-and-how-to-reduce.html but expanded to cover a 3rd topic related to a home or house
End of explanation
# create a numpy array from the pandas dataframe
A = word_doc.values
Explanation: Word-Document Matrix is A
End of explanation
# break it down into an SVD
U, s, VT = numpy.linalg.svd(A, full_matrices=False)
S = numpy.diag(s)
# what are U, S and V
print("U =\n", numpy.round(U, decimals=2), "\n")
print("S =\n", numpy.round(S, decimals=2), "\n")
print("V^T =\n", numpy.round(VT, decimals=2), "\n")
Explanation: Now Take the SVD
End of explanation
# rebuild A2 from U.S.V
A2 = numpy.dot(U,numpy.dot(S,VT))
print("A2 =\n", numpy.round(A2, decimals=2))
Explanation: We can see above that the values in the diagonal S matrix are ordered by magnitude. There is a significant difference between the biggest value 1.1, and the smallest 0.05. The halfway value of 0.28 is still much smaller than the largest.
Check U, S and V Do Actually Reconstruct A
End of explanation
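A quick way to quantify that gap is to look at how much of the total "energy" (the sum of squared singular values) the top few values carry; this reuses the s returned by the SVD above:
# cumulative fraction of total energy captured by the top k singular values
energy = s**2
print(numpy.cumsum(energy) / numpy.sum(energy))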
# S_reduced is the same as S but with only the top 3 elements kept
S_reduced = numpy.zeros_like(S)
# only keep the top three singular values
l = 3
S_reduced[:l, :l] = S[:l,:l]
# show S_reduced which has less info than original S
print("S_reduced =\n", numpy.round(S_reduced, decimals=2))
Explanation: Yes, that worked .. the reconstructed A2 is the same as the original A (within the bounds of small floating point accuracy)
Now Reduce Dimensions, Extract Topics
Here we use only the top 3 values of the S singular value matrix, pretty brutal reduction in dimensions!
Why 3, and not 2?
We'll only plot 2 dimensions for the document cluster view, and later we'll use 3 dimensions for the topic word view
End of explanation
# what is the document matrix now?
S_reduced_VT = numpy.dot(S_reduced, VT)
print("S_reduced_VT = \n", numpy.round(S_reduced_VT, decimals=2))
# plot the array
p = plt.subplot(111)
p.axis('scaled'); p.axis([-2, 2, -2, 2]); p.axhline(y=0, color='lightgrey'); p.axvline(x=0, color='lightgrey')
p.set_yticklabels([]); p.set_xticklabels([])
p.set_title("S_reduced_VT")
p.plot(S_reduced_VT[0,],S_reduced_VT[1,],'ro')
plt.show()
Explanation: New View Of Documents
End of explanation
# topics are a linear combination of original words
U_S_reduced = numpy.dot(U, S_reduced)
df = pandas.DataFrame(numpy.round(U_S_reduced, decimals=2), index=words)
# show colour coded so it is easier to see significant word contributions to a topic
df.style.background_gradient(cmap=plt.get_cmap('Blues'), low=0, high=2)
Explanation: The above shows that there are indeed 3 clusters of documents. That matches our expectations as we constructed the example data set that way.
Topics from New View of Words
End of explanation |
9,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CSE 6040, Fall 2015 [05, Part B]
Step1: Exercise
Step2: Example
Step3: Exercise. Try modifying and extending the above code to retrieve the 13th entry in the search results.
Step4: Interacting with a web API
We hope the preceding exercise was painful
Step5: A more advanced example
Step6: You can inspect the contents of this archive.
Step8: Exercise | Python Code:
# Download the Georgia Tech home page
import requests
response = requests.get ('http://www.gatech.edu')
webpage = response.text # or response.content for raw bytes
print (webpage[0:100]) # Prints the first hundred characters only
Explanation: CSE 6040, Fall 2015 [05, Part B]: Web services 101
The second part of today's lab considers another rich source of data: the web! You will need some of these ideas to do the first homework assignment.
References for today's topics:
* The Requests module: [docs]
* Github's Web API
* The zipfile module: [docs]
The Requests module
A simple way to download a web page in Python is to use the Requests module.
The following example downloads the Georgia Tech home page, storing the raw HTML returned as a string named webpage.
End of explanation
# (Enter your code for the preceding exercise in this code box)
Explanation: Exercise: Write some Python code that (a) downloads the class home page, and (b) prints a list of all the "base filenames" of IPython notebooks that the page references. The base filename is the name of the file ignoring the preceding path. For instance, the base filename of the notebook you are reading now is, 05b--www.
End of explanation
url_command = 'http://yelp.com/search'
url_args = {'find_desc': "ramen"
, 'find_loc': "atlanta, ga"
, 'ns': 1
, 'start': 0}
response = requests.get (url_command, params=url_args)
print ("==> Downloading from: '%s'" % response.url) # confirm URL
print ("\n==> Excerpt from this URL:\n\n%s\n" % response.text[0:100])
Explanation: Example: Yelp! search. Here's a more complex example, motivated by a screenshot from Yelp! after executing a search for ramen in Atlanta. Take note of the URL.
<img src="yelp-screenshot.png">
The URL encodes what is known as an HTTP "get" method (or request). It basically means a URL with two parts: a command followed by one or more arguments. In this case, the command is everything up to and including the word search; the arguments are the rest, where individual arguments are separated by the & or #.
"HTTP" stands for "HyperText Transport Protocol," which is a standardized set of communication protocols that allow web clients, like your web browser or your Python program, to communicate with web servers.
In this next example, let's see how to build a "get request" with the requests module. It's pretty easy!
End of explanation
# (Enter your code for the preceding exercise in this code box)
Explanation: Exercise. Try modifying and extending the above code to retrieve the 13th entry in the search results.
End of explanation
response = requests.get ('https://api.github.com/repos/rvuduc/cse6040-ipynbs/events')
urls = set ()
for event in response.json ():
urls.add (event['actor']['url'])
# Blank cell, for you to debug or print program state, as needed
peeps = {}
for url in urls:
response = requests.get (url)
key = response.json ()['login']
value = response.json ()['name']
response.close ()
peeps[key] = value
for key, value in peeps.items ():
print ("%s: '%s'" % (key, str (value)))
# Blank cell, for you to debug or print program state, as needed
Explanation: Interacting with a web API
We hope the preceding exercise was painful: it is rough downloading raw HTML and trying to extract information from it!
Luckily, many websites provide an application programming interface (API) for querying their data or otherwise accessing their services from your programs. For instance, Twitter provides a web API for gathering tweets, Flickr provides one for gathering image data, and Github for accessing information about repository histories.
These kinds of web APIs are much easier to use than, for instance, the preceding technique which scrapes raw web pages and then has to parse the resulting HTML. Moreover, they are more scalable in the sense that the web servers can transmit structured data in a less verbose form than raw HTML. In Homework 1, you will apply the techniques below, as well as others, to write some Python scripts to interact with the Yelp! web API.
As a starting example, here is some code to look at all the activity on Github related to our course's IPython notebook repository.
Inspect this code and try running it. See if you can figure out what it does. Note that it is split into two parts, so you can try to digest one before moving on to the second.
End of explanation
import zipfile
import StringIO
URL_ZIPPED = "http://cse6040.gatech.edu/fa15/skilling-j.zip"
r = requests.get (URL_ZIPPED)
zipped_maildir = zipfile.ZipFile (StringIO.StringIO (r.content), 'r')
print ("==> Downloaded: %s" % URL_ZIPPED)
Explanation: A more advanced example: Unpacking a zip file
In Labs 4 and 5-A, you worked with an email repository that you had to manually download and unpack.
As it happens, you can do that from within your Python program as well!
End of explanation
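Note that StringIO.StringIO only exists in Python 2. On Python 3 the equivalent step, sketched below, uses io.BytesIO to wrap the downloaded bytes:
# Python 3 sketch of the same download-and-open-in-memory step
import io, zipfile, requests
r = requests.get ("http://cse6040.gatech.edu/fa15/skilling-j.zip")
zipped_maildir = zipfile.ZipFile (io.BytesIO (r.content), 'r')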
# For the first COUNT items in the archive,
# print the original and compressed file sizes.
COUNT = 10
print ("Contents (first %d items):" % COUNT)
for zi in zipped_maildir.infolist ()[0:COUNT]:
print (" %s: %d -> %d bytes"
% (zi.filename, zi.file_size, zi.compress_size))
Explanation: You can inspect the contents of this archive.
End of explanation
def count_zipped_messages (zipped_maildir):
"""Returns the number of email messages in a zipped maildir."""
pass # Replace with your implementation
msg_count = count_zipped_messages (zipped_maildir)
print ("==> Found %d messages." % msg_count)
assert msg_count == 4139
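# One possible approach (a sketch, not the official solution): directory entries
# in a zip archive have names ending in '/', so counting the remaining entries
# counts the message files, e.g.
#   len ([zi for zi in zipped_maildir.infolist () if not zi.filename.endswith ('/')])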
Explanation: Exercise: Count messages. Write a Python program to count the number of messages in the archive.
Hint: How can you tell a folder from a file?
End of explanation |
9,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Worked machine learning examples using SDSS data</h1>
[AstroHackWeek 2014, 2016- J. S. Bloom @profjsb]
<hr>
Here we'll see some worked ML examples using scikit-learn on Sloan Digital Sky Survey Data (SDSS). This should work in both Python 2 and Python 3.
It's easiest to grab data from the <a href="http
Step1: Notice that there are several things about this dataset. First, RA and DEC are probably not something we want to use in making predictions
Step2: Pretty clearly a big cut at around $z=2$.
Step3: Egad. Some pretty crazy values for dered_r and g_r_color. Let's figure out why.
Step4: Looks like there are some missing values in the catalog which are set at -9999. Let's zoink those from the dataset for now.
Step5: Ok. This looks pretty clean. Let's save this for future use.
Step6: Data Munging done. Let's do some ML!
Basic Model Fitting
We need to create a training set and a testing set.
Step7: Linear Regression
http
Step8: k-Nearest Neighbor (KNN) Regression
Step9: Random Forests
Pretty good intro
http
Step10: model selection
Step11: Classification
Let's do a 3-class classification problem
Step12: Let's look at random forest
Step13: what are the important features in the data?
Step14: model improvement with GridSearchCV
Hyperparameter optimization. Parallel
Step15: Parallelism & Hyperparameter Fitting
GridSearchCV is not compute/RAM optimized. It's also not obviously optimal.
Step16: Let's do this without a full search...
Step17: Clustering, Unsupervised Learning & Anomoly Detection
It's often of interest to find patterns in the data that you didn't know where there, as an end to itself or as a starting point of exploration.
One approach is to look at individual sources that are mis-classified.
Step18: We can also do manifold learning to be able to project structure in lower dimensions. | Python Code:
## get the data locally ... I put this on a gist
!curl -k -O https://gist.githubusercontent.com/anonymous/53781fe86383c435ff10/raw/4cc80a638e8e083775caec3005ae2feaf92b8d5b/qso10000.csv
!curl -k -O https://gist.githubusercontent.com/anonymous/2984cf01a2485afd2c3e/raw/964d4f52c989428628d42eb6faad5e212e79b665/star1000.csv
!curl -k -O https://gist.githubusercontent.com/anonymous/2984cf01a2485afd2c3e/raw/335cd1953e72f6c7cafa9ebb81b43c47cb757a9d/galaxy1000.csv
## Python 2 backward compatibility
from __future__ import absolute_import, division, print_function, unicode_literals
# For pretty plotting, pandas, sklearn
!conda install pandas seaborn matplotlib scikit-learn==0.17.1 -y
import copy
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['axes.labelsize'] = 20
import pandas as pd
pd.set_option('display.max_columns', None)
import seaborn as sns
sns.set()
pd.read_csv("qso10000.csv",index_col=0).head()
Explanation: <h1>Worked machine learning examples using SDSS data</h1>
[AstroHackWeek 2014, 2016- J. S. Bloom @profjsb]
<hr>
Here we'll see some worked ML examples using scikit-learn on Sloan Digital Sky Survey Data (SDSS). This should work in both Python 2 and Python 3.
It's easiest to grab data from the <a href="http://skyserver.sdss3.org/public/en/tools/search/sql.aspx">SDSS skyserver SQL</a> server.
For example, to do a basic query to get two types of photometry (aperture and Petrosian), corrected for extinction, for 1000 QSO sources with redshifts:
<font color="blue">
<pre>SELECT *,dered_u - mag_u AS diff_u, dered_g - mag_g AS diff_g, dered_r - mag_r AS diff_g, dered_i - mag_i AS diff_i, dered_z - mag_z AS diff_z from
(SELECT top 1000
objid, ra, dec, dered_u,dered_g,dered_r,dered_i,dered_z,psfmag_u-extinction_u AS mag_u,
psfmag_g-extinction_g AS mag_g, psfmag_r-extinction_r AS mag_r, psfmag_i-extinction_i AS mag_i,psfmag_z-extinction_z AS mag_z,z AS spec_z,dered_u - dered_g AS u_g_color,
dered_g - dered_r AS g_r_color,dered_r - dered_i AS r_i_color,dered_i - dered_z AS i_z_color,class
FROM SpecPhoto
WHERE
(class = 'QSO')
) as sp
</pre>
</font>
Saving this and others like it as a csv we can then start to make our data set for classification/regression.
End of explanation
usecols = [str(x) for x in ["objid","dered_r","spec_z","u_g_color","g_r_color","r_i_color",
"i_z_color","diff_u",\
"diff_g1","diff_i","diff_z"]]
qsos = pd.read_csv("qso10000.csv",index_col=0,
usecols=usecols)
qso_features = copy.copy(qsos)
qso_redshifts = qsos["spec_z"]
del qso_features["spec_z"]
qso_features.head()
f, ax = plt.subplots()
bins = ax.hist(qso_redshifts.values)
ax.set_xlabel("redshift", fontsize=18)
ax.set_ylabel("N",fontsize=18)
Explanation: Notice that there are several things about this dataset. First, RA and DEC are probably not something we want to use in making predictions: it's the location of the object on the sky. Second, the magnitudes are highly covariant with the colors. So dumping all but one of the magnitudes might be a good idea to avoid overfitting.
End of explanation
import matplotlib as mpl
import matplotlib.cm as cm
## truncate the color at z=2.5 just to keep some contrast.
norm = mpl.colors.Normalize(vmin=min(qso_redshifts.values), vmax=2.5)
cmap = cm.jet
m = cm.ScalarMappable(norm=norm, cmap=cmap)
rez = pd.scatter_matrix(qso_features[0:2000],
alpha=0.2,figsize=[15,15],color=m.to_rgba(qso_redshifts.values))
Explanation: Pretty clearly a big cut at around $z=2$.
End of explanation
min(qso_features["dered_r"].values)
Explanation: Egad. Some pretty crazy values for dered_r and g_r_color. Let's figure out why.
End of explanation
qsos = pd.read_csv("qso10000.csv",index_col=0,
usecols=usecols)
qsos = qsos[(qsos["dered_r"] > -9999) & (qsos["g_r_color"] > -10) & (qsos["g_r_color"] < 10)]
qso_features = copy.copy(qsos)
qso_redshifts = qsos["spec_z"]
del qso_features["spec_z"]
rez = pd.scatter_matrix(qso_features[0:2000], alpha=0.2,figsize=[15,15],\
color=m.to_rgba(qso_redshifts.values))
Explanation: Looks like there are some missing values in the catalog which are set at -9999. Let's zoink those from the dataset for now.
End of explanation
qsos.to_csv("qsos.clean.csv")
Explanation: Ok. This looks pretty clean. Let's save this for future use.
End of explanation
X = qso_features.values # 9-d feature space
Y = qso_redshifts.values # redshifts
print("feature vector shape=", X.shape)
print("class shape=", Y.shape)
# half of data
import math
half = math.floor(len(Y)/2)
train_X = X[:half]
train_Y = Y[:half]
test_X = X[half:]
test_Y = Y[half:]
Explanation: Data Munging done. Let's do some ML!
Basic Model Fitting
We need to create a training set and a testing set.
End of explanation
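Note that slicing off the first and second halves gives an unshuffled split, so any ordering in the catalog leaks into the split. A shuffled alternative, sketched with the same scikit-learn API that is used later in this notebook:
# sketch: shuffled 50/50 split instead of slicing the array in half
from sklearn.cross_validation import train_test_split
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size=0.5, random_state=42)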
from sklearn import linear_model
clf = linear_model.LinearRegression()
# fit the model
clf.fit(train_X, train_Y)
# now do the prediction
Y_lr_pred = clf.predict(test_X)
# how well did we do?
from sklearn.metrics import mean_squared_error
mse = np.sqrt(mean_squared_error(test_Y,Y_lr_pred)) ; print("MSE",mse)
plt.plot(test_Y,Y_lr_pred - test_Y,'o',alpha=0.1)
plt.title("Linear Regression Residuals - MSE = %.1f" % mse)
plt.xlabel("Spectroscopic Redshift")
plt.ylabel("Residual")
plt.hlines(0,min(test_Y),max(test_Y),color="red")
# here's the MSE guessing the AVERAGE value
print("naive mse", ((1./len(train_Y))*(train_Y - train_Y.mean())**2).sum())
mean_squared_error?
Explanation: Linear Regression
http://scikit-learn.org/stable/modules/linear_model.html
End of explanation
from sklearn import neighbors
from sklearn import preprocessing
X_scaled = preprocessing.scale(X) # many methods work better on scaled X
clf1 = neighbors.KNeighborsRegressor(10)
train_X = X_scaled[:half]
test_X = X_scaled[half:]
clf1.fit(train_X,train_Y)
Y_knn_pred = clf1.predict(test_X)
mse = mean_squared_error(test_Y,Y_knn_pred) ; print("MSE (KNN)", mse)
plt.plot(test_Y, Y_knn_pred - test_Y,'o',alpha=0.2)
plt.title("k-NN Residuals - MSE = %.1f" % mse)
plt.xlabel("Spectroscopic Redshift")
plt.ylabel("Residual")
plt.hlines(0,min(test_Y),max(test_Y),color="red")
from sklearn import neighbors
from sklearn import preprocessing
X_scaled = preprocessing.scale(X) # many methods work better on scaled X
train_X = X_scaled[:half]
train_Y = Y[:half]
test_X = X_scaled[half:]
test_Y = Y[half:]
clf1 = neighbors.KNeighborsRegressor(5)
clf1.fit(train_X,train_Y)
Y_knn_pred = clf1.predict(test_X)
mse = mean_squared_error(test_Y,Y_knn_pred) ; print("MSE=",mse)
plt.scatter(test_Y, Y_knn_pred - test_Y,alpha=0.2)
plt.title("k-NN Residuals - MSE = %.1f" % mse)
plt.xlabel("Spectroscopic Redshift")
plt.ylabel("Residual")
plt.hlines(0,min(test_Y),max(test_Y),color="red")
Explanation: k-Nearest Neighbor (KNN) Regression
End of explanation
from sklearn.ensemble import RandomForestRegressor
clf2 = RandomForestRegressor(n_estimators=100,
criterion='mse', max_depth=None,
min_samples_split=2, min_samples_leaf=1,
max_features='auto', max_leaf_nodes=None,
bootstrap=True, oob_score=False, n_jobs=1,
random_state=None, verbose=0, warm_start=False)
clf2.fit(train_X,train_Y)
Y_rf_pred = clf2.predict(test_X)
mse = mean_squared_error(test_Y,Y_rf_pred) ; print("MSE",mse)
plt.scatter(test_Y, Y_rf_pred - test_Y,alpha=0.2)
plt.title("RF Residuals - MSE = %.1f" % mse)
plt.xlabel("Spectroscopic Redshift")
plt.ylabel("Residual")
plt.hlines(0,min(test_Y),max(test_Y),color="red")
Explanation: Random Forests
Pretty good intro
http://blog.yhathq.com/posts/random-forests-in-python.html
End of explanation
from sklearn import cross_validation
from sklearn import linear_model
clf = linear_model.LinearRegression()
from sklearn.cross_validation import cross_val_score
def print_cv_score_summary(model, xx, yy, cv):
scores = cross_val_score(model, xx, yy, cv=cv, n_jobs=1)
print("mean: {:3f}, stdev: {:3f}".format(
np.mean(scores), np.std(scores)))
print_cv_score_summary(clf,X,Y,cv=cross_validation.KFold(len(Y), 5))
print_cv_score_summary(clf,X,Y,
cv=cross_validation.KFold(len(Y),10,shuffle=True,random_state=1))
print_cv_score_summary(clf2,X,Y,
cv=cross_validation.KFold(len(Y),3,shuffle=True,random_state=1))
Explanation: model selection: cross-validation
End of explanation
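A hedged aside: in scikit-learn 0.18 and later the cross_validation module used here was folded into model_selection, and KFold takes n_splits instead of the array length. A rough equivalent of the calls above (not needed for the pinned 0.17.1):
# sketch for newer scikit-learn versions (0.18+)
from sklearn.model_selection import KFold, cross_val_score
scores = cross_val_score(clf, X, Y, cv=KFold(n_splits=5, shuffle=True, random_state=1))
print(scores.mean(), scores.std())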
usecols = [str(x) for x in ["objid","dered_r","u_g_color","g_r_color","r_i_color","i_z_color","diff_u",\
"diff_g1","diff_i","diff_z","class"]]
all_sources = pd.read_csv("qso10000.csv",index_col=0,usecols=usecols)[:1000]
all_sources = all_sources.append(pd.read_csv("star1000.csv",index_col=0,usecols=usecols))
all_sources = all_sources.append(pd.read_csv("galaxy1000.csv",index_col=0,usecols=usecols))
all_sources = all_sources[(all_sources["dered_r"] > -9999) & (all_sources["g_r_color"] > -10) & (all_sources["g_r_color"] < 10)]
all_features = copy.copy(all_sources)
all_label = all_sources["class"]
del all_features["class"]
X = copy.copy(all_features.values)
Y = copy.copy(all_label.values)
all_sources.tail()
print("feature vector shape=", X.shape)
print("class shape=", Y.shape)
Y[Y=="QSO"] = 0
Y[Y=="STAR"] = 1
Y[Y=="GALAXY"] = 2
Y = list(Y)
Explanation: Classification
Let's do a 3-class classification problem: star, galaxy, or QSO
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=200,oob_score=True)
clf.fit(X,Y)
Explanation: Let's look at random forest
End of explanation
sorted(zip(all_sources.columns.values,clf.feature_importances_),key=lambda q: q[1],reverse=True)
clf.oob_score_ ## "Out of Bag" Error
import numpy as np
from sklearn import svm, datasets
cmap = cm.jet_r
# import some data to play with
plt.figure(figsize=(10,10))
X = all_features.values[:, 1:3] # use only two features for training and plotting purposes
h = 0.02 # step size in the mesh
# we create an instance of SVM and fit our data. We do not scale our
# data since we want to plot the support vectors
C = 1.0 # SVM regularization parameter
svc = svm.SVC(kernel=str('linear'), C=C).fit(X, Y)
rbf_svc = svm.SVC(kernel=str('rbf'), gamma=0.7, C=C).fit(X, Y)
poly_svc = svm.SVC(kernel=str('poly'), degree=3, C=C).fit(X, Y)
lin_svc = svm.LinearSVC(C=C).fit(X, Y)
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# title for the plots
titles = ['SVC with linear kernel',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel',
'LinearSVC (linear kernel)']
norm = mpl.colors.Normalize(vmin=min(Y), vmax=max(Y))
m = cm.ScalarMappable(norm=norm, cmap=cmap)
for i, clf in enumerate((svc, rbf_svc, poly_svc, lin_svc)):
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
plt.subplot(2, 2, i + 1)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z,cmap=cm.Paired)
plt.axis('off')
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=m.to_rgba(Y),cmap=cm.Paired)
plt.title(titles[i])
Explanation: what are the important features in the data?
End of explanation
# fit a support vector machine classifier
from sklearn import grid_search
from sklearn import svm
from sklearn import metrics
import logging
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(levelname)s %(message)s')
# instantiate the SVM object
sdss_svm = svm.SVC()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'kernel':(str('linear'), str('rbf')), \
'gamma':[0.5, 0.3, 0.1, 0.01],
'C':[0.1, 2, 4, 5, 10, 20,30]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
svm_tune = grid_search.GridSearchCV(sdss_svm, parameters,\
n_jobs = -1, cv = 3,verbose=1)
svm_opt = svm_tune.fit(X, Y)
# print the best score and estimator
print(svm_opt.best_score_)
print(svm_opt.best_estimator_)
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix
X_train, X_test, y_train, y_test = train_test_split(X, Y, random_state=0)
classifier = svm.SVC(**svm_opt.best_estimator_.get_params())
y_pred = classifier.fit(X_train, y_train).predict(X_test)
# Compute confusion matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
# Show confusion matrix in a separate window
plt.matshow(cm)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(10,50,200),"max_features": ["auto",3,5],
'criterion':[str("gini"),str("entropy")],"min_samples_leaf": [1,2]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
rf_tune = grid_search.GridSearchCV(sdss_rf, parameters,\
n_jobs = -1, cv = 3,verbose=1)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
clf.get_params()
svm_opt.best_estimator_.get_params()
grid_search.GridSearchCV?
Explanation: model improvement with GridSearchCV
Hyperparameter optimization. Parallel: makes use of joblib
End of explanation
import time
start = time.time()
## this takes about 30 seconds
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(10,50,200),"max_features": ["auto",3,5],
'criterion':["gini","entropy"],"min_samples_leaf": [1,2]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
rf_tune = grid_search.GridSearchCV(sdss_rf, parameters,\
n_jobs = -1, cv = 3,verbose=1)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
print("total time in seconds",time.time()- start)
Explanation: Parallelism & Hyperparameter Fitting
GridSearchCV is not compute/RAM optimized. It's also not obviously optimal.
End of explanation
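One alternative to an exhaustive grid is randomized search with a fixed sampling budget; the next cell uses RandomizedSearchCV on the same discrete grid, but it also accepts distributions plus an explicit n_iter, as in this sketch:
# sketch: randomized search over distributions with an explicit budget
from scipy.stats import randint
param_dist = {"n_estimators": randint(10, 200), "min_samples_leaf": randint(1, 5)}
rf_rand = grid_search.RandomizedSearchCV(RandomForestClassifier(), param_dist, n_iter=10, cv=3, n_jobs=-1)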
import time
start = time.time()
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(10,50,200),"max_features": ["auto",3,5],
'criterion':["gini","entropy"],"min_samples_leaf": [1,2]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
rf_tune = grid_search.RandomizedSearchCV(sdss_rf, parameters,\
n_jobs = -1, cv = 3,verbose=1)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
print("total time in seconds",time.time()- start)
!conda install dask distributed -y
import os
myhome = os.getcwd()
os.environ["PYTHONPATH"] = myhome + "/dask-learn"
myhome = !pwd
!git clone https://github.com/dask/dask-learn.git
%cd dask-learn
!git pull
!python setup.py install
from dklearn.grid_search import GridSearchCV as DaskGridSearchCV
import time
start = time.time()
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(10,50,200),"max_features": ["auto",3,5],
'criterion':["gini","entropy"],"min_samples_leaf": [1,2]}
#parameters = {'kernel':('linear', 'rbf')}
# do a grid search to find the highest 3-fold CV zero-one score
rf_tune = DaskGridSearchCV(sdss_rf, parameters,\
cv = 3)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
print("total time in seconds",time.time()- start)
#To do distributed:
#from distributed import Executor
#executor = Executor()
#executor
Explanation: Let's do this without a full search...
End of explanation
usecols = [str(x) for x in ["objid","dered_r","u_g_color","g_r_color","r_i_color","i_z_color","diff_u",\
"diff_g1","diff_i","diff_z","class"]]
all_sources = pd.read_csv("qso10000.csv",index_col=0,usecols=usecols)[:1000]
all_sources = all_sources.append(pd.read_csv("star1000.csv",index_col=0,usecols=usecols))
all_sources = all_sources.append(pd.read_csv("galaxy1000.csv",index_col=0,usecols=usecols))
all_sources = all_sources[(all_sources["dered_r"] > -9999) & (all_sources["g_r_color"] > -10) & (all_sources["g_r_color"] < 10)]
all_features = copy.copy(all_sources)
all_label = all_sources["class"]
del all_features["class"]
X = copy.copy(all_features.values)
Y = copy.copy(all_label.values)
# instantiate the RF learning object
sdss_rf = RandomForestClassifier()
X = all_features.values
Y = all_label.values
# parameter values over which we will search
parameters = {'n_estimators':(100,),"max_features": ["auto",3,4],
'criterion':["entropy"],"min_samples_leaf": [1,2]}
# do a grid search to find the highest 5-fold CV zero-one score
rf_tune = grid_search.GridSearchCV(sdss_rf, parameters,\
n_jobs = -1, cv = 5,verbose=1)
rf_opt = rf_tune.fit(X, Y)
# print the best score and estimator
print(rf_opt.best_score_)
print(rf_opt.best_estimator_)
probs = rf_opt.best_estimator_.predict_proba(X)
print(rf_opt.best_estimator_.classes_)
for i in range(probs.shape[0]):
if rf_opt.best_estimator_.classes_[np.argmax(probs[i,:])] != Y[i]:
print("Label={0:6s}".format(Y[i]), end=" ")
print("Pgal={0:0.3f} Pqso={1:0.3f} Pstar={2:0.3f}".format(probs[i,0],probs[i,1],probs[i,2]),end=" ")
print("http://skyserver.sdss.org/dr12/en/tools/quicklook/summary.aspx?id=" + str(all_sources.index[i]))
Explanation: Clustering, Unsupervised Learning & Anomaly Detection
It's often of interest to find patterns in the data that you didn't know were there, either as an end in itself or as a starting point for exploration.
One approach is to look at individual sources that are misclassified.
End of explanation
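Since the heading mentions clustering, here is a minimal unsupervised sketch: run k-means on the scaled features and cross-tabulate the found clusters against the known classes (the labels are not used in the fit, only for the comparison):
# sketch: unsupervised k-means on the scaled features
from sklearn.cluster import KMeans
from sklearn import preprocessing
X_scaled = preprocessing.scale(all_features.values)
cluster_ids = KMeans(n_clusters=3, random_state=1).fit_predict(X_scaled)
print(pd.crosstab(cluster_ids, all_label.values))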
from sklearn import (manifold, datasets, decomposition, ensemble,
discriminant_analysis, random_projection)
rp = random_projection.SparseRandomProjection(n_components=2, density=0.3, random_state=1)
X_projected = rp.fit_transform(X)
Y[Y=="QSO"] = 0
Y[Y=="STAR"] = 1
Y[Y=="GALAXY"] = 2
Yi = Y.astype(np.int64)
plt.title("Manifold Sparse Random Projection")
plt.scatter(X_projected[:, 0], X_projected[:, 1],c=plt.cm.Set1(Yi / 3.),alpha=0.2,
edgecolor='none',s=5*(X[:,0] - np.min(X[:,0])))
clf = manifold.MDS(n_components=2, n_init=1, max_iter=100)
X_mds = clf.fit_transform(X)
plt.title("MDS Projection")
plt.scatter(X_mds[:, 0], X_mds[:, 1],c=plt.cm.Set1(Yi / 3.),alpha=0.3,
s=5*(X[:,0] - np.min(X[:,0])))
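Another manifold method worth a try is t-SNE; a sketch (it can be slow beyond a few thousand points):
# sketch: t-SNE embedding of the same feature matrix
tsne = manifold.TSNE(n_components=2, random_state=1)
X_tsne = tsne.fit_transform(X)
plt.title("t-SNE Projection")
plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=plt.cm.Set1(Yi / 3.), alpha=0.3)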
Explanation: We can also do manifold learning to be able to project structure in lower dimensions.
End of explanation |
9,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step8: Vertex constants
Set up the following constants for Vertex
Step9: AutoML constants
Set constants unique to AutoML datasets and training
Step10: Tutorial
Now you are ready to start creating your own AutoML text multi-label classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
Step11: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step12: Now save the unique dataset identifier for the Dataset resource instance you created.
Step13: Data preparation
The Vertex Dataset resource for text has a couple of requirements for your text data.
Text examples must be stored in a CSV or JSONL file.
CSV
For text multi-label classification, the CSV file has a few requirements
Step14: Quick peek at your data
You will use a version of the McDonald's Service dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
Step15: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following
Step16: Train the model
Now train an AutoML text multi-label classification model using your Vertex Dataset resource. To train the model, do the following steps
Step17: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are
Step18: Now save the unique identifier of the training pipeline you created.
Step19: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter
Step20: Deployment
Training the above model may take upwards of 240 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step21: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step22: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps
Step23: Now get the unique identifier for the Endpoint resource you created.
Step24: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step25: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step26: Make an online prediction request
Now do an online prediction with your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
Step27: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters
Step28: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters
Step29: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML text multi-label classification model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_multi-label_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_multi-label_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create text multi-label classification models and do online prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the McDonald's Service. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML text multi-label classification model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, you create a timestamp for each instance session and append it onto the name of the resources created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Text Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
# Text Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_classification_multi_label_io_format_1.0.0.yaml"
# Text Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_classification_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML text multi-label classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=timeout)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("mcdonalds-" + TIMESTAMP, DATA_SCHEMA)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates an Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
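As a minimal illustration (a sketch, not part of the tutorial's code), the same operation object could also be polled manually instead of blocking on result():
import time
def wait_for_operation(operation, poll_seconds=10):
    # Poll until the long running operation reports completion,
    # then return its result (calling result() directly would simply block).
    while not operation.done():
        print("Operation still running ...")
        time.sleep(poll_seconds)
    return operation.result()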
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
IMPORT_FILE = "gs://ucaip-test-us-central1/dataset/ucaip_multi_tcn_dataset.csv"
Explanation: Data preparation
The Vertex Dataset resource for text has a couple of requirements for your text data.
Text examples must be stored in a CSV or JSONL file.
CSV
For text multi-label classification, the CSV file has a few requirements:
No heading.
First column is the text example.
Remaining columns are the labels.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Explanation: Quick peek at your data
You will use a version of the McDonald's Service dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
Explanation: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:
Uses the Dataset client.
Calls the client method import_data, with the following parameters:
name: The Vertex fully qualified identifier for the Dataset resource (as returned when you created it).
import_configs: The import configuration.
import_configs: A Python list containing a dictionary, with the key/value entries:
gcs_sources: A list of URIs to the paths of the one or more index files.
import_schema_uri: The schema identifying the labeling type.
The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML text multi-label classification model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and run as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's look now deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
PIPE_NAME = "mcdonalds_pipe-" + TIMESTAMP
MODEL_NAME = "mcdonalds_model-" + TIMESTAMP
task = json_format.ParseDict(
{
"multi_label": True,
},
Value(),
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
multi_label: Whether True/False this is a multi-label (vs single) classification.
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the information for just this pipeline job by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 240 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably only have one), print all the key names for each metric in the evaluation, and for a small subset (logLoss and auPrc) print the result.
End of explanation
ENDPOINT_NAME = "mcdonalds_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances is provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scalable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
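For example, a manual-scaling configuration with three fixed compute instances would set both fields to the same value — a sketch with placeholder numbers, not part of the tutorial's own configuration:
# Sketch: manual scaling with three fixed replicas (placeholder values)
manual_scaling_resources = {
    "automatic_resources": {
        "min_replica_count": 3,
        "max_replica_count": 3,
    }
}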
End of explanation
DEPLOYED_NAME = "mcdonalds_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"automatic_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model already deployed to the endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
automatic_resources: This refers to how many redundant compute instances (replicas). For this example, we set it to one (no replication).
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but have it get only, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
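As an illustrative sketch (the model ID below is a placeholder, not a real resource), such a split could be expressed as:
# Hypothetical 90/10 split: "0" refers to the model being deployed in this request,
# "1234567890" is a placeholder ID of a model already deployed to the endpoint.
example_traffic_split = {
    "1234567890": 90,
    "0": 10,
}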
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
test_item = ! gsutil cat $IMPORT_FILE | head -n1
cols = str(test_item[0]).split(",")
test_item = cols[0]
test_label = cols[1:]
print(test_item, test_label)
Explanation: Make an online prediction request
Now do an online prediction to your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
def predict_item(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{"content": data}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
return response
response = predict_item(test_item, endpoint_id, None)
Explanation: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
data: The text item to predict.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (text items) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, text models do not support additional parameters.
Request
The format of each instance is:
{ 'content': text_item }
Since the predict() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what you pass to the predict() method.
Response
The response object returns a list, where each element in the list corresponds to the corresponding text in the request. You will see in the output for each prediction -- in this case there is just one:
confidences: Confidence level in the prediction.
displayNames: The predicted label.
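As a small post-processing sketch (the 0.5 threshold is an arbitrary choice, not part of the tutorial), you could pair each predicted label with its confidence and keep only the confident ones:
# Sketch: keep only labels whose confidence exceeds an arbitrary threshold
for prediction in response.predictions:
    pred = dict(prediction)
    confident_labels = [
        (name, conf)
        for name, conf in zip(pred["displayNames"], pred["confidences"])
        if conf >= 0.5
    ]
    print(confident_labels)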
End of explanation
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you can simply leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
9,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unsupervised Analysis of Days of Week
Treating the crossings on each day as features to learn about the relationships between the various days
Step1: Get Data
Step2: Principal Components Analysis
Step3: Unsupervised Clustering
Step4: Comparing with Day of Week
Step5: Days 0-4 are weekdays
Days 5 and 6 are the weekend
Analyzing Outliers
The following points are weekdays with a holiday-like pattern | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
Explanation: Unsupervised Analysis of Days of Week
Treating the crossings on each day as features to learn about the relationships between the various days
End of explanation
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns = data.index.date)
pivoted.plot(legend=False,alpha=0.01)
Explanation: Get Data
End of explanation
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
import matplotlib.pyplot as plt
plt.scatter(X2[:, 0], X2[:, 1])
Explanation: Principal Components Analysis
End of explanation
gmm = GaussianMixture(2).fit(X)
labels = gmm.predict(X)
np.unique(labels)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')
plt.colorbar()
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1, ax=ax[0])
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1, ax=ax[1])
ax[0].set_title('Purple Cluster')
ax[1].set_title('Red Cluster')
Explanation: Unsupervised Clustering
End of explanation
pd.DatetimeIndex(pivoted.columns).dayofweek
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow')
plt.colorbar()
Explanation: Comparing with Day of Week
End of explanation
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels==1) & (dayofweek<5)]
Explanation: Days 0-4 are weekdays
Days 5 and 6 are the weekend
Analyzing Outliers
The following points are weekdays with a holiday-like pattern
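One optional way to sanity-check these outliers (not part of the original analysis; it assumes pandas' built-in US federal holiday calendar covers the relevant years) is to compare them against known holidays:
from pandas.tseries.holiday import USFederalHolidayCalendar
holidays = USFederalHolidayCalendar().holidays('2012', '2017')
outlier_dates = dates[(labels == 1) & (dayofweek < 5)]
# Outliers that do not fall on an official federal holiday
outlier_dates[~outlier_dates.isin(holidays)]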
End of explanation |
9,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collating for real with CollateX. Plain texts
In this exercise, follow the instructions here
Step1: Or directly in the commandline
Step2: Now we're ready to make a collation object. We do this with the slightly hermetic line of code
Step3: Now we add some witnesses. Each witness gets a letter or name that will identify them, and for each we add the literal text of the witness to the collation object.
Step4: And now we can let CollateX do its work of collating these witnesses and sit back for about 0.001 seconds. The result will be an alignment table, so we'll refer to the result with a variable named alignment_table.
Step5: Well, that worked nicely it seems. But there's no printout, no visualization. That's okay, we can come up with a printout of the alignment table too
Step6: CollateX can also collect the segments that run parallel and display them together. To do that, just delete the option segmentation=False as in the line below. We can now collate and print the output again.
Step7: Jupyter Notebook cells order
You may have noticed that if you run the cells in the Notebook in order, they know about one another. For this reason, at the end of this tutorial we could produce different outputs using the information typed into the previous cells. When you open a notebook, remember to run the cells in order or to "run all cells" (from the menu Cell), otherwise you may get an error message.
Recap and exercise
Before moving forward to see how to collate texts stored in files and discover the various outputs that CollateX provides, let's recap what we've done and exercise a bit.
We are using
First, create a new Markdown cell at the end of this Notebook (you could also create a new Notebook, but we'll save time by working in this one). Write in the new cell something like My CollateX test, so you know that these are your tests from that cell onwards. You can use the Markdown cells to document what is happening around them.
Then, create a Code cell and copy the code here below | Python Code:
!pip install --upgrade collatex
Explanation: Collating for real with CollateX. Plain texts
In this exercise, follow the instructions here: read the Markdown cells and execute the Code cells (the ones with In + a number on their left).
Not sure how to execute cells in a Notebook? Check the Jupyter Notebook tutorial.
Delete the outputs
In this notebook, you might already have the code and the outputs, that is, the results. We want to create the results afresh, so let's clear all the outputs. Go to the menu 'Kernel', choose 'Restart & Clear Output' and confirm it when Jupyter asks. Wait a few seconds: a blue message saying 'Kernel ready' appears; if you don't see it, don't worry, it is so quick that you might have missed it. Either way, the Notebook is ready again.
Please note that we are clearing the results only because we want to run everything in the exercise. But if in the future you come back here, you don't need to delete the results before starting.
Update CollateX
CollateX is already installed, but we want to make sure to have the latest version of CollateX. You don't need to do this every time, but make sure you do it regularly.
That's why we run the following in the Jupyter Notebook:
End of explanation
from collatex import *
Explanation: Or directly in the commandline: pip install --upgrade collatex (without the exclamation mark at the beginning of the line).
Run CollateX
Finally, we can use CollateX.
We need to tell Python that we will be needing the CollateX package. A package or library is a set of code files that together form a program. In Python, before using a library, you need to ask for it. Here is how you do it:
End of explanation
collation = Collation()
Explanation: Now we're ready to make a collation object. We do this with the slightly hermetic line of code:
collation = Collation()
Here the lower case collation is the arbitrarily named variable that refers to a copy (officially it is called an instance) of the CollateX collation engine. We simply tell the collation library to create a new instance by saying Collation().
End of explanation
collation.add_plain_witness( "A", "The quick brown fox jumped over the lazy dog.")
collation.add_plain_witness( "B", "The brown fox jumped over the dog." )
collation.add_plain_witness( "C", "The bad fox jumped over the lazy dog." )
Explanation: Now we add some witnesses. Each witness gets a letter or name that will identify them, and for each we add the literal text of the witness to the collation object.
End of explanation
alignment_table = collate(collation, layout='vertical', segmentation=False )
Explanation: And now we can let CollateX do its work of collating these witnesses and sit back for about 0.001 seconds. The result will be an alignment table, so we'll refer to the result with a variable named alignment_table.
End of explanation
print( alignment_table )
Explanation: Well, that worked nicely it seems. But there's no printout, no visualization. That's okay, we can come up with a printout of the alignment table too:
End of explanation
alignment_table = collate(collation, layout='vertical' )
print( alignment_table )
Explanation: CollateX can also collect the segments that run parallel and display them together. To do that, just delete the option segmentation=False as in the line below. We can now collate and print the output again.
End of explanation
from collatex import *
collation = Collation()
collation.add_plain_witness( "W1", "Some texts here")
collation.add_plain_witness( "W2", "Some text here as well" )
collation.add_plain_witness( "W3", "Some texts in the third witness as well" )
collation.add_plain_witness( "W4", "Some texts here")
alignment_table = collate(collation, layout='vertical')
print( alignment_table )
Explanation: Jupyter Notebook cells order
You may have noticed that if you run the cells in the Notebook in order, they know about one another. For this reason, at the end of this tutorial we could produce different outputs using the information typed into the previous cells. When you open a notebook, remember to run the cells in order or to "run all cells" (from the menu Cell), otherwise you may get an error message.
Recap and exercise
Before moving forward to see how to collate texts stored in files and discover the various outputs that CollateX provides, let's recap what we've done and exercise a bit.
We are using
First, create a new Markdown cell at the end of this Notebook (you could also create a new Notebook, but we'll save time by working in this one). Write in the new cell something like My CollateX test, so you know that these are your tests from that cell onwards. You can use the Markdown cells to document what is happening around them.
Then, create a Code cell and copy the code here below: this is all CollateX needs to collate some texts, the same instructions we gave it before but all together.
Now run the cell a first time and see the results.
Make changes and see how the output changes when you run the cell again. Change one thing at a time: this way, if you get an error message, it will be easier to debug the code. Try the following changes:
Change the text for each witness
Set the segmentation option to True (you will see that it is the same as deleting it)
Add a new witness
It is also possible to change the sigil for each witness. The sigil is the abbreviation used for referring to a witness, here 'A', 'B', 'C'.
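For instance, one possible variation of the exercise — the witness texts and sigla below are made up purely for illustration — could look like this:
from collatex import *
collation = Collation()
collation.add_plain_witness("MS-A", "The sun rises over the green hill")
collation.add_plain_witness("MS-B", "The sun rises above the hill")
collation.add_plain_witness("MS-C", "A sun rose over the green hill")
alignment_table = collate(collation, layout='vertical', segmentation=True)
print(alignment_table)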
End of explanation |
9,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div Style="text-align
Step1: Load the images and convert them to grayscale
Step2: Example of a phytolith in grayscale
Step3: Split the image set into training and evaluation subsets
Step4: Extract the features of the training-set images
Using the Bag of Words technique.
Step5: Build the training set
Step6: Train the classifier
Step7: Predict images
Using the Bag of Words technique.
Step8: Evaluate the classifier's accuracy | Python Code:
%matplotlib inline
#to plot inside the notebook itself
import numpy as np #numpy as np
import matplotlib.pyplot as plt #matplotlib's pyplot as plt
from skimage.feature import daisy
from skimage.color import rgb2gray
from sklearn.cluster import MiniBatchKMeans as KMeans
from sklearn import svm
import warnings
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
Explanation: <div Style="text-align: center;line-height: 30px;font-size:32px;font-weight: bold"> Phytolith classifier</div>
This notebook is based on another notebook authored by Dr. José Francisco Diez (UBU). It does not explain the details of the classification tasks, Bag of Words, feature descriptors, etc. For those, please review the documentation in this same repository.
End of explanation
from ImageDataset import ImageDataset
# phytoliths_types = ['Rondel','Bulliform','Bilobate','Trichomas',
# 'Saddle', 'Spherical', 'Cyperaceae']
phytoliths_types = ['Phytolith', 'Background']
dataset=ImageDataset('../../rsc/img', phytoliths_types)
X,y = dataset.getData()
X = list(map(rgb2gray,X))
Explanation: Load the images and convert them to grayscale
End of explanation
from skimage.io import imshow
imshow(X[25])
print(y[25])
Explanation: Example of a phytolith in grayscale
End of explanation
TRAIN_SIZE = 0.7
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
stratify=y,
train_size = TRAIN_SIZE)
Explanation: Split the image set into training and evaluation subsets
End of explanation
def features_extractor(img, descriptor = daisy):
features = descriptor(img)
numFils, numCols, sizeDesc = features.shape
features = features.reshape((numFils*numCols,sizeDesc))
return features
PROGRESS_NOTIFICATION = 10
def whole_features_extractor(X_train):
    '''Extract the features of a whole
    set of images.'''
train_features = []
i = 0
for img in X_train:
if i%PROGRESS_NOTIFICATION ==0:
print("Procesada imagen"+str(i)+"/"+
str(len(X_train)), end="\r")
train_features.append(features_extractor(img))
i += 1
print("Procesadas todas las imágenes", end="\r")
all_features = np.concatenate(train_features)
return train_features, all_features
NUM_CENTERS = 200
def get_features_cluster(X_train, num_centers):
train_features, all_features = whole_features_extractor(X_train)
    # Initialize the KMeans algorithm,
    # specifying the number of clusters
warnings.filterwarnings("ignore")
kmeans = KMeans(num_centers)
    # Fit the clustering model with all the
    # features of the training set
return kmeans.fit(all_features),train_features
cluster, train_features = get_features_cluster(X_train, NUM_CENTERS)
Explanation: Extract the features of the training-set images
Using the Bag of Words technique.
End of explanation
def bow_histogram_extractor(imgFeatures, num_centers):
    # get the cluster membership of each descriptor
    pertenencias = cluster.predict(imgFeatures)
    # build the bag-of-words histogram
bow_representation, _ = np.histogram(pertenencias,
bins=num_centers,
range=(0,num_centers-1))
return bow_representation
# If given a set of images X, it extracts their features first
# If given precomputed features, it uses them directly
def get_training_set(cluster,X=None, t_features=None):
train_features = t_features
if t_features is None:
train_features, _ = whole_features_extractor(X)
num_centers = len(cluster.cluster_centers_)
trainInstances = []
for imgFeatures in train_features:
        # append to the final training set
trainInstances.append(bow_histogram_extractor(imgFeatures,
num_centers))
trainInstances = np.array(trainInstances)
return trainInstances
trainInstances = get_training_set(cluster,X=X_train)
Explanation: Build the training set
End of explanation
def get_trained_classifier(trainInstances,y_train,
classifier= svm.SVC(kernel='linear',
C=0.01,
probability=True)):
return classifier.fit(trainInstances, y_train)
cls = get_trained_classifier(trainInstances,y_train, AdaBoostClassifier(learning_rate=1.5,
base_estimator=DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=4,
max_features=None, max_leaf_nodes=None,
min_impurity_split=1e-07, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
presort=False, random_state=None, splitter='best'),
algorithm='SAMME.R', n_estimators=600))
Explanation: Train the classifier
End of explanation
def predict_image(imgTest):
global cluster
global cls
num_centers = len(cluster.cluster_centers_)
imgFeatures = features_extractor(imgTest)
testInstances = np.array(bow_histogram_extractor(imgFeatures,
num_centers))
return cls.predict_proba(testInstances)
def predict_image_class(imgTest, types = dataset.getClasses()):
return types[np.argmax(predict_image(imgTest)[0])]
predict_image_class(X_test[30])
imshow(X_test[30])
Explanation: Predict images
Using the Bag of Words technique.
End of explanation
from sklearn.metrics import accuracy_score
testInstances = get_training_set(cluster,X=X_test)
y_pred = list(map(predict_image_class,X_test))
print("%.2f" % accuracy_score(y_test, y_pred))
Explanation: Evaluate the classifier's accuracy
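Beyond overall accuracy, a per-class breakdown is often more informative; a possible extension (not in the original notebook) would be:
from sklearn.metrics import classification_report, confusion_matrix
# Per-class precision/recall/F1 and the confusion matrix
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred, labels=phytoliths_types))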
End of explanation |
9,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
102 - Training Regression Algorithms with the L-BFGS Solver
In this example, we run a linear regression on the Flight Delay dataset to predict the delay times.
We demonstrate how to use the TrainRegressor and the ComputePerInstanceStatistics APIs.
First, import the packages.
Step1: Next, import the CSV dataset.
Step2: Split the dataset into train and test sets.
Step3: Train a regressor on dataset with l-bfgs.
Step4: Score the regressor on the test data.
Step5: Compute model metrics against the entire scored dataset
Step6: Finally, compute and show per-instance statistics, demonstrating the usage
of ComputePerInstanceStatistics. | Python Code:
import numpy as np
import pandas as pd
import mmlspark
Explanation: 102 - Training Regression Algorithms with the L-BFGS Solver
In this example, we run a linear regression on the Flight Delay dataset to predict the delay times.
We demonstrate how to use the TrainRegressor and the ComputePerInstanceStatistics APIs.
First, import the packages.
End of explanation
# load raw data from small-sized 30 MB CSV file (trimmed to contain just what we use)
dataFile = "On_Time_Performance_2012_9.csv"
import os, urllib
if not os.path.isfile(dataFile):
urllib.request.urlretrieve("https://mmlspark.azureedge.net/datasets/"+dataFile, dataFile)
flightDelay = spark.createDataFrame(
pd.read_csv(dataFile, dtype={"Month": np.float64, "Quarter": np.float64,
"DayofMonth": np.float64, "DayOfWeek": np.float64,
"OriginAirportID": np.float64, "DestAirportID": np.float64,
"CRSDepTime": np.float64, "CRSArrTime": np.float64}))
# Print information on the dataset we loaded
print("records read: " + str(flightDelay.count()))
print("Schema:")
flightDelay.printSchema()
flightDelay.limit(10).toPandas()
Explanation: Next, import the CSV dataset.
End of explanation
train,test = flightDelay.randomSplit([0.75, 0.25])
Explanation: Split the dataset into train and test sets.
End of explanation
from mmlspark import TrainRegressor, TrainedRegressorModel
from pyspark.ml.regression import LinearRegression
from pyspark.ml.feature import StringIndexer
# Convert columns to categorical
catCols = ["Carrier", "DepTimeBlk", "ArrTimeBlk"]
trainCat = train
testCat = test
for catCol in catCols:
simodel = StringIndexer(inputCol=catCol, outputCol=catCol + "Tmp").fit(train)
trainCat = simodel.transform(trainCat).drop(catCol).withColumnRenamed(catCol + "Tmp", catCol)
testCat = simodel.transform(testCat).drop(catCol).withColumnRenamed(catCol + "Tmp", catCol)
lr = LinearRegression().setSolver("l-bfgs").setRegParam(0.1).setElasticNetParam(0.3)
model = TrainRegressor(model=lr, labelCol="ArrDelay").fit(trainCat)
model.write().overwrite().save("flightDelayModel.mml")
Explanation: Train a regressor on dataset with l-bfgs.
End of explanation
flightDelayModel = TrainedRegressorModel.load("flightDelayModel.mml")
scoredData = flightDelayModel.transform(testCat)
scoredData.limit(10).toPandas()
Explanation: Score the regressor on the test data.
End of explanation
from mmlspark import ComputeModelStatistics
metrics = ComputeModelStatistics().transform(scoredData)
metrics.toPandas()
Explanation: Compute model metrics against the entire scored dataset
End of explanation
from mmlspark import ComputePerInstanceStatistics
evalPerInstance = ComputePerInstanceStatistics().transform(scoredData)
evalPerInstance.select("ArrDelay", "Scores", "L1_loss", "L2_loss").limit(10).toPandas()
Explanation: Finally, compute and show per-instance statistics, demonstrating the usage
of ComputePerInstanceStatistics.
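As an optional follow-up (not part of the original walkthrough), the same per-instance output can be sorted by loss to surface the flights the model predicts worst:
worst = evalPerInstance.select("ArrDelay", "Scores", "L1_loss", "L2_loss") \
    .orderBy("L2_loss", ascending=False)
worst.limit(10).toPandas()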
End of explanation |
9,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data passing tutorial
Data passing is the most important aspect of Pipelines.
In Kubeflow Pipelines, the pipeline authors compose pipelines by creating component instances (tasks) and connecting them together.
Components have inputs and outputs. They can consume and produce arbitrary data.
Pipeline authors establish connections between component tasks by connecting their data inputs and outputs - by passing the output of one task as an argument to another task's input.
The system takes care of storing the data produced by components and later passing that data to other components for consumption as instructed by the pipeline.
This tutorial shows how to create python components that produce, consume and transform data.
It shows how to create data passing pipelines by instantiating components and connecting them together.
Step1: Small data
Small data is the data that you'll be comfortable passing as a program's command-line argument. Small data size should not exceed a few kilobytes.
Some examples of typical types of small data are
Step2: Producing small data
Step3: Producing and consuming multiple arguments
Step4: Consuming and producing data at the same time
Step5: Bigger data (files)
Bigger data should be read from files and written to files.
The paths for the input and output files are chosen by the system and are passed into the function (as strings).
Use the InputPath parameter annotation to tell the system that the function wants to consume the corresponding input data as a file. The system will download the data, write it to a local file and then pass the path of that file to the function.
Use the OutputPath parameter annotation to tell the system that the function wants to produce the corresponding output data as a file. The system will prepare and pass the path of a file where the function should write the output data. After the function exits, the system will upload the data to the storage system so that it can be passed to downstream components.
You can specify the type of the consumed/produced data by specifying the type argument to InputPath and OutputPath. The type can be a python type or an arbitrary type name string. OutputPath('TFModel') means that the function states that the data it has written to a file has type 'TFModel'. InputPath('TFModel') means that the function states that it expects the data it reads from a file to have type 'TFModel'. When the pipeline author connects inputs to outputs, the system checks whether the types match.
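As a rough sketch of what such file-based components can look like (the function names and contents here are illustrative, not the tutorial's own code):
from kfp.components import func_to_container_op, InputPath, OutputPath
@func_to_container_op
def repeat_line(line: str, output_text_path: OutputPath(str), count: int = 10):
    '''Writes the line to the output file several times (illustrative sketch)'''
    with open(output_text_path, 'w') as writer:
        for i in range(count):
            writer.write(line + '\n')
@func_to_container_op
def print_text_file(text_path: InputPath()):
    '''Prints the contents of the given text file (illustrative sketch)'''
    with open(text_path, 'r') as reader:
        print(reader.read())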
Note on input/output names
Step6: Processing bigger data
Step7: Processing bigger data with pre-opened files
Step8: Example | Python Code:
# Put your KFP cluster endpoint URL here if working from GCP notebooks (or local notebooks). ('https://xxxxx.notebooks.googleusercontent.com/')
kfp_endpoint='https://XXXXX.{pipelines|notebooks}.googleusercontent.com/'
# Install Kubeflow Pipelines SDK. Add the --user argument if you get permission errors.
!PIP_DISABLE_PIP_VERSION_CHECK=1 python3 -m pip install 'kfp>=0.1.32.2' --quiet --user
from typing import NamedTuple
import kfp
from kfp.components import InputPath, InputTextFile, OutputPath, OutputTextFile
from kfp.components import func_to_container_op
Explanation: Data passing tutorial
Data passing is the most important aspect of Pipelines.
In Kubeflow Pipelines, the pipeline authors compose pipelines by creating component instances (tasks) and connecting them together.
Components have inputs and outputs. They can consume and produce arbitrary data.
Pipeline authors establish connections between component tasks by connecting their data inputs and outputs - by passing the output of one task as an argument to another task's input.
The system takes care of storing the data produced by components and later passing that data to other components for consumption as instructed by the pipeline.
This tutorial shows how to create python components that produce, consume and transform data.
It shows how to create data passing pipelines by instantiating components and connecting them together.
End of explanation
@func_to_container_op
def print_small_text(text: str):
'''Print small text'''
print(text)
def constant_to_consumer_pipeline():
'''Pipeline that passes a small constant string to the consumer'''
consume_task = print_small_text('Hello world') # Passing constant as argument to consumer
kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(constant_to_consumer_pipeline, arguments={})
def pipeline_parameter_to_consumer_pipeline(text: str):
'''Pipeline that passes a small pipeline parameter string to the consumer'''
consume_task = print_small_text(text) # Passing pipeline parameter as argument to consumer
kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(
pipeline_parameter_to_consumer_pipeline,
arguments={'text': 'Hello world'}
)
Explanation: Small data
Small data is data that you'll be comfortable passing as a program's command-line argument. Small data size should not exceed a few kilobytes.
Some examples of typical types of small data are: number, URL, small string (e.g. column name).
Small lists, dictionaries and JSON structures are fine, but keep an eye on the size and consider switching to file-based data passing methods that are more suitable for bigger data (more than several kilobytes) or binary data.
All small data outputs will be at some point serialized to strings and all small data input values will be at some point deserialized from strings (passed as command-line arguments). There are built-in serializers and deserializers for several common types (e.g. str, int, float, bool, list, dict). All other types of data need to be serialized manually before returning the data. Make sure to properly specify type annotations, otherwise there would be no automatic deserialization and the component function will receive strings instead of deserialized objects.
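For example, a producer that returns a small dict only needs a return type annotation for the built-in serializer to handle it. A minimal sketch (the component below is illustrative and not part of the original tutorial):
@func_to_container_op
def produce_settings() -> dict:
    # the built-in dict serializer turns this value into a string behind the scenes;
    # without the "-> dict" annotation, typed downstream consumers would receive a raw string
    return {'learning_rate': 0.01, 'epochs': 5}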
Consuming small data
End of explanation
@func_to_container_op
def produce_one_small_output() -> str:
return 'Hello world'
def task_output_to_consumer_pipeline():
'''Pipeline that passes small data from producer to consumer'''
produce_task = produce_one_small_output()
# Passing producer task output as argument to consumer
consume_task1 = print_small_text(produce_task.output) # task.output only works for single-output components
consume_task2 = print_small_text(produce_task.outputs['output']) # task.outputs[...] always works
kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(task_output_to_consumer_pipeline, arguments={})
Explanation: Producing small data
End of explanation
@func_to_container_op
def produce_two_small_outputs() -> NamedTuple('Outputs', [('text', str), ('number', int)]):
return ("data 1", 42)
@func_to_container_op
def consume_two_arguments(text: str, number: int):
print('Text={}'.format(text))
print('Number={}'.format(str(number)))
def producers_to_consumers_pipeline(text: str = "Hello world"):
'''Pipeline that passes data from producer to consumer'''
produce1_task = produce_one_small_output()
produce2_task = produce_two_small_outputs()
consume_task1 = consume_two_arguments(produce1_task.output, 42)
consume_task2 = consume_two_arguments(text, produce2_task.outputs['number'])
consume_task3 = consume_two_arguments(produce2_task.outputs['text'], produce2_task.outputs['number'])
kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(producers_to_consumers_pipeline, arguments={})
Explanation: Producing and consuming multiple arguments
End of explanation
@func_to_container_op
def get_item_from_list(list_of_strings: list, index: int) -> str:
return list_of_strings[index]
@func_to_container_op
def truncate_text(text: str, max_length: int) -> str:
return text[0:max_length]
def processing_pipeline(text: str = "Hello world"):
truncate_task = truncate_text(text, max_length=5)
get_item_task = get_item_from_list(list_of_strings=[3, 1, truncate_task.output, 1, 5, 9, 2, 6, 7], index=2)
print_small_text(get_item_task.output)
kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(processing_pipeline, arguments={})
Explanation: Consuming and producing data at the same time
End of explanation
# Writing bigger data
@func_to_container_op
def repeat_line(line: str, output_text_path: OutputPath(str), count: int = 10):
'''Repeat the line specified number of times'''
with open(output_text_path, 'w') as writer:
for i in range(count):
writer.write(line + '\n')
# Reading bigger data
@func_to_container_op
def print_text(text_path: InputPath()): # The "text" input is untyped so that any data can be printed
'''Print text'''
with open(text_path, 'r') as reader:
for line in reader:
print(line, end = '')
def print_repeating_lines_pipeline():
repeat_lines_task = repeat_line(line='Hello', count=5000)
print_text(repeat_lines_task.output) # Don't forget .output !
kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(print_repeating_lines_pipeline, arguments={})
Explanation: Bigger data (files)
Bigger data should be read from files and written to files.
The paths for the input and output files are chosen by the system and are passed into the function (as strings).
Use the InputPath parameter annotation to tell the system that the function wants to consume the corresponding input data as a file. The system will download the data, write it to a local file and then pass the path of that file to the function.
Use the OutputPath parameter annotation to tell the system that the function wants to produce the corresponding output data as a file. The system will prepare and pass the path of a file where the function should write the output data. After the function exits, the system will upload the data to the storage system so that it can be passed to downstream components.
You can specify the type of the consumed/produced data by specifying the type argument to InputPath and OutputPath. The type can be a python type or an arbitrary type name string. OutputPath('TFModel') means that the function states that the data it has written to a file has type 'TFModel'. InputPath('TFModel') means that the function states that it expects the data it reads from a file to have type 'TFModel'. When the pipeline author connects inputs to outputs the system checks whether the types match.
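A minimal sketch of such a typed component (the train_model function and the 'Dataset'/'TFModel' type names are purely illustrative, not part of the original tutorial):
@func_to_container_op
def train_model(dataset_path: InputPath('Dataset'), model_path: OutputPath('TFModel')):
    # read the training data from the local file prepared by the system
    with open(dataset_path) as reader:
        data = reader.read()
    # write the "model" to the path provided by the system; it is uploaded after the function exits
    with open(model_path, 'w') as writer:
        writer.write('model trained on %d characters' % len(data))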
Note on input/output names: When the function is converted to component, the input and output names generally follow the parameter names, but the "_path" and "_file" suffixes are stripped from file/path inputs and outputs. E.g. the number_file_path: InputPath(int) parameter becomes the number: int input. This makes the argument passing look more natural: number=42 instead of number_file_path=42.
Writing and reading bigger data
End of explanation
@func_to_container_op
def split_text_lines(source_path: InputPath(str), odd_lines_path: OutputPath(str), even_lines_path: OutputPath(str)):
with open(source_path, 'r') as reader:
with open(odd_lines_path, 'w') as odd_writer:
with open(even_lines_path, 'w') as even_writer:
while True:
line = reader.readline()
if line == "":
break
odd_writer.write(line)
line = reader.readline()
if line == "":
break
even_writer.write(line)
def text_splitting_pipeline():
text = '\n'.join(['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten'])
split_text_task = split_text_lines(text)
print_text(split_text_task.outputs['odd_lines'])
print_text(split_text_task.outputs['even_lines'])
kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(text_splitting_pipeline, arguments={})
Explanation: Processing bigger data
End of explanation
@func_to_container_op
def split_text_lines2(source_file: InputTextFile(str), odd_lines_file: OutputTextFile(str), even_lines_file: OutputTextFile(str)):
while True:
line = source_file.readline()
if line == "":
break
odd_lines_file.write(line)
line = source_file.readline()
if line == "":
break
even_lines_file.write(line)
def text_splitting_pipeline2():
text = '\n'.join(['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine', 'ten'])
split_text_task = split_text_lines2(text)
print_text(split_text_task.outputs['odd_lines']).set_display_name('Odd lines')
print_text(split_text_task.outputs['even_lines']).set_display_name('Even lines')
kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(text_splitting_pipeline2, arguments={})
Explanation: Processing bigger data with pre-opened files
End of explanation
# Writing many numbers
@func_to_container_op
def write_numbers(numbers_path: OutputPath(str), start: int = 0, count: int = 10):
with open(numbers_path, 'w') as writer:
for i in range(start, count):
writer.write(str(i) + '\n')
# Reading and summing many numbers
@func_to_container_op
def sum_numbers(numbers_path: InputPath(str)) -> int:
sum = 0
with open(numbers_path, 'r') as reader:
for line in reader:
sum = sum + int(line)
return sum
# Pipeline to sum 100000 numbers
def sum_pipeline(count: 'Integer' = 100000):
numbers_task = write_numbers(count=count)
print_text(numbers_task.output)
sum_task = sum_numbers(numbers_task.outputs['numbers'])
print_text(sum_task.output)
# Running the pipeline
kfp.Client(host=kfp_endpoint).create_run_from_pipeline_func(sum_pipeline, arguments={})
Explanation: Example: Pipeline that generates then sums many numbers
End of explanation |
9,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Author
Step1: According to the S-2 data product specifics, band 4 and band 5 are represented with rasters of different sizes and that can be easily verified
Step2: The obtained results confirm that the two bands have been detected with different resolutions and if the user wanted to implement some processing that involves those bands, he should first operate a RESAMPLING of the S-2 product, according to a selected pixel resolution. The resampling operation can be directly executed in SNAP because it is included in the SNAP Graph Processing Framework(GPF), a wide collection of data processors that can be applied to a Sentinel data product. Each data processor is called a GPF operator and it can be invoked in the desktop version of SNAP, in Python with the snappy module, or directly in the Windows/Linux command-line.
The resampling operation is a typical example of a GPF operator because it is provided with a dedicated user interface that is available in the desktop version of SNAP. It is really important to look at the input parameters that must be set when the user wants to invoke a specific GPF operator. As for most of the GPF operators, also for the RESAMPLE operator the list of input parameters can be found in its user interface, as follows
Step3: It is then possible to construct an empty HashMap object and include the selected parameters with their values in it. In this simple case only the resolution parameter will be set.
Step4: So the resolution in this specific case will be 20 meters per pixel. After the parameter definition it is possible to invoke the resampling operator using a syntax that is the same for all the GPF operators
Step5: The output variable is a new data product and all its bands are now represented with the same resolution. As a test to confirm the successful operation it is possible to look again at band 4 and 5 to see what happened with them | Python Code:
import snappy
from snappy import ProductIO
file_path = 'C:\Program Files\snap\S2A_MSIL1C_20170202T090201_N0204_R007_T35SNA_20170202T090155.SAFE\MTD_MSIL1C.xml'
product = ProductIO.readProduct(file_path)
list(product.getBandNames())
Explanation: Author: Antonio Vecoli
Date: 06/06/2017
Tech For Space www.techforspace.com
License: MIT License
For any technical or Python support please refer to our Project Page
Handling a Sentinel-2 product with SNAP in Python (Tutorial 2)
After the basic elements of SNAP explained in the first two tutorials, it is now possible to introduce a set of more advanced operations that allow the user to modify a Sentinel-2 data product and make it available for specific scientific processing. The Sentinel-2 product used in this case is the same as in Tutorial 1 and it can be downloaded (with a personal account) at the following link:
https://scihub.copernicus.eu/dhus/odata/v1/Products('c94ebae2-3b0d-4472-96a0-324bb54d7bdf')/$value
Resampling a Sentinel-2 data product
In the first tutorial a single band image has been extracted from a complete Sentinel-2 product without implementing any scientific analysis on it, because the aim was to show how it is possible to read an S-2 product in Python, also suggesting some simple image processing technique for a better visualization. But in general, when working with multispectral data, several techniques of scientific analysis need to consider more than one band at the same time. In this case, for all the selected bands, their rasters should be available with the same spatial resolution, so that all the images and data arrays will have the same size in term of number of pixels and matrix dimensions.
Let's consider a simple comparison between two different bands of the current S-2 product:
End of explanation
B4 = product.getBand('B4')
B5 = product.getBand('B5')
Width_4 = B4.getRasterWidth()
Height_4 = B4.getRasterHeight()
print("Band 4 Size: " + str(Width_4) +','+ str(Height_4))
Width_5 = B5.getRasterWidth()
Height_5 = B5.getRasterHeight()
print("Band 5 Size: " + str(Width_5) +','+ str(Height_5))
Explanation: According to the S-2 data product specifics, band 4 and band 5 are represented with rasters of different sizes and that can be easily verified :
End of explanation
from snappy import jpy
HashMap = snappy.jpy.get_type('java.util.HashMap')
Explanation: The obtained results confirm that the two bands have been detected with different resolutions and if the user wanted to implement some processing that involves those bands, he should first operate a RESAMPLING of the S-2 product, according to a selected pixel resolution. The resampling operation can be directly executed in SNAP because it is included in the SNAP Graph Processing Framework(GPF), a wide collection of data processors that can be applied to a Sentinel data product. Each data processor is called a GPF operator and it can be invoked in the desktop version of SNAP, in Python with the snappy module, or directly in the Windows/Linux command-line.
The resampling operation is a typical example of a GPF operator because it is provided with a dedicated user interface that is available in the desktop version of SNAP. It is really important to look at the input parameters that must be set when the user wants to invoke a specific GPF operator. As for most of the GPF operators, also for the RESAMPLE operator the list of input parameters can be found in its user interface, as follows:
<img src="Resampling_list.jpg">
The displayed list can change depending on the type of resampling the user wants to implement; the available options can be found in the desktop version of SNAP, looking into the user interface of the operator.
In Python a GPF operator can be invoked only after the definition of the list of input parameters , using a Java HashMap object (java.util.HashMap). For this reason, whenever the user wants to work with a GPF operator he must always import that Java class in Python using the Jpy module:
End of explanation
parameters = HashMap()
parameters.put('targetResolution',20)
Explanation: It is then possible to construct an empty HashMap object and include the selected parameters with their values in it. In this simple case only the resolution parameter will be set.
End of explanation
result = snappy.GPF.createProduct('Resample',parameters,product)
Explanation: So the resolution in this specific case will be 20 meters per pixel. After the parameter definition it is possible to invoke the resampling operator using a syntax that is the same for all the GPF operators:
createProduct(String operatorName,Map(String,Object) parameters,Product sourceProduct)
and the Python implementation is given in the following line:
End of explanation
B4 = result.getBand('B4')
B5 = result.getBand('B5')
Width_4 = B4.getRasterWidth()
Height_4 = B4.getRasterHeight()
print("Band 4 Size: " + str(Width_4) +','+ str(Height_4))
Width_5 = B5.getRasterWidth()
Height_5 = B5.getRasterHeight()
print("Band 5 Size: " + str(Width_5) +','+ str(Height_5))
Explanation: The output variable is a new data product and all its bands are now represented with the same resolution. As a test to confirm the successful operation it is possible to look again at band 4 and 5 to see what happened with them:
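If the resampled product has to be reused outside this session, it can also be written to disk with ProductIO. A short sketch (the output file name and the BEAM-DIMAP format choice here are just examples):
# write the resampled product next to the notebook in SNAP's BEAM-DIMAP format
ProductIO.writeProduct(result, 'S2_resampled_20m.dim', 'BEAM-DIMAP')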
End of explanation |
9,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One more instruction for moving
QUESTIONS
When the list pos contains 6 angles in degrees, what does the following set of instructions do?
What is the difference with m.goal_position = 30, for example?
Step1: Reading markers in the form of QR codes
Run these instructions with the cards provided for this purpose.
What do you observe?
Step2: Movement challenge using cards
Enigme | Python Code:
i = 0
for m in poppy.motors:
m.compliant = False
m.goto_position(pos[i], 0.5, wait = True)
i = i + 1
Explanation: One more instruction for moving
QUESTIONS
When the list pos contains 6 angles in degrees, what does the following set of instructions do?
What is the difference with m.goal_position = 30, for example?
End of explanation
# import the required tools
import cv2
%matplotlib inline
import matplotlib.pyplot as plt
from hampy import detect_markers
# display the captured image
img = poppy.camera.frame
plt.imshow(img)
# collect the markers found in the image into a list
markers = detect_markers(img)
valeur = 0
for m in markers:
print('Found marker {} at {}'.format(m.id, m.center))
m.draw_contour(img)
valeur = m.id
print(valeur)
Explanation: Reading markers in the form of QR codes
Run these instructions with the cards provided for this purpose.
What do you observe?
End of explanation
while (condition):
# body of the loop
import time
# Hint: the command time.sleep(2.0) pauses for 2 seconds
RIGH = 82737172
LEFT = 76697084
NEXT = 78698884
PREV = 80826986
# the instruction below creates a list
liste_moteur = [m for m in poppy.motors]
# however, poppy.motors is already a list. To check this,
# type(poppy.motors) returns the type of the poppy.motors container
Explanation: Movement challenge using cards
Riddle: the cards were designed to be read. Can you work out how the variable names can be reconstructed from the values?
Set all the motor leds to pink.
Detect one of the 4 markers and perform the action corresponding to its name:
Next moves on to the next motor in the motor list
Prev goes back to the previous one
Righ increases the current position by 5 degrees
Left decreases the current position by 5 degrees
To identify the selected motor, its led is red while it is selected.
While a motor is moving, its led is green.
We start with motor m1, and once motor m6 has been reached, reading the next card ends the program.
Remark
In Python, a while loop is written as shown in the skeleton in the code cell above (while condition: followed by the loop body).
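A possible skeleton for the challenge loop, given only as a sketch (it assumes the motors expose a led attribute, as on the Poppy Ergo Jr, and it is not a complete solution):
index = 0                              # start with motor m1
for moteur in liste_moteur:
    moteur.led = 'pink'                # all leds pink at the start
liste_moteur[index].led = 'red'        # highlight the selected motor
fini = False
while not fini:
    for marker in detect_markers(poppy.camera.frame):
        if marker.id == NEXT:
            liste_moteur[index].led = 'pink'
            if index == len(liste_moteur) - 1:
                fini = True            # NEXT on the last motor ends the program
            else:
                index = index + 1
                liste_moteur[index].led = 'red'
        elif marker.id == PREV and index > 0:
            liste_moteur[index].led = 'pink'
            index = index - 1
            liste_moteur[index].led = 'red'
        elif marker.id in (RIGH, LEFT):
            moteur = liste_moteur[index]
            moteur.led = 'green'       # green while the motor is moving
            delta = 5 if marker.id == RIGH else -5
            moteur.goto_position(moteur.present_position + delta, 0.5, wait=True)
            moteur.led = 'red'
    time.sleep(0.5)                    # small pause between two card readings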
End of explanation |
9,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook demonstrates how to carry out an ordering of a disordered structure using pymatgen.
Step1: Note that each site is now 50% occupied by Cu and Au. Because the ordering algorithm uses an Ewald summation to rank the structures, you need to explicitly specify the oxidation state for each species, even if it is 0. Let us now perform ordering of these sites using two methods.
Method 1 - Using the OrderDisorderedStructureTransformation
The first method is to use the OrderDisorderedStructureTransformation.
Step2: Note that the OrderDisorderedTransformation (with a sufficiently large return_ranked_list parameter) returns all orderings, including duplicates without accounting for symmetry. A computed ewald energy is returned together with each structure. To eliminate duplicates, the best way is to use StructureMatcher's group_structures method, as demonstrated below.
Step3: Method 2 - Using the EnumerateStructureTransformation
If you have enumlib installed, you can use the EnumerateStructureTransformation. This automatically takes care of symmetrically equivalent orderings and can enumerate supercells, but is much more prone to parameter sensitivity and cannot handle very large structures. The example below shows an enumeration of CuAu up to cell sizes of 3. | Python Code:
# Let us start by creating a disordered CuAu fcc structure.
from pymatgen import Structure, Lattice
specie = {"Cu0+": 0.5, "Au0+": 0.5}
cuau = Structure.from_spacegroup("Fm-3m", Lattice.cubic(3.677), [specie], [[0, 0, 0]])
print(cuau)
Explanation: Introduction
This notebook demonstrates how to carry out an ordering of a disordered structure using pymatgen.
End of explanation
from pymatgen.transformations.standard_transformations import OrderDisorderedStructureTransformation
trans = OrderDisorderedStructureTransformation()
ss = trans.apply_transformation(cuau, return_ranked_list=100)
print(len(ss))
print(ss[0])
Explanation: Note that each site is now 50% occupied by Cu and Au. Because the ordering algorithm uses an Ewald summation to rank the structures, you need to explicitly specify the oxidation state for each species, even if it is 0. Let us now perform ordering of these sites using two methods.
Method 1 - Using the OrderDisorderedStructureTransformation
The first method is to use the OrderDisorderedStructureTransformation.
End of explanation
from pymatgen.analysis.structure_matcher import StructureMatcher
matcher = StructureMatcher()
groups = matcher.group_structures([d["structure"] for d in ss])
print(len(groups))
print(groups[0][0])
Explanation: Note that the OrderDisorderedTransformation (with a sufficiently large return_ranked_list parameter) returns all orderings, including duplicates without accounting for symmetry. A computed ewald energy is returned together with each structure. To eliminate duplicates, the best way is to use StructureMatcher's group_structures method, as demonstrated below.
End of explanation
from pymatgen.transformations.advanced_transformations import EnumerateStructureTransformation
specie = {"Cu": 0.5, "Au": 0.5}
cuau = Structure.from_spacegroup("Fm-3m", Lattice.cubic(3.677), [specie], [[0, 0, 0]])
trans = EnumerateStructureTransformation(max_cell_size=3)
ss = trans.apply_transformation(cuau, return_ranked_list=1000)
print(len(ss))
print("cell sizes are %s" % ([len(d["structure"]) for d in ss]))
Explanation: Method 2 - Using the EnumerateStructureTransformation
If you have enumlib installed, you can use the EnumerateStructureTransformation. This automatically takes care of symmetrically equivalent orderings and can enumerate supercells, but is much more prone to parameter sensitivity and cannot handle very large structures. The example below shows an enumeration of CuAu up to cell sizes of 3.
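To quickly inspect one of the enumerated orderings, for example the one with the smallest cell, something like the following sketch can be used:
# pick the enumerated ordering with the fewest sites and print it
smallest = min(ss, key=lambda d: len(d["structure"]))
print(smallest["structure"])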
End of explanation |
9,356 | Given the following text description, write Python code to implement the functionality described.
Description:
Check whether an array of strings can correspond to a particular number X
Function to find the maximum base possible for the number N ; Function to find the decimal equivalent of the number ; Condition to check if the number is convertible to another base ; Function to check that the array can correspond to a number X ; counter to count the numbers those are convertible to X ; Loop to iterate over the array ; Convert the current string to every base for checking whether it will correspond to X from any base ; Condition to check if every number of the array can be converted to X ; Driver Code ; The set of strings in base from [ 2 , 36 ]
| Python Code:
def val(c ) :
if(c >= '0' and c <= '9' ) :
return int(c )
else :
return ord(c) - ord('A') + 10
def toDeci(strr , base ) :
lenn = len(strr )
power = 1
num = 0
for i in range(lenn - 1 , - 1 , - 1 ) :
if(val(strr[i ] ) >= base ) :
return - 1
num += val(strr[i ] ) * power
power = power * base
return num
def checkCorrespond(strr , x ) :
counter = 0
n = len(strr )
for i in range(n ) :
for j in range(2 , 37 ) :
if(toDeci(strr[i ] , j ) == x ) :
counter += 1
break
if(counter == n ) :
print("YES ")
else :
print("NO ")
x = 16
strr =["10000", "20", "16"]
checkCorrespond(strr , x )
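# With x = 16 all three strings match: "10000" is 16 in base 2,
# "20" is 16 in base 8 and "16" is 16 in base 10, so the function prints "YES".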
|
9,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generating symbolic expressions
For larger reaction systems it is preferable to generate the system of ordinary differential equations from some serialized format and then generate the callback using code generation.
In this notebook we will define such a serialized format, and use it to load a larger set of reactions. We represent a reaction as a length-3 tuple of
Step1: the reaction system is still defined as
Step2: We create a helper class to represent the ODE system.
Step3: The reason why we went through this trouble is to be able to create an ODEsys instance from conveniently serialized data. Here is a much larger set of reactions, describing water radiolysis at 298 K and a dose rate of 300 Gy/s (a dose rate not far from that of a nuclear reactor)
Step4: Values correspond to SI units, the concentration of water at 298 K is 55400 mol/m³. Neutral water contains [H+] = [HO-] = 10^-4 mol/m³ | Python Code:
reactions = [
('k1', {'A': 1}, {'B': 1, 'A': -1}),
('k2', {'B': 1, 'C': 1}, {'A': 1, 'B': -1}),
('k3', {'B': 2}, {'B': -1, 'C': 1})
]
names, params = 'A B C'.split(), 'k1 k2 k3'.split()
tex_names = ['[%s]' % n for n in names]
Explanation: Generating symbolic expressions
For larger reaction systems it is preferable to generate the system of ordinary differential equations from some serialized format and then generate the callback using code generation.
In this notebook we will define such a serialized format, and use it to load a larger set of reactions. We represent a reaction as a length-3 tuple of: (rate_const, coeff_powers, net_effect). Representing Robertson's system this way looks like this:
End of explanation
# %load ../scipy2017codegen/chem.py
from operator import mul
from functools import reduce
import sympy as sym
def prod(seq):
return reduce(mul, seq) if seq else 1
def mk_exprs_symbs(rxns, names):
concs = sym.symbols(names, real=True, nonnegative=True)
c_dict = dict(zip(names, concs))
f = {n: 0 for n in names}
for coeff, r_stoich, net_stoich in rxns:
r = sym.S(coeff)*prod([c_dict[rk]**p for rk, p in r_stoich.items()])
for nk, nm in net_stoich.items():
f[nk] += nm*r
return [f[n] for n in names], concs
def mk_rsys(ODEcls, reactions, names, params=(), **kwargs):
f, symbs = mk_exprs_symbs(reactions, names)
return ODEcls(f, symbs, params=map(sym.S, params), **kwargs)
sym.init_printing()
f, symbs = mk_exprs_symbs(reactions, names)
f, symbs
Explanation: the reaction system is still defined as:
$$
A \overset{k_1}{\rightarrow} B \
B + C \overset{k_2}{\rightarrow} A + C \
2 B \overset{k_3}{\rightarrow} B + C
$$
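By the law of mass action these reactions correspond to the ODE system that mk_exprs_symbs constructs symbolically:
$$
\frac{dA}{dt} = -k_1 A + k_2 B C, \qquad
\frac{dB}{dt} = k_1 A - k_2 B C - k_3 B^2, \qquad
\frac{dC}{dt} = k_3 B^2
$$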
We will now write a small convenience function which takes the above representation and creates symbolic expressions for the ODE system:
End of explanation
# %load ../scipy2017codegen/odesys.py
from itertools import chain # Py 2.7 does not support func(*args1, *args2)
import sympy as sym
from scipy.integrate import odeint
class ODEsys(object):
default_integrator = 'odeint'
def __init__(self, f, y, t=None, params=(), tex_names=None, lambdify=None):
assert len(f) == len(y), 'f is dy/dt'
self.f = tuple(f)
self.y = tuple(y)
self.t = t
self.p = tuple(params)
self.tex_names = tex_names
self.j = sym.Matrix(self.ny, 1, f).jacobian(y)
self.lambdify = lambdify or sym.lambdify
self.setup()
@property
def ny(self):
return len(self.y)
def setup(self):
self.lambdified_f = self.lambdify(self.y + self.p, self.f)
self.lambdified_j = self.lambdify(self.y + self.p, self.j)
def f_eval(self, y, t, *params):
return self.lambdified_f(*chain(y, params))
def j_eval(self, y, t, *params):
return self.lambdified_j(*chain(y, params))
def integrate(self, *args, **kwargs):
integrator = kwargs.pop('integrator', self.default_integrator)
return getattr(self, 'integrate_%s' % integrator)(*args, **kwargs)
def integrate_odeint(self, tout, y0, params=(), rtol=1e-8, atol=1e-8, **kwargs):
return odeint(self.f_eval, y0, tout, args=tuple(params), full_output=True,
Dfun=self.j_eval, rtol=rtol, atol=atol, **kwargs)
def print_info(self, info):
if info is None:
return
nrhs = info.get('num_rhs')
if not nrhs:
nrhs = info['nfe'][-1]
njac = info.get('num_dls_jac_evals')
if not njac:
njac = info['nje'][-1]
print("The rhs was evaluated %d times and the Jacobian %d times" % (nrhs, njac))
def plot_result(self, tout, yout, info=None, ax=None):
ax = ax or plt.subplot(1, 1, 1)
for i, label in enumerate(self.tex_names):
ax.plot(tout, yout[:, i], label='$%s$' % label)
ax.set_ylabel('$\mathrm{concentration\ /\ mol \cdot dm^{-3}}$')
ax.set_xlabel('$\mathrm{time\ /\ s}$')
ax.legend(loc='best')
self.print_info(info)
odesys = ODEsys(f, symbs, params=params, tex_names=tex_names)
import numpy as np
tout = np.logspace(-6, 6)
yout, info = odesys.integrate_odeint(tout, [1, 0, 0], [0.04, 1e4, 3e7], atol=1e-9, rtol=1e-9)
import matplotlib.pyplot as plt
%matplotlib inline
fig, axes = plt.subplots(1, 2, figsize=(14, 4))
odesys.plot_result(tout, yout, info, ax=axes[0])
odesys.plot_result(tout, yout, ax=axes[1])
axes[1].set_xscale('log')
axes[1].set_yscale('log')
Explanation: We create a helper class to represent the ODE system.
End of explanation
import json
watrad_data = json.load(open('../scipy2017codegen/data/radiolysis_300_Gy_s.json'))
watrad = mk_rsys(ODEsys, **watrad_data)
print(len(watrad.f), watrad.y[0], watrad.f[0])
Explanation: The reason why we went through this trouble is to be able to create an ODEsys instance from conveniently serialized data. Here is a much larger set of reactions, describing water radiolysis at 298 K and a dose rate of 300 Gy/s (a dose rate not far from that of a nuclear reactor):
End of explanation
tout = np.logspace(-6, 3, 200) # close to one hour of operation
c0 = {'H2O': 55.4e3, 'H+': 1e-4, 'OH-': 1e-4}
y0 = [c0.get(symb.name, 0) for symb in watrad.y]
%timeit watrad.integrate_odeint(tout, y0)
fig, ax = plt.subplots(1, 1, figsize=(14, 6))
watrad.plot_result(tout, *watrad.integrate_odeint(tout, y0), ax=ax)
ax.set_xscale('log')
ax.set_yscale('log')
Explanation: Values correspond to SI units, the concentration of water at 298 K is 55400 mol/m³. Neutral water contains [H+] = [HO-] = 10^-4 mol/m³:
End of explanation |
9,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pricing is a DataFrame with the same structure as the return value of history on quantopian.
Step1: Pandas' built-in groupby and apply operations are extremely powerful. For more information on these features, see http
Step2: The result of a groupby computation is a Hierarchically-Indexed DataFrame where the outermost layer of the index is the groupby key, and the secondary layers are the values from the frame's original index.
Step3: Because our DataFrame is Hierarchically-Indexed, we can query it by our groupby keys.
Step4: If we want to query on the second layer of the index, we have to use .xs with a level argument instead of .loc.
Note that level=1 means the second level of the index, because the levels start at index 0.
Step5: If we just want to work with the original index values, we can drop the extra level from our index. | Python Code:
pricing.head(10)
Explanation: pricing is a DataFrame with the same structure as the return value of history on quantopian.
End of explanation
from pandas.tseries.tools import normalize_date
def my_grouper(ts):
"Function to apply to the index of the DataFrame to break it into groups."
# Returns midnight of the supplied date.
return normalize_date(ts)
def first_thirty_minutes(frame):
"Function to apply to the resulting groups."
return frame.iloc[:30]
Explanation: Pandas' built-in groupby and apply operations are extremely powerful. For more information on these features, see http://pandas.pydata.org/pandas-docs/stable/groupby.html.
End of explanation
data = pricing.groupby(my_grouper).apply(first_thirty_minutes)
data.head(40)
Explanation: The result of a groupby computation is a Hierarchically-Indexed DataFrame where the outermost layer of the index is the groupby key, and the secondary layers are the values from the frame's original index.
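A quick way to confirm the two index levels (a small sketch, not part of the original notebook):
print(data.index.nlevels)                 # 2: the groupby key plus the original timestamps
print(data.index.get_level_values(0)[:3]) # the outer level holds the normalized dates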
End of explanation
from pandas import Timestamp
# This gives us the first thirty minutes of January 3rd.
data.loc[Timestamp('2014-01-03', tz='UTC')]
Explanation: Because our DataFrame is Hierarchically-Indexed, we can query it by our groupby keys.
End of explanation
data.xs(Timestamp('2014-01-03 14:58:00', tz='UTC'), level=1)
Explanation: If we want to query on the second layer of the index, we have to use .xs with a level argument instead of .loc.
Note that level=1 means the second level of the index, because the levels start at index 0.
End of explanation
data_copy = data.copy()
data_copy.index = data_copy.index.droplevel(0)
data_copy.head()
Explanation: If we just want to work with the original index values, we can drop the extra level from our index.
End of explanation |
9,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multivariate Analysis
This is an exercise in multivariate analysis based on Takao Enkawa, "Tahenryo no Data Kaiseki" (Multivariate Data Analysis), 1988, Asakura Shoten.
Step1: Multiple regression analysis
Set up the variables as follows.
$
y =
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{n}
\end{bmatrix}
,
X =
\begin{bmatrix}
x_{11} & \cdots & x_{1i} & \cdots & x_{1k}\
\vdots & \ddots & & & \vdots \
x_{i1} & & x_{ii} & & x_{ik} \
\vdots & & & \ddots & \vdots \
x_{n1} & \cdots & x_{ni} & \cdots & x_{nk}
\end{bmatrix}
,
\beta =
\begin{bmatrix}
\beta_{1} \
\vdots \
\beta_{n}
\end{bmatrix}
,
\epsilon =
\begin{bmatrix}
\epsilon_{1} \
\vdots \
\epsilon_{n}
\end{bmatrix}
$
このとき、最小自乗法により
$\hat \beta = (X'X)^{-1}X'y$となる。
まず教科書p.28の表2.2を読み込む。
Step2: 最初に収益データ(y)に圧力(x1)を説明変数として単回帰モデルをあてはめる。
Step3: 故に、
$\hat y_{i} = 39.1 - 0.645 x_{1i}$
が得られ、寄与率は$R^{2} = 0.29$である。
Step4: 同様にして、温度(x2)を説明変数とした場合は、
Step5: 故に、
$\hat y_{i} = 11.8 +0.184x_{2i}$が得られ、寄与率は$R^{2} = 0.028$である。
Step6: 収率データ(y)に圧力(x1)、温度(x2)、酸度(x3)の3個の説明変数で重回帰分析を行う。 | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Multivariate Analysis
This is an exercise in multivariate analysis based on Takao Enkawa, "Tahenryo no Data Kaiseki" (Multivariate Data Analysis), 1988, Asakura Shoten.
End of explanation
data = pd.read_csv("tab22.csv") # http://shimotsu.web.fc2.com/Site/duo_bian_liang_jie_xi.html
data
Explanation: Multiple regression analysis
Set up the variables as follows.
$
y =
\begin{bmatrix}
y_{1} \
y_{2} \
\vdots \
y_{n}
\end{bmatrix}
,
X =
\begin{bmatrix}
x_{11} & \cdots & x_{1i} & \cdots & x_{1k}\
\vdots & \ddots & & & \vdots \
x_{i1} & & x_{ii} & & x_{ik} \
\vdots & & & \ddots & \vdots \
x_{n1} & \cdots & x_{ni} & \cdots & x_{nk}
\end{bmatrix}
,
\beta =
\begin{bmatrix}
\beta_{1} \
\vdots \
\beta_{n}
\end{bmatrix}
,
\epsilon =
\begin{bmatrix}
\epsilon_{1} \
\vdots \
\epsilon_{n}
\end{bmatrix}
$
Then, by the method of least squares, we obtain
$\hat \beta = (X'X)^{-1}X'y$.
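The coefficient of determination (寄与率) computed in the cells below is $R^{2} = 1 - e'e/\sum_{i}(y_{i} - \bar y)^{2}$ with residuals $e = y - X\hat\beta$.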
First, load Table 2.2 from p. 28 of the textbook.
End of explanation
y = np.asarray(data["y"])
X = np.c_[np.ones(len(data)), np.asarray(data[' x1'])]
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
e = y - X.dot(beta)
R2 = 1 - e.dot(e.T)/sum((y - np.mean(y))**2)
print('y = ' + str(y))
print('X = \n' + str(X))
print('b = \n' + str(beta))
print('R2 = \n' + str(R2))
Explanation: First, fit a simple regression model to the yield data (y) with pressure (x1) as the explanatory variable.
End of explanation
plt.scatter(data[' x1'], data['y'])
plt.plot(data[' x1'], X.dot(beta))
plt.text(18, 31,"$R^{2}=%s$" % round(R2, 3), fontsize=17)
Explanation: Hence we obtain
$\hat y_{i} = 39.1 - 0.645 x_{1i}$
with a coefficient of determination of $R^{2} = 0.29$.
End of explanation
y = np.asarray(data["y"])
X = np.c_[np.ones(len(data)), np.asarray(data[' x2'])]
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
e = y - X.dot(beta)
R2 = 1 - e.dot(e.T)/sum((y - np.mean(y))**2)
print('y = ' + str(y))
print('X = \n' + str(X))
print('b = \n' + str(beta))
print('R2 = \n' + str(R2))
Explanation: Similarly, with temperature (x2) as the explanatory variable:
End of explanation
plt.scatter(data[' x2'], data['y'])
plt.plot(data[' x2'], X.dot(beta))
plt.text(91, 31,"$R^{2}=%s$" % round(R2, 3), fontsize=17)
Explanation: Hence we obtain
$\hat y_{i} = 11.8 + 0.184 x_{2i}$, with a coefficient of determination of $R^{2} = 0.028$.
End of explanation
y = np.asarray(data["y"])
X = np.c_[np.ones(len(data)), np.asarray(data[[' x1', ' x2', ' x3']])]
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
e = y - X.dot(beta)
R2 = 1 - e.dot(e.T)/sum((y - np.mean(y))**2)
print('y = ' + str(y))
print('X = \n' + str(X))
print('b = \n' + str(beta))
print('R2 = \n' + str(R2))
Explanation: Run a multiple regression of the yield data (y) on the three explanatory variables: pressure (x1), temperature (x2) and acidity (x3).
End of explanation |
9,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tools
Purpose
Step1: Lists
Step2: Loops and List Comprehension
Step3: If, elif, else
Step4: Functions
Step5: NumPy | Python Code:
# python has types
print(type(1) == int)
print(type(1.) == float)
print(type(1j) == complex)
type(None)
# What happens if you add values of different types?
print(1 + 1.)
Explanation: Tools
Purpose: To introduce and provide resources for the tools used to build and work with <a href="http://simpeg.xyz">SimPEG</a>
In this tutorial, we cover some of the basic tools you will need for scientific computing with Python. This follows the <a href = http://simpegtutorials.readthedocs.org/en/latest/content/tools.html>Tools</a> section of the <a href= http://simpegtutorials.readthedocs.org/>SimPEG Tutorials</a>.
This development environment is the <a href = http://jupyter.org>Jupyter Notebook</a>.
- To run a cell, is Shift + Enter
- To clear your kernel Esc + 00
- other keyboard shortcuts are available through the help
In this notebook, we cover some basics of:
- <a href="https://www.python.org/">Python</a>
- <a href="http://www.numpy.org/">NumPy</a>
- <a href="https://www.scipy.org/">SciPy</a>
- <a href="http://matplotlib.org/">Matplotlib</a>
Jupyter Notebook
<img src="http://blog.jupyter.org/content/images/2015/02/jupyter-sq-text.png" width="80" href="http://jupyter.org">
A <a href="">notebook</a> containing the following examples is available for you to download
and follow along. In the directory where you downloaded the notebook, open up
a <a href="http://jupyter.org">Jupyter Notebook</a> from a terminal
jupyter notebook
and open tools.ipynb. A few things to note
<img src="../../images/notebookpointers.png">
To execute a cell is Shift + Enter
To restart the kernel (clean your slate) is Esc + 00
Throughout this tutorial, we will show a few tips for working with the
notebook.
Python
<img href="https://www.python.org/" src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Python-logo-notext.svg/220px-Python-logo-notext.svg.png" width=60>
Python is a high-level interpreted computing language. Here we outline a few
of the basics and common trip-ups. For more information and tutorials, check
out the <a href="https://www.python.org/doc/"Python Documentation</a>.
Types
End of explanation
mylist = [6, 5, 4, 3]
type(mylist)
# length of a list
len(mylist)
# python uses zero based indexing
print(mylist[0])
print(mylist[:2]) # counting up
print(mylist[2:]) # starting from
print(mylist[-1]) # going back
Explanation: Lists
End of explanation
n = 10 # try making this larger --> see which is faster
%%time
a = []
for i in range(n): # for loop assignment
a.append(i)
print(a)
%%time
b = [i for i in range(n)] # list comprehension
print(b)
# Enumerating
mylist = ['Monty', 'Python', 'Flying', 'Circus']
for i, val in enumerate(mylist):
print(i, val)
Explanation: Loops and List Comprehension
End of explanation
# Pick a random number between 0 and 100
import numpy as np # n-dimensional array package
number = (100.*np.random.rand(1)).round() # make it an integer
if number > 42:
print('{} is too high'.format(number))
elif number < 42:
print('{} is too low'.format(number))
else:
print('you found the secret to life. {}'.format(number))
Explanation: If, elif, else
End of explanation
def pickAnumber(number):
if number > 42:
print('{} is too high'.format(number))
return False
elif number < 42:
print('{} is too low'.format(number))
return False
else:
print('you found the secret to life. {}'.format(number))
return True
print(pickAnumber(10))
Explanation: Functions
End of explanation
import numpy as np
a = np.array(1) # scalar
print(a.shape)
b = np.array([1]) # vector
print(b.shape)
c = np.array([[1]]) # array
print(c.shape)
# vectors
v = np.random.rand(10)
a = v.T * v
print(a.shape)
b = v.dot(v)
b.shape
# arrays
w = np.random.rand(10,1)
w.shape
M = np.random.rand(10,10)
M*w
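# note: M*w above is an elementwise product with broadcasting, not a matrix-vector product;
# the usual matrix-vector multiplication would be:
M.dot(w).shape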
Explanation: NumPy
End of explanation |
9,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Graph Analysis with networkx
Graph theory deals with various properties and algorithms concerned with Graphs. Although it is very easy to implement a Graph ADT in Python, we will use networkx library for Graph Analysis as it has inbuilt support for visualizing graphs. In future versions of networkx, graph visualization might be removed. When this happens, it is required to modify some parts of this chapter
Standard import statement
Throughout this tutorial, we assume that you have imported networkx as follows
Step1: Creating Graphs
Create an empty graph with no nodes and no edges.
Step2: By definition, a Graph is a collection of nodes (vertices) along with identified pairs of nodes (called edges, links, etc). In NetworkX, nodes can be any hashable object e.g. a text string, an image, an XML object, another Graph, a customized node object, etc. (Note
Step3: add a list of nodes,
Step4: Edges
G can also be grown by adding one edge at a time,
Step5: by adding a list of edges,
Step6: we add new nodes/edges and NetworkX quietly ignores any that are already present.
At this stage the graph G consists of 3 nodes and 3 edges, as can be seen by
Step7: Accessing edges
In addition to the methods Graph.nodes, Graph.edges, and Graph.neighbors, iterator versions (e.g. Graph.edges_iter) can save you from creating large lists when you are just going to iterate through them anyway.
Fast direct access to the graph data structure is also possible using subscript notation.
Warning
Do not change the returned dict--it is part of the graph data structure and direct manipulation may leave the graph in an inconsistent state.
Step8: You can safely set the attributes of an edge using subscript notation if the edge already exists.
Step9: Fast examination of all edges is achieved using adjacency iterators. Note that for undirected graphs this actually looks at each edge twice.
Step10: Convenient access to all edges is achieved with the edges method.
Step11: Adding attributes to graphs, nodes, and edges
Attributes such as weights, labels, colors, or whatever Python object you like, can be attached to graphs, nodes, or edges.
Each graph, node, and edge can hold key/value attribute pairs in an associated attribute dictionary (the keys must be hashable). By default these are empty, but attributes can be added or changed using add_edge, add_node or direct manipulation of the attribute dictionaries named G.graph, G.node and G.edge for a graph G.
Graph attributes
Assign graph attributes when creating a new graph
Step12: Or you can modify attributes later
Step13: Node attributes
Add node attributes using add_node(), add_nodes_from() or G.node
Step14: Note that adding a node to G.node does not add it to the graph, use G.add_node() to add new nodes.
Edge Attributes
Add edge attributes using add_edge(), add_edges_from(), subscript notation, or G.edge.
Step15: Converting Graph to Adjacency matrix
You can use nx.to_numpy_matrix(G) to convert G to numpy matrix. If the graph is weighted, the elements of the matrix are weights. If an edge doesn't exsist, its value will be 0, not Infinity. You have to manually modify those values to Infinity (float('inf'))
Step16: Drawing graphs
NetworkX is not primarily a graph drawing package but basic drawing with Matplotlib as well as an interface to use the open source Graphviz software package are included. These are part of the networkx.drawing package and will be imported if possible
Step17: Now we shall draw the graph using graphviz layout | Python Code:
import networkx as nx
Explanation: Introduction to Graph Analysis with networkx
Graph theory deals with various properties and algorithms concerned with Graphs. Although it is very easy to implement a Graph ADT in Python, we will use networkx library for Graph Analysis as it has inbuilt support for visualizing graphs. In future versions of networkx, graph visualization might be removed. When this happens, it is required to modify some parts of this chapter
Standard import statement
Throughout this tutorial, we assume that you have imported networkx as follows
End of explanation
G = nx.Graph()
Explanation: Creating Graphs
Create an empty graph with no nodes and no edges.
End of explanation
G.add_node(1)
Explanation: By definition, a Graph is a collection of nodes (vertices) along with identified pairs of nodes (called edges, links, etc). In NetworkX, nodes can be any hashable object e.g. a text string, an image, an XML object, another Graph, a customized node object, etc. (Note: Python's None object should not be used as a node as it determines whether optional function arguments have been assigned in many functions.)
Nodes
The graph G can be grown in several ways. NetworkX includes many graph generator functions and facilities to read and write graphs in many formats. To get started, we'll look at simple manipulations. You can add one node at a time,
End of explanation
G.add_nodes_from([2,3])
Explanation: add a list of nodes,
End of explanation
G.add_edge(1,2)
e=(2,3)
G.add_edge(*e) # Unpacking tuple
Explanation: Edges
G can also be grown by adding one edge at a time,
End of explanation
G.add_edges_from([(1,2),(1,3)])
Explanation: by adding a list of edges,
End of explanation
G.number_of_nodes()
G.number_of_edges()
Explanation: we add new nodes/edges and NetworkX quietly ignores any that are already present.
At this stage the graph G consists of 3 nodes and 3 edges, as can be seen by:
End of explanation
G.nodes()
G.edges()
G[1]
G[1][2]
Explanation: Accessing edges
In addition to the methods Graph.nodes, Graph.edges, and Graph.neighbors, iterator versions (e.g. Graph.edges_iter) can save you from creating large lists when you are just going to iterate through them anyway.
Fast direct access to the graph data structure is also possible using subscript notation.
Warning
Do not change the returned dict--it is part of the graph data structure and direct manipulation may leave the graph in an inconsistent state.
End of explanation
G[1][2]['weight'] = 10
G[1][2]
Explanation: You can safely set the attributes of an edge using subscript notation if the edge already exists.
End of explanation
FG=nx.Graph()
FG.add_weighted_edges_from([(1,2,0.125),(1,3,0.75),(2,4,1.2),(3,4,0.375)])
for n,nbrs in FG.adjacency_iter():
for nbr,eattr in nbrs.items():
data=eattr['weight']
if data<0.5: print('(%d, %d, %.3f)' % (n,nbr,data))
list(FG.adjacency_iter())
Explanation: Fast examination of all edges is achieved using adjacency iterators. Note that for undirected graphs this actually looks at each edge twice.
End of explanation
for (u,v,d) in FG.edges(data='weight'):
if d<0.5: print('(%d, %d, %.3f)'%(n,nbr,d))
Explanation: Convenient access to all edges is achieved with the edges method.
End of explanation
G = nx.Graph(day="Friday")
G.graph
Explanation: Adding attributes to graphs, nodes, and edges
Attributes such as weights, labels, colors, or whatever Python object you like, can be attached to graphs, nodes, or edges.
Each graph, node, and edge can hold key/value attribute pairs in an associated attribute dictionary (the keys must be hashable). By default these are empty, but attributes can be added or changed using add_edge, add_node or direct manipulation of the attribute dictionaries named G.graph, G.node and G.edge for a graph G.
Graph attributes
Assign graph attributes when creating a new graph
End of explanation
G.graph['day']='Monday'
G.graph
Explanation: Or you can modify attributes later
End of explanation
G.add_node(1,time = '5pm')
G.add_nodes_from([3], time='2pm')
G.node[1]
G.node[1]['room'] = 714
G.nodes(data=True)
Explanation: Node attributes
Add node attributes using add_node(), add_nodes_from() or G.node
End of explanation
G.add_edge(1, 2, weight=4.7 )
G[1][2]
G.add_edges_from([(3,4),(4,5)], color='red')
G.add_edges_from([(1,2,{'color':'blue'}), (2,3,{'weight':8})])
G[1][2]['weight'] = 4.7
G.edge[1][2]['weight'] = 4
G.edges(data=True)
Explanation: Note that adding a node to G.node does not add it to the graph, use G.add_node() to add new nodes.
Edge Attributes
Add edge attributes using add_edge(), add_edges_from(), subscript notation, or G.edge.
End of explanation
nx.to_numpy_matrix(G)
nx.to_numpy_matrix(FG)
Explanation: Converting Graph to Adjacency matrix
You can use nx.to_numpy_matrix(G) to convert G to a numpy matrix. If the graph is weighted, the elements of the matrix are the weights. If an edge doesn't exist, its value will be 0, not Infinity. You have to manually modify those values to Infinity (float('inf'))
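One way to do that replacement (a sketch; off-diagonal zeros are treated as missing edges):
import numpy as np
A = np.asarray(nx.to_numpy_matrix(FG))
mask = (A == 0) & ~np.eye(A.shape[0], dtype=bool)  # off-diagonal zeros = absent edges
A[mask] = float('inf')
A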
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
nx.draw(FG)
Explanation: Drawing graphs
NetworkX is not primarily a graph drawing package but basic drawing with Matplotlib as well as an interface to use the open source Graphviz software package are included. These are part of the networkx.drawing package and will be imported if possible
End of explanation
from networkx.drawing.nx_agraph import graphviz_layout
pos = graphviz_layout(FG)
plt.axis('off')
nx.draw_networkx_nodes(FG,pos,node_color='g',alpha = 0.8) # draws nodes
nx.draw_networkx_edges(FG,pos,edge_color='b',alpha = 0.6) # draws edges
nx.draw_networkx_edge_labels(FG,pos,edge_labels = nx.get_edge_attributes(FG,'weight')) # edge labels
nx.draw_networkx_labels(FG,pos) # node labels
Explanation: Now we shall draw the graph using graphviz layout
End of explanation |
9,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self Employment Data 2015
from OECD
Step1: Solutions with Pandas
Basic Calculations and Statistics
Exercise 1
Calculate for each country the overallselfemployment_rate
Step2: Exercise 2
Calculate
- maximum
- minimum
- sum
- mean
- standard deviation
for/of all selfemployment_rates.
Step3: Exercise 3
Find the Country with the highest selfemployment_rate.
Step4: Exercise 4
Find the the sum of all selfemployment_rates, which are between 10-15.
Step5: Exercise 5
a) Plot a barchart of the selfemployment_rates by country (as in [Basic-Plotting]. Use pandas plotting facilities).
Use Pandas reindex to get the labeling in place.
Step6: b) Plot a barchart of the male vs. female selfemployment_rates by country (as in Basic-Plotting, but using pandas plotting facilities).
Use Pandas reindex to get the labeling in place.
Step7: Aggregetions
Exercise 6
Calculate the mean of the selfemployment-rates per continent. | Python Code:
countries = ['AUS', 'AUT', 'BEL', 'CAN', 'CZE', 'FIN', 'DEU', 'GRC', 'HUN', 'ISL', 'IRL', 'ITA', 'JPN',
'KOR', 'MEX', 'NLD', 'NZL', 'NOR', 'POL', 'PRT', 'SVK', 'ESP', 'SWE', 'CHE', 'TUR', 'GBR',
'USA', 'CHL', 'COL', 'EST', 'ISR', 'RUS', 'SVN', 'EU28', 'EA19', 'LVA']
male_selfemployment_rates = [12.13246, 15.39631, 18.74896, 9.18314, 20.97991, 18.87097,
13.46109, 39.34802, 13.3356, 16.83681, 25.35344, 29.27118,
12.06516, 27.53898, 31.6945, 19.81751, 17.68489, 9.13669,
24.15699, 22.95656, 19.00245, 21.16428, 13.93171, 8.73181,
30.73483, 19.11255, 7.48383, 25.92752, 52.27145, 12.05042,
15.8517, 8.10048, 19.02411, 19.59021, 19.1384, 14.75558]
female_selfemployment_rates = [8.18631, 10.38607, 11.07756, 8.0069, 12.78461,
9.42761, 7.75637, 29.56566, 8.00408, 7.6802, 8.2774, 18.33204,
9.7313, 23.56431, 32.81488, 13.36444, 11.50045, 4.57464,
17.63891, 13.92678, 10.32846, 12.82925, 6.22453, 9.28793,
38.32216, 10.21743, 5.2896, 25.24502, 49.98448, 6.624,
9.0243, 6.26909, 13.46641, 11.99529, 11.34129, 8.88987]
countries_by_continent = {'AUS':'AUS', 'AUT':'EUR', 'BEL':'EUR', 'CAN':'AM',
'CZE':'EUR', 'FIN':'EUR', 'DEU':'EUR', 'GRC':'EUR',
'HUN':'EUR', 'ISL':'EUR', 'IRL':'EUR', 'ITA':'EUR',
'JPN':'AS', 'KOR':'AS', 'MEX':'AM', 'NLD':'EUR',
'NZL':'AUS', 'NOR':'EUR', 'POL':'EUR', 'PRT':'EUR',
'SVK':'EUR', 'ESP':'EUR', 'SWE':'EUR', 'CHE':'EUR',
'TUR':'EUR', 'GBR':'EUR', 'USA':'AM' , 'CHL':'AM',
'COL':'AM' , 'EST':'EUR', 'ISR':'AS', 'RUS':'EUR',
'SVN':'EUR', 'EU28':'EUR','EA19':'AS', 'LVA':'EUR'}
import pandas as pd
df_male_selfemployment_rates = pd.DataFrame({'countries':countries, 'selfemployment_rates':male_selfemployment_rates})
df_male_selfemployment_rates.head()
df_female_selfemployment_rates = pd.DataFrame({'countries':countries, 'selfemployment_rates':female_selfemployment_rates})
df_female_selfemployment_rates.head()
df_country_continent = pd.DataFrame(list(countries_by_continent.items()), columns=['country','continent'])
df_country_continent.head()
Explanation: Self Employment Data 2015
from OECD
End of explanation
# TODO
Explanation: Solutions with Pandas
Basic Calculations and Statistics
Exercise 1
Calculate for each country the overallselfemployment_rate:<br>
df_selfemployment_rate:=(male_selfemployment_rates+female_selfemployment_rates)/2
(assumes that #women ~#men)
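One possible solution sketch for Exercise 1 (the column names simply follow the frames defined above):
overall_rates = [(m + f) / 2 for m, f in zip(male_selfemployment_rates, female_selfemployment_rates)]
df_selfemployment_rate = pd.DataFrame({'countries': countries, 'selfemployment_rates': overall_rates})
df_selfemployment_rate.head()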
End of explanation
# TODO
Explanation: Exercise 2
Calculate
- maximum
- minimum
- sum
- mean
- standard deviation
for/of all selfemployment_rates.
End of explanation
# TODO
Explanation: Exercise 3
Find the Country with the highest selfemployment_rate.
End of explanation
# TODO
Explanation: Exercise 4
Find the the sum of all selfemployment_rates, which are between 10-15.
End of explanation
# TODO
Explanation: Exercise 5
a) Plot a barchart of the selfemployment_rates by country (as in [Basic-Plotting]. Use pandas plotting facilities).
Use Pandas reindex to get the labeling in place.
End of explanation
# TODO
Explanation: b) Plot a barchart of the male vs. female selfemployment_rates by country (as in Basic-Plotting, but using pandas plotting facilities).
Use Pandas reindex to get the labeling in place.
End of explanation
# TODO group by Continent
Explanation: Aggregations
Exercise 6
Calculate the mean of the selfemployment-rates per continent.
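A possible sketch for Exercise 6, merging the country/continent mapping into one of the rate frames before grouping (the same pattern works with the overall-rate frame from Exercise 1):
merged = df_male_selfemployment_rates.merge(df_country_continent, left_on='countries', right_on='country')
merged.groupby('continent')['selfemployment_rates'].mean()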
End of explanation |
9,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning Internal Representation by Error Propagation
An example implementation of the following classic paper that changed the history of deep learning
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1985). Learning internal representations by error propagation (No. ICS-8506). CALIFORNIA UNIV SAN DIEGO LA JOLLA INST FOR COGNITIVE SCIENCE.
Related Paper
Varona-Moya, S., & Cobos, P. L. (2012, September). Analogical inferences in the family trees task
Step1: Relationship Representation
Step2: Code | Python Code:
person_1_input = [[1.0 if target == person else 0.0 for target in range(24) ] for person in range(24)]
person_2_output = person_1_input[:] # Data copy - Person 1 is the same data as person 2.
relationship_input = [[1.0 if target == relationship else 0.0 for target in range(12) ] for relationship in range(12)]
Explanation: Learning Internal Representation by Error Propagation
An example implementation of the following classic paper that changed the history of deep learning
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1985). Learning internal representations by error propagation (No. ICS-8506). CALIFORNIA UNIV SAN DIEGO LA JOLLA INST FOR COGNITIVE SCIENCE.
Related Paper
Varona-Moya, S., & Cobos, P. L. (2012, September). Analogical inferences in the family trees task: a review. In International Conference on Artificial Neural Networks (pp. 221-228). Springer Berlin Heidelberg.
Paccanaro, A., & Hinton, G. E. (2001). Learning distributed representations of concepts using linear relational embedding. IEEE Transactions on Knowledge and Data Engineering, 13(2), 232-244.
Network structure
Data Creation
End of explanation
# (colin has-father james)
# (colin has-mother victoria)
# (james has-wife victoria)
# (charlotte has-brother colin)
# (victoria has-brother arthur)
# (charlotte has-uncle arthur)
# The list below encodes the relationships from the family tree in the same way as the examples above.
# [input_person, relationship, output_person]
triple_relationship = [[0, 3, 1], [0, 4, 3], [0, 5, 4],
[1, 2, 0], [1, 4, 3], [1, 5, 4],
[2, 2, 3],
[3, 3, 2], [3, 0, 0], [3, 1, 1], [3, 9, 4], [3, 10, 10], [3, 11, 11],
[4, 2, 5], [4, 0, 0], [4, 1, 1], [4, 5, 3], [4, 4, 10], [4, 5, 11],
[5, 3, 4], [5, 0, 6], [5, 1, 7], [5, 9, 9], [5, 4, 10], [5, 5, 11],
[6, 3, 7], [6, 4, 5], [6, 5, 8],
[7, 2, 6], [7, 4, 5], [7, 5, 8],
[8, 2, 9], [8, 0, 6], [8, 1, 7], [8, 8, 5], [8, 10, 10], [8, 11, 11],
[9, 3, 8],
[10, 0, 5], [10, 1, 4], [10, 9, 11], [10, 6, 3], [10, 7, 8],
[11, 0, 5], [11, 1, 4], [11, 8, 10], [11, 6, 3], [11, 7, 8],
[12, 3, 13], [12, 4, 15], [12, 5, 16],
[13, 2, 12], [13, 4, 15], [13, 5, 16],
[14, 2, 15],
[15, 3, 14], [15, 0, 12], [15, 1, 13], [15, 9, 16], [15, 10, 22], [15, 11, 23],
[16, 2, 17], [16, 0, 12], [16, 1, 15], [16, 5, 15], [16, 4, 22], [16, 5, 23],
[17, 3, 16], [17, 0, 18], [17, 1, 19], [17, 9, 21], [17, 4, 22], [17, 5, 23],
[18, 3, 19], [18, 4, 17], [18, 5, 20],
[19, 2, 18], [19, 4, 17], [19, 5, 20],
                       [20, 2, 21], [20, 0, 18], [20, 1, 19], [20, 8, 17], [20, 10, 22], [20, 11, 23],
[21, 3, 20],
[22, 0, 17], [22, 1, 16], [22, 9, 23], [22, 6, 15], [22, 7, 20],
[23, 0, 17], [23, 1, 16], [23, 8, 22], [23, 6, 15], [23, 7, 20]]
Explanation: Relationship Representation
End of explanation
import tensorflow as tf
import numpy as np
x1_data = np.array([person_1_input[data[0]] for data in triple_relationship],dtype=np.float32)
x2_data = np.array([relationship_input[data[1]] for data in triple_relationship],dtype=np.float32)
y_data = np.array([person_2_output[data[2]] for data in triple_relationship],dtype=np.float32)
X1 = tf.placeholder(tf.float32, [None, 24])
X2 = tf.placeholder(tf.float32, [None, 12])
Y = tf.placeholder(tf.float32, [None, 24])
# Weights and bias
W11 = tf.Variable(tf.zeros([24, 6]))
W12 = tf.Variable(tf.zeros([12, 6]))
W21 = tf.Variable(tf.zeros([6, 12]))
W22 = tf.Variable(tf.zeros([6, 12]))
W3 = tf.Variable(tf.zeros([12, 24]))
b11 = tf.Variable(tf.zeros([6]))
b12 = tf.Variable(tf.zeros([6]))
b2 = tf.Variable(tf.zeros([12]))
b3 = tf.Variable(tf.zeros([24]))
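# Editor's note (not in the original): initialising every weight to zero keeps all hidden
# units of a layer identical during training, which works against learning the distributed
# representations the paper is about. A common alternative is small random values, e.g.:
# W11 = tf.Variable(tf.truncated_normal([24, 6], stddev=0.1))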
# Hypothesis
L11 = tf.sigmoid(tf.matmul(X1, W11) + b11) # 24 by 6 mat
L12 = tf.sigmoid(tf.matmul(X2, W12) + b12) # 12 by 6 mat
L2 = tf.sigmoid(tf.matmul(L11, W21) + tf.matmul(L12, W22) + b2)
hypothesis = tf.nn.softmax(tf.matmul(L2, W3) + b3)
# Minimize cost.
a = tf.Variable(0.01)
cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(hypothesis), reduction_indices=1))
train_step = tf.train.AdamOptimizer(a).minimize(cost)
# Initialize all variables.
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
# Loop
for i in range(1000):
sess.run(train_step, feed_dict={
X1: x1_data,
X2: x2_data,
Y:y_data}
)
if i % 100 == 0:
print(
i,
sess.run(cost, feed_dict={X1:x1_data, X2:x2_data, Y:y_data})
)
correct_prediction = tf.equal(tf.argmax(hypothesis,1), tf.argmax(Y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={X1: x1_data, X2: x2_data, Y:y_data}))
print(sess.run(tf.argmax(hypothesis,1), feed_dict={X1: x1_data, X2: x2_data, Y:y_data}))
print(sess.run(tf.argmax(Y,1), feed_dict={X1: x1_data, X2: x2_data, Y:y_data}))
print()
data = sess.run(W11, feed_dict={X1: x1_data, X2: x2_data, Y:y_data})
data = data.transpose()
data.shape
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
for index, values in enumerate(data):
plt.subplot(2, 4, index + 1)
values.shape = (2,12)
plt.axis('off')
plt.imshow(values, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Case %i' % index)
Explanation: Code
End of explanation |
9,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
layer = tf.layers.batch_normalization(layer, training=is_training)
return layer
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def conv_layer(prev_layer, layer_depth, is_training):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool, name="is_training")
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs,
labels: batch_ys,
is_training: True})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
def fully_connected(prev_layer, num_units):
Create a fully connectd layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
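# One way to fill in this TODO (editor's sketch, not the official solution, given a different
# name so it does not shadow the version above). It assumes an extra `is_training` argument,
# as in the tf.layers version earlier; decay=0.99 and epsilon=1e-3 are illustrative choices.
def fully_connected_bn(prev_layer, num_units, is_training):
    # Linear layer without bias or activation: beta replaces the bias, ReLU comes after BN.
    layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
    gamma = tf.Variable(tf.ones([num_units]))
    beta = tf.Variable(tf.zeros([num_units]))
    pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
    pop_variance = tf.Variable(tf.ones([num_units]), trainable=False)
    epsilon = 1e-3

    def batch_norm_training():
        # Per-feature statistics of the current batch, plus moving-average updates.
        batch_mean, batch_variance = tf.nn.moments(layer, [0])
        decay = 0.99
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
        with tf.control_dependencies([train_mean, train_variance]):
            return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)

    def batch_norm_inference():
        # Use the population statistics accumulated during training.
        return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)

    batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
    return tf.nn.relu(batch_normalized_output)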
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
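# A matching sketch for the convolutional case (editor's sketch, not the official solution).
# The conv bias is dropped (beta takes its role), moments are taken over batch/height/width so
# there is one mean/variance per output channel, and ReLU is applied after normalization.
# The `is_training` argument, decay and epsilon values are assumptions.
def conv_layer_bn(prev_layer, layer_depth, is_training):
    strides = 2 if layer_depth % 3 == 0 else 1
    in_channels = prev_layer.get_shape().as_list()[3]
    out_channels = layer_depth * 4
    weights = tf.Variable(
        tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
    layer = tf.nn.conv2d(prev_layer, weights, strides=[1, strides, strides, 1], padding='SAME')
    gamma = tf.Variable(tf.ones([out_channels]))
    beta = tf.Variable(tf.zeros([out_channels]))
    pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
    pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False)
    epsilon = 1e-3

    def batch_norm_training():
        batch_mean, batch_variance = tf.nn.moments(layer, [0, 1, 2])
        decay = 0.99
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
        with tf.control_dependencies([train_mean, train_variance]):
            return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon)

    def batch_norm_inference():
        return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon)

    batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference)
    return tf.nn.relu(batch_normalized_output)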
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation |
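# Editor's note (sketch of the pieces that would change in train() once the layer helpers take
# an `is_training` tensor, mirroring the tf.layers version earlier in this notebook):
#   is_training = tf.placeholder(tf.bool, name="is_training")
#   layer = conv_layer(layer, layer_i, is_training)
#   layer = fully_connected(layer, 100, is_training)
#   sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# and feed `is_training: False` when evaluating. No UPDATE_OPS handling is needed here,
# because the tf.nn sketches update their population statistics via explicit assign ops.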
9,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-cc', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-CC
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
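Because this property has cardinality 1.N, several of the valid choices may apply. A minimal sketch, assuming (as the "# PROPERTY VALUE(S)" comment above suggests) that each applicable choice is passed to DOC.set_value:
# Illustrative only -- keep just the processes your snow scheme actually represents.
# DOC.set_value("snow interception")
# DOC.set_value("snow melting")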
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
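STRING properties take free text. A hypothetical example (the variable names below are placeholders, not a claim about any model):
# Illustrative placeholder only:
# DOC.set_value("snow depth, snow water equivalent, snow temperature")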
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo is a function of*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
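BOOLEAN properties are set with a bare True or False rather than a string, e.g. (illustrative only):
# Illustrative placeholder only:
# DOC.set_value(True)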
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
9,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hmwk #1
Step1: Represent the following table using a data structure of your choice
Step2: Calculate the mean temperature and mean humidity
Step3: Print outlook and play for those days where the temperature is greater than the average temperature
Step4: Print outlook and play for those days where the humidity is greater than the average humidity
Step5: Convert the temperature to Celsius and add a new column therefore in the table.
Step6: #1
How often do you play tennis independent of the other attributes?
From the output we can see that we played tennis on 9 days, i.e. with a probability of 9/14
Step7: #2
How often do you play tennis when it is "sunny"?
From the output we can see that we played when it was sunny on 2 days or 2/14
Step8: #3
*Compare the average, minimum and maximum temperature when you play tennis? *
Step9: #4
Compare the average, minimum and maximum humidity when you play tennis?
Step10: #5
Plot a scatter plot (x,y diagram) of humidity (x) and temperature (y) when you play tennis compared to when you do not play tennis.
Step11: The only inference I can make from the scatter plot above is that you always play when the humidity is between 70 and 85. Temperature seems to play no part in the decision process, as the play and no-play points are evenly distributed across the y axis (Temperature).
#2
We have a set of 9 files, where the first 7 and the last 2 have a different format. First I removed the header information from the files and removed any superfluous line breaks; I then read them into pandas in two respective groups. I then had to normalize the dates of the second dataset to match the dates of the first. Also, I had to normalize the values of the first dataset because they were in units of 1000, so I converted them to units of 1.
Cleaning & Normalization
Step12: Merging
Step13: Plot CA vs AK
Step14: New England vs Southwest
In order to plot these values I have to do some feature engineering to create columns for the respective regions that are not in the original dataset.
For New England I used
Step15: Greatest Change in Population
We can quantify population growth in direct terms or relatively, using percentages
Step16: As you can see from the table above, CA had the largest growth in terms of raw numbers for the time period. However, we can gain additional insights by looking at percentage growth.
Relative Growth
We can also measure growth as a percentage difference
Step17: Some states had no net growth and some had negative growth
Step18: 3
4
We will use a Decision Tree to build a model that will allow us to classify wines into one of three categories.
Step19: Test/Train Split
Split the data set into 75% for training and 25% for testing
Step20: Train Model
I used the defaults for the DecisionTreeClassifier class in scikit-learn
Step21: Evaluation
Evaluate based on confusion matrix how well the model performed on training vs. testing.
Step22: As you can see from the confusion matrix, inputs of Class 1 & 2 were perfectly classified. There were only 2 mistakes on Class 3.
5
Step23: What are the statistical distributions of variables using no class?
Step24: How much missing data is there?
Step25: How do distributions differ by each gender?
Step26: Describe summary statistics for each attribute.
Step27: Visualize potential difference via the scatter plots.
Are there any ‘high’ correlations between variables?
We can see a correlation between height and weight
Step28: Create a new variable for the weight in lbs
Creating a new column for pounds obviously does not create any new correlations, because it is simply a linear function of the kg weight.
Step29: Add new variable weight + height.
Step30: BMI
There appear to be obese males and females in the dataset
Step31: Split Data By Sport | Python Code:
import pandas as pd
%pylab inline
Explanation: Hmwk #1
End of explanation
df = pd.read_csv("weather.csv", header=0, index_col=0)
df
Explanation: Represent the following table using a data structure of your choice
End of explanation
mean_temp = df["temperature"].mean()
mean_temp
mean_humidity = df["humidity"].mean()
mean_humidity
Explanation: Calculate the mean temperature and mean humidity
End of explanation
temp_selector = df['temperature'] > mean_temp
df[temp_selector][["outlook", "play"]]
Explanation: Print outlook and play for those days where the temperature is greater than the average temperature
End of explanation
humidity_selector = df['humidity'] > mean_humidity
df[humidity_selector][["outlook", "play"]]
Explanation: Print outlook and play for those days where the humidity is greater than the average humidity
End of explanation
df["temp_C"] = ( df["temperature"] - 32 ) * (5/9.0)
df
Explanation: Convert the temperature to Celsius and add a new column therefore in the table.
End of explanation
play_selector = df["play"]=="yes"
play_days = df[play_selector]
len(play_days)
Explanation: #1
How often do you play tennis independent of the other attributes?
From the output we can see that we played tennis on 9 days, i.e. with a probability of 9/14
End of explanation
sunny_selector = df["outlook"]=="sunny"
sunny_play_days = df[sunny_selector & play_selector]
len(sunny_play_days)
Explanation: #2
How often do you play tennis when it is "sunny"?
From the output we can see that we played when it was sunny on 2 days or 2/14
End of explanation
print play_days["temperature"].mean()
print play_days["temperature"].min()
print play_days["temperature"].max()
Explanation: #3
*Compare the average, minimum and maximum temperature when you play tennis? *
End of explanation
print play_days["humidity"].mean()
print play_days["humidity"].min()
print play_days["humidity"].max()
Explanation: #4
Compare the average, minimum and maximum humidity when you play tennis?
End of explanation
pyplot.ylabel('Temperature')
pyplot.xlabel("Humidity")
pyplot.scatter(x=play_days["humidity"], y=play_days["temperature"], c='green')
no_play_days = df[df["play"]=="no"]
pyplot.scatter(x=no_play_days["humidity"], y=no_play_days["temperature"], c='red', marker="x")
pyplot.legend(['Play', "No Play"])
Explanation: #5
Plot a scatter plot (x,y diagram) of humidity (x) and temperature (y) when you play tennis compared to when you do not play tennis.
End of explanation
#these are in units of thousands, need to scale
df1 = pd.read_fwf("processed/st0009ts.txt", header=0, index_col=0, thousands=",").transpose()
df2 = pd.read_fwf("processed/st1019ts.txt", header=0, index_col=0, thousands=",").transpose()
df3 = pd.read_fwf("processed/st2029ts.txt", header=0, index_col=0, thousands=",").transpose()
df4 = pd.read_fwf("processed/st3039ts.txt", header=0, index_col=0, thousands=",").transpose()
df5 = pd.read_fwf("processed/st4049ts.txt", header=0, index_col=0, thousands=",").transpose()
df6 = pd.read_fwf("processed/st5060ts.txt", header=0, index_col=0, thousands=",").transpose()
df7 = pd.read_fwf("processed/st6070ts.txt", header=0, index_col=0, thousands=",").transpose()
df = pd.concat([df1, df2, df3, df4, df5, df6, df7])
#scale up to unit of 1
df = df.apply(lambda x: x*1000)
#for some reason, this dataset format uses '.'s in U.S. but doesn't for anything else. We'll normalize it here
df[["U.S."]]
df.rename(columns={'U.S.': 'US'}, inplace=True)
#the file format changes here
transform = lambda x: "19"+x[2:4]
df_9 = pd.read_fwf("processed/st7080ts.txt", header=0, index_col=0, thousands=",").transpose()
df_9.index = df_9.index.map(transform)
df_10 = pd.read_fwf("processed/st8090ts.txt", header=0, index_col=0, thousands=",").transpose()
df_10.index = df_10.index.map(transform)
df_10
df_2 = pd.concat([df_9, df_10])
Explanation: The only inference I can make from the scatter plot above is that you always play when the humidity is between 70 and 85. Temperature seems to play no part in the decision process, as the play and no-play points are evenly distributed across the y axis (Temperature).
#2
We have a set of 9 files, where the first 7 and the last 2 have a different format. First I removed the header information from the files and removed any superfluous line breaks; I then read them into pandas in two respective groups. I then had to normalize the dates of the second dataset to match the dates of the first. Also, I had to normalize the values of the first dataset because they were in units of 1000, so I converted them to units of 1.
Cleaning & Normalization
End of explanation
# now merge the two together to get the complete merged df
df = pd.concat([df, df_2])
df=df.sort_index() #sort
Explanation: Merging
End of explanation
df[["CA", "AK"]].plot()
Explanation: Plot CA vs AK
End of explanation
df["New England"] = df[["CT", "ME", "MA", "NH", "RI", "VT"]].sum(axis=1)
df["Southwest"] = df[["AZ", "CA", "CO", "NV", "NM", "TX", "UT"]].sum(axis=1)
df[["New England", "Southwest"]].plot()
Explanation: New England vs Southwest
In order to plot these values I have to do some feature engineering to create columns for the respective regions that are not in the original dataset.
For New England I used: CT, ME, MA, NH, RI, VT
For the Southwest, I used: AZ, CA, CO, NV, NM, TX, UT
Feature Engineering
End of explanation
#remove a few composite columns:
df.drop('US', axis=1, inplace=True)
df.drop('Southwest', axis=1, inplace=True)
df.drop('New England', axis=1, inplace=True)
delta = {}
rel_delta={}
for state in df.columns:
delta[state]=df[state].iloc[-1] - df[state].iloc[50]
rel_delta[state] = (df[state].iloc[-1] - df[state].iloc[50]) / df[state].iloc[50]*1. * 100
ddf=pd.DataFrame(delta, index=["delta"]).transpose()
ddf = ddf.sort(["delta"], ascending=False)
ddf.head()
Explanation: Greatest Change in Population
We can quantify population growth in direct terms or relatively, using percentages:
Magnitude Delta
We don't have measurements for Alaska until 1950, so if we compare growth from 1950 in terms of pure magnitude, the top states are shown below:
End of explanation
ddp=pd.DataFrame(rel_delta, index=["% change"]).transpose()
ddp = ddp.sort(["% change"], ascending=False)
ddp.head()
Explanation: As you can see from the table above, CA had the largest growth in terms of raw numbers for the time period. However, we can gain additional insights by looking at percentage growth.
Relative Growth
We can also measure growth as a percentage difference: as you can see, Nevada had the largest percent growth from 1950 to 1990.
End of explanation
ddp.tail(n=10)
Explanation: Some states had no net growth and some had negative growth:
End of explanation
from sklearn import tree
import numpy as np
wine = np.loadtxt("wine.data", delimiter=',')
#Get the targets (first column of file)
Y = wine[:, 0]
#Remove targets from input data
X = wine[:, 1:]
Explanation: 3
4
We will use a Decision Tree to build a model that will allow us to classify wines into one of three categories.
End of explanation
#lets split into a test and training set
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=9)
Explanation: Test/Train Split
Split the data set into 75% for training and 25% for testing
End of explanation
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X_train, Y_train)
clf.score(X_test, Y_test)
Explanation: Train Model
I used the defaults for the DecisionTreeClassifier class in scikit-learn
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(3)
plt.xticks(tick_marks, ["1", "2", "3"], rotation=45)
plt.yticks(tick_marks, ["1", "2", "3"])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
from sklearn.metrics import confusion_matrix
y_true = Y_test
y_pred = clf.predict(X_test)
cm = confusion_matrix(y_true, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
plt.show()
Explanation: Evaluation
Evaluate based on confusion matrix how well the model performed on training vs. testing.
End of explanation
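The cell above only scores the held-out test split. Since the stated goal is to compare training and testing performance, here is a small sketch of the corresponding training-set confusion matrix, reusing the objects already defined above (clf, X_train, Y_train, confusion_matrix, plot_confusion_matrix):
# Sketch: confusion matrix on the training split, for comparison with the test-set matrix above.
cm_train = confusion_matrix(Y_train, clf.predict(X_train))
print('Training confusion matrix')
print(cm_train)
plt.figure()
plot_confusion_matrix(cm_train, title='Confusion matrix (training set)')
plt.show()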
odf = pd.read_csv("hmwk_1_data/AHW_1.csv")
odf.head()
Explanation: As you can see from the confusion matrix, inputs of Class 1 & 2 were perfectly classified. There were only 2 mistakes on Class 3.
5
End of explanation
odf["Age"].plot(kind="hist")
odf["Age"].describe()
odf["Weight"].plot(kind="hist")
odf["Weight"].describe()
odf["Height"].plot(kind="hist")
odf["Height"].describe()
Explanation: What are the statistical distributions of variables using no class?
End of explanation
odf.isnull().sum()
Explanation: How much missing data is there?
End of explanation
male = odf["Sex"]=="M"
female = odf["Sex"]=="F"
odf[male]["Age"].plot(kind="hist")
odf[female]["Age"].plot(kind="hist")
odf[male]["Weight"].plot(kind="hist")
odf[female]["Weight"].plot(kind="hist")
odf[male]["Height"].plot(kind="hist")
odf[female]["Height"].plot(kind="hist")
Explanation: How do distributions differ by each gender?
End of explanation
odf.describe()
Explanation: Describe summary statistics for each attribute.
End of explanation
from pandas.tools.plotting import scatter_matrix
pd.scatter_matrix(odf, alpha=0.2, figsize=(10, 10), diagonal='kde')
Explanation: Visualize potential difference via the scatter plots.
Are there any ‘high’ correlations between variables?
We can see a correlation between height and weight
End of explanation
odf["lbs"] = odf["Weight"] * 2.20462
odf.head()
pd.scatter_matrix(odf, alpha=0.2, figsize=(10, 10), diagonal='kde')
Explanation: Create a new variable for the weight in lbs
Creating a new column for pounds obviously does not create any new correlations, because it is simply a linear function of the kg weight.
End of explanation
odf["w+h"] = odf["Weight"] + odf["Height"]
odf.drop('lbs', axis=1, inplace=True)
odf.head()
pd.scatter_matrix(odf, alpha=0.2, figsize=(10, 10), diagonal='kde')
Explanation: Add new variable weight + height.
End of explanation
odf["BMI"] = odf["Weight"] / ((odf["Height"]*0.01)**2)
odf.head()
odf[male]["BMI"].plot(kind="hist")
odf[female]["BMI"].plot(kind="hist")
print odf[male]["BMI"].describe()
print
print odf[female]["BMI"].describe()
Explanation: BMI
There appear to be obese males and females in the dataset
End of explanation
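One quick way to put numbers behind that claim is to count athletes above the conventional BMI obesity cut-off of 30, split by sex (a sketch using the odf frame built above):
# Count of athletes with BMI above 30, per sex (30 is the usual WHO obesity threshold).
print(odf[odf["BMI"] > 30].groupby("Sex").size())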
sports = list(set(odf["Sport"]))
sports
# choose 3 random sports
sports
import random
random_sports = random.sample(sports, 3)
for sport in random_sports:
sport_selector = odf["Sport"] == sport
odf[sport_selector].plot(kind="scatter", x="Height", y="Weight", marker='x')
Explanation: Split Data By Sport
End of explanation |
9,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to objects and classes in Python
We will touch upon some basic aspects, including
- code reuse
- abstraction
- Encapsulation
- subclasses and hierarchies
Step1: In the above examples, it becomes clear that there is much repetition, and we can make the code more compact. Let us abstract common functionality into an abstract class.
Step3: Iterators
How do I add an iteration facility to my own objects so that they can be used with for loops?
Step4: Generators
Use functions to create iterators, instead of classes
Step5: Summing series
We want to efficiently find the sum
$$
\sum_{n = 1}^N \frac{1}{n^2}
$$
As $N$ becomes larger, this approaches $\frac{\pi^2}{6}$.
Step6: The key thing to note is that this is much more efficient than generating a list of terms in memory and summing it. That is, more efficient than
Step7: Decorators
Alter the behaviour of functions (somewhat)
Example | Python Code:
class Dog:
def __init__(self, name):
self.age = 0
self.name = name
self.noise = "Woof!"
self.food = "dog biscuits"
def make_sound(self):
print(self.noise)
def eat_food(self):
print("Eating " + self.food + ".")
def increase_age(self, n = 1):
self.age = self.age + n
d1 = Dog('Buster')
d1.make_sound()
d2 = Dog('Tiger')
d2.noise = 'Bark'
d2.make_sound()
d1.make_sound()
d1.eat_food()
d1.increase_age(3)
print(d1.age)
class Cat:
def __init__(self, name):
self.age = 0
self.name = name
self.noise = "Meow!"
self.food = "cat food"
def make_sound(self):
print(self.noise)
def eat_food(self):
print("Eating " + self.food + ".")
def increase_age(self, n = 1):
self.age = self.age + n
c1 = Cat('Harvey')
c1.make_sound()
c1.eat_food()
Explanation: Introduction to objects and classes in Python
We will touch upon some basic aspects, including
- code reuse
- abstraction
- Encapsulation
- subclasses and hierarchies
End of explanation
from abc import ABCMeta, abstractmethod
class Mammal(metaclass=ABCMeta):
@abstractmethod
def __init__(self, name):
self.age = 0
self.name = name
self.noise = "None!"
self.food = "none"
def make_sound(self):
print(self.name + " says " + self.noise)
def eat_food(self):
print(self.name + " is eating " + self.food + ".")
def increase_age(self, n = 1):
self.age = self.age + n
class Dog(Mammal):
def __init__(self, name):
super(Dog, self).__init__(name)
self.noise = "Bark!"
self.food = "dog biscuits"
class Cat(Mammal):
def __init__(self, name):
super(Cat, self).__init__(name)
self.noise = "Meow!"
self.food = "cat food"
d = Dog("Buster")
c = Cat("Harvey")
d.make_sound()
c.make_sound()
c.eat_food()
# Note: Mammal declares an abstract __init__, so instantiating it directly raises a TypeError.
m = Mammal("Name")
m.make_sound()
m.eat_food()
import sys
print(sys.version)
animal_house = [Dog("MyDog" + str(i))
for i in range(1, 5)]
animal_house.extend([Cat("MyCat" + str(i))
for i in range(1, 5)])
for i in animal_house:
i.make_sound()
Explanation: In the above examples, it becomes clear that there is much repetition, and we can make the code more compact. Let us abstract common functionality into an abstract class.
End of explanation
class Reverse:
    """Iterator for looping over a sequence backwards."""
def __init__(self, data):
self.data = data
self.index = len(data)
def __iter__(self):
return self
    def __next__(self):  # in Python 2 this method is spelled next(self)
if self.index == 0:
raise StopIteration
self.index = self.index - 1
return self.data[self.index]
rev = iter(Reverse([10, 30, 200, 0.0, 'ABC']))
for i in rev:
print(i)
Explanation: Iterators
How do I add an iteration facility to my own objects so that they can be used with for loops?
End of explanation
def reverse(data):
for index in range(len(data)-1, -1, -1):
yield data[index]
for char in reverse("Madam, I'm Adam"):
print(char)
Explanation: Generators
Use functions to create iterators, instead of classes
End of explanation
import math
def series_sum(max_terms=1000):
n = 0
while n < max_terms:
n = n + 1
yield 1.0 / n**2
print(sum(series_sum(100000)) - math.pi**2 / 6)
Explanation: Summing series
We want to efficiently find the sum
$$
\sum_{n = 1}^N \frac{1}{n^2}
$$
As $N$ becomes larger, this approaches $\frac{\pi^2}{6}$.
End of explanation
print(sum([1.0 / i**2 for i in range(1, 10000)]))
Explanation: The key thing to note is that this is much more efficient than generating a list of terms in memory and summing it. That is, more efficient than
End of explanation
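To make the memory argument concrete, here is a small sketch comparing the size of a fully materialised list of terms with the generator returned by series_sum above (sys.getsizeof reports only the container object itself, which is exactly the difference being claimed):
import sys
terms_list = [1.0 / n**2 for n in range(1, 100001)]  # every term held in memory at once
terms_gen = series_sum(100000)                        # generator: produces one term at a time
print(sys.getsizeof(terms_list), "bytes for the list")
print(sys.getsizeof(terms_gen), "bytes for the generator")
print(sum(terms_gen))                                 # the generator can still be summed as before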
def add_numbers(a, b):
return a + b
def arg_wrapper(f, *args, **kwargs):
print("The function arguments are:")
print(args)
print(kwargs)
print("Now running the function!")
return f(*args, **kwargs)
#print(add_numbers(1, 2))
#print(arg_wrapper(add_numbers, 1, 2))
def myfunction(name='Test', age=30):
print("Name: %s, Age: %d" % (name, age))
arg_wrapper(myfunction, name='Harvey', age=3)
import time
def timing_function(some_function):
def wrapper():
t1 = time.time()
some_function()
t2 = time.time()
return "Time it took to run the function: " + str((t2 - t1)) + "\n"
return wrapper
@timing_function
def my_function():
num_list = []
for num in (range(0, 10000)):
num_list.append(num)
print("\nSum of all the numbers: " + str((sum(num_list))))
my_function()
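# Sketch (the names below are illustrative additions, not from the original):
# the argument-printing wrapper from above rewritten as a reusable decorator.
import functools

def print_args(some_function):
    @functools.wraps(some_function)
    def wrapper(*args, **kwargs):
        print("The function arguments are:")
        print(args)
        print(kwargs)
        return some_function(*args, **kwargs)
    return wrapper

@print_args
def add_three_numbers(a, b, c):
    return a + b + c

print(add_three_numbers(1, 2, c=3))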
Explanation: Decorators
Alter the behaviour of functions (somewhat)
Example:
I wish to print the arguments to a function before I call it.
One way: Edit the function. But it's clumsy!
Better way, wrap it in another function:
End of explanation |
9,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create a function map for substring comparison
~~~
d = {'a'
Step1: Changing value changes the function behavior as well
Step2: Solution 2 - function factory using closure
Step3: Changing value doesn't affect the functions
Step4: Solution 3 - partial
Step5: Changing value | Python Code:
d = {'a': 'AB', 'b': 'C'}
funcs = {}
for key, value in d.items():
funcs[key] = lambda v: v in value
# True, True, False ?
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'))
# False, True ?
print(funcs['b']('AB'), funcs['b']('C'))
Explanation: Create a function map for substring comparison
~~~
d = {'a': 'AB', 'b': 'C'}
funcs = ....
funcs'a' --> True
funcs'b' --> False
funcs'b' --> False
~~~
Solution 1 - naive solution doesn't work as expected
End of explanation
value = "ABCD"
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'), funcs['a']('D'))
Explanation: Changing value changes the function behavior as well
End of explanation
d = {'a': 'AB', 'b': 'C'}
def foo(value):
def bar(v):
return v in value
return bar
funcs = {}
for key, value in d.items():
funcs[key] = foo(value)
# True, True, False ?
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'))
# False, True ?
print(funcs['b']('AB'), funcs['b']('C'))
Explanation: Solution 2 - function factory using closure
End of explanation
value = "ABCD"
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'), funcs['a']('D'))
Explanation: Changing value doesn't affect the functions
End of explanation
from functools import partial
d = {'a': 'AB', 'b': 'C'}
funcs = {}
for key, value in d.items():
funcs[key] = partial(lambda v, value: v in value, value=value)
# True, True, False ?
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'))
# False, True ?
print(funcs['b']('AB'), funcs['b']('C'))
Explanation: Solution 3 - partial
End of explanation
value = "ABCD"
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'), funcs['a']('D'))
Explanation: Changing value
End of explanation |
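# One more common fix, sketched here as an addition (not in the original):
# bind the current value as a default argument so each lambda keeps its own copy.
d = {'a': 'AB', 'b': 'C'}
funcs = {}
for key, value in d.items():
    funcs[key] = lambda v, value=value: v in value
print(funcs['a']('AB'), funcs['a']('A'), funcs['a']('C'))  # True True False
print(funcs['b']('AB'), funcs['b']('C'))  # False True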
9,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression
Aaron Gonzales
CS529, Machine Learning
Project 3
Instructor
Step1: I had previously extracted the FFTs and MFCCs from the data; all of them live in the /data/ folder of this package. As such, loading them in is easy. Methods to extract them are in the fft.py and mfcc.py files; both were run from an IPython environment. The FFT data was scaled to {0, 1} and the MFCCs were scaled via z-score. The data is in the variables (fft, mfcs), the labels are hopefully labeled clearly enough, and the dictionary is just a mapping of label ID to the actual English word.
Note
Step2: The classifier is implemented as a class and holds its metrics and data information internally after calls to its methods. It is initialized with the data as given
Step3: FFT components
Now that we have our data loaded, we can go ahead and fit the logistic regression model to it. Internally it is performing 10-fold cross validation with shuffling via Sklearn's cross_validated module.
Gradient Descent
I went with the vectorized version of gradient descent discussed in Piazza. My learning rate was adaptive to the custom 'error' rate defined as the max value from the dot product between the
$$\Delta
Step4: These are not the best scores I've ever seen.
Selecting only the features that have moderate variance gives us a set of 200 to test.
Step5: Those scores went down, so I presume that I did something wrong or that there is considerable bias or multicollinearity in this model. I'll try PCA and see how that goes.
Step6: Not much better.
Results are holding steady around 30%.
On to the MFC features. | Python Code:
%load_ext autoreload
%autoreload 2
import numpy as np
import sklearn.metrics as metrics
import utils as utils
from LogisticRegressionClassifier import LogisticRegressionClassifier
%pylab inline
Explanation: Logistic Regression
Aaron Gonzales
CS529, Machine Learning
Project 3
Instructor: Trilce Estrada
Overview of the project
I have a working logistic regression classifier built in python and numpy. It seems to work somewhat quickly but may not be that robust or as stable as I'd like it to be. There are a great number of things to look at with respect to optimizing gradient descent and the various fiddly bits of the program.
This is an IPython notebook; code can be executed directly from here.
When you run the program, navigate to the root of this project and then to /src.
python3 ./main.py
will make the program work.
End of explanation
fft_dict, fft_labels, ffts = utils.read_features(feature='fft')
mfc_dict, mfc_labels, mfcs = utils.read_features(feature='mfc')
Explanation: I had previously extracted the FFTs and MFCCs from the data; all of them live in the /data/ folder of this package. As such, loading them in is easy. Methods to extract them are in the fft.py and mfcc.py files; both were run from an IPython environment. The FFT data was scaled to {0, 1} and the MFCCs were scaled via z-score. The data is in the variables (fft, mfcs), the labels are hopefully labeled clearly enough, and the dictionary is just a mapping of label ID to the actual English word.
Note: I used the full 1000 song dataset for this; not the reduced set from Trilce.
End of explanation
lrc_fft = LogisticRegressionClassifier(ffts, fft_labels, fft_dict)
lrc_mfc = LogisticRegressionClassifier(mfcs, mfc_labels, mfc_dict)
Explanation: The classifier is implemented as a class and holds its metrics and data information internally after calls to its methods. It is initialized with the data as given:
End of explanation
lrc_fft.cross_validate()
Explanation: FFT components
Now that we have our data loaded, we can go ahead and fit the logistic regression model to it. Internally it is performing 10-fold cross validation with shuffling via Sklearn's cross_validated module.
Gradient Descent
I went with the vectorized version of gradient descent discussed in Piazza. My learning rate was adaptive to the custom 'error' rate defined as the max value from the dot product between the
$$\Delta
End of explanation
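# The LogisticRegressionClassifier internals are not shown in this notebook.
# Below is a minimal sketch of a vectorized softmax gradient-descent loop; the
# helper names, fixed learning rate and stopping rule are assumptions, not the
# author's actual adaptive-rate implementation.
def _softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)  # stabilise the exponentials
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def _gradient_descent(X, Y_onehot, lr=0.1, tol=1e-4, max_iter=500):
    # X: (n_samples, n_features), Y_onehot: (n_samples, n_classes)
    W = np.zeros((X.shape[1], Y_onehot.shape[1]))
    for _ in range(max_iter):
        P = _softmax(X.dot(W))                      # class probabilities
        grad = X.T.dot(Y_onehot - P) / X.shape[0]   # log-likelihood gradient
        W_new = W + lr * grad                       # ascent step
        if np.abs(W_new - W).max() < tol:           # change-based stopping test
            return W_new
        W = W_new
    return W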
from sklearn.feature_selection import VarianceThreshold
sel = VarianceThreshold(0.01150)
a = sel.fit_transform(ffts)
a.shape
lr = LogisticRegressionClassifier(a, fft_labels, fft_dict)
lr.cross_validate()
Explanation: These are not the best scores I've ever seen.
Selecting only the features that have moderate variance gives us a set of 200 to test.
End of explanation
from sklearn.decomposition import PCA
p = PCA(n_components=200)
pcad = p.fit_transform(ffts)
pcalrc = LogisticRegressionClassifier(pcad, fft_labels, fft_dict)
pcalrc.cross_validate(3)
Explanation: Those scores went down, so I presume that I did something wrong or that there is considerable bias or multicollinearity in this model. I'll try PCA and see how that goes.
End of explanation
# this was already fit
lrc_mfc.cross_validate(10)
_ = utils.plot_confusion_matrix(lrc_mfc.metrics['cv_average'])
Explanation: Not much better.
Results are holding steady around 30%.
On to the MFC features.
End of explanation |
9,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 自定义联合算法,第 2 部分:实现联合平均
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 实现联合平均
与图像分类联合学习一样,我们将使用 MNIST 示例,但由于这是一个低级教程,我们将绕过 Keras API 和 tff.simulation,编写原始模型代码,并从头开始构造联合数据集。
准备联合数据集
为了进行演示,我们将模拟一个场景,其中有来自 10 个用户的数据,每个用户都会提供如何识别不同数字的知识。这是能够得到的最非独立同分布的情况。
首先,加载标准 MNIST 数据:
Step3: 数据以 Numpy 数组的形式出现,一个带有图像,另一个带有数字标签,其中第一个维度都遍历各个样本。我们来编写一个辅助函数,并使用与将联合序列馈送到 TFF 计算的方式相兼容的方式(即作为列表的列表,外部列表包括用户(数字),内部列表包括每个客户端序列中的数据批次)对其进行格式化。按照惯例,我们将每个批次构造为一对名为 x 和 y 的张量,每个张量都具有与首个批次相同的维度。同时,我们还将每个图像展平为一个具有 784 个元素的向量,并将其中的像素重新缩放到 0..1 范围内,这样我们就不必在模型逻辑上进行数据转换了。
Step4: 作为快速的健全性检查,我们来看一下第五个客户端(对应数字 5)所贡献的最后一个数据批次中的 Y 张量。
Step5: 保险起见,我们再检查一下该批次最后一个元素对应的图像。
Step6: 关于 TensorFlow 与 TFF 的结合
在本教程中,出于紧凑考虑,我们使用 tff.tf_computation 对引入 TensorFlow 逻辑的函数进行了直接装饰。但对于更复杂的逻辑,我们不建议使用这种模式。调试 TensorFlow 本身就是一种挑战,如果在 TensorFlow 完全序列化并重新导入后再对其进行调试,必然会丢失部分元数据并限制交互性,这会使调试面临更大挑战。
因此,我们强烈建议将复杂的 TF 逻辑编写为独立的 Python 函数(即不使用 tff.tf_computation 装饰)。这样,在序列化 TFF 计算之前(例如,通过将 Python 函数用作参数调用 tff.tf_computation),可以使用 TF 最佳做法和工具(如 Eager 模式)对 TensorFlow 逻辑进行开发和测试。
定义损失函数
现在有了数据,我们来定义一个可以用于训练的损失函数。首先,将输入类型定义为 TFF 命名元组。由于数据批次的大小可能会有所不同,因此我们将批次维度设置为 None,表示该维度的大小未知。
Step7: 您可能想知道为什么我们不能只定义普通的 Python 类型。回想一下第 1 部分中讨论的内容,我们解释了虽然可以使用 Python 来表达 TFF 计算的逻辑,但实际上 TFF 计算不是 Python。上面定义的符号 BATCH_TYPE 表示抽象的 TFF 类型规范。区分这种抽象的 TFF 类型与具体的 Python 表示 类型(可用来表示 Python 函数主体中 TFF 类型的容器,如 dict 或 collections.namedtuple)很重要。与 Python 不同,针对类似元组的容器,TFF 具有单个抽象类型构造函数 tff.StructType,其元素可以单独命名或不命名。这种类型还用于对计算的形式化参数进行建模,因为 TFF 计算形式上只能声明一个参数和一个结果(稍后您将看到相关示例)。
现在,我们来定义模型参数的 TFF 类型,仍然将其定义为权重和偏差的 TFF 命名元组。
Step8: 有了这些定义,现在我们可以在单个批次上定义给定模型的损失。请注意 @tf.function 装饰器在 @tff.tf_computation 装饰器内部的用法。通过这种用法,即使在由 tff.tf_computation 装饰器创建的 tf.Graph 上下文中,我们也可以使用类似 Python 的语义来编写 TF。
Step9: 和预期一样,在给定模型和单个数据批次的情况下,计算 batch_loss 返回 float32 损失。请注意 MODEL_TYPE 和 BATCH_TYPE 合并为形式参数的二维元组的方式;您可以将 batch_loss 的类型识别为 (<MODEL_TYPE,BATCH_TYPE> -> float32)。
Step10: 作为健全性检查,我们来构造一个用零填充的初始模型,并计算上文中可视化的那批数据的损失。
Step11: 请注意,我们使用定义为 dict 的初始模型为 TFF 计算馈送数据,即便定义它的 Python 函数的主体将模型参数用作 model['weight'] 和 model['bias'] 。batch_loss 调用的参数并不是简单地传递给该函数的主体。
当我们调用 batch_loss 时会发生什么情况?batch_loss 的 Python 主体已在上面的单元格中(在对其进行定义的位置)进行了跟踪和序列化。TFF 在计算定义时充当 batch_loss 的调用者,并在 batch_loss 被调用时充当调用的目标。在这两个角色中,TFF 均充当 TFF 的抽象类型系统和 Python 表示类型之间的桥梁。在调用时,TFF 将接受大多数标准 Python 容器类型(dict、list、tuple、collections.namedtuple 等),以将其作为抽象 TFF 元组的具体表示。虽然我们在上文中提到,TFF 计算在形式上仅接受单个参数,但如果参数的类型是元组,则可以将熟悉的 Python 调用语法与位置和/或关键字参数一起使用,它会按预期工作。
单个批次上的梯度下降
现在,我们来定义一个使用下面的损失函数来执行单步梯度下降的计算。请注意我们在定义此函数时,如何将 batch_loss 用作子组件。您可以在另一个计算的主体内部调用使用 tff.tf_computation 构造的计算,但正如我们在上文中提到的,您通常没有必要进行此操作。这是因为,序列化会丢失部分调试信息,因此对于不使用 tff.tf_computation 装饰器来编写和测试所有 TensorFlow 的更复杂的计算来说,这种方式更加可取。
Step12: 当您在另一个此类函数的主体中调用使用 tff.tf_computation 装饰的 Python 函数时,内部 TFF 计算的逻辑会嵌入(本质上为内嵌)到外部计算的逻辑中。如上所述,如果要编写这两个计算,最好将内部函数(在本例中为 batch_loss)设置为常规 Python 或 tf.function 函数,而非 tff.tf_computation 函数。但这里我们演示了,在 tff.tf_computation 内部调用与其相同的函数基本上可以按预期工作。例如,如果您没有定义 batch_loss 的 Python 代码,而只有它的序列化 TFF 表示,则可能必须进行此操作。
现在,将这个函数在初始模型上应用几次,以查看损失是否会减少。
Step13: 本地数据序列上的梯度下降
现在,由于 batch_train 似乎可以正常工作,我们来编写一个类似的训练函数 local_train,它会使用一个用户所有批次的整个序列,而不仅仅是一个批次。现在,新的计算将需要使用 tff.SequenceType(BATCH_TYPE) 而不是 BATCH_TYPE。
Step14: 这段简短的代码中包含了很多细节,我们将逐一进行介绍。
首先,虽然我们完全可以用 TensorFlow 实现此逻辑,像之前那样利用 tf.data.Dataset.reduce 来处理序列,但这次我们选择用胶水语言将此逻辑表达为 tff.federated_computation。我们已使用联合算子 tff.sequence_reduce 来执行归约。
算子 tff.sequence_reduce 的用法类似于 tf.data.Dataset.reduce。您可以认为它在本质上与 tf.data.Dataset.reduce 相同,但是前者用于联合计算内部(您也许还记得,它不能包含 TensorFlow 代码)。它是一个模板算子,其形式参数三维元组由 T 型元素的序列、某种类型 U 的归约初始状态(我们将其抽象地称为零),以及类型 (<U,T> -> U) 的归约算子(通过处理单个元素改变归约状态)组成。得到的结果是按顺序处理所有元素后归约的最终状态。在我们的示例中,归约状态是在数据前缀上训练的模型,且元素是数据批次。
其次,请注意,我们再次将一个计算(batch_train)用作了另一个计算(local_train)中的组件,而非直接使用。我们不能将其用作归约算子,因为它需要一个额外参数,即学习率。为了解决这个问题,我们定义一个嵌入式联合计算 batch_fn,该计算绑定到其主体中 local_train 的参数 learning_rate。因此,以这种方式定义的子计算可以捕获其父级的形式参数,只要子计算未在其父级的主体之外调用。您可以将此模式视为 Python 中 functools.partial 的等效项。
当然,以这种方式捕获 learning_rate 的实际含义是,在所有批次中都使用相同的学习率值。
现在,我们在整个数据序列上尝试新定义的本地训练函数,该数据序列由贡献了样本批次的同一用户(数字 5)提供。
Step15: 有效果吗?为了回答这个问题,我们需要实现评估。
本地评估
下面是一种通过将所有数据批次的损失加总起来实现本地评估的方法(也可以算出平均值;我们将把它作为练习留给读者)。
Step16: 同样,此代码演示了一些新的元素,我们将逐一进行介绍。
首先,我们使用了两个新的联合算子来处理序列:一个是 tff.sequence_map,它接受映射函数 T->U 和 T 的序列,然后发出通过逐点应用映射函数获得的 U 的序列;另一个是 tff.sequence_sum,它只是把所有元素加总起来。在这里,我们将每个数据批次映射到损失值,然后将生成的损失值加总以计算总损失。
请注意,我们可以再次使用 tff.sequence_reduce,但这不是最佳选择,根据定义,归约过程是顺序的,而映射和求和可以并行计算。如果有选择的话,最好坚持使用不限制实现选择的算子,这样,当将来编译 TFF 计算以部署到特定环境时,就可以充分利用所有潜在机会,实现更快、扩展性更强、更节省资源的执行。
其次,请注意,正如在 local_train 中一样,我们需要的组件函数(batch_loss)接受的参数比联合算子(tff.sequence_map)所期望的参数要多,因此我们再次定义了部分参数(内嵌),这次是通过直接将 lambda 封装为 tff.federated_computation。如果要使用 tff.tf_computation 将 TensorFlow 逻辑嵌入 TFF,建议将封装容器与函数一起作为参数内嵌使用。
现在,看看我们的训练是否有效。
Step17: 确实,损失减少了。但如果我们根据其他用户的数据对其进行评估,会发生什么呢?
Step18: 情况果然变得更糟了。该模型经过训练可以识别 5,但从未看到 0。这就出现了一个问题,即从全局角度来看,本地训练会对模型质量产生什么影响?
联合评估
至此,我们终于回到了联合类型和联合计算,即我们最开始讨论的主题。下面是一对源自服务器的模型的 TFF 类型定义,以及保留在客户端上的数据。
Step19: 根据目前为止介绍的所有定义,在 TFF 中对联合评估的表达均为一行式,我们将模型分发给客户端,让每个客户端在其本地数据部分上调用本地评估,然后对损失进行平均。下面是一种编写方法。
Step20: 我们已经在更简单的场景中看到了 tff.federated_mean 和 tff.federated_map 的示例,直观来看,他们可以按照预期工作,但这部分代码并不像看上去那么简单,下面我们来仔细研究一下。
首先,我们来分解一下让每个客户端在其本地数据部分上调用本地评估这个部分。您可能还记得前几部分的内容,local_eval 具有形式为 (<MODEL_TYPE, LOCAL_DATA_TYPE> -> float32) 的类型签名。
联合算子 tff.federated_map 是一个模版,它接受二维元组作为参数,该二维元组由某种类型 T->U 的映射函数和类型 {T}@CLIENTS 的联合值(即,具有与映射函数的参数相同类型的成员组成)组成,并返回 {U}@CLIENTS 类型的结果。
由于我们将 local_eval 作为映射函数馈送给每个客户端,因此第二个参数应为联合类型 {<MODEL_TYPE, LOCAL_DATA_TYPE>}@CLIENTS(即,根据前几部分的命名,它应该是一个联合元组)。每个客户端应将 local_eval 的完整参数集作为成员组成。相反,我们向它馈送的是 2 个元素的 Python list。这是什么情况?
实际上,这是 TFF 中隐式类型转换的示例,它类似于您可能在其他地方遇到的隐式类型转换(例如,当您向接受 float 的函数馈送 int时)。目前很少使用隐式转换,但我们计划使它在 TFF 中更加普遍,以尽量减少样板文件。
在这种情况下,应用的隐式转换在形式为 {<X,Y>}@Z 的联合元组和联合值为 <{X}@Z,{Y}@Z> 的元组之间等效。虽然二者是不同的类型签名,从程序员的角度来看,Z 中的每个设备都包含数据 X 和 Y 的两个单元。这里发生的情况与 Python 中的 zip 没什么区别,实际上,我们提供了一种算子 tff.federated_zip,使您可以显式地执行此类转换。当 tff.federated_map 遇到作为第二个参数的元组时,它将为您直接调用 tff.federated_zip。
根据上述信息,您现在应该能够将表达式 tff.federated_broadcast(model) 识别为表示 TFF 类型 {MODEL_TYPE}@CLIENTS 的值,并将 data 识别为 TFF 类型 {LOCAL_DATA_TYPE}@CLIENTS(或简写为 CLIENT_DATA_TYPE)的值,两者通过隐式 tff.federated_zip 一起筛选,以形成 tff.federated_map 的第二个参数。
如您所料,算子 tff.federated_broadcast 只是将数据从服务器传输到客户端。
现在,我们来看看本地训练如何影响系统的平均损失。
Step21: 确实,和预期一样,损失增加了。为了改进所有用户的模型,我们需要用每个用户自己的数据进行训练。
联合训练
实现联合训练的最简单方法是进行本地训练,然后对模型进行平均。这会用到我们讨论过的相同构建块和模式,如下所示。
Step22: 请注意,在 tff.learning 所提供的联合平均的全功能实现中,由于多种原因(例如,裁剪更新范数的能力、用于压缩等),我们更喜欢对模型增量进行平均,而不是对模型进行平均。
让我们通过进行几轮训练并比较前后的平均损失,来看看训练是否有效。
Step23: 现在,为了完整起见,我们也在测试数据上运行一下,以确认我们的模型能够很好地泛化。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated-nightly
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
# TODO(b/148678573,b/148685415): must use the reference context because it
# supports unbounded references and tff.sequence_* intrinsics.
tff.backends.reference.set_reference_context()
@tff.federated_computation
def hello_world():
return 'Hello, World!'
hello_world()
Explanation: 自定义联合算法,第 2 部分:实现联合平均
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/federated/tutorials/custom_federated_algorithms_2"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/federated/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/federated/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/federated/tutorials/custom_federated_algorithms_2.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a> </td>
</table>
本系列教程包括两个部分,此为第二部分。该系列演示了如何使用 Federated Core (FC) 在 TFF 中实现自定义类型的联合算法,它是联合学习 (FL) 层(tff.learning)的基础。
我们建议您先阅读本系列的第一部分,其中介绍了此处使用的一些关键概念和编程抽象。
本系列的第二部分使用第一部分中介绍的机制来实现简单版本的联合训练和评估算法。
我们建议您查看图像分类和文本生成教程,以获得对 TFF 的 Federated Learning API 更高级和更循序渐进的介绍,因为它们将帮助您在上下文中理解我们在此描述的概念。
准备工作
在开始之前,请尝试运行以下“Hello World”示例,以确保您的环境已正确配置。如果无法正常运行,请参阅安装指南查看说明。
End of explanation
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
[(x.dtype, x.shape) for x in mnist_train]
Explanation: 实现联合平均
与图像分类联合学习一样,我们将使用 MNIST 示例,但由于这是一个低级教程,我们将绕过 Keras API 和 tff.simulation,编写原始模型代码,并从头开始构造联合数据集。
准备联合数据集
为了进行演示,我们将模拟一个场景,其中有来自 10 个用户的数据,每个用户都会提供如何识别不同数字的知识。这是能够得到的最非独立同分布的情况。
首先,加载标准 MNIST 数据:
End of explanation
NUM_EXAMPLES_PER_USER = 1000
BATCH_SIZE = 100
def get_data_for_digit(source, digit):
output_sequence = []
all_samples = [i for i, d in enumerate(source[1]) if d == digit]
for i in range(0, min(len(all_samples), NUM_EXAMPLES_PER_USER), BATCH_SIZE):
batch_samples = all_samples[i:i + BATCH_SIZE]
output_sequence.append({
'x':
np.array([source[0][i].flatten() / 255.0 for i in batch_samples],
dtype=np.float32),
'y':
np.array([source[1][i] for i in batch_samples], dtype=np.int32)
})
return output_sequence
federated_train_data = [get_data_for_digit(mnist_train, d) for d in range(10)]
federated_test_data = [get_data_for_digit(mnist_test, d) for d in range(10)]
Explanation: 数据以 Numpy 数组的形式出现,一个带有图像,另一个带有数字标签,其中第一个维度都遍历各个样本。我们来编写一个辅助函数,并使用与将联合序列馈送到 TFF 计算的方式相兼容的方式(即作为列表的列表,外部列表包括用户(数字),内部列表包括每个客户端序列中的数据批次)对其进行格式化。按照惯例,我们将每个批次构造为一对名为 x 和 y 的张量,每个张量都具有与首个批次相同的维度。同时,我们还将每个图像展平为一个具有 784 个元素的向量,并将其中的像素重新缩放到 0..1 范围内,这样我们就不必在模型逻辑上进行数据转换了。
End of explanation
federated_train_data[5][-1]['y']
Explanation: 作为快速的健全性检查,我们来看一下第五个客户端(对应数字 5)所贡献的最后一个数据批次中的 Y 张量。
End of explanation
from matplotlib import pyplot as plt
plt.imshow(federated_train_data[5][-1]['x'][-1].reshape(28, 28), cmap='gray')
plt.grid(False)
plt.show()
Explanation: 保险起见,我们再检查一下该批次最后一个元素对应的图像。
End of explanation
BATCH_SPEC = collections.OrderedDict(
x=tf.TensorSpec(shape=[None, 784], dtype=tf.float32),
y=tf.TensorSpec(shape=[None], dtype=tf.int32))
BATCH_TYPE = tff.to_type(BATCH_SPEC)
str(BATCH_TYPE)
Explanation: 关于 TensorFlow 与 TFF 的结合
在本教程中,出于紧凑考虑,我们使用 tff.tf_computation 对引入 TensorFlow 逻辑的函数进行了直接装饰。但对于更复杂的逻辑,我们不建议使用这种模式。调试 TensorFlow 本身就是一种挑战,如果在 TensorFlow 完全序列化并重新导入后再对其进行调试,必然会丢失部分元数据并限制交互性,这会使调试面临更大挑战。
因此,我们强烈建议将复杂的 TF 逻辑编写为独立的 Python 函数(即不使用 tff.tf_computation 装饰)。这样,在序列化 TFF 计算之前(例如,通过将 Python 函数用作参数调用 tff.tf_computation),可以使用 TF 最佳做法和工具(如 Eager 模式)对 TensorFlow 逻辑进行开发和测试。
定义损失函数
现在有了数据,我们来定义一个可以用于训练的损失函数。首先,将输入类型定义为 TFF 命名元组。由于数据批次的大小可能会有所不同,因此我们将批次维度设置为 None,表示该维度的大小未知。
End of explanation
MODEL_SPEC = collections.OrderedDict(
weights=tf.TensorSpec(shape=[784, 10], dtype=tf.float32),
bias=tf.TensorSpec(shape=[10], dtype=tf.float32))
MODEL_TYPE = tff.to_type(MODEL_SPEC)
print(MODEL_TYPE)
Explanation: 您可能想知道为什么我们不能只定义普通的 Python 类型。回想一下第 1 部分中讨论的内容,我们解释了虽然可以使用 Python 来表达 TFF 计算的逻辑,但实际上 TFF 计算不是 Python。上面定义的符号 BATCH_TYPE 表示抽象的 TFF 类型规范。区分这种抽象的 TFF 类型与具体的 Python 表示 类型(可用来表示 Python 函数主体中 TFF 类型的容器,如 dict 或 collections.namedtuple)很重要。与 Python 不同,针对类似元组的容器,TFF 具有单个抽象类型构造函数 tff.StructType,其元素可以单独命名或不命名。这种类型还用于对计算的形式化参数进行建模,因为 TFF 计算形式上只能声明一个参数和一个结果(稍后您将看到相关示例)。
现在,我们来定义模型参数的 TFF 类型,仍然将其定义为权重和偏差的 TFF 命名元组。
End of explanation
# NOTE: `forward_pass` is defined separately from `batch_loss` so that it can
# be later called from within another tf.function. Necessary because a
# @tf.function decorated method cannot invoke a @tff.tf_computation.
@tf.function
def forward_pass(model, batch):
predicted_y = tf.nn.softmax(
tf.matmul(batch['x'], model['weights']) + model['bias'])
return -tf.reduce_mean(
tf.reduce_sum(
tf.one_hot(batch['y'], 10) * tf.math.log(predicted_y), axis=[1]))
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE)
def batch_loss(model, batch):
return forward_pass(model, batch)
Explanation: 有了这些定义,现在我们可以在单个批次上定义给定模型的损失。请注意 @tf.function 装饰器在 @tff.tf_computation 装饰器内部的用法。通过这种用法,即使在由 tff.tf_computation 装饰器创建的 tf.Graph 上下文中,我们也可以使用类似 Python 的语义来编写 TF。
End of explanation
str(batch_loss.type_signature)
Explanation: 和预期一样,在给定模型和单个数据批次的情况下,计算 batch_loss 返回 float32 损失。请注意 MODEL_TYPE 和 BATCH_TYPE 合并为形式参数的二维元组的方式;您可以将 batch_loss 的类型识别为 (<MODEL_TYPE,BATCH_TYPE> -> float32)。
End of explanation
initial_model = collections.OrderedDict(
weights=np.zeros([784, 10], dtype=np.float32),
bias=np.zeros([10], dtype=np.float32))
sample_batch = federated_train_data[5][-1]
batch_loss(initial_model, sample_batch)
Explanation: 作为健全性检查,我们来构造一个用零填充的初始模型,并计算上文中可视化的那批数据的损失。
End of explanation
@tff.tf_computation(MODEL_TYPE, BATCH_TYPE, tf.float32)
def batch_train(initial_model, batch, learning_rate):
# Define a group of model variables and set them to `initial_model`. Must
# be defined outside the @tf.function.
model_vars = collections.OrderedDict([
(name, tf.Variable(name=name, initial_value=value))
for name, value in initial_model.items()
])
optimizer = tf.keras.optimizers.SGD(learning_rate)
@tf.function
def _train_on_batch(model_vars, batch):
# Perform one step of gradient descent using loss from `batch_loss`.
with tf.GradientTape() as tape:
loss = forward_pass(model_vars, batch)
grads = tape.gradient(loss, model_vars)
optimizer.apply_gradients(
zip(tf.nest.flatten(grads), tf.nest.flatten(model_vars)))
return model_vars
return _train_on_batch(model_vars, batch)
str(batch_train.type_signature)
Explanation: 请注意,我们使用定义为 dict 的初始模型为 TFF 计算馈送数据,即便定义它的 Python 函数的主体将模型参数用作 model['weight'] 和 model['bias'] 。batch_loss 调用的参数并不是简单地传递给该函数的主体。
当我们调用 batch_loss 时会发生什么情况?batch_loss 的 Python 主体已在上面的单元格中(在对其进行定义的位置)进行了跟踪和序列化。TFF 在计算定义时充当 batch_loss 的调用者,并在 batch_loss 被调用时充当调用的目标。在这两个角色中,TFF 均充当 TFF 的抽象类型系统和 Python 表示类型之间的桥梁。在调用时,TFF 将接受大多数标准 Python 容器类型(dict、list、tuple、collections.namedtuple 等),以将其作为抽象 TFF 元组的具体表示。虽然我们在上文中提到,TFF 计算在形式上仅接受单个参数,但如果参数的类型是元组,则可以将熟悉的 Python 调用语法与位置和/或关键字参数一起使用,它会按预期工作。
单个批次上的梯度下降
现在,我们来定义一个使用下面的损失函数来执行单步梯度下降的计算。请注意我们在定义此函数时,如何将 batch_loss 用作子组件。您可以在另一个计算的主体内部调用使用 tff.tf_computation 构造的计算,但正如我们在上文中提到的,您通常没有必要进行此操作。这是因为,序列化会丢失部分调试信息,因此对于不使用 tff.tf_computation 装饰器来编写和测试所有 TensorFlow 的更复杂的计算来说,这种方式更加可取。
End of explanation
model = initial_model
losses = []
for _ in range(5):
model = batch_train(model, sample_batch, 0.1)
losses.append(batch_loss(model, sample_batch))
losses
Explanation: 当您在另一个此类函数的主体中调用使用 tff.tf_computation 装饰的 Python 函数时,内部 TFF 计算的逻辑会嵌入(本质上为内嵌)到外部计算的逻辑中。如上所述,如果要编写这两个计算,最好将内部函数(在本例中为 batch_loss)设置为常规 Python 或 tf.function 函数,而非 tff.tf_computation 函数。但这里我们演示了,在 tff.tf_computation 内部调用与其相同的函数基本上可以按预期工作。例如,如果您没有定义 batch_loss 的 Python 代码,而只有它的序列化 TFF 表示,则可能必须进行此操作。
现在,将这个函数在初始模型上应用几次,以查看损失是否会减少。
End of explanation
LOCAL_DATA_TYPE = tff.SequenceType(BATCH_TYPE)
@tff.federated_computation(MODEL_TYPE, tf.float32, LOCAL_DATA_TYPE)
def local_train(initial_model, learning_rate, all_batches):
# Mapping function to apply to each batch.
@tff.federated_computation(MODEL_TYPE, BATCH_TYPE)
def batch_fn(model, batch):
return batch_train(model, batch, learning_rate)
return tff.sequence_reduce(all_batches, initial_model, batch_fn)
str(local_train.type_signature)
Explanation: 本地数据序列上的梯度下降
现在,由于 batch_train 似乎可以正常工作,我们来编写一个类似的训练函数 local_train,它会使用一个用户所有批次的整个序列,而不仅仅是一个批次。现在,新的计算将需要使用 tff.SequenceType(BATCH_TYPE) 而不是 BATCH_TYPE。
End of explanation
locally_trained_model = local_train(initial_model, 0.1, federated_train_data[5])
Explanation: 这段简短的代码中包含了很多细节,我们将逐一进行介绍。
首先,虽然我们完全可以用 TensorFlow 实现此逻辑,像之前那样利用 tf.data.Dataset.reduce 来处理序列,但这次我们选择用胶水语言将此逻辑表达为 tff.federated_computation。我们已使用联合算子 tff.sequence_reduce 来执行归约。
算子 tff.sequence_reduce 的用法类似于 tf.data.Dataset.reduce。您可以认为它在本质上与 tf.data.Dataset.reduce 相同,但是前者用于联合计算内部(您也许还记得,它不能包含 TensorFlow 代码)。它是一个模板算子,其形式参数三维元组由 T 型元素的序列、某种类型 U 的归约初始状态(我们将其抽象地称为零),以及类型 (<U,T> -> U) 的归约算子(通过处理单个元素改变归约状态)组成。得到的结果是按顺序处理所有元素后归约的最终状态。在我们的示例中,归约状态是在数据前缀上训练的模型,且元素是数据批次。
其次,请注意,我们再次将一个计算(batch_train)用作了另一个计算(local_train)中的组件,而非直接使用。我们不能将其用作归约算子,因为它需要一个额外参数,即学习率。为了解决这个问题,我们定义一个嵌入式联合计算 batch_fn,该计算绑定到其主体中 local_train 的参数 learning_rate。因此,以这种方式定义的子计算可以捕获其父级的形式参数,只要子计算未在其父级的主体之外调用。您可以将此模式视为 Python 中 functools.partial 的等效项。
当然,以这种方式捕获 learning_rate 的实际含义是,在所有批次中都使用相同的学习率值。
现在,我们在整个数据序列上尝试新定义的本地训练函数,该数据序列由贡献了样本批次的同一用户(数字 5)提供。
End of explanation
@tff.federated_computation(MODEL_TYPE, LOCAL_DATA_TYPE)
def local_eval(model, all_batches):
# TODO(b/120157713): Replace with `tff.sequence_average()` once implemented.
return tff.sequence_sum(
tff.sequence_map(
tff.federated_computation(lambda b: batch_loss(model, b), BATCH_TYPE),
all_batches))
str(local_eval.type_signature)
Explanation: 有效果吗?为了回答这个问题,我们需要实现评估。
本地评估
下面是一种通过将所有数据批次的损失加总起来实现本地评估的方法(也可以算出平均值;我们将把它作为练习留给读者)。
End of explanation
print('initial_model loss =', local_eval(initial_model,
federated_train_data[5]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[5]))
Explanation: 同样,此代码演示了一些新的元素,我们将逐一进行介绍。
首先,我们使用了两个新的联合算子来处理序列:一个是 tff.sequence_map,它接受映射函数 T->U 和 T 的序列,然后发出通过逐点应用映射函数获得的 U 的序列;另一个是 tff.sequence_sum,它只是把所有元素加总起来。在这里,我们将每个数据批次映射到损失值,然后将生成的损失值加总以计算总损失。
请注意,我们可以再次使用 tff.sequence_reduce,但这不是最佳选择,根据定义,归约过程是顺序的,而映射和求和可以并行计算。如果有选择的话,最好坚持使用不限制实现选择的算子,这样,当将来编译 TFF 计算以部署到特定环境时,就可以充分利用所有潜在机会,实现更快、扩展性更强、更节省资源的执行。
其次,请注意,正如在 local_train 中一样,我们需要的组件函数(batch_loss)接受的参数比联合算子(tff.sequence_map)所期望的参数要多,因此我们再次定义了部分参数(内嵌),这次是通过直接将 lambda 封装为 tff.federated_computation。如果要使用 tff.tf_computation 将 TensorFlow 逻辑嵌入 TFF,建议将封装容器与函数一起作为参数内嵌使用。
现在,看看我们的训练是否有效。
End of explanation
print('initial_model loss =', local_eval(initial_model,
federated_train_data[0]))
print('locally_trained_model loss =',
local_eval(locally_trained_model, federated_train_data[0]))
Explanation: 确实,损失减少了。但如果我们根据其他用户的数据对其进行评估,会发生什么呢?
End of explanation
SERVER_MODEL_TYPE = tff.type_at_server(MODEL_TYPE)
CLIENT_DATA_TYPE = tff.type_at_clients(LOCAL_DATA_TYPE)
Explanation: 情况果然变得更糟了。该模型经过训练可以识别 5,但从未看到 0。这就出现了一个问题,即从全局角度来看,本地训练会对模型质量产生什么影响?
联合评估
至此,我们终于回到了联合类型和联合计算,即我们最开始讨论的主题。下面是一对源自服务器的模型的 TFF 类型定义,以及保留在客户端上的数据。
End of explanation
@tff.federated_computation(SERVER_MODEL_TYPE, CLIENT_DATA_TYPE)
def federated_eval(model, data):
return tff.federated_mean(
tff.federated_map(local_eval, [tff.federated_broadcast(model), data]))
Explanation: 根据目前为止介绍的所有定义,在 TFF 中对联合评估的表达均为一行式,我们将模型分发给客户端,让每个客户端在其本地数据部分上调用本地评估,然后对损失进行平均。下面是一种编写方法。
End of explanation
print('initial_model loss =', federated_eval(initial_model,
federated_train_data))
print('locally_trained_model loss =',
federated_eval(locally_trained_model, federated_train_data))
Explanation: 我们已经在更简单的场景中看到了 tff.federated_mean 和 tff.federated_map 的示例,直观来看,他们可以按照预期工作,但这部分代码并不像看上去那么简单,下面我们来仔细研究一下。
首先,我们来分解一下让每个客户端在其本地数据部分上调用本地评估这个部分。您可能还记得前几部分的内容,local_eval 具有形式为 (<MODEL_TYPE, LOCAL_DATA_TYPE> -> float32) 的类型签名。
联合算子 tff.federated_map 是一个模版,它接受二维元组作为参数,该二维元组由某种类型 T->U 的映射函数和类型 {T}@CLIENTS 的联合值(即,具有与映射函数的参数相同类型的成员组成)组成,并返回 {U}@CLIENTS 类型的结果。
由于我们将 local_eval 作为映射函数馈送给每个客户端,因此第二个参数应为联合类型 {<MODEL_TYPE, LOCAL_DATA_TYPE>}@CLIENTS(即,根据前几部分的命名,它应该是一个联合元组)。每个客户端应将 local_eval 的完整参数集作为成员组成。相反,我们向它馈送的是 2 个元素的 Python list。这是什么情况?
实际上,这是 TFF 中隐式类型转换的示例,它类似于您可能在其他地方遇到的隐式类型转换(例如,当您向接受 float 的函数馈送 int时)。目前很少使用隐式转换,但我们计划使它在 TFF 中更加普遍,以尽量减少样板文件。
在这种情况下,应用的隐式转换在形式为 {<X,Y>}@Z 的联合元组和联合值为 <{X}@Z,{Y}@Z> 的元组之间等效。虽然二者是不同的类型签名,从程序员的角度来看,Z 中的每个设备都包含数据 X 和 Y 的两个单元。这里发生的情况与 Python 中的 zip 没什么区别,实际上,我们提供了一种算子 tff.federated_zip,使您可以显式地执行此类转换。当 tff.federated_map 遇到作为第二个参数的元组时,它将为您直接调用 tff.federated_zip。
根据上述信息,您现在应该能够将表达式 tff.federated_broadcast(model) 识别为表示 TFF 类型 {MODEL_TYPE}@CLIENTS 的值,并将 data 识别为 TFF 类型 {LOCAL_DATA_TYPE}@CLIENTS(或简写为 CLIENT_DATA_TYPE)的值,两者通过隐式 tff.federated_zip 一起筛选,以形成 tff.federated_map 的第二个参数。
如您所料,算子 tff.federated_broadcast 只是将数据从服务器传输到客户端。
现在,我们来看看本地训练如何影响系统的平均损失。
End of explanation
SERVER_FLOAT_TYPE = tff.type_at_server(tf.float32)
@tff.federated_computation(SERVER_MODEL_TYPE, SERVER_FLOAT_TYPE,
CLIENT_DATA_TYPE)
def federated_train(model, learning_rate, data):
return tff.federated_mean(
tff.federated_map(local_train, [
tff.federated_broadcast(model),
tff.federated_broadcast(learning_rate), data
]))
Explanation: 确实,和预期一样,损失增加了。为了改进所有用户的模型,我们需要用每个用户自己的数据进行训练。
联合训练
实现联合训练的最简单方法是进行本地训练,然后对模型进行平均。这会用到我们讨论过的相同构建块和模式,如下所示。
End of explanation
model = initial_model
learning_rate = 0.1
for round_num in range(5):
model = federated_train(model, learning_rate, federated_train_data)
learning_rate = learning_rate * 0.9
loss = federated_eval(model, federated_train_data)
print('round {}, loss={}'.format(round_num, loss))
Explanation: 请注意,在 tff.learning 所提供的联合平均的全功能实现中,由于多种原因(例如,裁剪更新范数的能力、用于压缩等),我们更喜欢对模型增量进行平均,而不是对模型进行平均。
让我们通过进行几轮训练并比较前后的平均损失,来看看训练是否有效。
End of explanation
print('initial_model test loss =',
federated_eval(initial_model, federated_test_data))
print('trained_model test loss =', federated_eval(model, federated_test_data))
Explanation: 现在,为了完整起见,我们也在测试数据上运行一下,以确认我们的模型能够很好地泛化。
End of explanation |
9,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021
Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: 📦 Assortment Quality - Product and Brand coverage monitoring
This is not an officially supported Google product.
Assortment Quality is an open-source solution that gives you an overview of the product and brand coverage of your
Google Merchant center account.
This notebook will guide you through the deployment process of this solution.
Clone the Github Repository
First we need to retrieve the code from our Github repo using Git.
Step2: Install dependencies
We can now install all the required project dependencies.
Step3: Project configuration
The next step is to enter all the parameters relative to your project.
project_id
Step4: From APIs & Services > Credentials, click on + CREATE CREDENTIALS and select "OAuth client ID".
In Application type, select "Desktop app" and choose a name. Back on the Credentials page, download the JSON file of your client.
You can then rename it to client_secret.json and upload it in the assortment-quality-for-shopping-ads/ directory.
Run the project
Warning | Python Code:
#@title Copyright 2021 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021
Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!apt-get install git
!rm -rf assortment-quality-for-shopping-ads/
!git clone https://github.com/google/assortment-quality-for-shopping-ads.git
Explanation: 📦 Assortment Quality - Product and Brand coverage monitoring
This is not an officially supported Google product.
Assortment Quality is an open-source solution that gives you an overview of the product and brand coverage of your
Google Merchant center account.
This notebook will guide you through the deployment process of this solution.
Clone the Github Repository
First we need to retrieve the code from our Github repo using Git.
End of explanation
!pip install -r assortment-quality-for-shopping-ads/requirements.txt
Explanation: Install dependencies
We can now install all the required project dependencies.
End of explanation
project_id = "fraperez-assortiment" #@param {type:"string"}
merchant_id = "1234567890" #@param {type:"string"}
region_name = "eu" #@param ["EU", "US"] {allow-input: true}
dataset_name = "merchant_center" #@param {type:"string"}
language = "en-US" #@param {type:"string"}
country = "US" #@param {type:"string"}
expiration_time = 7 #@param {type:"integer"}
Explanation: Project configuration
The next step is to enter all the parameters relative to your project.
project_id: Is the ID of your Google Cloud Platform project. (Please note that it can be different from your project name.)
merchant_id: Is your Google Merchant Center's account ID.
region_name: Is the GCP region where the BigQuery dataset should be created (it will be created there if you have not created it already).
dataset_name: The name of the dataset that will be used to store the tables needed by this solution. This name will also be used to create the dataset if it does not already exist.
language: The language used to display the Shopping category names (ISO format)
country: The country used for market analysis
expiration_time: The number of days after which the table partitions will expire
End of explanation
!cd assortment-quality-for-shopping-ads/ && python main.py -p $project_id -m $merchant_id -r $region_name -d $dataset_name -l $language -c $country -e $expiration_time
Explanation: From APIs & Services > Credentials, click on + CREATE CREDENTIALS and select "OAuth client ID".
In Application type, select "Desktop app" and choose a name. Back on the Credentials page, download the JSON file of your client.
You can then rename it to client_secret.json and upload it in the assortment-quality-for-shopping-ads/ directory.
Run the project
Warning: If this is the first time you ever create a Scheduled Query, you should first create a dummy one from the UI, so that it triggers an OAuth screen. If you try running this script without doing this step, you will run into an authentication error.
End of explanation |
9,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to use Xarray accessor
This example shows how to use the SpatialData accessor to extend the capabilities of xarray.
To extend xarray.DataArray you need only to load also pymepps with "import pymepps". The extensions could be used with the property xarray.DataArray.pp
Step1: To use the full power of pymepps, you have to set a grid. If you load the data with the xarray functions you have to set the grid afterwards. So the next step is to load a NetCDF model file with xarray. There are also pymepps functions to load model data. These are shown in another example.
Step2: The grid definition is inspired by the climate data operators. So you could either generate your own grid (done in this example), or you could load a cdo-conform grid file.
We could see that the grid is a structured latitude and longitude grid with a resolution of 0.25 degree.
Step3: We created our grid dict with the information. Now we have to build the grid. In pymepps you could use the GridBuilder to build the grid with given grid_dict.
Step4: The next step is to set the grid for our dataset. For this we could use the set_grid method of the SpatialAccessor.
Step5: Now we set the grid. It is also possible to normalize the coordinates to allow a consistent processing of the model data.
Step6: We could see that the height_above_ground and the time variable are renamed to a more common name. The ensemble member is set to the default value 'det', while the runtime is set to the missing value None. Now let's plot the data with the xarray internal plot method.
Step7: Let's make use of the SpatialAccessor to slice an area over Germany. We would also transform the temperature unit to degrees Celsius. For this we could use the normal xarray.DataArray mathematical operations. After the transformation, let's plot the temperature.
Step8: If we use an xarray.DataArray method where the DataArray instance is copied, we have to set a new grid. This behaviour can be seen in the following code block. | Python Code:
import matplotlib.pyplot as plt
import xarray as xr
import pymepps
Explanation: How to use Xarray accessor
This example shows how to use the SpatialData accessor to extend the capabilities of xarray.
To extend xarray.DataArray you need only to load also pymepps with "import pymepps". The extensions could be used with the property xarray.DataArray.pp
End of explanation
ds = xr.open_dataset('../data/model/GFS_Global_0p25deg_20161219_0600.nc')
t2m_max = ds['Maximum_temperature_height_above_ground_Mixed_intervals_Maximum']
print(t2m_max)
Explanation: To use the full power of pymepps, you have to set a grid. If you load the data with the xarray functions you have to set the grid afterwards. So the next step is to load a NetCDF model file with xarray. There are also pymepps functions to load model data. These are shown in another example.
End of explanation
grid_dict = dict(
gridtype='lonlat',
xsize=t2m_max['lon'].size,
ysize=t2m_max['lat'].size,
xfirst=t2m_max['lon'].values[0],
xinc=0.25,
yfirst=t2m_max['lat'].values[0],
yinc=-0.25,
)
Explanation: The grid definition is inspired by the climate data operators. So you could either generate your own grid (done in this example), or you could load a cdo-conform grid file.
We could see that the grid is a structured latitude and longitude grid with a resolution of 0.25 degree.
End of explanation
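# For reference (assumption: standard CDO grid-description syntax; the values
# are illustrative, not taken from this dataset), a cdo-conform grid file is
# plain text along these lines:
#
#   gridtype = lonlat
#   xsize    = <number of longitudes>
#   ysize    = <number of latitudes>
#   xfirst   = <first longitude>
#   xinc     = 0.25
#   yfirst   = <first latitude>
#   yinc     = -0.25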
builder = pymepps.GridBuilder(grid_dict)
grid = builder.build_grid()
print(grid)
Explanation: We created our grid dict with the information. Now we have to build the grid. In pymepps you could use the GridBuilder to build the grid with given grid_dict.
End of explanation
t2m_max = t2m_max.pp.set_grid(grid)
print(t2m_max.pp.grid)
Explanation: The next step is to set the grid for our dataset. For this we could use the set_grid method of the SpatialAccessor.
End of explanation
# Before normalization
print('Before:\n{0:s}\n'.format(str(t2m_max)))
t2m_max = t2m_max.pp.normalize_coords()
# After normalization
print('After:\n{0:s}'.format(str(t2m_max)))
Explanation: Now we set the grid. It is also possible to normalize the coordinates to allow a consistent processing of the model data.
End of explanation
t2m_max.plot()
plt.show()
Explanation: We could see that the height_above_ground and the time variable are renamed to a more common name. The ensemble member is set to the default value 'det', while the runtime is set to the missing value None. Now let's plot the data with the xarray internal plot method.
End of explanation
# sphinx_gallery_thumbnail_number = 2
ger_t2m_max = t2m_max.pp.sellonlatbox([5, 55, 15, 45])
# K to deg C
ger_t2m_max -= 273.15
ger_t2m_max.plot()
plt.show()
Explanation: Let's make use of the SpatialAccessor to slice an area over Germany. We would also transform the temperature unit to degrees Celsius. For this we could use the normal xarray.DataArray mathematical operations. After the transformation, let's plot the temperature.
End of explanation
stacked_array = t2m_max.stack(stacked=('runtime', 'validtime'))
# we have to catch the error for sphinx documentation
try:
print(stacked_array.pp.grid)
except TypeError:
print('This DataArray has no grid defined!')
Explanation: If we use an xarray.DataArray method where the DataArray instance is copied, we have to set a new grid. This behaviour can be seen in the following code block.
End of explanation |
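# Sketch (assumption: the spatial coordinates are unchanged by the operation):
# a freshly created DataArray can get its grid back with set_grid, as above.
t2m_max_celsius = (t2m_max - 273.15).pp.set_grid(grid)
print(t2m_max_celsius.pp.grid)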
9,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we demonstrate how to create and modify a Titan graph in python, and then visualize the result using Graphistry's visual graph explorer.
We assume the gremlin server for our Titan graph is hosted locally on port 8182
- This notebook utilizes the python modules aiogremlin and asyncio.
- The GremlinClient class of aiogremlin communicates asynchronously with the gremlin server using websockets via asyncio coroutines.
- This implementation allows you to submit additional requests to the server before any responses are received, which is much faster than synchronous request / response cycles.
- For more information about these modules, please visit
Step1: Functions for graph modification
Step2: Functions for translating a graph to node and edge lists
Step3: Let's start with an empty graph
Step4: And then populate it with the Graphistry team members and some of their relationships
Step5: Now, let's convert our graph database to a pandas DataFrame, so it can be uploaded into our tool
Step6: And color the nodes based on their "type" property
Step7: Finally, let's visualize the results! | Python Code:
import asyncio
import aiogremlin
# Create event loop and initialize gremlin client
loop = asyncio.get_event_loop()
client = aiogremlin.GremlinClient(url='ws://localhost:8182/', loop=loop) # Default url
Explanation: In this notebook, we demonstrate how to create and modify a Titan graph in python, and then visualize the result using Graphistry's visual graph explorer.
We assume the gremlin server for our Titan graph is hosted locally on port 8182
- This notebook utilizes the python modules aiogremlin and asyncio.
- The GremlinClient class of aiogremlin communicates asynchronously with the gremlin server using websockets via asyncio coroutines.
- This implementation allows you to submit additional requests to the server before any responses are received, which is much faster than synchronous request / response cycles.
- For more information about these modules, please visit:
- aiogremlin: http://aiogremlin.readthedocs.org/en/latest/index.html
- asyncio: https://pypi.python.org/pypi/asyncio
End of explanation
@asyncio.coroutine
def add_vertex_routine(name, label):
yield from client.execute("graph.addVertex(label, l, 'name', n)", bindings={"l":label, "n":name})
def add_vertex(name, label):
loop.run_until_complete(add_vertex_routine(name, label))
@asyncio.coroutine
def add_relationship_routine(who, relationship, whom):
yield from client.execute("g.V().has('name', p1).next().addEdge(r, g.V().has('name', p2).next())", bindings={"p1":who, "p2":whom, "r":relationship})
def add_relationship(who, relationship, whom):
loop.run_until_complete(add_relationship_routine(who, relationship, whom))
@asyncio.coroutine
def remove_all_vertices_routine():
resp = yield from client.submit("g.V()")
results = []
while True:
msg = yield from resp.stream.read();
if msg is None:
break
if msg.data is None:
break
for vertex in msg.data:
yield from client.submit("g.V(" + str(vertex['id']) + ").next().remove()")
def remove_all_vertices():
results = loop.run_until_complete(remove_all_vertices_routine())
@asyncio.coroutine
def remove_vertex_routine(name):
return client.execute("g.V().has('name', n).next().remove()", bindings={"n":name})
def remove_vertex(name):
return loop.run_until_complete(remove_vertex_routine(name));
Explanation: Functions for graph modification
End of explanation
@asyncio.coroutine
def get_node_list_routine():
resp = yield from client.submit("g.V().as('node')\
.label().as('type')\
.select('node').values('name').as('name')\
.select('name', 'type')")
results = [];
while True:
msg = yield from resp.stream.read();
if msg is None:
break;
if msg.data is None:
break;
else:
results.extend(msg.data)
return results
def get_node_list():
results = loop.run_until_complete(get_node_list_routine())
return results
@asyncio.coroutine
def get_edge_list_routine():
resp = yield from client.submit("g.E().as('edge')\
.label().as('relationship')\
.select('edge').outV().values('name').as('source')\
.select('edge').inV().values('name').as('dest')\
.select('source', 'relationship', 'dest')")
results = [];
while True:
msg = yield from resp.stream.read();
if msg is None:
break;
if msg.data is None:
break;
else:
results.extend(msg.data)
return results
def get_edge_list():
results = loop.run_until_complete(get_edge_list_routine())
return results
Explanation: Functions for translating a graph to node and edge lists:
- Currently, our API can only upload data from a pandas DataFrame, but we plan to implement more flexible uploads in the future.
- For now, we can rely on the following functions to create the necessary DataFrames from our graph.
End of explanation
remove_all_vertices()
Explanation: Let's start with an empty graph:
End of explanation
add_vertex("Paden", "Person")
add_vertex("Thibaud", "Person")
add_vertex("Leo", "Person")
add_vertex("Matt", "Person")
add_vertex("Brian", "Person")
add_vertex("Quinn", "Person")
add_vertex("Paul", "Person")
add_vertex("Lee", "Person")
add_vertex("San Francisco", "Place")
add_vertex("Oakland", "Place")
add_vertex("Berkeley", "Place")
add_vertex("Turkey", "Thing")
add_vertex("Rocks", "Thing")
add_vertex("Motorcycles", "Thing")
add_relationship("Paden", "lives in", "Oakland")
add_relationship("Quinn", "lives in", "Oakland")
add_relationship("Thibaud", "lives in", "Berkeley")
add_relationship("Matt", "lives in", "Berkeley")
add_relationship("Leo", "lives in", "San Francisco")
add_relationship("Paul", "lives in", "San Francisco")
add_relationship("Brian", "lives in", "Oakland")
add_relationship("Paden", "eats", "Turkey")
add_relationship("Quinn", "cooks", "Turkey")
add_relationship("Thibaud", "climbs", "Rocks")
add_relationship("Matt", "climbs", "Rocks")
add_relationship("Brian", "rides", "Motorcycles")
add_vertex("Graphistry", "Work")
add_relationship("Paden", "works at", "Graphistry")
add_relationship("Thibaud", "works at", "Graphistry")
add_relationship("Matt", "co-founded", "Graphistry")
add_relationship("Leo", "co-founded", "Graphistry")
add_relationship("Paul", "works at", "Graphistry")
add_relationship("Quinn", "works at", "Graphistry")
add_relationship("Brian", "works at", "Graphistry")
Explanation: And then populate it with the Graphistry team members and some of their relationships:
End of explanation
import pandas
nodes = pandas.DataFrame(get_node_list())
edges = pandas.DataFrame(get_edge_list())
Explanation: Now, let's convert our graph database to a pandas DataFrame, so it can be uploaded into our tool:
End of explanation
# Assign different color to each type in a round robin fashion.
# For more information and coloring options please visit: https://graphistry.github.io/docs/legacy/api/0.9.2/api.html
unique_types = list(nodes['type'].unique())
nodes['color'] = nodes['type'].apply(lambda x: unique_types.index(x) % 11)
nodes
edges
Explanation: And color the nodes based on their "type" property:
End of explanation
import graphistry
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
g = graphistry.bind(source="source", destination="dest", node='name', point_color='color', edge_title='relationship')
g.plot(edges, nodes)
Explanation: Finally, let's visualize the results!
End of explanation |
9,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Curious Case of Economic Growth in Asia
A research project at NYU's Stern School of Business.
Written by Hanjo Kim ([email protected]), Simon (Seon Mok) Lee ([email protected]) under the direction of David Backus, May 2016.
Abstract
During the latter half of the 20th century, some countries in Asia saw tremendous growth, but some did not. In particular, South Korea was one of the poorest countries in the world by 1960, but its economy grew rapidly and became the 10th biggest economy in the world. Its fast development has often been attributed to strong export growth over the last few decades. In this light, we explore the reasons for such divergence in their economic development by examining their exports data.
Developed vs. Developing Countries in Asia
We first picked 3 developed countries--Japan, Singapore, and South Korea--and 3 developing countries--Thailand, the Philippines, and Vietnam.
We begin by examining how the overall economies of these countries have been developing by looking at the data for GDP per capita adjusted for Purchasing Power Parity.
Group 1 consists of developed countries--Japan, Korea, and Singapore.
Group 2 consists of developing countries--Thailand, Philippines, and Vietnam.
Step1: In Figure 1, we have plotted Gross Domestic Product (GDP) per capita adjusted for Purchasing Power Parity (PPP). We see from here that there is a clear difference between the top 3 countries (Singapore, Japan, Korea) and the bottom 3 countries (Thailand, Vietnam, Philippines). In particular, Korea began as a member of Group 2 but gradually merged into Group 1. We will now look at the data for the percent change of exports for each country since 1980.
Step2: In Figure 2, we have plotted 3 time series that plot the Percent Change in the Volume of Exports of three developed Asian countries (Japan, Korea, Singapore).
First, Japan shows a lower increase in exports compared to that of Korea and Singapore. This is probably because Japan's economy was already developed before the 1990s, while Korea and Singapore were still developing.
Second, after the Asian Financial Crisis in 1997, the three economies begin to show almost identical patterns. This may imply that Korea and Singapore have caught up in terms of economic development and now share a similar economic climate. For example, if Singapore does poorly in terms of exports in one year, it may be reasonable to infer that Japan and Korea may follow a similar path.
Furthermore, we can infer that since Japan has influence over the world economy, the global financial crisis in 2008 impacted them much harder than other countries in this category, which is why their exports fluctuated more than the exports from Korea and Singapore during the same period.
Now we consider the exports data for Group 2.
Step3: In Figure 3, the first thing to notice is the wild fluctuations of Vietnam before 1995, because they were just beginning to open their economy, and the noticeable decline of the Philippines during the Asian financial crisis in 1997.
There are a few interesting observations about the data. First, their exports were not as negatively affected by the financial crisis in 2008 as the exports of the developed economies; Figure 3 suggests that the 2008 crisis had almost no impact on their exports. Second, overall, Figure 3 exhibits that exports change less in magnitude than for the countries shown in Figure 2. This may be because these countries are less dependent on trade, or because the items they export are not sensitive to the sentiments of the global economy, i.e. basic necessities.
Based on these findings, we can see that the developed countries in Figure 2 are more heavily dependent on the global economic climate while the developing countries in Figure 3 are much less so. It seems that the developed countries are more sensitive to global demand.
Now we arrive at another question | Python Code:
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import numpy as np # foundation for Pandas
%matplotlib inline
'''
We downloaded the data from our sources (specified at the bottom) and uploaded them to our GitHub accounts,
so that anyone can run the code and get the same results without having to download it and set the directory.
'''
url3='https://raw.githubusercontent.com/hjkim1304/databootcampdata/master/weoppp.csv'
df_gdp_usd=pd.read_csv(url3)
fig,ax=plt.subplots(figsize=(18,5))
new_df_gdp_usd=df_gdp_usd.head(n=6).drop(['Units','Scale','Estimates Start After'],axis=1).set_index('Country').T
#new_df_gdp_usd['Japan']=new_df_gdp_usd['Japan'].str.replace(',','').astype(float)
#new_df_gdp_usd['Korea']=new_df_gdp_usd['Korea'].str.replace(',','').astype(float)
#new_df_gdp_usd['Philippines']=new_df_gdp_usd['Philippines'].astype(float)
for i in (new_df_gdp_usd.columns):
new_df_gdp_usd[i]=new_df_gdp_usd[i].str.replace(',','').astype(float)
new_df_gdp_usd.plot(ax=ax)
ax.set_title('Figure 1: GDP per capita adjusted for PPP')
Explanation: Curious Case of Economic Growth in Asia
A research project at NYU's Stern School of Business.
Written by Hanjo Kim ([email protected]), Simon (Seon Mok) Lee ([email protected]) under the direction of David Backus, May 2016.
Abstract
During the latter half of the 20th century, some countries in Asia saw tremendous growth, but some did not. In particular, South Korea was one of the poorest countries in the world by 1960, but its economy grew rapidly and became the 10th biggest economy in the world. Its fast development has often been attributed to strong export growth over the last few decades. In this light, we explore the reasons for such divergence in their economic development by examining their exports data.
Developed vs. Developing Countries in Asia
We first picked 3 developed countries--Japan, Singapore, and South Korea--and 3 developing countries--Thailand, the Philippines, and Vietnam.
We begin by examining how the overall economies of these countries have been developing by looking at the data for GDP per capita adjusted for Purchasing Power Parity.
Group 1 consists of developed countries--Japan, Korea, and Singapore.
Group 2 consists of developing countries--Thailand, Philippines, and Vietnam.
End of explanation
url='https://raw.githubusercontent.com/hjkim1304/databootcampdata/master/weo_data.csv'
df_1=pd.read_csv(url)
df_1=df_1.drop(['Scale','Country/Series-specific Notes','Estimates Start After'],axis=1).head(n=12).set_index('Country')
df_imports=df_1.iloc[[0,2,4,6,8,10]]
df_exports=df_1.iloc[[1,3,5,7,9,11]]
df_exports_group1=df_1.iloc[[1,3,7]]
df_exports_group2=df_1.iloc[[5,9,11]]
df_imports_group1=df_1.iloc[[0,2,6]]
df_imports_group2=df_1.iloc[[4,8,10]]
fig ,ax=plt.subplots(figsize=(18,8))
df_exports_group1.drop(['Subject Descriptor','Units'],axis=1).T.plot(ax=ax,color=['b','g','c'])
ax.set_title("Figure 2: Percent Change of Exports (Group 1)")
Explanation: In Figure 1, we have plotted Gross Domestic Product (GDP) per capita adjusted for Purchasing Power Parity (PPP). We see from here that there is a clear difference between the top 3 countries (Singapore, Japan, Korea) and the bottom 3 countries (Thailand, Vietnam, Philippines). In particular, Korea began as a member of Group 2 but gradually merged into Group 1. We will now look at the data for the percent change of exports for each country since 1980.
End of explanation
fig ,ax=plt.subplots(figsize=(18,8))
df_exports_group2.drop(['Subject Descriptor','Units'],axis=1).T.plot(ax=ax, color=['r','m','y'])
ax.set_title("Figure 3: Percent Change of Exports (Group 2)")
Explanation: In Figure 2, we have plotted 3 time series that plot the Percent Change in the Volume of Exports of three developed Asian countries (Japan, Korea, Singapore).
First, Japan shows a lower increase in exports compared to that of Korea and Singapore. This is probably because Japan's economy was already developed before the 1990s, while Korea and Singapore were still developing.
Second, after the Asian Financial Crisis in 1997, the three economies begin to show almost identical patterns. This may imply that Korea and Singapore have caught up in terms of economic development and now share a similar economic climate. For example, if Singapore does poorly in terms of exports in one year, it may be reasonable to infer that Japan and Korea may follow a similar path.
Furthermore, we can infer that since Japan has influence over the world economy, the global financial crisis in 2008 impacted them much harder than other countries in this category, which is why their exports fluctuated more than the exports from Korea and Singapore during the same period.
Now we consider the exports data for Group 2.
End of explanation
url4='https://raw.githubusercontent.com/hjkim1304/databootcampdata/master/tradepercentoriginal.csv'
df_gdp_trade=pd.read_csv(url4,na_values=['..'])
df_gdp_trade=df_gdp_trade.drop(['Series Name','Series Code','Country Code','2015 [YR2015]'],axis=1).head(n=6).set_index('Country Name')
df_gdp_trade=df_gdp_trade.rename(columns={'1980 [YR1980]':'1980', '1981 [YR1981]':'1981', '1982 [YR1982]':'1982', '1983 [YR1983]':'1983',
'1984 [YR1984]':'1984', '1985 [YR1985]':'1985', '1986 [YR1986]':'1986', '1987 [YR1987]':'1987',
'1988 [YR1988]':'1988', '1989 [YR1989]':'1989', '1990 [YR1990]':'1990', '1991 [YR1991]':'1991',
'1992 [YR1992]':'1992', '1993 [YR1993]':'1993', '1994 [YR1994]':'1994', '1995 [YR1995]':'1995',
'1996 [YR1996]':'1996', '1997 [YR1997]':'1997', '1998 [YR1998]':'1998', '1999 [YR1999]':'1999',
'2000 [YR2000]':'2000', '2001 [YR2001]':'2001', '2002 [YR2002]':'2002', '2003 [YR2003]':'2003',
'2004 [YR2004]':'2004', '2005 [YR2005]':'2005', '2006 [YR2006]':'2006', '2007 [YR2007]':'2007',
'2008 [YR2008]':'2008', '2009 [YR2009]':'2009', '2010 [YR2010]':'2010', '2011 [YR2011]':'2011',
'2012 [YR2012]':'2012', '2013 [YR2013]':'2013', '2014 [YR2014]':'2014'})
df_gdp_trade=df_gdp_trade.T
#df_gdp_trade.dtypes
fig,ax=plt.subplots(figsize=(18,8))
df_gdp_trade.plot(ax=ax, color=['g','b','r','y','m','c'])
ax.set_title('Figure 4: Trade percent of GDP')
Explanation: In Figure 3, the first things to notice are the wild fluctuations of Vietnam before 1995, when it was just beginning to open its economy, and the noticeable decline of the Philippines during the Asian financial crisis in 1997.
There are a few interesting observations about the data. First, their exports were not as negatively affected by the financial crisis in 2008 as the exports of the developed economies; Figure 3 suggests that the 2008 crisis had almost no impact on their exports. Second, overall, Figure 3 shows that their exports change less in magnitude than those of the countries shown in Figure 2. This may be because they are less dependent on trade, or because the items they export are not sensitive to the sentiment of the global economy, i.e. basic necessities.
Based on these findings, we can see that the developed countries in Figure 2 are more heavily dependent on global economic climate while the developing countries in Figure 3 are much less so. It seems that the developed countries are more sensitive to the global demand.
Now we arrive at another question:
What accounts for the difference in the magnitude of fluctuations between Group 1 and Group 2?
Is it simply because of the difference in the amount of exports? Or are there more qualitative reasons?
In order to explore the question, we examine the data for trade as a percent of GDP for each country.
End of explanation |
9,375 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Given two sets of points in n-dimensional space, how can one map points from one set to the other, such that each point is only used once and the total Manhattan distance between the pairs of points is minimized? | Problem:
import numpy as np
import scipy.spatial
import scipy.optimize
points1 = np.array([(x, y) for x in np.linspace(-1,1,7) for y in np.linspace(-1,1,7)])
N = points1.shape[0]
points2 = 2*np.random.rand(N,2)-1
C = scipy.spatial.distance.cdist(points1, points2, metric='minkowski', p=1)
_, result = scipy.optimize.linear_sum_assignment(C) |
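A short optional follow-up (a sketch using the arrays defined above): the minimized total Manhattan distance of the optimal one-to-one mapping can be read off the cost matrix with the returned column indices.
# total Manhattan distance achieved by the optimal assignment
minimal_total_distance = C[np.arange(N), result].sum()
print(minimal_total_distance)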
9,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
Step1: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is
Step3: In this equation
Step4: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
Step5: Use interact with plot_fermidist to explore the distribution | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
Image('fermidist.png')
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
def fermidist(energy, mu, kT):
"""Compute the Fermi distribution at energy, mu and kT."""
F = 1/(np.exp((energy-mu)/kT)+1)
return F
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon) = \frac{1}{e^{(\epsilon-\mu)/kT}+1}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
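A quick optional sanity check (a small sketch, not one of the graded cells): at $\epsilon = \mu$ the distribution should equal exactly 0.5, and far below the chemical potential (with small kT) it should approach 1.
print(fermidist(1.0, 1.0, 10.0))    # energy == mu  ->  0.5
print(fermidist(0.0, 5.0, 0.1))     # energy far below mu, small kT  ->  ~1.0
print(fermidist(np.array([0.0, 5.0, 10.0]), 5.0, 1.0))  # also works on arrays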
def plot_fermidist(mu, kT):
e = np.linspace(0,10.0,100)
fermdata = fermidist(e,mu,kT)
f = plt.figure(figsize=(10,7))
plt.plot(e,fermdata, color='red')
plt.xlim(0,10)
plt.ylim(0,1)
plt.ylabel('Fermi distribution')
plt.xlabel('single particle energy')
plt.title('Fermi distribution vs. single particle energy')
plt.tick_params(top=False,right=False, direction = 'out')
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
interact(plot_fermidist, mu=(0.0,5.0,0.1), kT=(0.1,10.0,0.1));
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation |
9,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hi-C quality check
The file is organized in 4 lines per read
Step1: Count the number of lines in the file (4 times the number of reads)
Step2: There are 400 M lines in the file, which means 100M reads in total.
Quality check before mapping
Check restriction-enzyme used
Most important to analyze Hi-C dataset is the restriction enzyme used in the experiment. TADbit provides a simple function to check for it
Step3: Plot PHRED score and ligation/digestion sites
In order to quickly assess the quality of the HiC experiment (before mapping), and given that we know the restriction enzyme used, we can check the proportion of reads with ligation sites as well as the number of reads starting by a cut-site.
These numbers will give us a first hint on the efficiencies of two critical steps in the HiC experiment, the digestion and the ligation.
Step4: The plot on the top represents the typical per nucleotide quality profile of NGS reads, with, in addition, the proportion of N found at each position.
The second plot, is specific to Hi-C experiments. Given a restriction enzyme the function searches for the presence of ligation sites and of undigested restriction enzyme sites. Depending on the enzyme used the function can differentiate between dangling-ends and undigested sites.
From these proportions some quality statistics can be inferred before mapping | Python Code:
%%bash
dsrc d -s FASTQs/mouse_B_rep1_1.fastq.dsrc | head -n 8
Explanation: Hi-C quality check
The file is organized in 4 lines per read:
1. starting with @, the header of the DNA sequence with the read id (plus optional fields)
2. the DNA sequence
3. starting with +, the header of the sequence quality (this line could be either a repetition of first line or empty)
4. the sequence quality (it is provided as PHRED score and it is not human readable. Check https://en.wikipedia.org/wiki/Phred_quality_score for more details)
End of explanation
%%bash
dsrc d -s FASTQs/mouse_B_rep1_1.fastq.dsrc | wc -l
Explanation: Count the number of lines in the file (4 times the number of reads)
End of explanation
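A small optional sketch (not part of the original workflow): since the FASTQ format stores 4 lines per read, the read count follows directly from the line count.
import subprocess
# decompress, count lines, and divide by 4 to get the number of reads
n_lines = int(subprocess.check_output(
    "dsrc d -s FASTQs/mouse_B_rep1_1.fastq.dsrc | wc -l", shell=True))
print('number of reads:', n_lines // 4)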
from pytadbit.mapping.restriction_enzymes import identify_re
pat, enz, pv = identify_re('FASTQs/mouse_B_rep1_1.fastq.dsrc')
print('- Most probable pattern: %s, matching enzymes: %s' % (pat, ','.join(enz)))
Explanation: There are 400 M lines in the file, which means 100M reads in total.
Quality check before mapping
Check restriction-enzyme used
The most important piece of information for analyzing a Hi-C dataset is the restriction enzyme used in the experiment. TADbit provides a simple function to check for it:
End of explanation
from pytadbit.utils.fastq_utils import quality_plot
r_enz = 'MboI'
cell = 'B'
repl = 'rep1'
Explanation: Plot PHRED score and ligation/digestion sites
In order to quickly assess the quality of the HiC experiment (before mapping), and given that we know the restriction enzyme used, we can check the proportion of reads with ligation sites as well as the number of reads starting by a cut-site.
These numbers will give us a first hint on the efficiencies of two critical steps in the HiC experiment, the digestion and the ligation.
End of explanation
quality_plot('FASTQs/mouse_{0}_{1}_1.fastq.dsrc'.format(cell, repl), r_enz=r_enz, nreads=1000000)
Explanation: The plot on the top represents the typical per nucleotide quality profile of NGS reads, with, in addition, the proportion of N found at each position.
The second plot is specific to Hi-C experiments. Given a restriction enzyme, the function searches for the presence of ligation sites and of undigested restriction enzyme sites. Depending on the enzyme used, the function can differentiate between dangling-ends and undigested sites.
From these proportions some quality statistics can be inferred before mapping:
- The PHRED score and the number of unidentified nucleotides (Ns) in the read sequence, which are routinely computed to assess the quality of high-throughput sequencing experiments.
- The numbers of undigested and unligated RE sites per-nucleotide along the read to assess the quality of the Hi-C experiment.
- The overall percentage of digested sites, which relates directly to the RE efficiency.
- The percentage of non-ligated digested (dangling-ends), which relates to the ligation efficiency.
- The percentage of read-ends with a ligation site, which is negatively correlated with the percentage of dangling-ends.
End of explanation |
9,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix Methods Example - Frame 1
This is the same frame as solved using the method of slope-deflection
here. All of the data are provided
in CSV form directly in the cells, below.
Step1:
Step2: Use very large areas so that axial deformations will be very small so as to more closely replicate
the slope deflection analysis.
Step3: Compare to slope-deflection solution
Step4: and the lateral deflection of nodes $b$ and $c$ computed by slope-deflection, in $mm$, was
Step5: which agrees with the displayed result, above. (note that units in the slope-deflection example were
$kN$ and $m$ and here they are $N$ and $mm$).
$P-\Delta$ Analysis
We wouldn't expect much difference as sidesway is pretty small | Python Code:
from __future__ import division, print_function
from IPython import display
import salib.nbloader # so that we can directly import other notebooks
import Frame2D_v03 as f2d
Explanation: Matrix Methods Example - Frame 1
This is the same frame as solved using the method of slope-deflection
here. All of the data are provided
in CSV form directly in the cells, below.
End of explanation
frame = f2d.Frame2D()
%%frame_data frame nodes
ID,X,Y
a,0,0
b,0,3000
c,6000,3000
d,6000,1000
%%frame_data frame members
ID,NODEJ,NODEK
ab,a,b
bc,b,c
cd,c,d
%%frame_data frame supports
ID,C0,C1,C2
a,FX,FY,
d,FX,FY,MZ
%%frame_data frame releases
ID,R
Explanation:
End of explanation
%%frame_data frame properties
ID,SIZE,Ix,A
bc,,200E6,100E10
ab,,100E6,
cd,,,
%%frame_data frame node_loads
ID,DIRN,F
b,FX,60000
%%frame_data frame member_loads
ID,TYPE,W1,W2,A,B,C
bc,UDL,-36,,,,
frame.doall()
Explanation: Use very large areas so that axial deformations will be very small so as to more closely replicate
the slope deflection analysis.
End of explanation
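To see why this works, here is a rough back-of-the-envelope sketch (assuming E = 200000 MPa, consistent with the EI calculation below, and a representative 100 kN axial force): with A = 100E10 mm² the axial stiffness is enormous, so axial shortening is negligible compared to bending deflections.
E = 200000.   # MPa, assumed modulus (same value used for EI below)
A = 100E10    # mm^2, from the properties table above
L = 6000.     # mm, length of member bc
1e5 * L / (E * A)   # axial deformation under a 100 kN force, in mm - effectively zero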
EI = 200000. * 100E6 / (1000*1000**2)
EI
Explanation: Compare to slope-deflection solution:
The solutions as given in the
slope-deflection example:
Member end forces:
{'Mab': 0,
'Mba': 54.36,
'Mbc': -54.36,
'Mcb': 97.02,
'Mcd': -97.02,
'Mdc': -59.22,
'Vab': -18.12,
'Vdc': 78.12}
Except for a sign change, these seem consistent. We might have a different sign convention here - I'll check into that.
Reactions:
[v.subs(soln).n(4) for v in [Ra,Ha,Rd,Hd,Md]]
[100.9,18.12,115.1,−78.12,−59.22]
and except for sign, these are OK as well.
As for deflection, in $kN m^2$, the product $EI$ used here is:
End of explanation
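A quick equilibrium check on the quoted slope-deflection reactions (a sketch using the numbers above, in kN): the vertical reactions should balance the 36 kN/m UDL on the 6 m span bc, and the horizontal reactions should balance the 60 kN load applied at b.
Ra, Ha, Rd, Hd = 100.9, 18.12, 115.1, -78.12   # reactions quoted above (kN)
print(Ra + Rd, 36 * 6)    # vertical:   216.0 vs 216 (total UDL on span bc)
print(Ha + Hd + 60)       # horizontal: ~0 once the 60 kN applied load is included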
(3528/(247*EI)) * 1000
Explanation: and the lateral deflection of nodes $b$ and $c$ computed by slope-deflection, in $mm$, was:
End of explanation
frame.doall(pdelta=True)
Explanation: which agrees with the displayed result, above. (note that units in the slope-deflection example were
$kN$ and $m$ and here they are $N$ and $mm$).
$P-\Delta$ Analysis
We wouldn't expect much difference as sidesway is pretty small:
End of explanation |
9,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reusable Plates
First, we show how to create a reusable plate and fill it with materials.
Step1: Once a plate is created, it can be used in multiple experiments as demonstrated below.
This process will generate N_experiments picklists, all using a single source plate. In the initial few experiments, the initially filled wells will be drained. Instructions will then appear asking the user to fill additional wells. After running many times, the source plate will run out of wells and an error will be thrown.
Step2: Refilling Wells
Sometimes, especially with valuable reagents, it makes sense to refill wells in a source plate. This can be done provided that the same material with the same concentration is put into an old well. Below, we refill all the wells A1, A2, and A3 filled to their maximum amounts. If one attempts to overfill a well or add a different material (or a different concentration of the same material) to a well which has already been filled, an error is thrown. | Python Code:
import murraylab_tools.echo as mt_echo
import os.path
import numpy as np
# Relevant input and output files. Check these out for examples of input file format.
dilution_inputs = os.path.join("reusable_plate_examples", "inputs")
dilution_outputs = os.path.join("reusable_plate_examples", "outputs")
plate_file = os.path.join(dilution_inputs, "reusable_plate.dat") # Keeps track of wells used
output_name = os.path.join(dilution_outputs, "reusable_plate_example") # Output
#Create a Reusable Plate by setting reuse_wells=True
#NOTE: This can also be done by hand by creating a CSV with the columns: well,name,concentration,volume,date
reusable_plate = mt_echo.SourcePlate(filename = plate_file, reuse_wells = True)
#We will add three arbitrary materials to the plate CSV
#Create three materials
concentrations = [200, 500]
mat1 = mt_echo.EchoSourceMaterial('Mat1', concentrations[0], 0, reusable_plate)
mat2 = mt_echo.EchoSourceMaterial('Mat2', concentrations[1], 0, reusable_plate)
water= mt_echo.EchoSourceMaterial('water', 1, 0, reusable_plate)
print("reusable_plate.materials_to_add", reusable_plate.materials_to_add)
try:
reusable_plate.add_material_to_well("A1", mat1, mt_echo.max_volume)
reusable_plate.add_material_to_well("A2", mat2, mt_echo.max_volume)
except ValueError:
print("Suppressing an Error due to overfull plates and filling to the maximum allowed instead.")
reusable_plate.add_material_to_well("A1", mat1, mt_echo.max_volume-reusable_plate.wells_used["A1"][2])
reusable_plate.add_material_to_well("A2", mat2, mt_echo.max_volume-reusable_plate.wells_used["A2"][2])
print("reusable_plate.materials_to_add", reusable_plate.materials_to_add)
reusable_plate.write_to_file() #Save the file
print("reusable_plate.materials_to_add", reusable_plate.materials_to_add)
print("Plate Volume Variables (nL):")
print("Plate max fill volume:", mt_echo.max_volume)
print("Plate dead volume:", mt_echo.dead_volume)
print("Max Usable volume of each material", mt_echo.max_volume-mt_echo.dead_volume)
Explanation: Reusable Plates
First, we show how to create a reusable plate and fill it with materials.
End of explanation
echo_calc = mt_echo.EchoRun(plate = reusable_plate)
N_experiments = 3
for n in range(N_experiments):
#Generate a random experiment in three wells
random_concentrations = np.random.rand(2)*100
well1 = "A"+str(n+1)
well2 = "B"+str(n+1)
echo_calc.add_material_to_well(mat1, random_concentrations[0], well1)
echo_calc.add_material_to_well(mat2, random_concentrations[1], well1)
echo_calc.fill_well_with(well1, water) #Fill the rest of well1 with water
echo_calc.add_material_to_well(mat1, random_concentrations[1], well2)
echo_calc.add_material_to_well(mat2, random_concentrations[0], well2)
echo_calc.fill_well_with(well2, water) #Fill the rest of well2 with water
# Write results
echo_calc.write_picklist(output_name+str(n))
print("Source Plate Volumes:")
for well in reusable_plate.wells_used:
name, conc, vol, date = reusable_plate.wells_used[well]
print("\tWell:", well, "[", name,"]=", conc, "volume = ", vol, "filled on", date)
Explanation: Once a plate is created, it can be used in multiple experiments as demonstrated below.
This process will generate N_experiments picklists, all using a single source plate. In the initial few experiments, the initially filled wells will be drained. Instructions will then appear asking the user to fill additional wells. After running many times, the source plate will run out of wells and an error will be thrown.
End of explanation
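Before refilling, it can help to see how much usable volume is left in each well (a minimal sketch based on the wells_used layout and the dead volume printed earlier).
# usable volume remaining per well = current volume minus the unusable dead volume
for well in reusable_plate.wells_used:
    name, conc, vol, date = reusable_plate.wells_used[well]
    usable = max(vol - mt_echo.dead_volume, 0)
    print("\tWell:", well, "[", name, "] usable volume (nL):", usable)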
materials = [mat1, mat2]
wells = ["A1", "A2"]
print("Initial Well Volumes:")
for well in ["A1", "A2"]:
name, conc, vol, date = reusable_plate.wells_used[well]
print("\tWell:", well, "[", name,"]=", conc, "volume = ", vol, "filled on", date)
print("\nRefilling...")
#Refill wells
for i in range(len(wells)):
well = wells[i]
mat = materials[i]
name, conc, vol, date = reusable_plate.wells_used[well]
reusable_plate.add_material_to_well(well, mat, mt_echo.max_volume - vol)
print("\nFinal Volumes after being refilled:")
for well in ["A1", "A2"]:
name, conc, vol, date = reusable_plate.wells_used[well]
print("\tWell:", well, "[", name,"]=", conc, "volume = ", vol, "filled on", date)
reusable_plate.write_to_file()
Explanation: Refilling Wells
Sometimes, especially with valuable reagents, it makes sense to refill wells in a source plate. This can be done provided that the same material at the same concentration is put into an old well. Below, we refill wells A1 and A2 to their maximum volumes. If one attempts to overfill a well or add a different material (or a different concentration of the same material) to a well which has already been filled, an error is thrown.
End of explanation |
9,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wright-Fisher model of mutation, selection and random genetic drift
A Wright-Fisher model has a fixed population size N and discrete non-overlapping generations. Each generation, each individual has a random number of offspring whose mean is proportional to the individual's fitness. Each generation, mutation may occur. Mutations may increase or decrease individual's fitness, which affects the chances of that individual's offspring in subsequent generations.
Here, I'm using a fitness model where some proportion of the time a mutation will have a fixed fitness effect, increasing or decreasing fitness by a fixed amount.
Setup
Step1: Make population dynamic model
Basic parameters
Step2: Population of haplotypes maps to counts and fitnesses
Store this as a lightweight Dictionary that maps a string to a count. All the sequences together will have count N.
Step3: Map haplotype string to fitness float.
Step4: Add mutation
Step5: Mutations have fitness effects
Step6: If a mutation event creates a new haplotype, assign it a random fitness.
Step7: Genetic drift and fitness affect which haplotypes make it to the next generation
Fitness weights the multinomial draw.
Step8: Combine and iterate
Step9: Record
We want to keep a record of past population frequencies to understand dynamics through time. At each step in the simulation, we append to a history object.
Step10: Analyze trajectories
Calculate diversity
Step11: Plot diversity
Step12: Analyze and plot divergence
Step13: Plot haplotype trajectories
Step14: Plot SNP trajectories
Step15: Find all variable sites.
Step16: Scale up
Here, we scale up to more interesting parameter values.
Step17: In this case there are $\mu$ = 0.01 mutations entering the population every generation.
Step18: And the population genetic parameter $\theta$, which equals $2N\mu$, is 1. | Python Code:
import numpy as np
import itertools
Explanation: Wright-Fisher model of mutation, selection and random genetic drift
A Wright-Fisher model has a fixed population size N and discrete non-overlapping generations. Each generation, each individual has a random number of offspring whose mean is proportional to the individual's fitness. Each generation, mutation may occur. Mutations may increase or decrease an individual's fitness, which affects the chances of that individual's offspring in subsequent generations.
Here, I'm using a fitness model where some proportion of the time a mutation will have a fixed fitness effect, increasing or decreasing fitness by a fixed amount.
Setup
End of explanation
pop_size = 100
seq_length = 10
alphabet = ['A', 'T']
base_haplotype = "AAAAAAAAAA"
fitness_effect = 1.1 # fitness effect if a functional mutation occurs
fitness_chance = 0.1 # chance that a mutation has a fitness effect
Explanation: Make population dynamic model
Basic parameters
End of explanation
pop = {}
pop["AAAAAAAAAA"] = 40
pop["AAATAAAAAA"] = 30
pop["AATTTAAAAA"] = 30
Explanation: Population of haplotypes maps to counts and fitnesses
Store this as a lightweight Dictionary that maps a string to a count. All the sequences together will have count N.
End of explanation
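A quick consistency check (a one-line sketch): the haplotype counts should always sum to the population size.
assert sum(pop.values()) == pop_size   # 40 + 30 + 30 == 100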
fitness = {}
fitness["AAAAAAAAAA"] = 1.0
fitness["AAATAAAAAA"] = 1.05
fitness["AATTTAAAAA"] = 1.10
pop["AAATAAAAAA"]
fitness["AAATAAAAAA"]
Explanation: Map haplotype string to fitness float.
End of explanation
mutation_rate = 0.005 # per gen per individual per site
def get_mutation_count():
mean = mutation_rate * pop_size * seq_length
return np.random.poisson(mean)
def get_random_haplotype():
haplotypes = pop.keys()
frequencies = [x/float(pop_size) for x in pop.values()]
total = sum(frequencies)
frequencies = [x / total for x in frequencies]
return np.random.choice(haplotypes, p=frequencies)
def get_mutant(haplotype):
site = np.random.randint(seq_length)
possible_mutations = list(alphabet)
possible_mutations.remove(haplotype[site])
mutation = np.random.choice(possible_mutations)
new_haplotype = haplotype[:site] + mutation + haplotype[site+1:]
return new_haplotype
Explanation: Add mutation
End of explanation
def get_fitness(haplotype):
old_fitness = fitness[haplotype]
if (np.random.random() < fitness_chance):
return old_fitness * fitness_effect
else:
return old_fitness
get_fitness("AAAAAAAAAA")
Explanation: Mutations have fitness effects
End of explanation
def mutation_event():
haplotype = get_random_haplotype()
if pop[haplotype] > 1:
pop[haplotype] -= 1
new_haplotype = get_mutant(haplotype)
if new_haplotype in pop:
pop[new_haplotype] += 1
else:
pop[new_haplotype] = 1
if new_haplotype not in fitness:
fitness[new_haplotype] = get_fitness(haplotype)
mutation_event()
pop
fitness
def mutation_step():
mutation_count = get_mutation_count()
for i in range(mutation_count):
mutation_event()
Explanation: If a mutation event creates a new haplotype, assign it a random fitness.
End of explanation
def get_offspring_counts():
haplotypes = pop.keys()
frequencies = [pop[haplotype]/float(pop_size) for haplotype in haplotypes]
fitnesses = [fitness[haplotype] for haplotype in haplotypes]
weights = [x * y for x,y in zip(frequencies, fitnesses)]
total = sum(weights)
weights = [x / total for x in weights]
return list(np.random.multinomial(pop_size, weights))
get_offspring_counts()
def offspring_step():
counts = get_offspring_counts()
for (haplotype, count) in zip(pop.keys(), counts):
if (count > 0):
pop[haplotype] = count
else:
del pop[haplotype]
Explanation: Genetic drift and fitness affect which haplotypes make it to the next generation
Fitness weights the multinomial draw.
End of explanation
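To make the weighting explicit, here is a small illustrative sketch (not part of the original notebook) that prints the expected offspring count of each haplotype, i.e. the multinomial mean, before drift adds noise.
haps = list(pop.keys())
freqs = np.array([pop[h] / float(pop_size) for h in haps])
fits = np.array([fitness[h] for h in haps])
weights = freqs * fits
weights = weights / weights.sum()
# expected offspring counts; the multinomial draw scatters around these values
print(dict(zip(haps, pop_size * weights)))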
def time_step():
mutation_step()
offspring_step()
generations = 5
def simulate():
for i in range(generations):
time_step()
Explanation: Combine and iterate
End of explanation
history = []
def simulate():
clone_pop = dict(pop)
history.append(clone_pop)
for i in range(generations):
time_step()
clone_pop = dict(pop)
history.append(clone_pop)
simulate()
Explanation: Record
We want to keep a record of past population frequencies to understand dynamics through time. At each step in the simulation, we append to a history object.
End of explanation
def get_distance(seq_a, seq_b):
diffs = 0
length = len(seq_a)
assert len(seq_a) == len(seq_b)
for chr_a, chr_b in zip(seq_a, seq_b):
if chr_a != chr_b:
diffs += 1
return diffs / float(length)
def get_diversity(population):
haplotypes = population.keys()
haplotype_count = len(haplotypes)
diversity = 0
for i in range(haplotype_count):
for j in range(haplotype_count):
haplotype_a = haplotypes[i]
haplotype_b = haplotypes[j]
frequency_a = population[haplotype_a] / float(pop_size)
frequency_b = population[haplotype_b] / float(pop_size)
frequency_pair = frequency_a * frequency_b
diversity += frequency_pair * get_distance(haplotype_a, haplotype_b)
return diversity
def get_diversity_trajectory():
trajectory = [get_diversity(generation) for generation in history]
return trajectory
Explanation: Analyze trajectories
Calculate diversity
End of explanation
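The pairwise loop above is quadratic in the number of haplotypes; an equivalent vectorized sketch (optional, same result up to floating point) is shown below.
def get_diversity_fast(population):
    haplotypes = list(population.keys())
    frequencies = np.array([population[h] / float(pop_size) for h in haplotypes])
    seqs = np.array([[ord(c) for c in h] for h in haplotypes])
    # pairwise Hamming distances as a fraction of the sequence length
    distances = (seqs[:, None, :] != seqs[None, :, :]).mean(axis=2)
    return float(np.dot(frequencies, np.dot(distances, frequencies)))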
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
def diversity_plot():
mpl.rcParams['font.size']=14
trajectory = get_diversity_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("diversity")
plt.xlabel("generation")
Explanation: Plot diversity
End of explanation
def get_divergence(population):
haplotypes = population.keys()
divergence = 0
for haplotype in haplotypes:
frequency = population[haplotype] / float(pop_size)
divergence += frequency * get_distance(base_haplotype, haplotype)
return divergence
def get_divergence_trajectory():
trajectory = [get_divergence(generation) for generation in history]
return trajectory
def divergence_plot():
mpl.rcParams['font.size']=14
trajectory = get_divergence_trajectory()
plt.plot(trajectory, "#447CCD")
plt.ylabel("divergence")
plt.xlabel("generation")
Explanation: Analyze and plot divergence
End of explanation
def get_frequency(haplotype, generation):
pop_at_generation = history[generation]
if haplotype in pop_at_generation:
return pop_at_generation[haplotype]/float(pop_size)
else:
return 0
def get_trajectory(haplotype):
trajectory = [get_frequency(haplotype, gen) for gen in range(generations)]
return trajectory
def get_all_haplotypes():
haplotypes = set()
for generation in history:
for haplotype in generation:
haplotypes.add(haplotype)
return haplotypes
colors = ["#781C86", "#571EA2", "#462EB9", "#3F47C9", "#3F63CF", "#447CCD", "#4C90C0", "#56A0AE", "#63AC9A", "#72B485", "#83BA70", "#96BD60", "#AABD52", "#BDBB48", "#CEB541", "#DCAB3C", "#E49938", "#E68133", "#E4632E", "#DF4327", "#DB2122"]
colors_lighter = ["#A567AF", "#8F69C1", "#8474D1", "#7F85DB", "#7F97DF", "#82A8DD", "#88B5D5", "#8FC0C9", "#97C8BC", "#A1CDAD", "#ACD1A0", "#B9D395", "#C6D38C", "#D3D285", "#DECE81", "#E8C77D", "#EDBB7A", "#EEAB77", "#ED9773", "#EA816F", "#E76B6B"]
def stacked_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
haplotypes = get_all_haplotypes()
trajectories = [get_trajectory(haplotype) for haplotype in haplotypes]
plt.stackplot(range(generations), trajectories, colors=colors_lighter)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
Explanation: Plot haplotype trajectories
End of explanation
def get_snp_frequency(site, generation):
minor_allele_frequency = 0.0
pop_at_generation = history[generation]
for haplotype in pop_at_generation.keys():
allele = haplotype[site]
frequency = pop_at_generation[haplotype] / float(pop_size)
if allele != "A":
minor_allele_frequency += frequency
return minor_allele_frequency
def get_snp_trajectory(site):
trajectory = [get_snp_frequency(site, gen) for gen in range(generations)]
return trajectory
Explanation: Plot SNP trajectories
End of explanation
def get_all_snps():
snps = set()
for generation in history:
for haplotype in generation:
for site in range(seq_length):
if haplotype[site] != "A":
snps.add(site)
return snps
def snp_trajectory_plot(xlabel="generation"):
mpl.rcParams['font.size']=18
snps = get_all_snps()
trajectories = [get_snp_trajectory(snp) for snp in snps]
data = []
for trajectory, color in itertools.izip(trajectories, itertools.cycle(colors)):
data.append(range(generations))
data.append(trajectory)
data.append(color)
fig = plt.plot(*data)
plt.ylim(0, 1)
plt.ylabel("frequency")
plt.xlabel(xlabel)
Explanation: Find all variable sites.
End of explanation
pop_size = 50
seq_length = 100
generations = 500
mutation_rate = 0.0001 # per gen per individual per site
fitness_effect = 1.1 # fitness effect if a functional mutation occurs
fitness_chance = 0.1 # chance that a mutation has a fitness effect
Explanation: Scale up
Here, we scale up to more interesting parameter values.
End of explanation
seq_length * mutation_rate
Explanation: In this case the expected number of mutations is $\mu$ = 0.01 per individual per generation.
End of explanation
2 * pop_size * seq_length * mutation_rate
base_haplotype = ''.join(["A" for i in range(seq_length)])
pop.clear()
fitness.clear()
del history[:]
pop[base_haplotype] = pop_size
fitness[base_haplotype] = 1.0
simulate()
plt.figure(num=None, figsize=(14, 14), dpi=80, facecolor='w', edgecolor='k')
plt.subplot2grid((3,2), (0,0), colspan=2)
stacked_trajectory_plot()
plt.subplot2grid((3,2), (1,0), colspan=2)
snp_trajectory_plot()
plt.subplot2grid((3,2), (2,0))
diversity_plot()
plt.subplot2grid((3,2), (2,1))
divergence_plot()
Explanation: And the population genetic parameter $\theta$, which equals $2N\mu$, is 1.
End of explanation |
9,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018. Parts of this notebook are from this Jupyter notebook by Heiner Igel (@heinerigel), Lion Krischer (@krischer) and Taufiqurrahman (@git-taufiqurrahman), which is supplementary material to the book Computational Seismology
Step1: From 1D to 2D acoustic finite difference modelling
The 1D acoustic wave equation is very useful to introduce the general concept and problems related to FD modelling. However, for realistic modelling and seismic imaging/inversion applications we have to solve at least the 2D acoustic wave equation.
In the class we will develop a 2D acoustic FD modelling code based on the 1D code. I strongly recommend that you do this step by yourself, starting from this notebook containing only the 1D code.
Finite difference solution of 2D acoustic wave equation
As derived in this and this lecture, the acoustic wave equation in 2D with constant density is
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial t^2} \ = \ vp(x,z)^2 \biggl(\frac{\partial^2 p(x,z,t)}{\partial x^2}+\frac{\partial^2 p(x,z,t)}{\partial z^2}\biggr) + f(x,z,t) \nonumber
\end{equation}
with pressure $p$, acoustic velocity $vp$ and source term $f$. We can split the source term into a spatial and temporal part. Spatially, we assume that the source is localized at one point ($x_s, z_s$). Therefore, the spatial source contribution consists of two Dirac $\delta$-functions $\delta(x-x_s)$ and $\delta(z-z_s)$. The temporal source part is an arbitrary source wavelet $s(t)$
Step2: Comparison of 2D finite difference with analytical solution
In the function below we solve the homogeneous 2D acoustic wave equation by the 3-point spatial/temporal difference operator and compare the numerical results with the analytical solution | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018. Parts of this notebook are from this Jupyter notebook by Heiner Igel (@heinerigel), Lion Krischer (@krischer) and Taufiqurrahman (@git-taufiqurrahman), which is supplementary material to the book Computational Seismology: A Practical Introduction, with additional modifications by D. Koehn; notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Import Libraries
# ----------------
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
# Definition of modelling parameters
# ----------------------------------
xmax = 500.0 # maximum spatial extension of the model in x-direction (m)
zmax = xmax # maximum spatial extension of the model in z-direction (m)
dx = 1.0 # grid point distance in x-direction
dz = dx # grid point distance in z-direction
tmax = 0.502 # maximum recording time of the seismogram (s)
dt = 0.0010 # time step
vp0 = 580. # P-wave speed in medium (m/s)
# acquisition geometry
xr = 330.0 # x-receiver position (m)
zr = xr # z-receiver position (m)
xsrc = 250.0 # x-source position (m)
zsrc = 250.0 # z-source position (m)
f0 = 40. # dominant frequency of the source (Hz)
t0 = 4. / f0 # source time shift (s)
Explanation: From 1D to 2D acoustic finite difference modelling
The 1D acoustic wave equation is very useful to introduce the general concept and problems related to FD modelling. However, for realistic modelling and seismic imaging/inversion applications we have to solve at least the 2D acoustic wave equation.
In the class we will develop a 2D acoustic FD modelling code based on the 1D code. I strongly recommend that you do this step by yourself, starting from this notebook containing only the 1D code.
Finite difference solution of 2D acoustic wave equation
As derived in this and this lecture, the acoustic wave equation in 2D with constant density is
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial t^2} \ = \ vp(x,z)^2 \biggl(\frac{\partial^2 p(x,z,t)}{\partial x^2}+\frac{\partial^2 p(x,z,t)}{\partial z^2}\biggr) + f(x,z,t) \nonumber
\end{equation}
with pressure $p$, acoustic velocity $vp$ and source term $f$. We can split the source term into a spatial and temporal part. Spatially, we assume that the source is localized at one point ($x_s, z_s$). Therefore, the spatial source contribution consists of two Dirac $\delta$-functions $\delta(x-x_s)$ and $\delta(z-z_s)$. The temporal source part is an arbitrary source wavelet $s(t)$:
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial t^2} \ = \ vp(x,z)^2 \biggl(\frac{\partial^2 p(x,z,t)}{\partial x^2}+\frac{\partial^2 p(x,z,t)}{\partial z^2}\biggr) + \delta(x-x_s)\delta(z-z_s)s(t) \nonumber
\end{equation}
Both second derivatives can be approximated by a 3-point difference formula. For example for the time derivative, we get:
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial t^2} \ \approx \ \frac{p(x,z,t+dt) - 2 p(x,z,t) + p(x,z,t-dt)}{dt^2}, \nonumber
\end{equation}
and similar for the spatial derivatives:
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial x^2} \ \approx \ \frac{p(x+dx,z,t) - 2 p(x,z,t) + p(x-dx,z,t)}{dx^2}, \nonumber
\end{equation}
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial z^2} \ \approx \ \frac{p(x,z+dz,t) - 2 p(x,z,t) + p(x,z-dz,t)}{dz^2}, \nonumber
\end{equation}
Injecting these approximations into the wave equation allows us to formulate the pressure p(x) for the time step $t+dt$ (the future) as a function of the pressure at time $t$ (now) and $t-dt$ (the past). This is called an explicit time integration scheme allowing the $extrapolation$ of the space-dependent field into the future only looking at the nearest neighbourhood.
In the next step, we discretize the P-wave velocity and pressure wavefield at the discrete spatial grid points
\begin{align}
x &= i \, dx \nonumber \\
z &= j \, dz \nonumber
\end{align}
with $i = 0, 1, 2, ..., nx$, $j = 0, 1, 2, ..., nz$ on a 2D Cartesian grid.
<img src="images/2D-grid_cart_ac.png" width="75%">
Using the discrete time steps
\begin{align}
t &= n*dt\nonumber
\end{align}
with $n = 0, 1, 2, ..., nt$ and time step $dt$, we can replace the time-dependent part (upper index time, lower indices space) by
\begin{equation}
\frac{p_{i,j}^{n+1} - 2 p_{i,j}^n + p_{i,j}^{n-1}}{\mathrm{d}t^2} \ = \ vp_{i,j}^2 \biggl( \frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial z^2}\biggr) \ + \frac{s_{i,j}^n}{dx\;dz}. \nonumber
\end{equation}
The spatial $\delta$-functions $\delta(x-x_s)$ and $\delta(z-z_s)$ in the source term are approximated by the boxcar function:
$$
\delta_{bc}(x) = \left\{
\begin{array}{ll}
1/dx & |x|\leq dx/2 \\
0 & \text{elsewhere}
\end{array}
\right.
$$
Solving for $p_{i,j}^{n+1}$ leads to the extrapolation scheme:
\begin{equation}
p_{i,j}^{n+1} \ = \ vp_{i,j}^2 \mathrm{d}t^2 \left( \frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial z^2} \right) + 2p_{i,j}^n - p_{i,j}^{n-1} + \frac{\mathrm{d}t^2}{dx\; dz} s_{i,j}^n.
\end{equation}
The spatial derivatives are determined by
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial x^2} \ \approx \ \frac{p_{i+1,j}^{n} - 2 p_{i,j}^n + p_{i-1,j}^{n}}{\mathrm{d}x^2} \nonumber
\end{equation}
and
\begin{equation}
\frac{\partial^2 p(x,z,t)}{\partial z^2} \ \approx \ \frac{p_{i,j+1}^{n} - 2 p_{i,j}^n + p_{i,j-1}^{n}}{\mathrm{d}z^2}. \nonumber
\end{equation}
Eq. (1) is the essential core of the 2D FD modelling code. Because we derived analytical solutions for wave propagation in a homogeneous medium, we should test our first code implementation for a similar medium, by setting
\begin{equation}
vp_{i,j} = vp0\notag
\end{equation}
at each spatial grid point $i = 0, 1, 2, ..., nx$; $j = 0, 1, 2, ..., nz$, in order to compare the numerical with the analytical solution. For a complete description of the problem we also have to define initial and boundary conditions. The initial condition is
\begin{equation}
p_{i,j}^0 = 0, \nonumber
\end{equation}
so the modelling starts with zero pressure amplitude at each spatial grid point $i, j$. As boundary conditions, we assume
\begin{align}
p_{0,j}^n &= 0, \nonumber \\
p_{nx,j}^n &= 0, \nonumber \\
p_{i,0}^n &= 0, \nonumber \\
p_{i,nz}^n &= 0, \nonumber
\end{align}
for all time steps n. This Dirichlet boundary condition leads to artificial boundary reflections which would obviously not describe a homogeneous medium. For now, we simply extend the model, so that boundary reflections are not recorded at the receiver positions.
Let's implement it ...
End of explanation
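Before building the full solver, a tiny optional check of the 3-point operator introduced above (not part of the original notebook): applied to sin(x) it should reproduce -sin(x), with an error that drops by roughly a factor of four when the spacing is halved (second-order accuracy).
def d2_3point(f, x, h):
    # 3-point approximation of the second derivative
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

x0 = 1.0
for h in (0.1, 0.05):
    err = abs(d2_3point(np.sin, x0, h) - (-np.sin(x0)))
    print(h, err)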
# 2D Wave Propagation (Finite Difference Solution)
# ------------------------------------------------
def FD_2D_acoustic(dt,dx,dz):
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
nz = (int)(zmax/dz) # number of grid points in z-direction
print('nz = ',nz)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
ir = (int)(xr/dx) # receiver location in grid in x-direction
jr = (int)(zr/dz) # receiver location in grid in z-direction
isrc = (int)(xsrc/dx) # source location in grid in x-direction
jsrc = (int)(zsrc/dz) # source location in grid in z-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of a Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# Analytical solution
# -------------------
G = time * 0.
# Initialize coordinates
# ----------------------
x = np.arange(nx)
x = x * dx # coordinates in x-direction (m)
z = np.arange(nz)
z = z * dz # coordinates in z-direction (m)
# calculate source-receiver distance
r = np.sqrt((x[ir] - x[isrc])**2 + (z[jr] - z[jsrc])**2)
for it in range(nt): # Calculate Green's function (Heaviside function)
if (time[it] - r / vp0) >= 0:
G[it] = 1. / (2 * np.pi * vp0**2) * (1. / np.sqrt(time[it]**2 - (r/vp0)**2))
Gc = np.convolve(G, src * dt)
Gc = Gc[0:nt]
lim = Gc.max() # get limit value from the maximum amplitude
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
# Initialize model (assume homogeneous model)
# -------------------------------------------
vp = np.zeros((nx,nz))
vp = vp + vp0 # initialize wave velocity in model
# Initialize empty seismogram
# ---------------------------
seis = np.zeros(nt)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
for i in range(1, nx - 1):
for j in range(1, nz - 1):
d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx ** 2
d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz ** 2
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp ** 2 * dt ** 2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Remap Time Levels
# -----------------
pold, p = p, pnew
# Output of Seismogram
# -----------------
seis[it] = p[ir,jr]
# Compare FD Seismogram with analytical solution
# ----------------------------------------------
# Define figure size
rcParams['figure.figsize'] = 12, 5
plt.plot(time, seis, 'b-',lw=3,label="FD solution") # plot FD seismogram
Analy_seis = plt.plot(time,Gc,'r--',lw=3,label="Analytical solution") # plot analytical solution
plt.xlim(time[0], time[-1])
plt.ylim(-lim, lim)
plt.title('Seismogram')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
%%time
dx = 1.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
dt = 0.0010 # time step (s)
FD_2D_acoustic(dt,dx,dz)
Explanation: Comparison of 2D finite difference with analytical solution
In the function below we solve the homogeneous 2D acoustic wave equation by the 3-point spatial/temporal difference operator and compare the numerical results with the analytical solution:
\begin{equation}
G_{analy}(x,z,t) = G_{2D} * S \nonumber
\end{equation}
with the 2D Green's function:
\begin{equation}
G_{2D}(x,z,t) = \dfrac{1}{2\pi V_{p0}^2}\dfrac{H\biggl((t-t_s)-\dfrac{|r|}{V_{p0}}\biggr)}{\sqrt{(t-t_s)^2-\dfrac{r^2}{V_{p0}^2}}}, \nonumber
\end{equation}
where $H$ denotes the Heaviside function, $r = \sqrt{(x-x_s)^2+(z-z_s)^2}$ the source-receiver distance (offset) and $S$ the source wavelet.
To play a little bit more with the modelling parameters, I restricted the input parameters to dt and dx. The number of spatial grid points and time steps, as well as the discrete source and receiver positions are estimated within this function.
End of explanation |
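When experimenting with dt and dx as suggested above, it helps to keep the explicit scheme stable. A commonly used check (a hedged sketch, not from the original notebook) is a Courant-type criterion for this second-order scheme:
# Courant-type stability check for the explicit 2D scheme
courant = vp0 * dt * np.sqrt(1.0 / dx ** 2 + 1.0 / dz ** 2)
print(courant, "<= 1 required for stability:", courant <= 1.0)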
9,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q4
Now we'll start working with some basic Python data structures.
A
In this question, you'll implement a cumulative product method. Given a list, you'll compute a list that's the same length, but which contains the cumulative product of the current number with all the previous numbers.
Assume the initial product is 1, and I provide a list with the numbers 2, 1, 4, and 3. I multiply my initial product (1) by 2, and put that new product into another list. Then I repeat
Step1: B
This time, you'll implement a method for computing the average value of a list of numbers. Remember how to compute an average
Step2: C
In this question, you'll write a method that takes a list of numbers [0-9] and returns a corresponding list with the "ordinal" versions. That is, if you see a 1 in the list, you'll create a string "1st". If you see a 3, you'll create a string "3rd", and so on.
Example Input | Python Code:
def cumulative_product(start_list):
out_list = []
### BEGIN SOLUTION
### END SOLUTION
return out_list
inlist = [89, 22, 3, 24, 8, 59, 43, 97, 30, 88]
outlist = [89, 1958, 5874, 140976, 1127808, 66540672, 2861248896, 277541142912, 8326234287360, 732708617287680]
assert set(cumulative_product(inlist)) == set(outlist)
inlist = [56, 22, 81, 65, 40, 44, 95, 48, 45, 26]
outlist = [56, 1232, 99792, 6486480, 259459200, 11416204800, 1084539456000, 52057893888000, 2342605224960000, 60907735848960000]
assert set(cumulative_product(inlist)) == set(outlist)
Explanation: Q4
Now we'll start working with some basic Python data structures.
A
In this question, you'll implement a cumulative product method. Given a list, you'll compute a list that's the same length, but which contains the cumulative product of the current number with all the previous numbers.
Assume the initial product is 1, and I provide a list with the numbers 2, 1, 4, and 3. I multiply my initial product (1) by 2, and put that new product into another list. Then I repeat: multiply my new product by the next element of the list (1), and store that new-new product (2) in the second list. Repeat again: multiply my new-new product (2) with the next number in the list (4), and store that new-new-new product (8) in the second list.
Example Input: [2, 1, 4, 3]
Example Output: [2, 2, 8, 24]
End of explanation
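For reference, one possible implementation sketch for part A (kept separate from the graded stub above, which is intentionally left blank):
def cumulative_product_reference(start_list):
    out_list = []
    product = 1
    for value in start_list:
        product *= value           # multiply the running product by the current number
        out_list.append(product)   # store the cumulative product so far
    return out_list

cumulative_product_reference([2, 1, 4, 3])   # [2, 2, 8, 24]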
def average(numbers):
avg_val = 0.0
### BEGIN SOLUTION
### END SOLUTION
return avg_val
import numpy as np
inlist = np.random.randint(10, 100, 10).tolist()
np.testing.assert_allclose(average(inlist), np.mean(inlist))
inlist = np.random.randint(10, 1000, 10).tolist()
np.testing.assert_allclose(average(inlist), np.mean(inlist))
Explanation: B
This time, you'll implement a method for computing the average value of a list of numbers. Remember how to compute an average: add up all the values, then divide that sum by the number of values you initially added together.
Example Input: [2, 1, 4, 3]
Example Output: 2.5
End of explanation
def return_ordinals(numbers):
out_list = []
### BEGIN SOLUTION
### END SOLUTION
return out_list
inlist = [5, 6, 1, 9, 5, 5, 3, 3, 9, 4]
outlist = ["5th", "6th", "1st", "9th", "5th", "5th", "3rd", "3rd", "9th", "4th"]
for y_true, y_pred in zip(outlist, return_ordinals(inlist)):
assert y_true == y_pred.lower()
inlist = [7, 5, 6, 6, 3, 5, 1, 0, 5, 2]
outlist = ["7th", "5th", "6th", "6th", "3rd", "5th", "1st", "0th", "5th", "2nd"]
for y_true, y_pred in zip(outlist, return_ordinals(inlist)):
assert y_true == y_pred.lower()
Explanation: C
In this question, you'll write a method that takes a list of numbers [0-9] and returns a corresponding list with the "ordinal" versions. That is, if you see a 1 in the list, you'll create a string "1st". If you see a 3, you'll create a string "3rd", and so on.
Example Input: [2, 1, 4, 3, 4]
Example Output: ["2nd", "1st", "4th", "3rd", "4th"]
End of explanation |
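For reference, a possible sketch for part C using a suffix lookup (again kept separate from the graded stub):
def return_ordinals_reference(numbers):
    suffixes = {1: "st", 2: "nd", 3: "rd"}   # 0 and 4-9 all take "th"
    return ["%d%s" % (n, suffixes.get(n, "th")) for n in numbers]

return_ordinals_reference([2, 1, 4, 3, 4])   # ["2nd", "1st", "4th", "3rd", "4th"]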
9,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Raspberry Electrophoresis
1 Tutorial Outline
Welcome to the raspberry electrophoresis ESPResSo tutorial! This tutorial assumes some basic knowledge of ESPResSo.
The first step is compiling ESPResSo with the appropriate flags, as listed in Sec. 2.
The tutorial starts by discussing how to build a colloid out of a series of MD beads. These particles typically
resemble a raspberry as can be seen in Fig. 1. After covering the construction of a raspberry colloid, we then
briefly discuss the inclusion of hydrodynamic interactions via a lattice-Boltzmann fluid. Finally we will cover
including ions via the restrictive primitive model (hard sphere ions) and the addition of an electric field
to measure the electrokinetic properties. This script will run a raspberry electrophoresis simulation and write the time and position of the colloid out to a file named <tt>posVsTime.dat</tt> in the same directory.
A sample set of data is included in the file <tt>posVsTime_sample.dat</tt>.
2 Compiling ESPResSo for this Tutorial
To run this tutorial, you will need to enable the following features in the myconfig.hpp file when compiling ESPResSo
Step1: The parameter <tt>box_l</tt> sets the size of the simulation box. In general, one should check for finite
size effects which can be surprisingly large in simulations using hydrodynamic interactions. They
also generally scale as <tt>box_l</tt>$^{-1}$ or <tt>box_l</tt>$^{-3}$ depending on the transport mechanism
which sometimes allows for the infinite box limit to be extrapolated to, instead of using an
excessively large simulation box. As a rule of thumb, the box size should be five times greater than the characteristic
length scale of the object. Note that this example uses a small box
to provide a shorter simulation time.
Step2: The skin is used for constructing
the Verlet lists and is purely an optimization parameter. Whatever value provides the fastest
integration speed should be used. For the type of simulations covered in this tutorial, this value turns out
to be <tt>skin</tt>$\ \approx 0.3$.
Step3: The <tt>periodicity</tt> parameter indicates that the system is periodic in all three
dimensions. Note that the lattice-Boltzmann algorithm requires periodicity in all three directions (although
this can be modified using boundaries, a topic not covered in this tutorial).
4 Setting up the Raspberry
Setting up the raspberry is a non-trivial task. The main problem lies in creating a relatively
uniform distribution of beads on the surface of the colloid. In general one should take about 1 bead per lattice-Boltzmann grid
point on the surface to ensure that there are no holes in the surface. The behavior of the colloid can be further improved by placing
beads inside the colloid, though this is not done in this example script. In our example
we first define a harmonic interaction causing the surface beads to be attracted
to the center, and a Lennard-Jones interaction preventing the beads from entering the colloid. There is also a Lennard-Jones
potential between the surface beads to get them to distribute evenly on the surface.
Step4: We set up the central bead and the other beads are initialized at random positions on the surface of the colloid. The beads are then allowed to relax using
an integration loop where the forces between the beads are capped.
Step5: The best way to ensure a relatively uniform distribution
of the beads on the surface is to simply take a look at a VMD snapshot of the system after this integration. Such a snapshot is shown in Fig. 1.
<figure>
<img src='figures/raspberry_snapshot.png' alt='missing' style="width
Step6: Now that the beads are arranged in the shape of a raspberry, the surface beads are made virtual particles
using the VirtualSitesRelative scheme. This converts the raspberry to a rigid body
in which the surface particles follow the translation and rotation of the central particle.
Newton's equations of motion are only integrated for the central particle.
It is given an appropriate mass and moment of inertia tensor (note that the inertia tensor
is given in the frame in which it is diagonal.)
Step7: 5 Inserting Counterions and Salt Ions
Next we insert enough ions at random positions (outside the radius of the colloid) with opposite charge to the colloid such that the system is electro-neutral. In addition, ions
of both signs are added to represent the salt in the solution.
Step8: We then check that charge neutrality is maintained
Step9: A WCA potential acts between all of the ions. This potential represents a purely repulsive
version of the Lennard-Jones potential, which approximates hard spheres of diameter $\sigma$. The ions also interact through a WCA potential
with the central bead of the colloid, using an offset of around $\mathrm{radius_col}-\sigma +a_\mathrm{grid}/2$. This makes
the colloid appear as a hard sphere of radius roughly $\mathrm{radius_col}+a_\mathrm{grid}/2$ to the ions, which is approximately equal to the
hydrodynamic radius of the colloid
Step10: After inserting the ions, again a short integration is performed with a force cap to
prevent strong overlaps between the ions.
Step11: 6 Electrostatics
Electrostatics are simulated using the Particle-Particle Particle-Mesh (P3M) algorithm. In ESPResSo this can be added to the simulation rather trivially
Step12: Generally a Bjerrum length of $2$ is appropriate when using WCA interactions with $\sigma=1$, since a typical ion has a radius of $0.35\ \mathrm{nm}$, while the Bjerrum
length in water is around $0.7\ \mathrm{nm}$.
The external electric field is simulated by simply adding a constant force equal to the simulated field times the particle charge. Generally the electric field is set to $0.1$ in MD units,
which is the maximum field before the response becomes nonlinear. Smaller fields are also possible, but the required simulation time is considerably larger. Sometimes, Green-Kubo methods
are also used, but these are generally only feasible in cases where there is either no salt or a very low salt concentration.
Step13: 7 Lattice-Boltzmann
Before creating the LB fluid it is a good idea to set all of the particle velocities to zero.
This is necessary to set the total momentum of the system to zero. Failing to do so will lead to an unphysical drift of the system, which
will change the values of the measured velocities.
Step14: The important parameters for the LB fluid are the density, the viscosity, the time step,
and the friction coefficient used to couple the particle motion to the fluid.
The time step should generally be comparable to the MD time step. While
large time steps are possible, a time step of $0.01$ turns out to provide more reasonable values for the root mean squared particle velocities. Both density and viscosity
should be around $1$, while the friction should be set around $20.$ The grid spacing should be comparable to the ions' size.
Step15: A logical way of picking a specific set of parameters is to choose them such that the hydrodynamic radius of an ion roughly matches its physical radius determined by the
WCA potential ($R=0.5\sigma$). Using the following equation
Step16: 8 Simulating Electrophoresis
Now the main simulation can begin! The only important thing is to make sure the system has enough time to equilibrate. There are two separate equilibration times
Step17: Plot the raspberry trajectory with <tt>matplotlib</tt> | Python Code:
import espressomd
espressomd.assert_features(["ELECTROSTATICS", "ROTATION", "ROTATIONAL_INERTIA", "EXTERNAL_FORCES",
"MASS", "VIRTUAL_SITES_RELATIVE", "CUDA", "LENNARD_JONES"])
from espressomd import interactions
from espressomd import electrostatics
from espressomd import lb
from espressomd.virtual_sites import VirtualSitesRelative
import numpy as np
# System parameters
#############################################################
box_l = 40. # size of the simulation box
skin = 0.3 # Skin parameter for the Verlet lists
time_step = 0.01
eq_tstep = 0.001
n_cycle = 1000
integ_steps = 150
# Interaction parameters (Lennard-Jones for raspberry)
#############################################################
radius_col = 3.
harmonic_radius = 3.0
# the subscript c is for colloid and s is for salt (also used for the surface beads)
eps_ss = 1. # LJ epsilon between the colloid's surface particles.
sig_ss = 1. # LJ sigma between the colloid's surface particles.
eps_cs = 48. # LJ epsilon between the colloid's central particle and surface particles.
sig_cs = radius_col # LJ sigma between the colloid's central particle and surface particles (colloid's radius).
a_eff = 0.32 # effective hydrodynamic radius of a bead due to the discreteness of LB.
# System setup
#############################################################
system = espressomd.System(box_l=[box_l] * 3)
system.time_step = time_step
Explanation: Raspberry Electrophoresis
1 Tutorial Outline
Welcome to the raspberry electrophoresis ESPResSo tutorial! This tutorial assumes some basic knowledge of ESPResSo.
The first step is compiling ESPResSo with the appropriate flags, as listed in Sec. 2.
The tutorial starts by discussing how to build a colloid out of a series of MD beads. These particles typically
resemble a raspberry as can be seen in Fig. 1. After covering the construction of a raspberry colloid, we then
briefly discuss the inclusion of hydrodynamic interactions via a lattice-Boltzmann fluid. Finally we will cover
including ions via the restrictive primitive model (hard sphere ions) and the addition of an electric field
to measure the electrokinetic properties. This script will run a raspberry electrophoresis simulation and write the time and position of the colloid out to a file named <tt>posVsTime.dat</tt> in the same directory.
A sample set of data is included in the file <tt>posVsTime_sample.dat</tt>.
2 Compiling ESPResSo for this Tutorial
To run this tutorial, you will need to enable the following features in the myconfig.hpp file when compiling ESPResSo:
```c++
define ELECTROSTATICS
define ROTATION
define ROTATIONAL_INERTIA
define EXTERNAL_FORCES
define MASS
define VIRTUAL_SITES_RELATIVE
define LENNARD_JONES
```
3 Global MD Variables
The first thing to do in any ESPResSo simulation is to import our espressomd features and set a few global simulation parameters:
End of explanation
system.cell_system.skin = skin
Explanation: The parameter <tt>box_l</tt> sets the size of the simulation box. In general, one should check for finite
size effects which can be surprisingly large in simulations using hydrodynamic interactions. They
also generally scale as <tt>box_l</tt>$^{-1}$ or <tt>box_l</tt>$^{-3}$ depending on the transport mechanism
which sometimes allows for the infinite box limit to be extrapolated to, instead of using an
excessively large simulation box. As a rule of thumb, the box size should be five times greater than the characteristic
length scale of the object. Note that this example uses a small box
to provide a shorter simulation time.
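As a quick sanity check of that rule of thumb (a minimal sketch using the parameters defined above; the factor of five is only the heuristic quoted here, not a strict requirement):
ratio = box_l / (2 * radius_col)          # box size relative to the colloid diameter
print("box_l is {:.1f}x the colloid diameter".format(ratio))
# values well below ~5 suggest finite-size effects may be significant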
End of explanation
system.periodicity = [True, True, True]
Explanation: The skin is used for constructing
the Verlet lists and is purely an optimization parameter. Whatever value provides the fastest
integration speed should be used. For the type of simulations covered in this tutorial, this value turns out
to be <tt>skin</tt>$\ \approx 0.3$.
End of explanation
# the LJ potential with the central bead keeps all the beads from simply collapsing into the center
system.non_bonded_inter[1, 0].wca.set_params(epsilon=eps_cs, sigma=sig_cs)
# the LJ potential (WCA potential) between surface beads causes them to be roughly equidistant on the
# colloid surface
system.non_bonded_inter[1, 1].wca.set_params(epsilon=eps_ss, sigma=sig_ss)
# the harmonic potential pulls surface beads towards the central colloid bead
col_center_surface_bond = interactions.HarmonicBond(k=3000., r_0=harmonic_radius)
system.bonded_inter.add(col_center_surface_bond)
Explanation: The <tt>periodicity</tt> parameter indicates that the system is periodic in all three
dimensions. Note that the lattice-Boltzmann algorithm requires periodicity in all three directions (although
this can be modified using boundaries, a topic not covered in this tutorial).
4 Setting up the Raspberry
Setting up the raspberry is a non-trivial task. The main problem lies in creating a relatively
uniform distribution of beads on the surface of the colloid. In general one should take about 1 bead per lattice-Boltzmann grid
point on the surface to ensure that there are no holes in the surface. The behavior of the colloid can be further improved by placing
beads inside the colloid, though this is not done in this example script. In our example
we first define a harmonic interaction causing the surface beads to be attracted
to the center, and a Lennard-Jones interaction preventing the beads from entering the colloid. There is also a Lennard-Jones
potential between the surface beads to get them to distribute evenly on the surface.
End of explanation
# for the warmup we use a Langevin thermostat with an extremely low temperature and high friction coefficient
# such that the trajectories roughly follow the gradient of the potential while not accelerating too much
system.thermostat.set_langevin(kT=0.00001, gamma=40., seed=42)
print("# Creating raspberry")
center = system.box_l / 2
colPos = center
# Charge of the colloid
q_col = -40
# Number of particles making up the raspberry (surface particles + the central particle).
n_col_part = int(4 * np.pi * np.power(radius_col, 2) + 1)
# Place the central particle
system.part.add(id=0, pos=colPos, type=0, q=q_col, fix=(True, True, True),
rotation=(1, 1, 1)) # Create central particle
# Create surface beads uniformly distributed over the surface of the central particle
for i in range(1, n_col_part):
colSurfPos = np.random.randn(3)
colSurfPos = colSurfPos / np.linalg.norm(colSurfPos) * radius_col + colPos
system.part.add(id=i, pos=colSurfPos, type=1)
system.part[i].add_bond((col_center_surface_bond, 0))
print("# Number of colloid beads = {}".format(n_col_part))
# Relax bead positions. The LJ potential with the central bead combined with the
# harmonic bond keep the monomers roughly radius_col away from the central bead. The LJ
# between the surface beads cause them to distribute more or less evenly on the surface.
system.force_cap = 1000
system.time_step = eq_tstep
print("Relaxation of the raspberry surface particles")
for i in range(n_cycle):
system.integrator.run(integ_steps)
# Restore time step
system.time_step = time_step
Explanation: We set up the central bead and the other beads are initialized at random positions on the surface of the colloid. The beads are then allowed to relax using
an integration loop where the forces between the beads are capped.
End of explanation
# this loop moves the surface beads such that they are once again exactly radius_col away from the center
# For the scalar distance, we use system.distance() which considers periodic boundaries
# and the minimum image convention
colPos = system.part[0].pos
for p in system.part[1:]:
    p.pos = (p.pos - colPos) / np.linalg.norm(system.distance(p, system.part[0])) * radius_col + colPos
Explanation: The best way to ensure a relatively uniform distribution
of the beads on the surface is to simply take a look at a VMD snapshot of the system after this integration. Such a snapshot is shown in Fig. 1.
<figure>
<img src='figures/raspberry_snapshot.png' alt='missing' style="width: 600px;"/>
<center>
<figcaption>Figure 1: A snapshot of the simulation consisting of positive salt ions (yellow spheres), negative salt ions (grey spheres) and surface beads (blue spheres). There is also a central bead in the middle of the colloid bearing a large negative charge.</figcaption>
</center>
</figure>
In order to make the colloid perfectly round, we now adjust the bead's positions to be exactly <tt>radius_col</tt> away
from the central bead.
End of explanation
# Select the desired implementation for virtual sites
system.virtual_sites = VirtualSitesRelative()
# Setting min_global_cut is necessary when there is no interaction defined with a range larger than
# the colloid such that the virtual particles are able to communicate their forces to the real particle
# at the center of the colloid
system.min_global_cut = radius_col
# Calculate the center of mass position (com) and the moment of inertia (momI) of the colloid
com = np.average(system.part[1:].pos, 0)  # system.part[1:].pos returns an n-by-3 array of the surface bead positions
momI = 0
for i in range(n_col_part):
momI += np.power(np.linalg.norm(com - system.part[i].pos), 2)
# note that the real particle must be at the center of mass of the colloid because of the integrator
print("\n# moving central particle from {} to {}".format(system.part[0].pos, com))
system.part[0].fix = [False, False, False]
system.part[0].pos = com
system.part[0].mass = n_col_part
system.part[0].rinertia = np.ones(3) * momI
# Convert the surface particles to virtual sites related to the central particle
# The id of the central particles is 0, the ids of the surface particles start at 1.
for p in system.part[1:]:
p.vs_auto_relate_to(0)
Explanation: Now that the beads are arranged in the shape of a raspberry, the surface beads are made virtual particles
using the VirtualSitesRelative scheme. This converts the raspberry to a rigid body
in which the surface particles follow the translation and rotation of the central particle.
Newton's equations of motion are only integrated for the central particle.
It is given an appropriate mass and moment of inertia tensor (note that the inertia tensor
is given in the frame in which it is diagonal.)
End of explanation
print("# Adding the positive ions")
salt_rho = 0.001 # Number density of ions
volume = system.volume()
N_counter_ions = int(round((volume * salt_rho) + abs(q_col)))
i = 0
while i < N_counter_ions:
pos = np.random.random(3) * system.box_l
# make sure the ion is placed outside of the colloid
if (np.power(np.linalg.norm(pos - center), 2) > np.power(radius_col, 2) + 1):
system.part.add(pos=pos, type=2, q=1)
i += 1
print("# Added {} positive ions".format(N_counter_ions))
print("\n# Adding the negative ions")
N_co_ions = N_counter_ions - abs(q_col)
i = 0
while i < N_co_ions:
pos = np.random.random(3) * system.box_l
# make sure the ion is placed outside of the colloid
if (np.power(np.linalg.norm(pos - center), 2) > np.power(radius_col, 2) + 1):
system.part.add(pos=pos, type=3, q=-1)
i += 1
print("# Added {} negative ions".format(N_co_ions))
Explanation: 5 Inserting Counterions and Salt Ions
Next we insert enough ions at random positions (outside the radius of the colloid) with opposite charge to the colloid such that the system is electro-neutral. In addition, ions
of both signs are added to represent the salt in the solution.
End of explanation
# Check charge neutrality
assert np.abs(np.sum(system.part[:].q)) < 1E-10
Explanation: We then check that charge neutrality is maintained
End of explanation
# WCA interactions for the ions, essentially giving them a finite volume
system.non_bonded_inter[0, 2].lennard_jones.set_params(
epsilon=eps_ss, sigma=sig_ss,
cutoff=sig_ss * pow(2., 1. / 6.), shift="auto", offset=sig_cs - 1 + a_eff)
system.non_bonded_inter[0, 3].lennard_jones.set_params(
epsilon=eps_ss, sigma=sig_ss,
cutoff=sig_ss * pow(2., 1. / 6.), shift="auto", offset=sig_cs - 1 + a_eff)
system.non_bonded_inter[2, 2].wca.set_params(epsilon=eps_ss, sigma=sig_ss)
system.non_bonded_inter[2, 3].wca.set_params(epsilon=eps_ss, sigma=sig_ss)
system.non_bonded_inter[3, 3].wca.set_params(epsilon=eps_ss, sigma=sig_ss)
Explanation: A WCA potential acts between all of the ions. This potential represents a purely repulsive
version of the Lennard-Jones potential, which approximates hard spheres of diameter $\sigma$. The ions also interact through a WCA potential
with the central bead of the colloid, using an offset of around $\mathrm{radius_col}-\sigma +a_\mathrm{grid}/2$. This makes
the colloid appear as a hard sphere of radius roughly $\mathrm{radius_col}+a_\mathrm{grid}/2$ to the ions, which is approximately equal to the
hydrodynamic radius of the colloid
End of explanation
print("\n# Equilibrating the ions (without electrostatics):")
# Langevin thermostat for warmup before turning on the LB.
temperature = 1.0
system.thermostat.set_langevin(kT=temperature, gamma=1.)
print("Removing overlap between ions")
ljcap = 100
CapSteps = 100
for i in range(CapSteps):
system.force_cap = ljcap
system.integrator.run(integ_steps)
ljcap += 5
system.force_cap = 0
Explanation: After inserting the ions, again a short integration is performed with a force cap to
prevent strong overlaps between the ions.
End of explanation
# Turning on the electrostatics
# Note: Production runs would typically use a target accuracy of 10^-4
print("\n# Tuning P3M parameters...")
bjerrum = 2.
p3m = electrostatics.P3M(prefactor=bjerrum * temperature, accuracy=0.001)
system.actors.add(p3m)
print("# Tuning complete")
Explanation: 6 Electrostatics
Electrostatics are simulated using the Particle-Particle Particle-Mesh (P3M) algorithm. In ESPResSo this can be added to the simulation rather trivially:
End of explanation
E = 0.1 # an electric field of 0.1 is the upper limit of the linear response regime for this model
Efield = np.array([E, 0, 0])
for p in system.part:
p.ext_force = p.q * Efield
Explanation: Generally a Bjerrum length of $2$ is appropriate when using WCA interactions with $\sigma=1$, since a typical ion has a radius of $0.35\ \mathrm{nm}$, while the Bjerrum
length in water is around $0.7\ \mathrm{nm}$.
The external electric field is simulated by simply adding a constant force equal to the simulated field times the particle charge. Generally the electric field is set to $0.1$ in MD units,
which is the maximum field before the response becomes nonlinear. Smaller fields are also possible, but the required simulation time is considerably larger. Sometimes, Green-Kubo methods
are also used, but these are generally only feasible in cases where there is either no salt or a very low salt concentration.
End of explanation
system.part[:].v = (0, 0, 0)
Explanation: 7 Lattice-Boltzmann
Before creating the LB fluid it is a good idea to set all of the particle velocities to zero.
This is necessary to set the total momentum of the system to zero. Failing to do so will lead to an unphysical drift of the system, which
will change the values of the measured velocities.
End of explanation
lb = espressomd.lb.LBFluidGPU(kT=temperature, seed=42, dens=1., visc=3., agrid=1., tau=system.time_step)
system.actors.add(lb)
Explanation: The important parameters for the LB fluid are the density, the viscosity, the time step,
and the friction coefficient used to couple the particle motion to the fluid.
The time step should generally be comparable to the MD time step. While
large time steps are possible, a time step of $0.01$ turns out to provide more reasonable values for the root mean squared particle velocities. Both density and viscosity
should be around $1$, while the friction should be set around $20.$ The grid spacing should be comparable to the ions' size.
End of explanation
system.thermostat.turn_off()
system.thermostat.set_lb(LB_fluid=lb, seed=123, gamma=20.0)
Explanation: A logical way of picking a specific set of parameters is to choose them such that the hydrodynamic radius of an ion roughly matches its physical radius determined by the
WCA potential ($R=0.5\sigma$). Using the following equation:
\begin{equation}
\frac{1}{\Gamma}=\frac{1}{6\pi \eta R_{\mathrm{H0}}}=\frac{1}{\Gamma_0}
+\frac{1}{g\eta a}
\label{effectiveGammaEq}
\end{equation}
one can see that the set of parameters grid spacing $a=1\sigma$, fluid density $\rho=1$, a
kinematic viscosity of $\nu=3 $ and a friction of $\Gamma_0=50$ leads to a hydrodynamic radius
of approximately $0.5\sigma$.
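As a quick numerical check of that statement, the effective friction and hydrodynamic radius can be evaluated directly from the equation above. This is only a small sketch: the coupling constant g is not set anywhere in this script, and g ≈ 25 is an assumed literature value for the standard D3Q19 lattice; the dynamic viscosity is taken as η = ρν.
import numpy as np
a_grid, rho, nu, gamma_0, g = 1.0, 1.0, 3.0, 50.0, 25.0   # g ~ 25 is an assumption, not part of this script
eta = rho * nu                                             # dynamic viscosity
gamma_eff = 1.0 / (1.0 / gamma_0 + 1.0 / (g * eta * a_grid))
R_H0 = gamma_eff / (6.0 * np.pi * eta)
print("effective friction ~ {:.0f}, hydrodynamic radius ~ {:.2f} sigma".format(gamma_eff, R_H0))
# prints roughly 30 and 0.53, i.e. close to the 0.5 sigma quoted above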
The last step is to first turn off all other thermostats, followed by turning on the LB thermostat. The temperature is typically set to 1, which is equivalent to setting
$k_\mathrm{B}T=1$ in molecular dynamics units.
End of explanation
# Reset the simulation clock
system.time = 0
initial_pos = system.part[0].pos
num_iterations = 1000
num_steps_per_iteration = 1000
with open('posVsTime.dat', 'w') as f: # file where the raspberry trajectory will be written to
for i in range(num_iterations):
system.integrator.run(num_steps_per_iteration)
pos = system.part[0].pos - initial_pos
f.write("%.2f %.4f %.4f %.4f\n" % (system.time, pos[0], pos[1], pos[2]))
print("# time: {:.0f} ({:.0f}%), col_pos: {}".format(
system.time, (i + 1) * 100. / num_iterations, np.around(pos, 1), end='\r'))
print("\n# Finished")
Explanation: 8 Simulating Electrophoresis
Now the main simulation can begin! The only important thing is to make sure the system has enough time to equilibrate. There are two separate equilibration times: 1) the time for the ion distribution to stabilize, and 2) the time
needed for the fluid flow profile to equilibrate. In general, the ion distribution equilibrates fast, so the needed warmup time is largely determined by the fluid relaxation time, which can be calculated via $\tau_\mathrm{relax} = \mathrm{box_length}^2/\nu$. This means for a box of size 40 with a kinematic viscosity of 3 as in our example script, the relaxation time is $\tau_\mathrm{relax} = 40^2/3 = 533 \tau_\mathrm{MD}$, or 53300 integration steps. In general it is a good idea to run for many relaxation times before starting to use the simulation results for averaging observables. To be on the safe side $10^6$ integration steps is a reasonable equilibration time. Please feel free to modify the provided script and try and get some interesting results!
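To make that estimate explicit, here is the same arithmetic as a small sketch, using the values already defined in this script (box_l = 40, the LB kinematic viscosity of 3 and the MD time step of 0.01):
nu_lb = 3.0                                 # kinematic viscosity passed to the LB fluid above
tau_relax = box_l**2 / nu_lb                # fluid relaxation time, ~533 MD time units for box_l = 40
warmup_steps = int(round(tau_relax / time_step))
print("tau_relax ~ {:.0f}, i.e. about {} integration steps".format(tau_relax, warmup_steps))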
End of explanation
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook
trajectory_file = 'posVsTime.dat'
trajectory = np.loadtxt(trajectory_file)[:, 1:4]
# optional: trajectory smoothing with a running average
N = 6
trajectory = np.array(
[np.convolve(trajectory[:, i], np.ones((N,)) / N, mode='valid') for i in range(3)])
# calculate bounding box (cubic box to preserve scaling)
trajectory_range = np.max(trajectory, axis=1) - np.min(trajectory, axis=1)
mid_range = np.median(trajectory, axis=1)
max_range = 1.01 * np.max(np.abs(trajectory_range))
bbox = np.array([mid_range - max_range / 2, mid_range + max_range / 2])
# 3D plot
fig = plt.figure(figsize=(9, 6))
ax = fig.add_subplot(111, projection='3d')
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Z axis')
ax.set_xlim(*bbox[:, 0])
ax.set_ylim(*bbox[:, 1])
ax.set_zlim(*bbox[:, 2])
ax.text(*trajectory[:, 0], '\u2190 start', 'y')
ax.scatter(*trajectory[:, 0])
ax.plot(*trajectory)
plt.tight_layout()
plt.rcParams.update({'font.size': 14})
Explanation: Plot the raspberry trajectory with <tt>matplotlib</tt>:
End of explanation |
9,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Filters
Step1: Hodrick-Prescott Filter
The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$
$$y_t = \tau_t + \zeta_t$$
The components are determined by minimizing the following quadratic loss function
$$\min_{\{ \tau_{t}\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
Step2: Baxter-King approximate band-pass filter
Step3: We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
Step4: Christiano-Fitzgerald approximate band-pass filter | Python Code:
%matplotlib inline
from __future__ import print_function
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
dta = sm.datasets.macrodata.load_pandas().data
index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))
print(index)
dta.index = index
del dta['year']
del dta['quarter']
print(sm.datasets.macrodata.NOTE)
print(dta.head(10))
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
dta.realgdp.plot(ax=ax);
legend = ax.legend(loc = 'upper left');
legend.prop.set_size(20);
Explanation: Time Series Filters
End of explanation
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp)
gdp_decomp = dta[['realgdp']].copy()
gdp_decomp["cycle"] = gdp_cycle
gdp_decomp["trend"] = gdp_trend
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
gdp_decomp[["realgdp", "trend"]]["2000-03-31":].plot(ax=ax, fontsize=16);
legend = ax.get_legend()
legend.prop.set_size(20);
Explanation: Hodrick-Prescott Filter
The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$
$$y_t = \tau_t + \zeta_t$$
The components are determined by minimizing the following quadratic loss function
$$\min_{\{ \tau_{t}\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
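The call above relies on statsmodels' default smoothing parameter $\lambda = 1600$, the conventional value for quarterly data. A minimal sketch of passing it explicitly (roughly 129600 for monthly and 6.25 for annual data are commonly cited alternatives, though those are conventions rather than statsmodels defaults):
# equivalent to the call above, with the quarterly-data smoothing parameter written out
gdp_cycle_q, gdp_trend_q = sm.tsa.filters.hpfilter(dta.realgdp, lamb=1600)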
End of explanation
bk_cycles = sm.tsa.filters.bkfilter(dta[["infl","unemp"]])
Explanation: Baxter-King approximate band-pass filter: Inflation and Unemployment
Explore the hypothesis that inflation and unemployment are counter-cyclical.
The Baxter-King filter is intended to explicitly deal with the periodicity of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at higher or lower frequencies than those of the business cycle. Specifically, the BK filter takes the form of a symmetric moving average
$$y_{t}^{*}=\sum_{k=-K}^{k=K}a_ky_{t-k}$$
where $a_{-k}=a_k$ and $\sum_{k=-k}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2).
For completeness, the filter weights are determined as follows
$$a_{j} = B_{j}+\theta\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$
$$B_{0} = \frac{\left(\omega_{2}-\omega_{1}\right)}{\pi}$$
$$B_{j} = \frac{1}{\pi j}\left(\sin\left(\omega_{2}j\right)-\sin\left(\omega_{1}j\right)\right)\text{ for }j=\pm1,\pm2,\dots,\pm K$$
where $\theta$ is a normalizing constant such that the weights sum to zero.
$$\theta=\frac{-\sum_{j=-K}^{K}B_{j}}{2K+1}$$
$$\omega_{1}=\frac{2\pi}{P_{H}}$$
$$\omega_{2}=\frac{2\pi}{P_{L}}$$
$P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default.
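The call above relies on the defaults; a minimal sketch with the quarterly-data parameters written out (low and high are the cut-off periods $P_L$ and $P_H$, and K is the number of leads/lags — the next section notes that K=12 is the suggested value for quarterly data):
# equivalent to the bkfilter call above, with the defaults made explicit
bk_cycles_explicit = sm.tsa.filters.bkfilter(dta[["infl", "unemp"]], low=6, high=32, K=12)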
End of explanation
fig = plt.figure(figsize=(12,10))
ax = fig.add_subplot(111)
bk_cycles.plot(ax=ax, style=['r--', 'b-']);
Explanation: We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
End of explanation
print(sm.tsa.stattools.adfuller(dta['unemp'])[:3])
print(sm.tsa.stattools.adfuller(dta['infl'])[:3])
cf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[["infl","unemp"]])
print(cf_cycles.head(10))
fig = plt.figure(figsize=(14,10))
ax = fig.add_subplot(111)
cf_cycles.plot(ax=ax, style=['r--','b-']);
Explanation: Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment
The Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. The implementation of their filter involves the
calculations of the weights in
$$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\dots+B_{T-1-t}y_{T-1}+\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\dots+B_{t-2}y_{2}+\tilde B_{t-1}y_{1}$$
for $t=3,4,...,T-2$, where
$$B_{j} = \frac{\sin(jb)-\sin(ja)}{\pi j},j\geq1$$
$$B_{0} = \frac{b-a}{\pi},a=\frac{2\pi}{P_{u}},b=\frac{2\pi}{P_{L}}$$
$\tilde B_{T-t}$ and $\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation.
The CF filter is appropriate for series that may follow a random walk.
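As a minimal sketch, the cffilter call used above with its arguments written out; drift=True removes a simple linear drift before filtering, which is what makes the filter usable on random-walk-like series:
# explicit form of the cffilter call above; the 6-32 quarter band matches the BK example
cf_cycles_explicit, cf_trend_explicit = sm.tsa.filters.cffilter(dta[["infl", "unemp"]], low=6, high=32, drift=True)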
End of explanation |
9,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementation of a Radix-2 Fast Fourier Transform
Import standard modules
Step3: This assignment is to implement a python-based Fast Fourier Transform (FFT). Building on $\S$ 2.8 ➞ we will implement a 1-D radix-2 Cooley-Tukey-based FFT using decimation in time (DIT) an $N = 2^n$ input function, and then generalize the function to take any input.
From $\S$ 2.8.2 ➞ the discrete Fourier transform (DFT) is defined as
Step5: In $\S$ 2.8.6 ➞ the fast Fourier transform was introduced as using recursion to implement a Fourier transform in $\mathcal{O}(N\log_2N)$ computations, significantly reducing the computational cost of computing the Fourier transform, especially for large $N$. A 'one layer' fast Fourier transform was presented which split the input function into two, and applied the twiddle factor to all values in the layer before calling the matrix-based DFT. This code is replicated below.
Step6: We can easily show that each of these functions produce the same results by introducting a discrete test function $x$ and showing that the same results are reported by each function call
Step7: We can also time each function to report of the amount of time is takes to return a finished spectrum.
Step8: As we can see the matrix DFT is significatly faster than the double loop DFT, this is because of the fast vectorization functions in numpy. And, the 'one-layer' FFT is about twice as fast as the matrix DFT because of the FFT architecture. We can go one fast and use the built-in numpy FFT
Step10: The numpy FFT is very fast, in part because of the low-level programing implementation, but fundamentally because it uses an FFT architecture. Our goal for this assignment is to implement such an architecture.
Decimation-in-Time (DIT) FFT (12 Points)
The computational efficiency of the FFT comes from the recursive design of the algorithm which takes advantage of a binary tree design and the use of generalized twiddle factors. There are two designs of the binary tree which leads to the decimation-in-time (DIT) and decimation-in-frequency (DIF) architectures. Both architectures produce equivalent results, they they differ in the direction and starting point of the computations on the FFT binary tree. See the wikipedia page on the Cooley-Tukey FFT ⤴ for a diagram and pseudo-code of the DIT implementation.
For this section of the assignment implement the Radix-2 DIT FFT algorithm for the case of a $2^n$ size input, this input can be either real or complex.
Step11: Once ditrad2() is properly implemented then the results of calling the function should be equivalent to the output of the numpy FFT, and should run faster than the DFT and one-layer FFT.
Step13: A non-$2^n$ FFT (10 points)
Now that we have implemented a fast radix-2 algorithm for vectors of length $2^n$, we can write a generic algorithm which can take any length input. This algorithm will check if the length of the input is divisible by 2, if so then it will use the FFT, otherwise it will default to the slower matrix-based DFT.
Step14: Now running this algorithm on inputs of different lengths there should be different run times. For a vector with a prime number length then the algorithm will default to the slow matrix-based DFT. For a vector of length nearly always divisible by 2 then the algorithm should be faster. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
import cmath
Explanation: Implementation of a Radix-2 Fast Fourier Transform
Import standard modules:
End of explanation
def loop_DFT(x):
Implementing the DFT in a double loop
Input: x = the vector we want to find the DFT of
#Get the length of the vector (will only work for 1D arrays)
N = x.size
#Create vector to store result in
X = np.zeros(N, dtype=complex)
for k in range(N):
for n in range(N):
X[k] += np.exp(-1j * 2.0* np.pi* k * n / N) * x[n]
return X
def matrix_DFT(x):
Implementing the DFT in vectorised form
Input: x = the vector we want to find the DFT of
#Get the length of the vector (will only work for 1D arrays)
N = x.size
#Create vector to store result in
n = np.arange(N)
k = n.reshape((N,1))
K = np.exp(-1j * 2.0 * np.pi * k * n / N)
return K.dot(x)
Explanation: This assignment is to implement a python-based Fast Fourier Transform (FFT). Building on $\S$ 2.8 ➞ we will implement a 1-D radix-2 Cooley-Tukey-based FFT using decimation in time (DIT) an $N = 2^n$ input function, and then generalize the function to take any input.
From $\S$ 2.8.2 ➞ the discrete Fourier transform (DFT) is defined as:
$$ \mathscr{F}{\rm D}{y}_k = Y_k = \sum{n\,=\,0}^{N-1} y_n\,e^{-\imath 2\pi \frac{nk}{N}}, $$
That is, the $k^{th}$ element of the Fourier-transformed spectrum $Y$ is a sum over all $n$ elements of the function $y$, each multiplied by a complex twiddle factor $e^{-\imath 2\pi \frac{nk}{N}}$. In $\S$ 2.8.5 ➞ two methods for computing the DFT for a size $N = 2^n$ discrete function were presented: a double loop to compute all elements of the Fourier-transformed spectrum, and a matrix multiplication by generating the Fourier kernel $K$. The compute time to perform the DFT is $\mathcal{O}(N^2)$, that is, it takes $cN^2$ operations where $c > 1$ is a constant factor. Though, as noted in $\S$ 2.8.5 ➞, the matrix implementation is much faster than the loop because it takes advantage of fast vector math libraries.
The DFT code is replicated here as it will be used to compare our implementation of the FFT:
End of explanation
def one_layer_FFT(x):
An implementation of the 1D Cooley-Tukey FFT using one layer
N = x.size
if N%2 > 0:
print "Warning: length of x is not a power of two, returning DFT"
return matrix_DFT(x)
else:
X_even = matrix_DFT(x[::2])
X_odd = matrix_DFT(x[1::2])
factor = np.exp(-2j * np.pi * np.arange(N) / N)
return np.concatenate([X_even + factor[:N / 2] * X_odd, X_even + factor[N / 2:] * X_odd])
Explanation: In $\S$ 2.8.6 ➞ the fast Fourier transform was introduced as using recursion to implement a Fourier transform in $\mathcal{O}(N\log_2N)$ computations, significantly reducing the computational cost of computing the Fourier transform, especially for large $N$. A 'one layer' fast Fourier transform was presented which split the input function into two, and applied the twiddle factor to all values in the layer before calling the matrix-based DFT. This code is replicated below.
End of explanation
xTest = np.random.random(256) # create random vector to take the DFT of
print np.allclose(loop_DFT(xTest), matrix_DFT(xTest)) # returns True if all values are equal (within numerical error)
print np.allclose(matrix_DFT(xTest), one_layer_FFT(xTest)) # returns True if all values are equal (within numerical error)
Explanation: We can easily show that each of these functions produce the same results by introducting a discrete test function $x$ and showing that the same results are reported by each function call:
End of explanation
print 'Double Loop DFT:'
%timeit loop_DFT(xTest)
print '\nMatrix DFT:'
%timeit matrix_DFT(xTest)
print '\nOne Layer FFT + Matrix DFT:'
%timeit one_layer_FFT(xTest)
Explanation: We can also time each function to report of the amount of time is takes to return a finished spectrum.
End of explanation
print np.allclose(one_layer_FFT(xTest), np.fft.fft(xTest))
print 'numpy FFT:'
%timeit np.fft.fft(xTest)
Explanation: As we can see, the matrix DFT is significantly faster than the double-loop DFT; this is because of the fast vectorization functions in numpy. And the 'one-layer' FFT is about twice as fast as the matrix DFT because of the FFT architecture. We can go one step further and use the built-in numpy FFT:
End of explanation
def ditrad2(x):
radix-2 DIT FFT
x: list or array of N values to perform FFT on, can be real or imaginary, x must be of size 2^n
ox = np.asarray(x, dtype='complex') # assure the input is an array of complex values
# INSERT: assign a value to N, the size of the FFT
N = #??? 1 point
if N==1: return ox # base case
# INSERT: compute the 'even' and 'odd' components of the FFT,
# you will recursively call ditrad() here on a subset of the input values
# Hint: a binary tree design splits the input in half
even = #??? 2 points
odd = #??? 2 points
twiddles = np.exp(-2.j * cmath.pi * np.arange(N) / N) # compute the twiddle factors
# INSERT: apply the twiddle factors and return the FFT by combining the even and odd values
# Hint: twiddle factors are only applied to the odd values
# Hint: combing even and odd is different from the way the inputs were split apart above.
return #??? 3 points
Explanation: The numpy FFT is very fast, in part because of the low-level programming implementation, but fundamentally because it uses an FFT architecture. Our goal for this assignment is to implement such an architecture.
Decimation-in-Time (DIT) FFT (12 Points)
The computational efficiency of the FFT comes from the recursive design of the algorithm, which takes advantage of a binary tree structure and the use of generalized twiddle factors. There are two designs of the binary tree, which lead to the decimation-in-time (DIT) and decimation-in-frequency (DIF) architectures. Both architectures produce equivalent results; they differ in the direction and starting point of the computations on the FFT binary tree. See the wikipedia page on the Cooley-Tukey FFT ⤴ for a diagram and pseudo-code of the DIT implementation.
For this section of the assignment implement the Radix-2 DIT FFT algorithm for the case of a $2^n$ size input, this input can be either real or complex.
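Before filling in the code it is worth quantifying the gain we expect from the binary-tree design. A small back-of-the-envelope sketch (operation counts only, ignoring constant factors; it does not give away any part of the implementation):
N_demo = 256                      # size of xTest above
dft_ops = N_demo ** 2             # O(N^2) for the double-loop / matrix DFT
fft_ops = N_demo * 8              # O(N log2 N): log2(256) = 8
print("DFT ~ %d operations, FFT ~ %d operations, roughly %dx fewer" % (dft_ops, fft_ops, dft_ops // fft_ops))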
End of explanation
print 'The output of ditrad2() is correct?', np.allclose(np.fft.fft(xTest), ditrad2(xTest)) # 2 points if true
print 'your FFT:'
%timeit ditrad2(xTest) # 2 point if your time < One Layer FFT + Matrix DFT
Explanation: Once ditrad2() is properly implemented then the results of calling the function should be equivalent to the output of the numpy FFT, and should run faster than the DFT and one-layer FFT.
End of explanation
def generalFFT(x):
radix-2 DIT FFT
x: list or array of N values to perform FFT on, can be real or imaginary
ox = np.asarray(x, dtype='complex') # assure the input is an array of complex values
# INSERT: assign a value to N, the size of the FFT
N = #??? 1 point
if N==1: return ox # base case
elif # INSERT: check if the length is divisible by 2, 1 point
elif N % 2 ==0: # the length of the input vector is divisable by 2
# INSERT: do a FFT, use your ditrad2() code here, 3 points
# Hint: your ditrad2() code can be copied here, and will work with only a minor modification
else: # INSERT: if not divisable by 2, do a slow Fourier Transform
return # ??? 1 point
Explanation: A non-$2^n$ FFT (10 points)
Now that we have implemented a fast radix-2 algorithm for vectors of length $2^n$, we can write a generic algorithm which can take any length input. This algorithm will check if the length of the input is divisible by 2, if so then it will use the FFT, otherwise it will default to the slower matrix-based DFT.
End of explanation
xTest2 = np.random.random(251) # create random vector to take the DFT of, not, this is not of length 2^n
xTest3 = np.random.random(12*32) # create random vector to take the DFT of, not, this is not of length 2^n
print 'The output of generalFFT() is correct?', np.allclose(np.fft.fft(xTest2), generalFFT(xTest2)) # 1 point
print 'Your generic FFT:'
%timeit generalFFT(xTest2) # 1 point if it runs in approximately the same time as matrix_DFT
%timeit generalFFT(xTest3) # 2 point if it runs faster than the xTest2 vector
Explanation: Now running this algorithm on inputs of different lengths there should be different run times. For a vector with a prime number length then the algorithm will default to the slow matrix-based DFT. For a vector of length nearly always divisible by 2 then the algorithm should be faster.
End of explanation |
9,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to build a machine learning marketing model for banking using Google Cloud Platform and Python
This notebook shows you how to build a marketing model for banking using Google Cloud Platform (GCP). Many financial institutions use traditional on-premise methods and tooling to build models for marketing. This notebook takes you through the steps to build a Machine Learning model using open source and Google Cloud Platform. Some of the advantages of this approach are
Step1: (Only if you are using a colab notebook) We need to authenticate to Google Cloud and create the service client. After running the cell below, a link will appear which you need to click on and follow the instructions.
Step2: Next, we need to set our project. Replace 'PROJECT_ID' with your GCP project ID.
Step3: Third, you need to activate some of the GCP services that we will be using. Run the following cell if you need to activate API's. Which can also be done via the GUI via APIs and Services -> Enable APIS and Services.
Step4: The data that we will be using for this demo is the UCI Bank Marketing Dataset.
The first step we will need to take is to create a BigQuery dataset and a table so that we can store this data. Make sure that you replace your_dataset and your_table variables with any dataset and table name you want.
Step5: There is a public dataset avaliable which has cleaned up some of the rows in the UCI Bank Marketing Dataset. We will download this file in the next cell and save locally as data.csv.
Step6: We will now upload the data.csv file into our BigQuery table.
Step7: 1) Fetching data
In this chapter we will get data from BigQuery and create a Pandas dataframe that we will be using for data engineering, data visualization and modeling.
Data from BigQuery to Pandas
We are going to use the datalab.bigquery library to fetch data from bigquery and load a Pandas dataframe.
Step8: We doing two things in this cell
Step9: We will now expore the data we got from BQ
2) Data exploration
We will use Pandas profiling to perform data exploration. This will give us information including distributions for each feature, missing values, the maximum and minimum values and many more. These are all out of the box. Run the next cell first if you haven't installed pandas profiling. (Note if after you haved installed pandas profiling, you get an import error, restart your kernel and re-run all the cells up until this section).
Step10: Some interesting points from the pandas profiling
Step11: 4) Data preparation (feature engineering)
Before we can create Machine Learning models, we need to format the data so that it is in a form that the models can understand.
We need to do the following steps
Step13: Now we are going to ceate a function to split the label we want to predict and the feature that we will use to predict this value. In addition, we convert the label to 1/0.
Step15: Our training dataset, train_features, contains both categorical and numeric values. However, we know that machine learning models can only use numeric values. The function below converts categorical variables to integers and then normalizes the current numeric columns so that certain columns with very large numbers would not over-power those columns whose values are not so large.
Step16: Some columns in our training dataset may not be very good predictors. This means that we should perform feature selection to get only the best predictors and reduce our time for training since our dataset will be much smaller.
Step17: To visualize the selection, we can plot a graph to look at the scores for each feature. Note that the duration feature had 0 as its p-value and so it could not be shown in the logarithmic scale.
Step18: It's also possible to use a Tree classifier to select the best features. It's often a good option when you have a highly imbalanced dataset.
Step19: The plot sometimes may be difficult to know which are the top five features. We can display a simple table with the top selected features and their scores. We are using SelectKBest with f_classif.
Step20: Let us now create a training dataset that contains the top 5 features.
Step21: 5) Building and evaluation of the models
In this section we will be building models using Scikit-Learn. We show how hyper parameter tuning / optimization and model evaluation can be used to select the best model for deployment.
Step23: The following code defines different hyperparameter combinations. More precisely, we define different model types (e.g., Logistic Regression, Support Vectors Machines (SVC)) and the corresponding lists of parameters that will be used during the optimization process (e.g., different kernel types for SVM).
Step25: After defining our hyperparameters, we use sklearn's grid search to iterate through the different combinations of hyperparameters and return the best parameters for each model type. Furthermore, we use crossvalidation, pruning the data into smaller subsets (see K-fold cross validation).
Step27: Finally, we define a process enabling us to return the best configuration for each model using cross-validation (the best model is selected based on its F1-score).
Step28: Evaluation of model performance
In order to choose the best performing model, we shall compare each of the models on the held-out test dataset.
Step29: Model comparison
To compare the performance of different models we create a table with different metrics.
Step31: Graphical comparison
For the graphical representation of model performance we use roc curves to highlight the True Positive Rate (TPR), also known as recall, and the False Positive Rate (FPR).
Step32: Now that we have all these evaluation metrics, we can select a model based on which metrics we want our models to maximize or minimize.
6) Explaining the model
We use the python package LIME to explain the model and so we will move from our use of pandas to numpy matrices since this is what LIME accepts.
Step33: The first thing we will do is to get the unique values from our label.
Step34: We need a dataset with our top 5 features but with the categorical values still present. This will allow LIME to know how it should display our features. E.g. using our column example earlier, it will know to display "yellow" whenever it sees a 0.
Step35: LIME needs to know the index of each catergorical column.
Step36: In addition, LIME requires a dictionary which contains the name of each column and the unique values for each column.
Step37: Create a function that will return the probability that the model (in our case we chose the logistic regression model) selects a certain class.
Step38: Use the LIME package to configure a variable that can be used to explain predicitons.
Step39: When you would like to understand the prediction of a value in the test set, create an explanation instance and show the result.
Step40: 7) Train and Predict with Cloud AI Platform
We just saw how to train and predict our models locally. However, when we want more compute power or want to put our model in production serving 1000s of requests, we can use Cloud AI Platform to perform these tasks.
Let us define some environment variables that AI Platform uses. Do not forget to replace all the variables in square brackets (along with the square brackets) with your credentials.
Step41: AI Platform needs a Python package with our code to train models. We need to create a directory and move our code there. We also need to create an __init__.py file, this is a unique feature of python. You can read the docs to understand more about this file.
Step42: In order for Cloud AI Platform to access the training data we need to upload a trainging file into Google Cloud Storage (GCS). We use our strat_train_set dataframe and convert it into a csv file which we upload to GCS.
Step43: This next cell might seem very long, however, most of the code is identical to earlier sections. We are simply combining the code we created previously into one file.
Before running this, cell substitute <BUCKET_NAME> with your GCS bucket name. Do not include the 'gs
Step44: To actully run the train.py file we need some parameters so that AI Platform knows how to set up the environment to run sucessfully.
Step45: The above cell submits a job to AI Platform which you can view by going to the Google Cloud Console's sidebar and select AI Platform > Jobs or search AI Platform in the search bar. ONLY run the cells below after your job has completed sucessfully. (It should take approximately 8 minutes to run).
Now that we have trained our model and it is saved in GCS we need to perform prediction. There are two options available to use for prediction
Step46: Next, find the model directory in your GCS bucket that contains the model created in the previous steps.
Step47: Just like training in AI Platform, we set some environment variables when we run our command line commands. Note that <GCS_BUCKET> is your the name of your GCS bucket set earlier. The MODEL_DIRECTORY will be inside the GCS bucket and of the form model_YYYYMMDD_HHMMSS (e.g. model_190114_134228).
Step48: Create a model resource for your model versions as well as the version.
Step49: For prediction, we will upload a file with one line in our GCS bucket.
Step50: We are now ready to submit our file to get a preditcion.
Step51: We can also use python to perform predictions. See the cell below for a simple way to get predictions using python. | Python Code:
!pip install pandas-profiling
!pip install lime
Explanation: How to build a machine learning marketing model for banking using Google Cloud Platform and Python
This notebook shows you how to build a marketing model for banking using Google Cloud Platform (GCP). Many financial institutions use traditional on-premise methods and tooling to build models for marketing. This notebook takes you through the steps to build a Machine Learning model using open source and Google Cloud Platform. Some of the advantages of this approach are:
Flexibility: The open source models can easily be ported.
Scalability: It's easy to scale using the power of Google Cloud.
Transparency: Lime will give you more insights in the model.
Ease of use: Pandas and Scikit-learn are easy to use.
We will go into some important topics for modeling within banking, like data exploration and model explanation. This notebook is created so that it can be re-used for migrating workloads to open source and the cloud. We will take you through the following steps:
Fetching data from Google BigQuery
Data Exploration using Pandas profiling
Data partitioning using Scikit-learn
Data Engineering
Building and evaluating different models
Explaining the models using Lime
Use Cloud AI Platform to deploy the model as an API
Get predictions
Type of model
The goal of this model is to predict whether the banking client will subscribe to a term deposit, which is variable y in our dataset. This class of models is called "propensity to buy" models and this type of problem is binary classification. "Propensity to buy" models can help us predict the success of our marketing campaign.
What this notebook will not do
Teach you the basics of Machine Learning. We focus on how to train and deploy a ML model using the power of Google Cloud.
There is not one Cloud solution for all of your business problems. Pandas and Scikit-learn both have limitations; we chose them because they help make the transition from on-prem solutions to open source and Google Cloud easier. We have other solutions that can help you scale things even further using Google Cloud.
Prerequisites
Before we get started we need to go through a couple of prerequisites.
First, install two packages that your environment may not have (lime and pandas profiling). After running the cell, restart the Kernel by clicking 'Kernel > Restart Kernel' on the top menu.
End of explanation
# ONLY RUN IF YOU ARE IN A COLAB NOTEBOOK.
from google.colab import auth
auth.authenticate_user()
Explanation: (Only if you are using a colab notebook) We need to authenticate to Google Cloud and create the service client. After running the cell below, a link will appear which you need to click on and follow the instructions.
End of explanation
%env GOOGLE_CLOUD_PROJECT=PROJECT_ID
!gcloud config set project $GOOGLE_CLOUD_PROJECT
Explanation: Next, we need to set our project. Replace 'PROJECT_ID' with your GCP project ID.
End of explanation
!gcloud services enable ml.googleapis.com
!gcloud services enable bigquery-json.googleapis.com
Explanation: Third, you need to activate some of the GCP services that we will be using. Run the following cell if you need to activate the APIs. This can also be done via the GUI under APIs & Services -> Enable APIs and Services.
End of explanation
import os
your_dataset = 'your_dataset'
your_table = 'your_table'
project_id = os.environ["GOOGLE_CLOUD_PROJECT"]
!bq mk -d {project_id}:{your_dataset}
!bq mk -t {your_dataset}.{your_table}
Explanation: The data that we will be using for this demo is the UCI Bank Marketing Dataset.
The first step we will need to take is to create a BigQuery dataset and a table so that we can store this data. Make sure that you replace your_dataset and your_table variables with any dataset and table name you want.
End of explanation
!curl https://storage.googleapis.com/erwinh-public-data/bankingdata/bank-full.csv --output data.csv
Explanation: There is a public dataset available which has cleaned up some of the rows in the UCI Bank Marketing Dataset. We will download this file in the next cell and save it locally as data.csv.
End of explanation
!bq load --autodetect --source_format=CSV --field_delimiter ';' --skip_leading_rows=1 --replace {your_dataset}.{your_table} data.csv
Explanation: We will now upload the data.csv file into our BigQuery table.
End of explanation
#import pandas and bigquery library
import pandas as pd
from google.cloud import bigquery as bq
Explanation: 1) Fetching data
In this chapter we will get data from BigQuery and create a Pandas dataframe that we will be using for data engineering, data visualization and modeling.
Data from BigQuery to Pandas
We are going to use the google.cloud.bigquery library to fetch data from BigQuery and load a Pandas dataframe.
End of explanation
# Execute the query and converts the result into a Dataframe
client = bq.Client(project=project_id)
df = client.query('''
SELECT
*
FROM
`%s.%s`
''' % (your_dataset, your_table)).to_dataframe()
df.head(3).T
Explanation: We are doing two things in this cell:
We are executing an SQL query
We are converting the output from BQ into a pandas dataframe using .to_dataframe()
End of explanation
import pandas_profiling as pp
# Let's create a Profile Report using the dataframe that we just created.
pp.ProfileReport(df)
Explanation: We will now explore the data we got from BQ
2) Data exploration
We will use Pandas profiling to perform data exploration. This will give us information including distributions for each feature, missing values, the maximum and minimum values and many more, all out of the box. Run the install cell at the top first if you haven't installed pandas profiling. (Note: if, after you have installed pandas profiling, you get an import error, restart your kernel and re-run all the cells up until this section.)
End of explanation
from sklearn.model_selection import StratifiedShuffleSplit
#Here we apply a shuffle and stratified split to create a train and test set.
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=40)
for train_index, test_index in split.split(df, df["y"]):
strat_train_set = df.loc[train_index]
strat_test_set = df.loc[test_index]
# check the split sizes
print(strat_train_set.size)
print(strat_test_set.size)
# We can now check the data
strat_test_set.head(3).T
Explanation: Some interesting points from the pandas profiling:
* We have categorical and boolean columns which we need to convert to numeric values
* The label is very imbalanced (only 5,289 clients subscribed to a term deposit compared to 39,922 who did not), so we need to ensure that our training and testing splits are representative of this skew (see the quick check after this list)
* No missing values
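A quick check of that imbalance directly on the dataframe (a minimal sketch; the exact counts assume the full bank-full.csv file loaded above):
# raw counts of the label we want to predict
print(df["y"].value_counts())          # roughly 39922 'no' vs 5289 'yes'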
3) Data partitioning (split data into training and testing)
As our dataset is highly skewed, we need to be very careful with our sampling approach. Two things need to be considered:
Shuffle the dataset to avoid any form of pre-ordering.
Use stratified sampling, which makes sure that both datasets (test and training) do not significantly differ for variables of interest. In our case we use it to achieve a similar distribution of y in both datasets; the short check below confirms this.
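A small sketch to verify that the stratified split preserved the label distribution (the proportions in the two splits should be nearly identical):
# compare the class proportions of y in the train and test splits
print(strat_train_set["y"].value_counts(normalize=True))
print(strat_test_set["y"].value_counts(normalize=True))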
End of explanation
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
Explanation: 4) Data preparation (feature engineering)
Before we can create Machine Learning models, we need to format the data so that it is in a form that the models can understand.
We need to do the following steps:
For the numeric columns, we need to normalize these columns so that one column with very large values does not bias the computation.
Turn categorical values into numeric values by replacing each unique value in a column with an integer. For example, if a column named "Colour" has three unique strings "red", "yellow" and "blue", each string is assigned its own integer code, so every instance of "yellow" in that column is replaced by the same integer. Note: one hot encoding is an alternative way to encode categorical values numerically (a short sketch follows this list).
For True/False values we simply convert these to 1/0 respectively.
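As a minimal sketch of the one-hot alternative mentioned above, pd.get_dummies creates one indicator column per category instead of a single integer code (the "job" column is one of the categorical columns in this dataset):
# one-hot encode a single categorical column as an illustration
job_dummies = pd.get_dummies(df["job"], prefix="job")
print(job_dummies.head(3))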
End of explanation
def return_features_and_label(df):
returns features and label given argument dataframe
# Get all the columns except "y". It's also possible to exclude other columns
X = df.drop("y", axis=1)
Y = df["y"].copy ()
# Convert our label to an integer
Y = LabelEncoder().fit_transform(Y)
return X, Y
train_features, train_label = return_features_and_label(strat_train_set)
Explanation: Now we are going to create a function that splits off the label we want to predict from the features that we will use to predict it. In addition, we convert the label to 1/0.
End of explanation
def data_pipeline(df):
Normalizes and converts data and returns dataframe
num_cols = df.select_dtypes(include=np.number).columns
cat_cols = list(set(df.columns) - set(num_cols))
# Normalize Numeric Data
df[num_cols] = StandardScaler().fit_transform(df[num_cols])
# Convert categorical variables to integers
df[cat_cols] = df[cat_cols].apply(LabelEncoder().fit_transform)
return df
train_features_prepared = data_pipeline(train_features)
train_features_prepared.head(3).T
Explanation: Our training dataset, train_features, contains both categorical and numeric values. However, machine learning models can only work with numeric inputs. The function below converts categorical variables to integers and then normalizes the numeric columns so that columns with very large values do not overpower columns with smaller values.
End of explanation
from sklearn.feature_selection import SelectKBest, f_classif
predictors = train_features_prepared.columns
# Perform feature selection where `k` (5 in this case) indicates the number of features we wish to select
selector = SelectKBest(f_classif, k=5)
selector.fit(train_features_prepared[predictors], train_label)
Explanation: Some columns in our training dataset may not be very good predictors. This means that we should perform feature selection to get only the best predictors and reduce our time for training since our dataset will be much smaller.
End of explanation
# Get the p-values from our selector for each model and convert to a logarithmic scale for easy vizualization
importance_score = -np.log(selector.pvalues_)
# Plot each column with their importance score
plt.rcParams["figure.figsize"] = [14,7]
plt.barh(range(len(predictors)), importance_score, color='C0')
plt.ylabel("Predictors")
plt.title("Importance Score")
plt.yticks(range(len(predictors)), predictors)
plt.show()
Explanation: To visualize the selection, we can plot a graph to look at the scores for each feature. Note that the duration feature had 0 as its p-value and so it could not be shown in the logarithmic scale.
End of explanation
# Example of how to use a Tree classifier to select best features.
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
predictors_tree = train_features_prepared.columns
selector_clf = ExtraTreesClassifier(n_estimators=50, random_state=0)
selector_clf.fit(train_features_prepared[predictors_tree], train_label)
# Plotting feature importance
importances = selector_clf.feature_importances_
std = np.std([tree.feature_importances_ for tree in selector_clf.estimators_],
axis=0)
plt.rcParams["figure.figsize"] = [14,7]
plt.barh(range(len(predictors_tree)), importances, xerr=std, color='C0')
plt.ylabel("Predictors")
plt.title("Importance Score")
plt.yticks(range(len(predictors_tree)), predictors_tree)
plt.show()
Explanation: It's also possible to use a Tree classifier to select the best features. It's often a good option when you have a highly imbalanced dataset.
End of explanation
# Display the top 5 features based on the Log Score that we calculated earlier.
train_prepared_indexs = [count for count, selected in enumerate(selector.get_support()) if selected == True]
pd.DataFrame(
{'Feature' : predictors[train_prepared_indexs],
'Original Score': selector.pvalues_[train_prepared_indexs],
'Log Score' : importance_score[train_prepared_indexs]
}
)
Explanation: From the plot alone it may be difficult to tell which are the top five features, so we can display a simple table with the selected features and their scores. We are using SelectKBest with f_classif.
End of explanation
# Here we are creating our new dataframe based on the selected features (from selector)
train_prepared_columns = [col for (selected, col) in zip(selector.get_support(), predictors) if selected == True]
train_prepared = train_features_prepared[train_prepared_columns]
Explanation: Let us now create a training dataset that contains the top 5 features.
End of explanation
# Importing libraries needed
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import make_scorer
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import auc
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
import numpy as np
Explanation: 5) Building and evaluation of the models
In this section we will be building models using Scikit-Learn. We show how hyper parameter tuning / optimization and model evaluation can be used to select the best model for deployment.
End of explanation
# this function will create the classifiers (models) that we want to test
def create_classifiers():
Create classifiers and specify hyper parameters
log_params = [{'penalty': ['l1', 'l2'], 'C': np.logspace(0, 4, 10)}]
knn_params = [{'n_neighbors': [3, 4, 5]}]
svc_params = [{'kernel': ['linear', 'rbf'], 'probability': [True]}]
tree_params = [{'criterion': ['gini', 'entropy']}]
forest_params = {'n_estimators': [1, 5, 10]}
mlp_params = {'activation': [
'identity', 'logistic', 'tanh', 'relu'
]}
ada_params = {'n_estimators': [1, 5, 10]}
classifiers = [
['LogisticRegression', LogisticRegression(random_state=42),
log_params],
['KNeighborsClassifier', KNeighborsClassifier(), knn_params],
['SVC', SVC(random_state=42), svc_params],
['DecisionTreeClassifier',
DecisionTreeClassifier(random_state=42), tree_params],
['RandomForestClassifier',
RandomForestClassifier(random_state=42), forest_params],
['MLPClassifier', MLPClassifier(random_state=42), mlp_params],
['AdaBoostClassifier', AdaBoostClassifier(random_state=42),
ada_params],
]
return classifiers
Explanation: The following code defines different hyperparameter combinations. More precisely, we define different model types (e.g., Logistic Regression, Support Vector Machines (SVC)) and the corresponding lists of parameters that will be tried during the optimization process (e.g., different kernel types for SVM).
End of explanation
# this grid search will iterate through the different combinations and returns the best parameters for each model type.
# Running this cell might take a while
def grid_search(model, parameters, name,training_features, training_labels):
Grid search that returns best parameters for each model type
clf = GridSearchCV(model, parameters, cv=3, refit = 'f1',
scoring='f1', verbose=0, n_jobs=4)
clf.fit(training_features, training_labels)
best_estimator = clf.best_estimator_
return [name, str(clf.best_params_), clf.best_score_,
best_estimator]
Explanation: After defining our hyperparameters, we use sklearn's grid search to iterate through the different combinations of hyperparameters and return the best parameters for each model type. Furthermore, we use cross-validation, partitioning the data into smaller subsets (see K-fold cross-validation).
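For reference, K-fold cross-validation of a single model can also be run directly (a minimal sketch, separate from the grid search above, using the same 100-row subset as the notebook):
# Minimal sketch, separate from the grid search: 3-fold cross-validated F1
# score for a single logistic regression model on the prepared training data.
from sklearn.model_selection import cross_val_score
scores = cross_val_score(LogisticRegression(random_state=42),
                         train_prepared[:100], train_label[:100], cv=3, scoring='f1')
print(scores.mean())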
End of explanation
# Now we want to get the best configuration for each model.
def best_configuration(classifiers, training_features, training_labels):
returns the best configuration for each model
clfs_best_config = []
for (name, model, parameters) in classifiers:
clfs_best_config.append(grid_search(model, parameters, name,
training_features, training_labels))
return clfs_best_config
# Here we call the Grid search and Best_configuration function (note we only use 100 rows to decrease the run time).
import warnings
warnings.filterwarnings('ignore')
classifiers = create_classifiers()
clfs_best_config = best_configuration(classifiers, train_prepared[:100], train_label[:100])
Explanation: Finally, we define a process enabling us to return the best configuration for each model using cross-validation (the best model is selected based on its F1-score).
End of explanation
# Prepare the test data for prediction
test_features, test_label = return_features_and_label(strat_test_set)
test_features_prepared = data_pipeline(test_features)
test_prepared = test_features_prepared[train_prepared_columns]
Explanation: Evaluation of model performance
In order to choose the best performing model, we shall compare each of the models on the held-out test dataset.
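Besides the summary table below, a confusion matrix gives a useful first look at one model's errors (an optional sketch):
# Optional sketch: confusion matrix on the test set for the first tuned model.
from sklearn.metrics import confusion_matrix
name, params, score, model = clfs_best_config[0]
print(name)
print(confusion_matrix(test_label, model.predict(test_prepared)))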
End of explanation
f1_score_list = []
accuracy_list = []
precision_list = []
recall_list = []
roc_auc_list = []
model_name_list = []
# Iterate through the different model combinations to calculate perf. metrics.
for name, params, score, model in clfs_best_config:
pred_label = model.predict(test_prepared) # Predict outcome.
f1_score_list.append(f1_score(test_label,pred_label)) # F1 score.
accuracy_list.append(accuracy_score(test_label, pred_label)) # Accuracy score.
precision_list.append(precision_score(test_label, pred_label)) # Precision score.
recall_list.append(recall_score(test_label, pred_label)) # Recall score.
roc_auc_list.append(roc_auc_score(test_label,
model.predict_proba(test_prepared)[:, 1])) # Predict probability.
model_name_list.append(name)
# Sum up metrics in a pandas data frame.
pd.DataFrame(
{'Model' : model_name_list,
'F1 Score' : f1_score_list,
'Accuracy': accuracy_list,
'Precision': precision_list,
'Recall': recall_list,
'Roc_Auc': roc_auc_list
},
columns = ['Model','F1 Score','Precision','Recall', 'Accuracy', 'Roc_Auc']
)
Explanation: Model comparison
To compare the performance of different models we create a table with different metrics.
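As a reminder, with TP/FP/TN/FN counting true/false positives/negatives: Precision = TP / (TP + FP), Recall = TP / (TP + FN), and F1 is their harmonic mean, 2 * Precision * Recall / (Precision + Recall).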
End of explanation
# Create a function that plots an ROC curve
def roc_graph(test_label, pred_label, name):
Plots the ROC curve for one model on the current figure
fpr, tpr, thresholds = roc_curve(test_label, pred_label, pos_label=1)
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=2, label='%s ROC (area = %0.2f)' % (name, roc_auc))
# Iterate through the models and add an ROC curve for each one.
for name, _, _, model in clfs_best_config:
pred_label = model.predict_proba(test_prepared)[:,1]
roc_graph(test_label, pred_label, name)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves ')
plt.legend(loc="lower right", fontsize='small')
plt.show()
Explanation: Graphical comparison
For the graphical representation of model performance we use ROC curves, which plot the True Positive Rate (TPR), also known as recall, against the False Positive Rate (FPR).
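Concretely, TPR = TP / (TP + FN) and FPR = FP / (FP + TN); each point on a ROC curve corresponds to one threshold on the model's predicted probability.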
End of explanation
import lime.lime_tabular
import lime
import sklearn
import pprint
Explanation: Now that we have all these evaluation metrics, we can select a model based on which metrics we want our model to maximize or minimize.
6) Explaining the model
We use the python package LIME to explain the model and so we will move from our use of pandas to numpy matrices since this is what LIME accepts.
End of explanation
class_names = strat_train_set["y"].unique()
Explanation: The first thing we will do is to get the unique values from our label.
End of explanation
train = train_features[train_prepared_columns].values
Explanation: We need a dataset with our top 5 features but with the categorical values still present. This will allow LIME to know how it should display our features. E.g. using our column example earlier, it will know to display "yellow" whenever it sees the integer that "yellow" was mapped to.
End of explanation
num_cols = train_features._get_numeric_data().columns
cat_cols = list(set(train_features.columns) - set(num_cols))
categorical_features_index = [i for i, val in enumerate(train_prepared_columns) if val in cat_cols]
Explanation: LIME needs to know the index of each categorical column.
End of explanation
categorical_names = {}
for feature in categorical_features_index:
# We still need to convert categorical variables to integers
le = sklearn.preprocessing.LabelEncoder()
le.fit(train[:, feature])
train[:, feature] = le.transform(train[:, feature])
categorical_names[feature] = le.classes_
Explanation: In addition, LIME requires a dictionary that maps the index of each categorical column to the unique values of that column.
End of explanation
predict_fn = lambda x: clfs_best_config[0][-1].predict_proba(x).astype(float)
Explanation: Create a function that will return the probability that the model (in our case we chose the logistic regression model) selects a certain class.
End of explanation
explainer = lime.lime_tabular.LimeTabularExplainer(train, feature_names=train_prepared_columns,class_names=class_names,
categorical_features=categorical_features_index,
categorical_names=categorical_names, kernel_width=3)
Explanation: Use the LIME package to create an explainer object that can be used to explain predictions.
End of explanation
i = 106
exp = explainer.explain_instance(train[i], predict_fn)
pprint.pprint(exp.as_list())
fig = exp.as_pyplot_figure()
Explanation: When you would like to understand the prediction for a particular instance, create an explanation for it and show the result.
End of explanation
%env GCS_BUCKET=<GCS_BUCKET>
%env REGION=us-central1
%env LOCAL_DIRECTORY=./trainer/data
%env TRAINER_PACKAGE_PATH=./trainer
Explanation: 7) Train and Predict with Cloud AI Platform
We just saw how to train and predict our models locally. However, when we want more compute power or want to put our model in production serving 1000s of requests, we can use Cloud AI Platform to perform these tasks.
Let us define some environment variables that AI Platform uses. Do not forget to replace all the variables in angle brackets (along with the angle brackets) with your own values.
End of explanation
%%bash
mkdir trainer
touch trainer/__init__.py
Explanation: AI Platform needs a Python package with our code to train models. We need to create a directory and move our code there. We also need to create an __init__.py file, which marks the directory as a Python package; you can read the Python docs to learn more about this file.
End of explanation
strat_train_set.to_csv('train.csv', index=None)
!gsutil cp train.csv $GCS_BUCKET
Explanation: In order for Cloud AI Platform to access the training data we need to upload a training file into Google Cloud Storage (GCS). We use our strat_train_set dataframe, convert it into a CSV file and upload it to GCS.
End of explanation
%%writefile trainer/task.py
import datetime
import os
import pandas as pd
import numpy as np
import subprocess
from google.cloud import storage
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.metrics import make_scorer
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
# TODO: REPLACE '<BUCKET_NAME>' with your GCS bucket name
BUCKET_NAME = '<BUCKET_NAME>'
# Bucket holding the training data
bucket = storage.Client().bucket(BUCKET_NAME)
# Path to the data inside the bucket
blob = bucket.blob('train.csv')
# Download the data
blob.download_to_filename('train.csv')
# [END download-data]
# [START scikit-learn code]
# Load the training dataset
with open('./train.csv', 'r') as train_data:
df = pd.read_csv(train_data)
def return_features_and_label(df_tmp):
# Get all the columns except the one named "y"
X = df_tmp.drop("y", axis=1)
Y = df_tmp["y"].copy()
# Convert label to an integer
Y = LabelEncoder().fit_transform(Y)
return X, Y
def data_pipeline(df_tmp):
num_cols = df_tmp._get_numeric_data().columns
cat_cols = list(set(df_tmp.columns) - set(num_cols))
# Normalize Numeric Data
df_tmp[num_cols] = StandardScaler().fit_transform(df_tmp[num_cols])
# Convert categorical variables to integers
df_tmp[cat_cols] = df_tmp[cat_cols].apply(LabelEncoder().fit_transform)
return df_tmp
def create_classifiers():
log_params = [{'penalty': ['l1', 'l2'], 'C': np.logspace(0, 4, 10)}]
knn_params = [{'n_neighbors': [3, 4, 5]}]
svc_params = [{'kernel': ['linear', 'rbf'], 'probability': [True]}]
tree_params = [{'criterion': ['gini', 'entropy']}]
forest_params = {'n_estimators': [1, 5, 10]}
mlp_params = {'activation': [
'identity', 'logistic', 'tanh', 'relu'
]}
ada_params = {'n_estimators': [1, 5, 10]}
classifiers = [
['LogisticRegression', LogisticRegression(random_state=42),
log_params],
['KNeighborsClassifier', KNeighborsClassifier(), knn_params],
['SVC', SVC(random_state=42), svc_params],
['DecisionTreeClassifier',
DecisionTreeClassifier(random_state=42), tree_params],
['RandomForestClassifier',
RandomForestClassifier(random_state=42), forest_params],
['MLPClassifier', MLPClassifier(random_state=42), mlp_params],
['AdaBoostClassifier', AdaBoostClassifier(random_state=42),
ada_params],
]
return classifiers
def grid_search(model, parameters, name, X, y):
clf = GridSearchCV(model, parameters, cv=3, refit = 'f1',
scoring='f1', verbose=0, n_jobs=4)
clf.fit(X, y)
best_estimator = clf.best_estimator_
return [name, clf.best_score_, best_estimator]
def best_configuration(classifiers, training_values, testing_values):
clfs_best_config = []
best_clf = None
best_score = 0
for (name, model, parameters) in classifiers:
clfs_best_config.append(grid_search(model, parameters, name,
training_values, testing_values))
for name, quality, clf in clfs_best_config:
if quality > best_score:
best_score = quality
best_clf = clf
return best_clf
train_features, train_label = return_features_and_label(df)
train_features_prepared = data_pipeline(train_features)
predictors = train_features_prepared.columns
# Perform feature selection
selector = SelectKBest(f_classif, k=5)
selector.fit(train_features_prepared[predictors], train_label)
train_prepared_columns = [col for (selected, col) in zip(selector.get_support(), predictors) if selected == True]
train_features_prepared = train_features_prepared[train_prepared_columns]
x = train_features_prepared.values
y = train_label
classifiers = create_classifiers()
clf = best_configuration(classifiers, x[:100], y[:100])
# [END scikit-learn]
# [START export-to-gcs]
# Export the model to a file
model = 'model.joblib'
joblib.dump(clf, model)
# Upload the model to GCS
bucket = storage.Client().bucket(BUCKET_NAME)
blob = bucket.blob('{}/{}'.format(
datetime.datetime.now().strftime('model_%Y%m%d_%H%M%S'),
model))
blob.upload_from_filename(model)
# [END export-to-gcs]
Explanation: This next cell might seem very long, however, most of the code is identical to earlier sections. We are simply combining the code we created previously into one file.
Before running this cell, substitute <BUCKET_NAME> with your GCS bucket name. Do not include the 'gs://' prefix.
End of explanation
%%bash
JOBNAME=banking_$(date -u +%y%m%d_%H%M%S)
echo $JOBNAME
gcloud ai-platform jobs submit training model_training_$JOBNAME \
--job-dir $GCS_BUCKET/$JOBNAME/output \
--package-path trainer \
--module-name trainer.task \
--region $REGION \
--runtime-version=1.9 \
--python-version=3.5 \
--scale-tier BASIC
Explanation: To actually run trainer/task.py we need to pass some parameters so that AI Platform knows how to set up the training environment.
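Optionally, the submitted job can also be followed from the command line (replace <JOB_NAME> with the model_training_... name echoed above; the JOBNAME variable is not exported outside the %%bash cell):
# Optional: monitor the submitted job from the command line.
! gcloud ai-platform jobs describe <JOB_NAME>
! gcloud ai-platform jobs stream-logs <JOB_NAME>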
End of explanation
test_features_prepared = data_pipeline(test_features)
test_prepared = test_features_prepared[train_prepared_columns]
test = test_prepared.values.tolist()
Explanation: The above cell submits a job to AI Platform which you can view by going to the Google Cloud Console's sidebar and selecting AI Platform > Jobs, or by searching for AI Platform in the search bar. ONLY run the cells below after your job has completed successfully. (It should take approximately 8 minutes to run).
Now that we have trained our model and it is saved in GCS we need to perform prediction. There are two options available to use for prediction:
Command Line
Python
End of explanation
!gsutil ls $GCS_BUCKET
Explanation: Next, find the model directory in your GCS bucket that contains the model created in the previous steps.
End of explanation
%env VERSION_NAME=v1
%env MODEL_NAME=cmle_model
%env JSON_INSTANCE=input.json
%env MODEL_DIR=gs://<GCS_BUCKET>/MODEL_DIRECTORY
%env FRAMEWORK=SCIKIT_LEARN
Explanation: Just like training on AI Platform, we set some environment variables for the command line commands. Note that <GCS_BUCKET> is the name of the GCS bucket set earlier. The MODEL_DIRECTORY will be inside the GCS bucket and of the form model_YYYYMMDD_HHMMSS (e.g. model_20190114_134228).
End of explanation
! gcloud ai-platform models create $MODEL_NAME --regions=us-central1
! gcloud ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME --origin $MODEL_DIR \
--runtime-version 1.9 --framework $FRAMEWORK \
--python-version 3.5
Explanation: Create a model resource for your model versions as well as the version.
End of explanation
import json
with open('input.json', 'w') as outfile:
json.dump(test[0], outfile)
!gsutil cp input.json $GCS_BUCKET
Explanation: For prediction, we write a JSON file containing a single instance (and also copy it to our GCS bucket).
End of explanation
! gcloud ai-platform predict --model $MODEL_NAME \
--version $VERSION_NAME \
--json-instances $JSON_INSTANCE
Explanation: We are now ready to submit our file to get a prediction.
End of explanation
import googleapiclient.discovery
import os
import pandas as pd
PROJECT_ID = os.environ['GOOGLE_CLOUD_PROJECT']
VERSION_NAME = os.environ['VERSION_NAME']
MODEL_NAME = os.environ['MODEL_NAME']
# Create our AI Platform service
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME)
name += '/versions/{}'.format(VERSION_NAME)
# Iterate over the first 10 rows of our test dataset
results = []
for data in test[:10]:
# Send a prediction request
responses = service.projects().predict(
name=name,
body={"instances": [data]}
).execute()
if 'error' in responses:
raise RuntimeError(responses['error'])
else:
results.extend(responses['predictions'])
for i, response in enumerate(results):
print('Prediction: {}\tLabel: {}'.format(response, test_label[i]))
Explanation: We can also use python to perform predictions. See the cell below for a simple way to get predictions using python.
End of explanation |
9,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
6 Conditional Loops
Loops
Loops are a big deal in computing and robotics! Think about the kinds of tasks that computers and robots often get used for
Step1: If you've done it right, your output should look like this
Step2: If you've done it right, your output should look like this | Python Code:
word = input("What is the magic word? ")
while word!="abracadabra":
word = input("Wrong. Try again. What is the magic word? ")
print("Correct")
Explanation: 6 Conditional Loops
Loops
Loops are a big deal in computing and robotics! Think about the kinds of tasks that computers and robots often get used for:
- jobs that are dangerous
- jobs where accuracy is important
- jobs that are repetitive and where a human might get bored!
Loops are basically how a computer programmer can make the computer do the same thing over and over again. There are two main kinds of loops
* Conditional Loops
* Fixed Loops
Simplest Conditional Loop
The most basic conditional loop is one that goes on forever. Think about a task where you do the same sequence of operations endlessly - for example production line work often involves seemingly endless loops:
* pick up product from conveyor belt
* check product looks OK
* put product in packing crate
Factories often use robots to do the kind of repetitive work pictured above - for around £20,000 a reconditioned robotic system could pack these crates.
In Python we can use a while loop to keep doing the same thing over and over:
python
while True:
pick_up_product()
check_product()
put_product_in_crate()
The part in brackets () is called the condition. In this case, we have set the condition to be True permananently. This means the loop would go on forever, which is sometimes what you want. Later we will see how if the condition in the brackets () ever stops being True then the loop would stop.
A Conditional Loop with an Actual Condition!
A conditional loop is repeatedly carried out a block of code until a condition of some kind is met.
You are now going to run these two small programs. They should be ready to run, you don't need to edit them.
Run the first program. Once running, you should see that it repeatedly runs its code block until you supply the magic word. Get it wrong first, then enter the magic word
End of explanation
happy = input("Are you happy? ")
while not(happy in ["yes","no"]):
print("I did not understand. ")
happy = input("Are you happy? ")
Explanation: If you've done it right, your output should look like this:
What is the magic word? Foobar
Wrong. Try again. What is the magic word? abracadabra
Correct
In Python != means is not equal to
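As an optional aside (not part of the two programs in this exercise), one way to accept different capitalisations of the magic word is to lower-case the input before comparing it, for example:
# Optional variation: convert the input to lower case before comparing,
# so "Abracadabra" and "ABRACADABRA" are also accepted.
word = input("What is the magic word? ")
while word.lower() != "abracadabra":
    word = input("Wrong. Try again. What is the magic word? ")
print("Correct")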
Run the second program. This one will keep repeating its code block until you type in one of the words in the list ["yes", "no"].
End of explanation
happy = input("Are you happy? ")
while not(happy in ["yes","no"]): #Add to this list of words here
print("I did not understand. ")
happy = input("Are you happy? ")
Explanation: If you've done it right, your output should look like this:
Are you happy? Naw.
I did not understand.
Are you happy? aye
I did not understand.
Are you happy? yes
By adding some more words to the list ["yes", "no] edit the program below to accept "Yes", "No" and test it by running it a few times
End of explanation |
9,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Lecture 4
Step2: <p class='alert alert-success'>
Solve the questions in green blocks. Save the file as ME249-Lecture-4-YOURNAME.ipynb and change YOURNAME in the bottom cell. Send the instructor and the grader the <b>html</b> file not the ipynb file.
</p>
Test Function u
Step3: <h2>Compact Finite Difference Schemes</h2>
The central compact finite difference scheme is defined as (in its general form)
Step4: This is the Matrix approach
Step5: <h4>Gauss-Seidel Method</h4>
This method is directly derived from the Jacobi method, where it is recognized that in the process of sweeping through all indexes $i$, $\phi_{i-1}$ has already been updated. For the compact system, the Gauss-Seidel method is | Python Code:
%matplotlib inline
# plots graphs within the notebook
%config InlineBackend.figure_format='svg' # render inline matplotlib figures as SVG
from IPython.display import Image
from IPython.core.display import HTML
def header(text):
raw_html = '<h4>' + str(text) + '</h4>'
return raw_html
def box(text):
raw_html = '<div style="border:1px dotted black;padding:2em;">'+str(text)+'</div>'
return HTML(raw_html)
def nobox(text):
raw_html = '<p>'+str(text)+'</p>'
return HTML(raw_html)
def addContent(raw_html):
global htmlContent
htmlContent += raw_html
class PDF(object):
def __init__(self, pdf, size=(200,200)):
self.pdf = pdf
self.size = size
def _repr_html_(self):
return '<iframe src={0} width={1[0]} height={1[1]}></iframe>'.format(self.pdf, self.size)
def _repr_latex_(self):
return r'\includegraphics[width=1.0\textwidth]{{{0}}}'.format(self.pdf)
class ListTable(list):
Overridden list class which takes a 2-dimensional list of
the form [[1,2,3],[4,5,6]], and renders an HTML Table in
IPython Notebook.
def _repr_html_(self):
html = ["<table>"]
for row in self:
html.append("<tr>")
for col in row:
html.append("<td>{0}</td>".format(col))
html.append("</tr>")
html.append("</table>")
return ''.join(html)
font = {'family' : 'serif',
'color' : 'black',
'weight' : 'normal',
'size' : 18,
}
Explanation: Lecture 4: Implicit Finite Difference Schemes and Solving Linear Systems
End of explanation
import matplotlib.pyplot as plt
import numpy as np
Lx = 2.*np.pi
Nx = 512
u = np.zeros(Nx,dtype='float64')
du = np.zeros(Nx,dtype='float64')
ddu = np.zeros(Nx,dtype='float64')
k_0 = 2.*np.pi/Lx
dx = Lx/Nx
x = np.linspace(dx,Lx,Nx)
Nwave = 33
uwave = np.zeros((Nx,Nwave),dtype='float64')
duwave = np.zeros((Nx,Nwave),dtype='float64')
dduwave = np.zeros((Nx,Nwave),dtype='float64')
ampwave = np.random.random(Nwave)
phasewave = np.random.random(Nwave)*2*np.pi
for iwave in range(Nwave):
uwave[:,iwave] = ampwave[iwave]*np.cos(k_0*iwave*x+phasewave[iwave])
duwave[:,iwave] = -k_0*iwave*ampwave[iwave]*np.sin(k_0*iwave*x+phasewave[iwave])
dduwave[:,iwave] = -(k_0*iwave)**2*ampwave[iwave]*np.cos(k_0*iwave*x+phasewave[iwave])
u = np.sum(uwave,axis=1)
du = np.sum(duwave,axis=1)
ddu = np.sum(dduwave,axis=1)
#print(u)
plt.plot(x,u,lw=2)
plt.xlim(0,Lx)
#plt.legend(loc=3, bbox_to_anchor=[0, 1],
# ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$u$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
plt.plot(x,du,lw=2)
plt.xlim(0,Lx)
#plt.legend(loc=3, bbox_to_anchor=[0, 1],
# ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$du/dx$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
plt.plot(x,ddu,lw=2)
plt.xlim(0,Lx)
#plt.legend(loc=3, bbox_to_anchor=[0, 1],
# ncol=3, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$d^2u/dx^2$', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.show()
Explanation: <p class='alert alert-success'>
Solve the questions in green blocks. Save the file as ME249-Lecture-4-YOURNAME.ipynb and change YOURNAME in the bottom cell. Send the instructor and the grader the <b>html</b> file not the ipynb file.
</p>
Test Function u
End of explanation
Nitermax = 1000
it = 0
error_threshold = 1e-6
error = np.inf
phi = np.zeros(Nx)
phi_old = np.zeros(Nx)
error_jacobi = np.zeros(Nitermax)
b = np.zeros(Nx)
#generate rhs
b[1:Nx-1] = 0.75*(u[2:Nx]-u[0:Nx-2])/dx
b[0] = 0.75*(u[1]-u[Nx-1])/dx
b[Nx-1] = 0.75*(u[0]-u[Nx-2])/dx
for it in range(Nitermax):
phi_old = np.copy(phi)
phi[1:Nx-1] = -0.25*(phi_old[0:Nx-2] + phi_old[2:Nx]) \
+b[1:Nx-1]
phi[0] = -0.25*(phi_old[1] + phi_old[Nx-1]) \
+b[0]
phi[Nx-1] = -0.25*(phi_old[Nx-2] + phi_old[0]) \
+b[Nx-1]
error_jacobi[it] = np.max(np.abs(phi-phi_old))
if (error_jacobi[it] < error_threshold): break
#print(error)
it_jacobi = it
plt.semilogy(error_jacobi[0:it_jacobi+1],lw=2,label='Jacobi')
plt.xlabel('Iterations', fontdict = font)
plt.ylabel('error', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.show()
plt.plot(x,phi-du)
plt.show()
print(np.max(np.abs(phi-du)))
Explanation: <h2>Compact Finite Difference Schemes</h2>
The central compact finite difference scheme is defined as (in its general form):
<p class='alert alert-danger'>
$$
\beta f'_{i-2}+\alpha f'_{i-1}+f'_i+\alpha f'_{i+1}+\beta f'_{i+2}=
c\frac{f_{i+3}-f_{i-3}}{6\Delta}+b\frac{f_{i+2}-f_{i-2}}{4\Delta}+a\frac{f_{i+1}-f_{i-1}}{2\Delta}
$$
</p>
Note that if $\beta=\alpha=b=c=0$, the second order explicit central scheme is retrieved. Here we will limit our study to a tridiagonal system:
$$
\alpha f'_{i-1}+f'_i+\alpha f'_{i+1}=a\frac{f_{i+1}-f_{i-1}}{2\Delta}
$$
and we wish to derive a fourth-order compact scheme.
The Taylor series expansions needed here are:
$$
f_{i\pm1}=f_i\pm\Delta f'_i+\frac{\Delta^2}{2!}f''_i\pm\frac{\Delta^3}{3!}f'''_i+\frac{\Delta^4}{4!}f^{(4)}_i+{\cal O}(\Delta^5)
$$
$$
f'_{i\pm1}=f'_i\pm\Delta f''_i+\frac{\Delta^2}{2!}f'''_i\pm\frac{\Delta^3}{3!}f^{(4)}_i+{\cal O}(\Delta^4)
$$
The goal is to find $\alpha$ and $a$ such that, the substitution of the Taylor series expansions above in the numerical scheme yields
$$
f'_i=f'_i+{\cal O}(\Delta^4)
$$
The left hand side develops into:
$$
\begin{split}
\alpha f'_{i-1}+f'_i+\alpha f'_{i+1}&=(1+2\alpha)f'_i\\
&+(-\alpha+\alpha)\Delta f''_i\\
&+\alpha\Delta^2f'''_i\\
&+(-\alpha+\alpha)\frac{\Delta^3}{3!}f^{(4)}_i+{\cal O}(\Delta^4)
\end{split}
$$
while the right hand side is:
$$
\begin{split}
a\frac{f_{i+1}-f_{i-1}}{2\Delta}&=(-a+a)\frac{1}{2\Delta}f_i\\
&+af'_i\\
&+(-a+a)\frac{\Delta}{2}f''_i\\
&+a\frac{\Delta^2}{3!}f'''_i\\
&+(-a+a)\frac{\Delta^3}{4!}f^{(4)}_i+{\cal O}(\Delta^4)
\end{split}
$$
Fourth order is achieved if:
$$
1+2\alpha=a
$$
and
$$
\alpha = \frac{a}{3!}
$$
<p class='alert alert-danger'>
The fourth order compact scheme is
$$
\frac{1}{4} f'_{i-1}+f'_i+\frac{1}{4} f'_{i+1}=\frac{3}{2}\frac{f_{i+1}-f_{i-1}}{2\Delta}
$$
or
$$
f'_{i-1}+4f'_i+ f'_{i+1}=3\frac{f_{i+1}-f_{i-1}}{\Delta}
$$
</p>
Consider a periodic function $f$ defined on $x\in[0,2\pi]$, discretized on a uniform mesh $x_i=i\Delta$ with $i=0,...,N-1$ and $\Delta=2\pi/(N)$. The system can be written in matrix form:
$$
\left[
\begin{matrix}
4 & 1 & & & & & 1 \
1 & 4 & 1 & & & & \
& \ddots & \ddots & \ddots & & & \
& & 1 & 4 & 1 & & \
& & & \ddots & \ddots & \ddots & \
& & & & 1 & 4 & 1 \
1 & & & & & 1 & 4
\end{matrix}
\right]
\left[
\begin{matrix}
f'_0\
\vdots\
f'_{i-1}\
f'_i\
f'_{i+1}\
\vdots \
f'_{N-1}
\end{matrix}
\right]=
\left[
\begin{matrix}
3(f_1-f_{N-1})/\Delta\
\vdots\
\vdots\
3(f_{i+1}-f_{i-1})/\Delta\
\vdots\
\vdots \
3(f_{0}-f_{N-2})/\Delta
\end{matrix}
\right]
$$
<p class='alert alert-success'>
Write the first and last element of the right hand side vector.
</p>
For the general central compact scheme, the order conditions can be derived by matching the Taylor series coefficients, as done above, yielding the following system:
<p class='alert alert-danger'>
$$
\begin{matrix}
a+b+c=1+2\alpha+2\beta & \text{2$^\text{nd}$ order} \\
a+2^2b+3^2c=2\cfrac{3!}{2!}(\alpha+2^2\beta) & \text{4$^\text{th}$ order}\\
a+2^4b+3^4c=2\cfrac{5!}{4!}(\alpha+2^4\beta) & \text{6$^\text{th}$ order}\\
a+2^6b+3^6c=2\cfrac{7!}{6!}(\alpha+2^6\beta) & \text{8$^\text{th}$ order}\\
a+2^8b+3^8c=2\cfrac{9!}{8!}(\alpha+2^8\beta) & \text{10$^\text{th}$ order}
\end{matrix}
$$
</p>
The modified wavenumber for a general central compact scheme is
<p class='alert alert-danger'>
$$
\omega^{mod}(\omega)=\frac{a\sin(\omega)+(b/2)\sin(2\omega)+(c/3)\sin(3\omega)}{1+2\alpha\cos(\omega)+2\beta\cos(2\omega)}
$$
</p>
<p class='alert alert-success'>
Plot the modified wavenumbers of the following schemes:
<ol>
<li> Second order explicit finite difference</li>
<li> Fourth order explicit finite difference</li>
<li> Fourth order compact</li>
<li> Sixth order compact with $\alpha=1/3$ and $\beta=c=0$</li>
<li> Tenth order compact with $\alpha=1/2$ and $\beta=1/20$</li>
</ol>
</p>
<p class='alert alert-success'>
Derive a third order upwind compact scheme of the form:
$$
(2+a\epsilon)f'_{i-1}+8f'_i+(2-a\epsilon)f'_{i+1}=\frac{6}{\Delta}\left((1-b\epsilon)f_{i+1}+2\epsilon f_i-(1+b\epsilon)f_{i-1}\right)
$$
$a$ and $b$ are parameters to be found and $\epsilon$ is a parameter that takes the values $-1$, $1$ or $0$ depending on the flow directions. Verify that the scheme reverts to the fourth order central compact scheme when $\epsilon=0$
</p>
<h2>Solving a Linear system</h2>
<h3>Iterative Solution Methods</h3>
The goal is to solve a linear system of a vector $\vec{x}$ unknown variables:
$$
A\vec{x}=\vec{b}
$$
If the matrix A is invertible, the solution is simply $\vec{x}=A^{-1}\vec{b}$, however the inverse of $A$ may not always be easy to derive. The following describes a few iterative methods (used in CFD).
<h4> General Principle</h4>
Let $A=B-C$, where $B$ is invertible, the linear system becomes
$$
B\vec{x}=C\vec{x}+\vec{b}
$$
and the iterative solution is sought as
$$
B\vec{x}^{(n+1)}=C\vec{x}^{(n)}+\vec{b}
$$
where $n$ is the iteration index and $x^{(0)}$ is an initial guess. The assumption is that
$$
\lim_{n\rightarrow\infty}\vec{x}^{(n)}=\vec{x}
$$
Naturally, one hopes that $n$ is small enough to achieve an acceptable convergence defined by the error
$$
\vec{\epsilon}^{(n)}=\vec{x}-\vec{x}^{(n)}
$$
The linear system for the error can be derived as
$$
\vec{\epsilon}^{(n)}=(B^{-1}C)\vec{\epsilon}^{(n-1)}
$$
or
$$
\vec{\epsilon}^{(n)}=(B^{-1}C)^n\vec{\epsilon}^{(0)}
$$
Therefore the convergence criterion on the error depends on the eigenvalues $\lambda_i$ of $(B^{-1}C)$ and is guaranteed if the spectral radius
$$
\rho=\max_{i}\vert\lambda_i\vert\leq1
$$
The choice of $B$ and $C$ through the spectral radius $\rho$ governs the convergence rate of the iterative method.
<h4>Point Jacobi Method</h4>
The Jacobi method proposes that $B$ is the matrix $D$ formed by the diagonal elements of matrix $A=D+R$, where $R$ is the residual matrix containing all off-diagonal components. Providing that no element is zero, the iterative method is
$$
\vec{x}^{(n+1)}=D^{-1}(\vec{b}-R\vec{x}^{(n)})
$$
For the compact system, the Jacobi method becomes (let $\phi_i = f'i$):
$$
\phi_i^{(n+1)}=-\frac{1}{4}\left(\phi{i-1}^{(n)}+\phi_{i+1}^{(n)}\right)+\frac{1}{4}\left(\frac{3}{\Delta}\left(f_{i+1}-f_{i-1}\right)\right)
$$
Note that $R=L+U$, where $L$ and $U$ are the upper and lower triangle of matrix $A$ (without the diagonal). This decomposition will be useful later.
The index approach is written below. Make sure that you understand the code.
End of explanation
Nitermax = 1000
it = 0
error_threshold = 1e-6
phi = np.zeros(Nx)
phi_old = np.zeros(Nx)
error_jacobi = np.inf*np.ones(Nitermax)
b = np.zeros(Nx)
A = np.zeros((Nx,Nx))
for i in range(Nx):
if (i == 0):
A[i,i] = 4.
A[i,i+1] = 1.
A[i,Nx-1] = 1.
b[0] = 3./dx*(u[1] - u[Nx-1])
elif (i == Nx-1):
A[i,i-1] = 1.
A[i,i] = 4.
A[i,0] = 1.  # periodic corner, couples the last row back to f'_0
b[i] = 3./dx*(u[0] - u[Nx-2])
else:
A[i,i-1] = 1.
A[i,i] = 4.
A[i,i+1] = 1.
b[i] = 3./dx*(u[i+1] - u[i-1])
D = np.diag(A)
B = np.diagflat(D)
C = A - B
for it in range (Nitermax):
phi_old = np.copy(phi)
phi = (b-np.dot(C,phi_old))/D
error_jacobi[it] = np.max(np.abs(phi-phi_old))
if (error_jacobi[it] < error_threshold): break
#print(error)
it_jacobi = it
plt.semilogy(error_jacobi[0:it_jacobi+1],lw=2,label='Jacobi')
plt.xlabel('Iterations', fontdict = font)
plt.ylabel('error', fontdict = font)
plt.xticks(fontsize = 16)
plt.yticks(fontsize = 16)
plt.legend(loc=3, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.show()
print(np.max(np.abs(phi-du)))
Explanation: This is the Matrix approach:
End of explanation
!ipython nbconvert --to html ME249-Lecture-4-YOURNAME.ipynb
Explanation: <h4>Gauss-Seidel Method</h4>
This method is directly derived from the Jacobi method, where it is recognized that in the process of sweeping through all indexes $i$, $\phi_{i-1}$ has already been updated. For the compact system, the Gauss-Seidel method is:
$$
\phi_i^{(n+1)}=-\frac{1}{4}\left(\phi_{i-1}^{(n+1)}+\phi_{i+1}^{(n)}\right)+\frac{1}{4}\left(\frac{3}{\Delta}\left(f_{i+1}-f_{i-1}\right)\right)
$$
The general form of this method starts with the decomposition of $A$ into its diagonal $D$, lower triangular $L$ and upper triangular $U$ matrices $A=D+L+U$, yielding $B=D-L$ and $C=-U$:
$$
(D+L)\vec{x}^{(n+1)}=\vec{b}-U\vec{x}^{(n)}
$$
<p class='alert alert-success'>
Write the Gauss Seidel code and compare the convergence rate to Jacobi
</p>
<h4>Successive Over-Relaxation (SOR) and Symmetric SOR (SSOR)</h4>
Building on the Gauss-Seidel method, SOR and SSOR introduce the concept of relaxation into the iterative process. The general form of SOR writes:
$$
\phi_i^{(n+1)}=\phi_i^{n}+\omega(\tilde{\phi}_i^{n+1}-\phi_i^{n})
$$
where $\omega$ is the relaxation parameter, and $\tilde{\phi}_i^{n+1}$ is the prediction of $\phi$ using the Gauss-Seidel method, which translates into:
$$
(D+\omega L)\vec{x}^{(n+1)}=\omega\vec{b}+\left[(1-\omega)D-\omega U\right]\vec{x}^{(n)}
$$
In the Gauss-Seidel method, only the lower triangle of the matrix is used, creating an asymmetry in the search for the solution. The SSOR method removes this asymmetry by performing the standard Gauss Seidel operation immediately followed by the upper-triangle Gauss-Seidel, for each iteration.
Finding $\omega$ requires some algebra. When $\omega<1$, the method is under-relaxed and slow to converge. For $\omega=1$, the method reverts to Gauss-Seidel. The over-relaxation $\omega>1$ method is typically faster to converge for diffusion problem ($\nabla^2 T=f$), but not necessarily for the compact scheme here. For diffusion problem, such as the following example (from Fundamentals of Engineering Numerical Analysis, Author: Parviz Moin, Publisher: Cambridge),
$$
\frac{d^2u}{dx^2}=\sin(k\pi x),\;0\leq x\leq 1\text{ and }u(0)=u(1)=0
$$
the optimum relaxation coefficient for the SOR method is defined from the eigenvalues $\lambda_i$ of the Jacobi method matrix $D^{-1}R$:
$$
\omega_{opt}=\frac{2}{1+\sqrt{1-\max_{i=0,N-1}\lambda_i^2}}
$$
<p class='alert alert-success'>
Write the code to solve the diffusion example above with $N=64$, $k=1$ and $k=16$ using the Jacobi, Gauss-Seidel, SOR and SSOR methods.
</p>
End of explanation |
9,389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the Tutorial!
First I'll introduce the theory behind neural nets. then we will implement one from scratch in numpy, (which is installed on the uni computers) - just type this code into your text editor of choice. I'll also show you how to define a neural net in googles DL library Tensorflow(which is not installed on the uni computers) and train it to clasify handwritten digits.
You will understand things better if you're familiar with calculus and linear algebra, but the only thing you really need to know is basic programming. Don't worry if you don't understand the equations.
Numpy/linear algebra crash course
(You should be able to run this all in python 2.7.8 on the uni computers.)
Vectors and matrices are the language of neural networks. For our purposes, a vector is a list of numbers and a matrix is a 2d grid of numbers. Both can be defined as instances of numpy's ndarray class
Step1: Putting an ndarray through a function will apply it elementwise
Step2: What is a neural network?
For our data-sciencey purposes, it's best to think of a neural network as a function approximator or a statistical model. Surprisingly enough they are made up of a network of neurons. What is a neuron?
WARNING
Step6: Learning
Well that's all very nice, but we need it to be able to learn | Python Code:
import numpy as np
my_vector = np.asarray([1,2,3])
my_matrix = np.asarray([[1,2,3],[10,10,10]])
print(my_matrix*my_vector)
Explanation: Welcome to the Tutorial!
First I'll introduce the theory behind neural nets. then we will implement one from scratch in numpy, (which is installed on the uni computers) - just type this code into your text editor of choice. I'll also show you how to define a neural net in googles DL library Tensorflow(which is not installed on the uni computers) and train it to clasify handwritten digits.
You will understand things better if you're familiar with calculus and linear algebra, but the only thing you really need to know is basic programming. Don't worry if you don't understand the equations.
Numpy/linear algebra crash course
(You should be able to run this all in python 2.7.8 on the uni computers.)
Vectors and matrices are the language of neural networks. For our purposes, a vector is a list of numbers and a matrix is a 2d grid of numbers. Both can be defined as instances of numpy's ndarray class:
End of explanation
print((my_matrix**2))
print((my_matrix))
Explanation: Putting an ndarray through a function will apply it elementwise:
End of explanation
def sigmoid(x):
return 1.0/(1.0+np.exp(-x))
hidden_1 = sigmoid(x.dot(W1) + b_1)
output = hidden1.dot(W2) + b_2
Explanation: What is a neural network?
For our data-sciencey purposes, it's best to think of a neural network as a function approximator or a statistical model. Surprisingly enough they are made up of a network of neurons. What is a neuron?
WARNING: huge oversimplification that will make neuroscientists cringe.
This is what a neuron in your brain looks like. On the right are the axons, on the left are the dendrites, which recieve signals from the axons of other neurons. The dendrites are connected to the axons with synapses. If the neuron has enough voltage across, it will "spike" and send a signal through its axon to neighbouring neurons. Some synapses are excitory in that if a signal goes through them it will increase the voltage across the next neuron, making it more likely to spike. Others are inhibitory
and do the opposite. We learn by changing the strengths of synapses(well, kinda), and that is also usually how artificial neural networks learn.
This is what a the simplest possible artificial neuron looks like. This neuron is connected to two other input neurons named \(x_1 \) and \( x_2\) with "synapses" \(w_1\) and \(w_2\). All of these symbols are just numbers(real/float).
To get the neurons output signal \(h\), just sum the input neurons up, weighted by their "synapses" then put them through a nonlinear function \( f\):
$$ h = f(x_1 w_1 + x_2 w_2)$$
\(f\) can be anything that maps a real number to a real number, but for ML you want something nonlinear and smooth. For this neuron, \(f\) is the sigmoid function:
$$\sigma(x) = \frac{1}{1+e^{-x}} $$
Sigmoid squashes its output into [0,1], so it's closer to "fully firing" the more positive it's input, and closer to "not firing" the more negative it's input.
If you like to think in terms of graph theory, neurons are nodes and
If you have a stats background you might have noticed that this looks similar a logistic regression on two variables. That's because it is!
As you can see, these artificial neurons are only loosely inspired by biological neurons. That's ok, our goal is to have a good model, not simulate a brain.
There are many exciting ways to arange these neurons into a network, but we will focus on one of the easier, more useful topologies called a "two layer perceptron", which looks like this:
Neurons are arranged in layers, with the first hidden layer of neurons connected to a vector(think list of numbers) of input data, \(x\), sometimes referred to as an "input layer". Every neuron in a given layer is connected to every neuron in the previous layer.
$$net = \sum_{i=0}^{N}x_i w_i = \vec{x} \cdot \vec{w}$$
Where \(\vec{x}\) is a vector of previous layer's neuron activations and \(\vec{w} \) is a vector of the weights(synapses) for every \(x \in \vec{x} \).
Look back at the diagram again. Each of these 4 hidden units will have a vector of 3 weights for each of the inputs. We can arrange them as a 3x4 matrix of row vectors, which we call \(W_1\). Then we can multiply this matrix with \(\vec{x}\) and apply our nonlinearity \(f\) to get a vector of neuron activations:
$$\vec{h} = f( \vec{x} \cdot W_1 )$$
..actually, in practice we add a unique learnable "bias" \(b\) to every neurons weighted sum, which has the effect of shifting the nonlinearity left or right:
$$\vec{h} = f( \vec{x} \cdot W_1 + \vec{b}_1 )$$
We pretty much do the same thing to get the output for the second hidden layer, but with a different weight matrix \(W_2\):
$$\vec{h_2} = f( \vec{h_1} \cdot W_2 + \vec{b}_2 )$$
So if we want to get an output for a given data vector x, we can just plug it into these equations. Here it is in numpy:
End of explanation
N,D = 300,2 # number of examples, dimension of examples
X = np.random.uniform(size=(N,D),low=0,high=20)
y = [X[i,0] * X[i,1] for i in range(N)]
class TwoLayerPerceptron:
Simple implementation of the most basic neural net
def __init__(self,X,H,Y):
N,D = X.shape
N,O = y.shape
# initialize the weights, or "connections between neurons" to random values.
self.W1 = np.random.normal(size=(D,H))
self.b1 = np.zeros(size=(H,))
self.W2 = np.random.normal(size=(H,O))
self.b2 = np.random.normal(size=(O,))
def forward_pass(X):
Get the outputs for batch X, and a cache of hidden states for backprop
hidden_inputs = X.dot(W1) + b #matrix multiply
hidden_activations = relu(hidden_inputs)
output = hidden_activations.dot(W2) + b
cache = [X, hidden_inputs, ]
return cache
def backwards_pass(self,cache):
[X,hidden_inputs, hidden_activations, output] = cache
#//TODO: backwards pass
return d_W1, d_W2, d_b1, d_b2
def subtract_gradients(self,gradients,lr=0.001):
[d_W1, d_W2, d_b1, d_b2] = gradients
self.W1 -= lr * d_W1
self.W2 -= lr * d_W2
self.b1 -= lr * d_b1
self.b2 -= lr * d_b2
hidden_activations = relu(np.dot(X,W1) + b1)
output = np.dot(hidden_activations,W2)+b2
errors = 0.5 * (output - y)**2
d_h1 = np.dot((output - y),W2.T)
d_b1 = np.sum(d_h1,axis=1)
d_a1 = sigmoid()
d_W2 = np.dot(hidden_Activations, errors)
d_W1 = np.dot(d_h1, W1.T)
W_2 += d_W2
b1 += db1
W_1 += d_W1
display(Math(r'h_1 = \sigma(X \cdot W_1 + b)'))
Explanation: Learning
Well that's all very nice, but we need it to be able to learn
End of explanation |
9,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression
Notebook version
Step1: 1. Introduction
1.1. Binary classification
The goal of a classification problem is to assign a class or category to every instance or observation of a data collection.
Here, we will assume that
every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and
the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = {0, 1}$.
The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$.
We will denote as $\hat{y}$ the classifier output or decision. If $y=\hat{y}$, the decision is a hit, otherwise $y\neq \hat{y}$ and the decision is an error.
1.2. Decision theory
Step2: It is straightforward to see that the logistic function has the following properties
Step3: The next code fragment represents the output of the same classifier, representing the output of the logistic function in the $x_0$-$x_1$ plane, encoding the value of the logistic function in the color map.
Step4: 3.3. Nonlinear classifiers.
The logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$
where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The boundary decision in that case is given by equation
$$
{\bf w}^\intercal{\bf z} = 0
$$
Exercise 3
Step5: 3. Inference
Remember that the idea of parametric classification is to use the training data set $\mathcal D = {({\bf x}_k, y_k) \in {\mathbb{R}}^N \times {0,1}, k=0,\ldots,{K-1}}$ to estimate ${\bf w}$. The estimate, $\hat{\bf w}$, can be used to compute the label prediction for any new observation as
$$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$
<img src="figs/parametric_decision.png" width=400>
In this notebook, we will discuss two different approaches to the estimation of ${\bf w}$
Step6: Now, we select two classes and two attributes.
Step7: 3.2.2. Data normalization
Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show less instabilities and convergence problems when data are normalized.
We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance.
Step8: Now, we can normalize training and test data. Observe in the code that the same transformation should be applied to training and test data. This is the reason why normalization with the test data is done using the means and the variances computed with the training set.
Step9: The following figure generates a plot of the normalized training data.
Step10: In order to apply the gradient descent rule, we need to define two methods
Step11: We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\top)^\top$.
Step12: 3.2.3. Free parameters
Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depend on several factors
Step13: 3.2.5. Polynomial Logistic Regression
The error rates of the logistic regression model can be potentially reduced by using polynomial transformations.
To compute the polynomial transformation up to a given degree, we can use the PolynomialFeatures method in sklearn.preprocessing.
Step14: Visualizing the posterior map we can se that the polynomial transformation produces nonlinear decision boundaries.
Step15: 4. Regularization and MAP estimation.
4.1 MAP estimation
An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the MAP estimate is defined as
$$
\hat{\bf w}{\text{MAP}} = \arg\max{\bf w} p({\bf w}|{\mathcal D})
$$
The posterior density $p({\bf w}|{\mathcal D})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$ through the Bayes rule
$$
p({\bf w}|{\mathcal D}) =
\frac{P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w})}
{p\left({\mathcal D}\right)}
$$
In general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\bf w}$. Therefore, the MAP solution is given by
\begin{align}
\hat{\bf w}{\text{MAP}} & = \arg\max{\bf w} \left{ P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w}) \right}\
& = \arg\max_{\bf w} \left{ L({\mathbf w}) + \log p_{\bf W}({\bf w})\right} \
& = \arg\min_{\bf w} \left{ \text{NLL}({\mathbf w}) - \log p_{\bf W}({\bf w})\right}
\end{align}
In the light of this expression, we can conclude that the MAP solution is affected by two terms
Step16: 6. Logistic regression in Scikit Learn.
The <a href="http | Python Code:
# To visualize plots in the notebook
%matplotlib inline
# Imported libraries
import csv
import random
import matplotlib
import matplotlib.pyplot as plt
import pylab
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
Explanation: Logistic Regression
Notebook version: 2.0 (Nov 21, 2017)
2.1 (Oct 19, 2018)
2.2 (Oct 09, 2019)
2.3 (Oct 27, 2020)
Author: Jesús Cid Sueiro ([email protected])
Jerónimo Arenas García ([email protected])
Changes: v.1.0 - First version
v.1.1 - Typo correction. Prepared for slide presentation
v.2.0 - Prepared for Python 3.0 (backward compatible with 2.7)
Assumptions for regression model modified
v.2.1 - Minor changes regarding notation and assumptions
v.2.2 - Updated notation
v.2.3 - Improved slides format. Backward compatibility removed
End of explanation
# Define the logistic function
def logistic(t):
#<SOL>
#</SOL>
# Plot the logistic function
t = np.arange(-6, 6, 0.1)
z = logistic(t)
plt.plot(t, z)
plt.xlabel('$t$', fontsize=14)
plt.ylabel('$g(t)$', fontsize=14)
plt.title('The logistic function')
plt.grid()
Explanation: 1. Introduction
1.1. Binary classification
The goal of a classification problem is to assign a class or category to every instance or observation of a data collection.
Here, we will assume that
every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and
the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = \{0, 1\}$.
The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$.
We will denote as $\hat{y}$ the classifier output or decision. If $y=\hat{y}$, the decision is a hit, otherwise $y\neq \hat{y}$ and the decision is an error.
1.2. Decision theory: the MAP criterion
Decision theory provides a solution to the classification problem in situations where the relation between instance ${\bf x}$ and its class $y$ is given by a known probabilistic model.
Assume that every tuple $({\bf x}, y)$ is an outcome of a random vector $({\bf X}, Y)$ with joint distribution $p_{{\bf X},Y}({\bf x}, y)$. A natural criterion for classification is to select the predictor $\hat{Y}=f({\bf x})$ in such a way that the probability of error, $P\{\hat{Y} \neq Y\}$, is minimum.
Noting that
$$
P\{\hat{Y} \neq Y\} = \int P\{\hat{Y} \neq Y | {\bf x}\} \, p_{\bf X}({\bf x}) \, d{\bf x}
$$
the optimal decision maker should take, for every sample ${\bf x}$, the decision minimizing the conditional error probability:
\begin{align}
\hat{y}^* &= \arg\min_{\hat{y}} P\{Y \neq \hat{y} \,|\,{\bf x}\} \\
&= \arg\max_{\hat{y}} P\{Y = \hat{y} \,|\,{\bf x}\}
\end{align}
Thus, the optimal decision rule can be expressed as
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad P_{Y|{\bf X}}(0|{\bf x})
$$
or, equivalently
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
The classifier implementing this decision rule is usually referred to as the MAP (Maximum A Posteriori) classifier. As we have seen, the MAP classifier minimizes the error probability for binary classification, but the result can also be generalized to multiclass classification problems.
1.3. Learning
Classical decision theory is grounded on the assumption that the probabilistic model relating the observed sample ${\bf X}$ and the true hypothesis $Y$ is known.
Unfortunately, this is unrealistic in many applications, where the only available information to construct the classifier is a dataset $\mathcal D = \{{\bf x}_k, y_k\}_{k=0}^{K-1}$ of instances and their respective class labels.
A more realistic formulation of the classification problem is the following: given a dataset $\mathcal D = \{({\bf x}_k, y_k) \in {\mathbb{R}}^N \times {\mathcal Y}, \, k=0,\ldots,K-1\}$ of independent and identically distributed (i.i.d.) samples from an unknown distribution $p_{{\bf X},Y}({\bf x}, y)$, predict the class $y$ of a new sample ${\bf x}$ with the minimum probability of error.
1.4. Parametric classifiers
Since the probabilistic model generating the data is unknown, the MAP decision rule cannot be applied. However, we can use the dataset to estimate the a posterior class probability model, and apply it to approximate the MAP decision maker.
Parametric classifiers based on this idea assume, additionally, that the posterior class probabilty satisfies some parametric formula:
$$
P_{Y|X}(1|{\bf x},{\bf w}) = f_{\bf w}({\bf x})
$$
where ${\bf w}$ is a vector of parameters. Given the expression of the MAP decision maker, classification consists in comparing the value of $f_{\bf w}({\bf x})$ with the threshold $\frac{1}{2}$, and each parameter vector would be associated to a different decision maker.
<img src="./figs/parametric_decision.png" width=400>
In practice, the dataset ${\mathcal D}$ is used to select a particular parameter vector $\hat{\bf w}$ according to certain criterion. Accordingly, the decision rule becomes
$$
f_{\hat{\bf w}}({\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
In this notebook, we explore one of the most popular model-based parametric classification methods: logistic regression.
2. Logistic regression.
2.1. The logistic function
The logistic regression model assumes that the binary class label $Y \in \{0,1\}$ of observation ${\bf X}\in \mathbb{R}^N$ satisfies the expressions
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x})$$
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g({\bf w}^\intercal{\bf x})$$
where ${\bf w}$ is a parameter vector and $g(·)$ is the logistic function, which is defined by
$$g(t) = \frac{1}{1+\exp(-t)}$$
The code below defines and plots the logistic function:
End of explanation
# Weight vector:
w = [4, 8] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
h = (x_max - x_min) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights, and plot
Z = logistic(w[0]*xx0 + w[1]*xx1)
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
ax.contour(xx0, xx1, Z, levels=[0.5], colors='b', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()
Explanation: It is straightforward to see that the logistic function has the following properties:
P1: Probabilistic output: $\quad 0 \le g(t) \le 1$
P2: Symmetry: $\quad g(-t) = 1-g(t)$
P3: Monotonicity: $\quad g'(t) = g(t)\cdot [1-g(t)] \ge 0$
Exercise 1: Verify properties P2 and P3.
Exercise 2: Implement a function to compute the logistic function, and use it to plot such function in the interval $[-6,6]$.
2.2. Classifiers based on the logistic model.
The MAP classifier under a logistic model will have the form
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad \frac{1}{2} $$
Therefore
$$
2 \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad
1 + \exp(-{\bf w}^\intercal{\bf x}) $$
which is equivalent to
$${\bf w}^\intercal{\bf x}
\quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad
0 $$
Thus, the classifiers based on the logistic model are given by linear decision boundaries passing through the origin, ${\bf x} = {\bf 0}$.
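As a minimal sketch (assuming w and x are NumPy arrays of the same length), the rule above reduces to a sign test on the score ${\bf w}^\intercal{\bf x}$:
import numpy as np
def logistic_decision(w, x):
    # g(w'x) > 1/2 exactly when the linear score w'x is positive
    return int(np.dot(w, x) > 0)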
End of explanation
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5], colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
Explanation: The next code fragment visualizes the same classifier, plotting the output of the logistic function over the $x_0$-$x_1$ plane and encoding its value in the color map.
End of explanation
# Weight vector:
w = [1, 10, 10, -20, 5, 1] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
h = (x_max - x_min) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
# Z = <FILL IN>
# Plot the logistic map
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5], colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
Explanation: 3.3. Nonlinear classifiers.
The logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$
where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The decision boundary in that case is given by the equation
$$
{\bf w}^\intercal{\bf z} = 0
$$
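For instance (a sketch, with an illustrative choice of feature ordering), a quadratic feature map for a two-dimensional input could be:
import numpy as np
def z_quadratic(x0, x1):
    # One possible feature map: (1, x0, x1, x0^2, x0*x1, x1^2)
    return np.array([1.0, x0, x1, x0**2, x0*x1, x1**2])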
Exercise 3: Modify the code above to generate a 3D surface plot of the polynomial logistic regression model given by
$$
P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g(1 + 10 x_0 + 10 x_1 - 20 x_0^2 + 5 x_0 x_1 + x_1^2)
$$
End of explanation
# Adapted from a notebook by Jason Brownlee
def loadDataset(filename, split):
xTrain, cTrain, xTest, cTest = [], [], [], []
with open(filename, 'r') as csvfile:
lines = csv.reader(csvfile)
dataset = list(lines)
for i in range(len(dataset)-1):
for y in range(4):
dataset[i][y] = float(dataset[i][y])
item = dataset[i]
if random.random() < split:
xTrain.append(item[0:4])
cTrain.append(item[4])
else:
xTest.append(item[0:4])
cTest.append(item[4])
return xTrain, cTrain, xTest, cTest
xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66)
nTrain_all = len(xTrain_all)
nTest_all = len(xTest_all)
print('Train:', nTrain_all)
print('Test:', nTest_all)
Explanation: 3. Inference
Remember that the idea of parametric classification is to use the training data set $\mathcal D = \{({\bf x}_k, y_k) \in {\mathbb{R}}^N \times \{0,1\}, \; k=0,\ldots,K-1\}$ to estimate ${\bf w}$. The estimate, $\hat{\bf w}$, can be used to compute the label prediction for any new observation as
$$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$
<img src="figs/parametric_decision.png" width=400>
In this notebook, we will discuss two different approaches to the estimation of ${\bf w}$:
Maximum Likelihood (ML): $\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$
Maximum A Posteriori (MAP): $\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p_{{\bf W}|{\mathcal D}}({\bf w}|{\mathcal D})$
For the mathematical derivation of the logistic regression algorithm, the following representation of the logistic model will be useful: using the symmetry property of the logistic function, we can write
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g\left({\bf w}^\intercal{\bf z}({\bf x})\right)
= g\left(-{\bf w}^\intercal{\bf z}({\bf x})\right)$$
thus
$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g\left(\overline{y}{\bf w}^\intercal{\bf z}({\bf x})\right)$$
where $\overline{y} = 2y-1$ is a symmetrized label ($\overline{y}\in\{-1, 1\}$).
3.1. Model assumptions
In what follows, we will make these assumptions:
A1. (Logistic Regression): We assume a logistic model for the a posteriori probability of ${Y}$ given ${\bf X}$, i.e.,
$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g\left({\bar y}\cdot {\bf w}^\intercal{\bf z}({\bf x})\right).$$
A2. All samples in ${\mathcal D}$ have been generated from the same distribution, $p_{{\bf X}, Y| {\bf W}}({\bf x}, y| {\bf w})$.
A3. Input variables $\bf x$ do not depend on $\bf w$. This implies that $p({\bf x}|{\bf w}) = p({\bf x})$
A4. Targets $y_0, \cdots, y_{K-1}$ are statistically independent given $\bf w$ and the inputs ${\bf x}_0, \cdots, {\bf x}_{K-1}$, that is:
$$P(y_0, \cdots, y_{K-1} | {\bf x}_0, \cdots, {\bf x}_{K-1}, {\bf w}) = \prod_{k=0}^{K-1} P(y_k | {\bf x}_k, {\bf w})$$
3.2. ML estimation.
The ML estimate is defined as
$$\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$$
Using assumptions A2 and A3 above, we have that
\begin{align}
P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w}) & = p(y_0, \cdots, y_{K-1},{\bf x}_0, \cdots, {\bf x}_{K-1}| {\bf w}) \\
& = P(y_0, \cdots, y_{K-1}|{\bf x}_0, \cdots, {\bf x}_{K-1}, {\bf w}) \; p({\bf x}_0, \cdots, {\bf x}_{K-1}| {\bf w}) \\
& = P(y_0, \cdots, y_{K-1}|{\bf x}_0, \cdots, {\bf x}_{K-1}, {\bf w}) \; p({\bf x}_0, \cdots, {\bf x}_{K-1})
\end{align}
Finally, using assumption A4, we can formulate the ML estimation of $\bf w$ as the solution of the following optimization problem:
\begin{align}
\hat {\bf w}_\text{ML} & = \arg \max_{\bf w} P(y_0, \cdots, y_{K-1}|{\bf x}_0, \cdots, {\bf x}_{K-1}, {\bf w}) \\
& = \arg \max_{\bf w} \prod_{k=0}^{K-1} P(y_k|{\bf x}_k, {\bf w}) \\
& = \arg \max_{\bf w} \sum_{k=0}^{K-1} \log P(y_k|{\bf x}_k, {\bf w}) \\
& = \arg \min_{\bf w} \sum_{k=0}^{K-1} - \log P(y_k|{\bf x}_k, {\bf w})
\end{align}
where the arguments of the maximization or minimization problems of the last three lines are usually referred to as the likelihood, log-likelihood $\left[L(\bf w)\right]$, and negative log-likelihood $\left[\text{NLL}(\bf w)\right]$, respectively.
Now, using A1 (the logistic model)
\begin{align}
\text{NLL}({\bf w})
&= - \sum_{k=0}^{K-1}\log\left[g\left(\overline{y}_k{\bf w}^\intercal {\bf z}_k\right)\right] \\
&= \sum_{k=0}^{K-1}\log\left[1+\exp\left(-\overline{y}_k{\bf w}^\intercal {\bf z}_k\right)\right]
\end{align}
where ${\bf z}_k={\bf z}({\bf x}_k)$.
It can be shown that $\text{NLL}({\bf w})$ is a convex and differentiable function of ${\bf w}$. Therefore, its minimum is a point with zero gradient.
\begin{align}
\nabla_{\bf w} \text{NLL}(\hat{\bf w}_{\text{ML}})
&= - \sum_{k=0}^{K-1}
\frac{\exp\left(-\overline{y}_k\hat{\bf w}_{\text{ML}}^\intercal {\bf z}_k\right) \overline{y}_k {\bf z}_k}
{1+\exp\left(-\overline{y}_k\hat{\bf w}_{\text{ML}}^\intercal {\bf z}_k\right)} \\
&= - \sum_{k=0}^{K-1} \left[y_k-g(\hat{\bf w}_{\text{ML}}^\intercal {\bf z}_k)\right] {\bf z}_k = 0
\end{align}
Unfortunately, $\hat{\bf w}_{\text{ML}}$ cannot be taken out from the above equation, and some iterative optimization algorithm must be used to search for the minimum.
3.3. Gradient descent.
A simple iterative optimization algorithm is <a href = https://en.wikipedia.org/wiki/Gradient_descent> gradient descent</a>.
\begin{align}
{\bf w}_{n+1} = {\bf w}_n - \rho_n \nabla_{\bf w} \text{NLL}({\bf w}_n)
\end{align}
where $\rho_n >0$ is the learning step.
Applying the gradient descent rule to logistic regression, we get the following algorithm:
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n \sum_{k=0}^{K-1} \left[y_k-g({\bf w}_n^\intercal {\bf z}_k)\right] {\bf z}_k
\end{align}
Gradient descent in matrix form
Defining vectors
\begin{align}
{\bf y} &= [y_0,\ldots,y_{K-1}]^\top \\
\hat{\bf p}_n &= [g({\bf w}_n^\top {\bf z}_0), \ldots, g({\bf w}_n^\top {\bf z}_{K-1})]^\top
\end{align}
and matrix
\begin{align}
{\bf Z} = \left[{\bf z}_0,\ldots,{\bf z}_{K-1}\right]^\top
\end{align}
we can write
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
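As an illustrative sketch of a single update in this matrix form (assuming Z, y, w and rho already exist as NumPy arrays/scalars, and logistic() is the function defined earlier):
p = logistic(Z.dot(w))        # posterior estimates, one per training sample
w = w + rho * Z.T.dot(y - p)  # one gradient-descent step in matrix form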
In the following, we will explore the behavior of the gradient descent method using the Iris Dataset.
End of explanation
# Select attributes
i = 0 # Try 0,1,2,3
j = 1 # Try 0,1,2,3 with j!=i
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
ind = [i, j]
# Take training test
X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1])
C_tr = [cTrain_all[n] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1]
Y_tr = np.array([int(c==c1) for c in C_tr])
n_tr = len(X_tr)
# Take test set
X_tst = np.array([[xTest_all[n][i] for i in ind] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1])
C_tst = [cTest_all[n] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1]
Y_tst = np.array([int(c==c1) for c in C_tst])
n_tst = len(X_tst)
Explanation: Now, we select two classes and two attributes.
End of explanation
def normalize(X, mx=None, sx=None):
# Compute means and standard deviations
if mx is None:
mx = np.mean(X, axis=0)
if sx is None:
sx = np.std(X, axis=0)
# Normalize
X0 = (X-mx)/sx
return X0, mx, sx
Explanation: 3.2.2. Data normalization
Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show fewer instabilities and convergence problems when data are normalized.
We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance.
End of explanation
# Normalize data
Xn_tr, mx, sx = normalize(X_tr)
Xn_tst, mx, sx = normalize(X_tst, mx, sx)
Explanation: Now, we can normalize training and test data. Observe in the code that the same transformation should be applied to training and test data. This is the reason why normalization with the test data is done using the means and the variances computed with the training set.
End of explanation
# Separate components of x into different arrays (just for the plots)
x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]
x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]
x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]
x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]
# Scatterplot.
labels = {'Iris-setosa': 'Setosa', 'Iris-versicolor': 'Versicolor',
'Iris-virginica': 'Virginica'}
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
plt.show()
Explanation: The following code generates a plot of the normalized training data.
End of explanation
def logregFit(Z_tr, Y_tr, rho, n_it):
# Data dimension
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
Y_tr2 = 2*Y_tr - 1 # Transform labels into binary symmetric.
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
# Compute posterior probabilities for weight w
p1_tr = logistic(np.dot(Z_tr, w))
# Compute negative log-likelihood
# (note that this is not required for the weight update, only for nll tracking)
nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w))))
# Update weights
w += rho*np.dot(Z_tr.T, Y_tr - p1_tr)
return w, nll_tr
def logregPredict(Z, w):
# Compute posterior probability of class 1 for weights w.
p = logistic(np.dot(Z, w)).flatten()
# Class
D = [int(round(pn)) for pn in p]
return p, D
Explanation: In order to apply the gradient descent rule, we need to define two methods:
- A fit method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations.
- A predict method, that receives the model weights and a set of inputs, and returns the posterior class probabilities for those inputs, as well as the corresponding class predictions.
End of explanation
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 200 # Number of iterations
# Compute Z's
Z_tr = np.c_[np.ones(n_tr), Xn_tr]
Z_tst = np.c_[np.ones(n_tst), Xn_tst]
n_dim = Z_tr.shape[1]
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print(f'The optimal weights are: {w}')
print('The final error rates are:')
print(f'- Training: {pe_tr}')
print(f'- Test: {pe_tst}')
print(f'The NLL after training is {nll_tr[len(nll_tr)-1]}')
Explanation: We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\top)^\top$.
End of explanation
# Create a rectangular grid.
x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max()
y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max()
dx = x_max - x_min
dy = y_max - y_min
h = dy /400
xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),
np.arange(y_min - 0.1 * dy, y_max + 0.1 * dy, h))
X_grid = np.array([xx.ravel(), yy.ravel()]).T
# Compute Z's
Z_grid = np.c_[np.ones(X_grid.shape[0]), X_grid]
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
# Color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
pp = pp.reshape(xx.shape)
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5], colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
Explanation: 3.2.3. Free parameters
Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depends on several factors:
Number of iterations
Initialization
Learning step
Exercise 4: Visualize the variability of gradient descent caused by initializations. To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values.
Note that you can do this exercise with a loop over the 100 executions, including the code from the previous code cell inside the loop, with some proper modifications. To plot a histogram of the values in array p with n bins, you can use plt.hist(p, n).
3.2.3.1. Learning step
The learning step, $\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\rho$ make the algorithm diverge. For too small values, the convergence gets very slow and more iterations are required for a good convergence.
Exercise 5: Observe the evolution of the negative log-likelihood with the number of iterations for different values of $\rho$. It is easy to check that, for large enough $\rho$, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\rho$ marking the boundary between convergence and divergence?
Exercise 6: In this exercise we explore the influence of the learning step more systematically. Use the code in the previous exercises to compute, for every value of $\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\rho$.
Note that you should explore the values of $\rho$ in a logarithmic scale. For instance, you can take $\rho = 1, \frac{1}{10}, \frac{1}{100}, \frac{1}{1000}, \ldots$
In practice, the selection of $\rho$ may be a matter of trial and error. Also, there is some theoretical evidence that the learning step should decrease over time towards zero, and that the sequence $\rho_n$ should satisfy two conditions:
- C1: $\sum_{n=0}^{\infty} \rho_n^2 < \infty$ (the steps decrease fast enough)
- C2: $\sum_{n=0}^{\infty} \rho_n = \infty$ (but not too fast)
For instance, we can take $\rho_n= \frac{1}{n}$. Another common choice is $\rho_n = \frac{\alpha}{1+\beta n}$ where $\alpha$ and $\beta$ are also free parameters that can be selected by trial and error with some heuristic method.
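As a small sketch (the values of alpha and beta below are illustrative, not taken from the lab):
alpha, beta = 0.1, 0.01
rho_schedule = [alpha / (1 + beta * n) for n in range(200)]  # decaying learning steps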
3.2.4. Visualizing the posterior map.
We can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights.
End of explanation
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
g = 5 # Degree of polynomial
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print(f'The optimal weights are: {w.T}')
print('The final error rates are:')
print(f'- Training: {pe_tr} \n- Test: {pe_tst}')
print('The NLL after training is', nll_tr[len(nll_tr)-1])
Explanation: 3.2.5. Polynomial Logistic Regression
The error rates of the logistic regression model can be potentially reduced by using polynomial transformations.
To compute the polynomial transformation up to a given degree, we can use the PolynomialFeatures method in sklearn.preprocessing.
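As a quick illustration of what this transformation produces (a sketch for a single two-dimensional sample):
from sklearn.preprocessing import PolynomialFeatures
import numpy as np
print(PolynomialFeatures(degree=2).fit_transform(np.array([[2.0, 3.0]])))
# [[1. 2. 3. 4. 6. 9.]] -> columns are 1, x0, x1, x0^2, x0*x1, x1^2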
End of explanation
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((Z_grid.shape[0],1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
plt.legend(loc='best')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5], colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
Explanation: Visualizing the posterior map, we can see that the polynomial transformation produces nonlinear decision boundaries.
End of explanation
def logregFit2(Z_tr, Y_tr, rho, n_it, C=1e4):
# Compute Z's
r = 2.0/C
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
p_tr = logistic(np.dot(Z_tr, w))
sk = np.multiply(p_tr, 1-p_tr)
S = np.diag(np.ravel(sk.T))
# Compute negative log-likelihood
nll_tr[n] = - np.dot(Y_tr.T, np.log(p_tr)) - np.dot((1-Y_tr).T, np.log(1-p_tr))
# Update weights
invH = np.linalg.inv(r*np.identity(n_dim) + np.dot(Z_tr.T, np.dot(S, Z_tr)))
w += rho*np.dot(invH, np.dot(Z_tr.T, Y_tr - p_tr))
return w, nll_tr
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
C = 1000
g = 4
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(X_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(X_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit2(Z_tr, Y_tr2, rho, n_it, C)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
print('The NLL after training is:', str(nll_tr[len(nll_tr)-1]))
Explanation: 4. Regularization and MAP estimation.
4.1 MAP estimation
An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the MAP estimate is defined as
$$
\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}|{\mathcal D})
$$
The posterior density $p({\bf w}|{\mathcal D})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$ through the Bayes rule
$$
p({\bf w}|{\mathcal D}) =
\frac{P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w})}
{p\left({\mathcal D}\right)}
$$
In general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\bf w}$. Therefore, the MAP solution is given by
\begin{align}
\hat{\bf w}_{\text{MAP}} & = \arg\max_{\bf w} \left\{ P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w}) \right\} \\
& = \arg\max_{\bf w} \left\{ L({\mathbf w}) + \log p_{\bf W}({\bf w})\right\} \\
& = \arg\min_{\bf w} \left\{ \text{NLL}({\mathbf w}) - \log p_{\bf W}({\bf w})\right\}
\end{align}
In the light of this expression, we can conclude that the MAP solution is affected by two terms:
- The likelihood, which takes large values for parameter vectors $\bf w$ that fit well the training data (smaller $\text{NLL}$ values)
- The prior distribution of weights $p_{\bf W}({\bf w})$, which expresses our a priori preference for some solutions.
4.2. Regularization
Even though the prior distribution has a natural interpretation as a model of our knowledge about ${\bf w}$ before observing the data, its choice is frequently motivated by the need to avoid data overfitting.
Data overfitting is a frequent problem in ML estimation when the dimension of ${\bf w}$ is much higher than the dimension of the input ${\bf x}$: the ML solution can become too closely adjusted to the training data, while the test error rate is large.
In practice, we resort to prior distributions that take large values when $\|{\bf w}\|$ is small (which is associated with smooth classification boundaries). This helps to improve generalization.
In this way, the MAP criterion adds a penalty term to the ML objective, that penalizes parameter vectors for which the prior distribution of weights takes small values.
In machine learning, the process of introducing penalty terms to avoid overfitting is usually named regularization.
4.3 MAP estimation with Gaussian prior
If we assume that ${\bf W}$ is a zero-mean Gaussian random vector with covariance matrix $v{\bf I}$,
$$
p_{\bf W}({\bf w}) = \frac{1}{(2\pi v)^{N/2}} \exp\left(-\frac{1}{2v}\|{\bf w}\|^2\right)
$$
the MAP estimate becomes
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2
\right\}
\end{align}
where $C = 2v$. Note that the regularization term associated with the prior penalizes parameter vectors with large components. Parameter $C$ controls the amount of regularization, and it is named the inverse regularization strength.
Noting that
$$\nabla_{\bf w}\left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2\right\}
= - {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right) + \frac{2}{C}{\bf w},
$$
we obtain the following gradient descent rule for MAP estimation
\begin{align}
{\bf w}_{n+1} &= \left(1-\frac{2\rho_n}{C}\right){\bf w}_n
+ \rho_n {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
Note that the regularization term "pushes" the weights towards zero.
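A minimal sketch of one such regularized update, reusing the matrix notation above (Z, y, w, rho and C are assumed to be already defined):
p = logistic(Z.dot(w))                            # posterior estimates at the current weights
w = (1 - 2 * rho / C) * w + rho * Z.T.dot(y - p)  # shrink the weights, then take a gradient step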
4.4 MAP estimation with Laplacian prior
If we assume that ${\bf W}$ follows a multivariate zero-mean Laplacian distribution given by
$$
p_{\bf W}({\bf w}) = \frac{1}{(2 C)^{N}} \exp\left(-\frac{1}{C}\|{\bf w}\|_1\right)
$$
(where $\|{\bf w}\|_1=|w_1|+\ldots+|w_N|$ is the $L_1$ norm of ${\bf w}$), the MAP estimate becomes
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|_1
\right\}
\end{align}
Parameter $C$ is named the inverse regularization strength.
Exercise 7: Derive the gradient descent rules for MAP estimation of the logistic regression weights with Laplacian prior.
5. Other optimization algorithms
5.1. Stochastic Gradient descent.
Stochastic gradient descent (SGD) is based on the idea of using a single sample at each iteration of the learning algorithm. The SGD rule for ML logistic regression is
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf z}_n \left(y_n-\hat{p}_n\right)
\end{align}
Once all samples in the training set have been used, the algorithm can continue by sweeping over the training set several more times.
The computational cost of each iteration of SGD is much smaller than that of gradient descent, though it usually needs many more iterations to converge.
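A sketch of the per-sample update only (K, Z, y, w and rho are assumed to exist; folding this into logregFit is left as Exercise 8):
for k in np.random.permutation(K):        # one random pass over the training set
    p_k = logistic(np.dot(Z[k], w))
    w = w + rho * (y[k] - p_k) * Z[k]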
Exercise 8: Modify logregFit to implement an algorithm that applies the SGD rule.
5.2. Newton's method
Assume that the function to be minimized, $C({\bf w})$, can be approximated by its second order Taylor series expansion around ${\bf w}_0$
$$
C({\bf w}) \approx C({\bf w}_0)
+ \nabla_{\bf w}^\top C({\bf w}_0)({\bf w}-{\bf w}_0)
+ \frac{1}{2}({\bf w}-{\bf w}_0)^\top{\bf H}({\bf w}_0)({\bf w}-{\bf w}_0)
$$
where ${\bf H}({\bf w})$ is the <a href=https://en.wikipedia.org/wiki/Hessian_matrix> Hessian matrix</a> of $C$ at ${\bf w}$. Taking the gradient of $C({\bf w})$, and setting the result to ${\bf 0}$, the minimum of C around ${\bf w}_0$ can be approximated as
$$
{\bf w}^* = {\bf w}_0 - {\bf H}({\bf w}_0)^{-1} \nabla_{\bf w} C({\bf w}_0)
$$
Since the second order polynomial is only an approximation to $C$, ${\bf w}^*$ is only an approximation to the optimal weight vector, but we can expect ${\bf w}^*$ to be closer to the minimizer of $C$ than ${\bf w}_0$. Thus, we can repeat the process, computing a second order approximation around ${\bf w}^*$ and a new approximation to the minimizer.
<a href=https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization> Newton's method</a> is based on this idea. At each optimization step, the function to be minimized is approximated by a second order approximation using a Taylor series expansion around the current estimate. As a result, the learning rule becomes
$$\hat{\bf w}_{n+1} = \hat{\bf w}_{n} - \rho_n {\bf H}({\bf w}_n)^{-1} \nabla_{{\bf w}}C({\bf w}_n)
$$
5.2.1. Example: MAP estimation with Gaussian prior.
For instance, for the MAP estimate with Gaussian prior, the Hessian matrix becomes
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I}
+ \sum_{k=0}^{K-1} g({\bf w}^\top {\bf z}_k)
\left[1-g({\bf w}^\top {\bf z}_k)\right]{\bf z}_k {\bf z}_k^\top
$$
Defining diagonal matrix
$$
{\mathbf S}({\bf w}) = \text{diag}\left[g({\bf w}^\top {\bf z}_k) \left(1-g({\bf w}^\top {\bf z}_k)\right)\right]
$$
the Hessian matrix can be written in more compact form as
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + {\bf Z}^\top {\bf S}({\bf w}) {\bf Z}
$$
Therefore, the Newton's algorithm for logistic regression becomes
\begin{align}
{\bf w}_{n+1} = {\bf w}_{n} +
\rho_n
\left(\frac{2}{C}{\bf I} + {\bf Z}^\top {\bf S}({\bf w}_{n})
{\bf Z}
\right)^{-1}
{\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
Some variants of the Newton method are implemented in the <a href="http://scikit-learn.org/stable/"> Scikit-learn </a> package.
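For example (a sketch), a Newton-type solver can be selected through the solver argument of sklearn's LogisticRegression:
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(solver='newton-cg', C=1000)  # Newton-CG variant with mild regularization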
End of explanation
# Create a logistic regression object.
LogReg = linear_model.LogisticRegression(C=1.0)
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.fit_transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Fit model to data.
LogReg.fit(Z_tr, Y_tr)
# Classify training and test data
D_tr = LogReg.predict(Z_tr)
D_tst = LogReg.predict(Z_tst)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
# Compute Z_grid
Z_grid = poly.fit_transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
dd = LogReg.predict(Z_grid)
pp = LogReg.predict_proba(Z_grid)[:,1]
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.legend(loc='best')
plt.contour(xx, yy, pp, levels=[0.5], colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
Explanation: 6. Logistic regression in Scikit Learn.
The <a href="http://scikit-learn.org/stable/"> scikit-learn </a> package includes an efficient implementation of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression"> logistic regression</a>. To use it, we must first create a classifier object, specifying the parameters of the logistic regression algorithm.
End of explanation |
9,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Text Classification on CNAE-9 Data Set</h1>
In this notebook, we build a Text Classification Model on <a href="https
Step1: Getting the Data
Step2: The result is a 1080*857 dense matrix. The first column is the label. We extract the first column as the label and convert the rest to a sparse matrix.
Step3: Check for Class Imbalance
Step4: Model Construction and Cross-Validation
In this section, we construct a classification model by
Perform Singular Value Decomposition of the sparse matrix and keep the top 100 components.
Use a Maximum Entropy Classifier on the scaled SVD components. | Python Code:
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import seaborn as sns
%load_ext version_information
%version_information scipy, numpy, pandas, matplotlib, seaborn, version_information
Explanation: <h1 align="center">Text Classification on CNAE-9 Data Set</h1>
In this notebook, we build a Text Classification Model on <a href="https://archive.ics.uci.edu/ml/datasets/CNAE-9">CNAE-9 Dataset on UCI.</a>
From the description:
This is a data set containing 1080 documents of free text business descriptions of Brazilian companies categorized into a subset of 9 categories cataloged in a table called National Classification of Economic Activities (Classificação Nacional de Atividade Econômicas - CNAE). The original texts were pre-processed to obtain the current data set: initially, it was kept only letters and then it was removed prepositions of the texts. Next, the words were transformed to their canonical form. Finally, each document was represented as a vector, where the weight of each word is its frequency in the document. This data set is highly sparse (99.22% of the matrix is filled with zeros).
End of explanation
url = r'https://archive.ics.uci.edu/ml/machine-learning-databases/00233/CNAE-9.data'
count_df = pd.read_csv(url, header=None)
count_df.info()
Explanation: Getting the Data
End of explanation
from scipy.sparse import csr_matrix
labels, count_features = count_df.loc[:, 0], count_df.loc[:, 1:]
count_data = csr_matrix(count_features.values)
count_data
Explanation: The result is a 1080*857 dense matrix. The first column is the label. We extract the first column as the label and convert the rest to a sparse matrix.
End of explanation
label_counts = pd.Series(labels).value_counts()
label_counts.plot(kind='bar', rot=0)
Explanation: Check for Class Imbalance
End of explanation
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import StandardScaler
pipeline = Pipeline(
[
('reducer', TruncatedSVD(n_components=100, random_state=1000)),
('scaler', StandardScaler(with_mean=False)),
('model', LogisticRegression(max_iter=100, random_state=1234,
solver='lbfgs', multi_class='multinomial'))
]
)
cv = StratifiedKFold(n_splits=10, random_state=1245, shuffle=True)
predictions = cross_val_predict(pipeline, count_data, labels, cv=cv)
cr = classification_report(labels, predictions)
print(cr)
Explanation: Model Construction and Cross-Validation
In this section, we construct a classification model in two steps:
Perform Singular Value Decomposition of the sparse matrix and keep the top 100 components.
Use a Maximum Entropy (multinomial logistic regression) classifier on the scaled SVD components.
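As a side check (a sketch, not part of the original notebook), the amount of variance retained by the 100 SVD components can be inspected with:
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(n_components=100, random_state=1000).fit(count_data)
print(svd.explained_variance_ratio_.sum())  # fraction of total variance kept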
End of explanation |
9,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NetworkX
NetworkX is a Python library for doing in-memory graph analysis.
Step1: Implicit node creation on edge add
Step2: Just a touch of computational theory
Dijkstra's algorithm
Finds the shortest path between a source node and all other nodes
Time complexity | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
# A SIMPLE EXAMPLE
G=nx.Graph()
G.add_node("a")
G.add_node("b")
G.add_node("c")
G.add_node("d")
G.add_node("e")
G.add_node("f")
G.add_edge('a', 'c')
G.add_edge('b', 'c')
G.add_edge('e', 'd')
G.add_edge('c', 'e')
G.add_edge('e', 'f')
G.add_edge('c', 'f')
pos=nx.spring_layout(G)
nx.draw(G, pos=pos)
nx.draw_networkx_labels(G, pos=pos)
plt.show()
# SOME FAKE DATA
locations =['Large Warehouse', 'Small Warehouse'
, 'Retail 1', 'Retail 2', 'Retail 3', 'Retail 4'
, 'Supplier 1', 'Supplier 2', 'Supplier 3']
adjacency = [
[ 0, 1500, 100, 275, 1600, 1750, 500, 130, 1550] #Large Warehouse'
, [ -1, 0, 1475, 1600, 400, 50, 500, 1800, 100] #Small Warehouse'
, [ -1, -1, 0, 300, 1750, 1600, 9999, 9999, 9999] #Retail 1
, [ -1, -1, -1, 0, 1840, 1900, 9999, 9999, 9999] #Retail 2
, [ -1, -1, -1, -1, 0, 650, 9999, 9999, 9999] #Retail 3
, [ -1, -1, -1, -1, -1, 0, 9999, 9999, 9999] #Retail 4
, [ -1, -1, -1, -1, -1, -1, 0, 400, 700] #Supplier 1
, [ -1, -1, -1, -1, -1, -1, -1, 0, 1900] #Supplier 2
, [ -1, -1, -1, -1, -1, -1, -1, -1, 1775] #Supplier 3
]
# CONVERT THAT FAKE DATA INTO A GRAPH
g = nx.Graph()
for loc in locations:
g.add_node(loc)
for i in range(len(locations)):
r = locations[i]
row = adjacency[i]
for j in range (i+1, len(locations)):
c = locations[j]
val = row[j]
if val > 0 and val < 9999:
g.add_edge(r, c, miles=val)
# VISUALIZE OUR DATASET
pos={'Large Warehouse': [ 7, 2],
'Small Warehouse': [ 2, 1.75],
'Retail 1': [ 6.5, 3],
'Retail 2': [ 7.5, .6],
'Retail 3': [ 3, .6],
'Retail 4': [ 1.5, 0.75],
'Supplier 1': [ 5, 3.5],
'Supplier 2': [ 9, 3],
'Supplier 3': [ 1, 2.5 ]}
nx.draw(g, pos=pos, node_size=4000)
nx.draw_networkx_labels(g, pos=pos)
plt.show()
# WHAT IS THE SHORTEST ROUTE TO TRANSPORT FROM SUPPLIER 1 TO RETAIL 3?
nx.dijkstra_path(g, source='Supplier 1', target='Retail 3', weight='miles')
Explanation: NetworkX
NetworkX is a Python library for doing in-memory graph analysis.
End of explanation
print g.nodes()
g.add_edge('Supplier 1', 'Retail 5')
print g.nodes()
Explanation: Implicit node creation on edge add: a "gotcha" / feature of note
End of explanation
ap = nx.floyd_warshall(g, weight='miles')
print(ap['Supplier 3'])
Explanation: Just a touch of computational theory
Dijkstra's algorithm
Finds the shortest path between a source node and all other nodes
Time complexity: $O(|E| + |V|\text{log}|V|)$
A* search algorithm
Extends Dijkstra adding in some heuristics
Time complexity: $O(|E|)$
Johnson's algorithm
Finds all pairs of shortest paths in a weighted graph
An extension of Dijkstra (sort of)
An impressive algorithm
Time complexity: $O(|V|^2 \log|V| + |V||E|)$
Floyd-Warshall algorithm
Finds all pairs of shortest paths in a weighted graph
Allows negative edges, but no negative cycles
Better with highly dense graphs
Time complexity: $O(|V|^3)$
Space complexity: $\Theta(|V|^2)$
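A short sketch of how these appear in NetworkX, reusing the graph g built above (the zero heuristic below is a placeholder, which makes A* behave like Dijkstra):
path = nx.astar_path(g, 'Supplier 1', 'Retail 3', heuristic=lambda u, v: 0, weight='miles')
all_pairs = nx.johnson(g, weight='miles')  # dict of shortest paths between every pair of nodes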
End of explanation |
9,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP example
Step1: Landau-Zener-Stuckelberg interferometry
Step2: Versions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
from qutip.ui.progressbar import TextProgressBar as ProgressBar
Explanation: QuTiP example: Landau-Zener-Stuckelberg interferometry
J.R. Johansson and P.D. Nation
For more information about QuTiP see http://qutip.org
End of explanation
# set up the parameters and start calculation
delta = 1.0 * 2 * np.pi # qubit sigma_x coefficient
w = 2.0 * 2 * np.pi # driving frequency
T = 2 * np.pi / w # driving period
gamma1 = 0.00001 # relaxation rate
gamma2 = 0.005 # dephasing rate
eps_list = np.linspace(-20.0, 20.0, 101) * 2 * np.pi
A_list = np.linspace( 0.0, 20.0, 101) * 2 * np.pi
# pre-calculate the necessary operators
sx = sigmax(); sz = sigmaz(); sm = destroy(2); sn = num(2)
# collapse operators
c_op_list = [np.sqrt(gamma1) * sm, np.sqrt(gamma2) * sz] # relaxation and dephasing
# ODE settings (for list-str format)
options = Options()
options.atol = 1e-6 # reduce accuracy to speed
options.rtol = 1e-5 # up the calculation a bit
options.rhs_reuse = True # Compile Hamiltonian only the first time.
# perform the calculation for each combination of eps and A, store the result
# in a matrix
def calculate():
p_mat = np.zeros((len(eps_list), len(A_list)))
H0 = - delta/2.0 * sx
# Define H1 (first time-dependent term)
# String method:
H1 = [- sz / 2, 'eps']
# Function method:
# H1 = [- sz / 2, lambda t, args: args['eps'] ]
# Define H2 (second time-dependent term)
# String method:
H2 = [sz / 2, 'A * sin(w * t)']
# Function method:
# H2 = [sz / 2, lambda t, args: args['A']*np.sin(args['w'] * t) ]
H = [H0, H1, H2]
pbar = ProgressBar(len(eps_list))
for m, eps in enumerate(eps_list):
pbar.update(m)
for n, A in enumerate(A_list):
args = {'w': w, 'A': A, 'eps': eps}
U = propagator(H, T, c_op_list, args, options=options)
rho_ss = propagator_steadystate(U)
p_mat[m,n] = np.real(expect(sn, rho_ss))
return p_mat
p_mat = calculate()
fig, ax = plt.subplots(figsize=(8, 8))
A_mat, eps_mat = np.meshgrid(A_list/(2*np.pi), eps_list/(2*np.pi))
ax.pcolor(eps_mat, A_mat, p_mat, shading='auto')
ax.set_xlabel(r'Bias point $\epsilon$')
ax.set_ylabel(r'Amplitude $A$')
ax.set_title("Steadystate excitation probability\n" +
r'$H = -\frac{1}{2}\Delta\sigma_x -\frac{1}{2}\epsilon\sigma_z - \frac{1}{2}A\sin(\omega t)$' + "\n");
Explanation: Landau-Zener-Stuckelberg interferometry: Steady state of a strongly driven two-level system, using the one-period propagator.
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Versions
End of explanation |
9,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 保存和恢复模型
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 获取示例数据集
为了演示如何保存和加载权重,您将使用 MNIST 数据集。为了加快运行速度,请使用前 1000 个样本:
Step3: 定义模型
首先构建一个简单的序列(sequential)模型:
Step4: 在训练期间保存模型(以 checkpoints 形式保存)
您可以使用经过训练的模型而无需重新训练,或者在训练过程中断的情况下从离开处继续训练。tf.keras.callbacks.ModelCheckpoint 回调允许您在训练期间和结束时持续保存模型。
Checkpoint 回调用法
创建一个只在训练期间保存权重的 tf.keras.callbacks.ModelCheckpoint 回调:
Step5: 这将创建一个 TensorFlow checkpoint 文件集合,这些文件在每个 epoch 结束时更新:
Step6: 只要两个模型共享相同的架构,您就可以在它们之间共享权重。因此,当从仅权重恢复模型时,创建一个与原始模型具有相同架构的模型,然后设置其权重。
现在,重新构建一个未经训练的全新模型并基于测试集对其进行评估。未经训练的模型将以机会水平执行(约 10% 的准确率):
Step7: 然后从 checkpoint 加载权重并重新评估:
Step8: checkpoint 回调选项
回调提供了几个选项,为 checkpoint 提供唯一名称并调整 checkpoint 频率。
训练一个新模型,每五个 epochs 保存一次唯一命名的 checkpoint :
Step9: 现在查看生成的 checkpoint 并选择最新的 checkpoint :
Step10: 注:默认 TensorFlow 格式只保存最近的 5 个检查点。
如果要进行测试,请重置模型并加载最新的 checkpoint :
Step11: 这些文件是什么?
上述代码将权重存储到 checkpoint—— 格式化文件的集合中,这些文件仅包含二进制格式的训练权重。 Checkpoints 包含:
一个或多个包含模型权重的分片。
一个索引文件,指示哪些权重存储在哪个分片中。
如果您在一台计算机上训练模型,您将获得一个具有如下后缀的分片:.data-00000-of-00001
手动保存权重
使用 Model.save_weights 方法手动保存权重。默认情况下,tf.keras(尤其是 save_weights)使用扩展名为 .ckpt 的 TensorFlow 检查点格式(保存在扩展名为 .h5 的 HDF5 中,保存和序列化模型指南中会讲到这一点):
Step12: 保存整个模型
调用 model.save 将保存模型的结构,权重和训练配置保存在单个文件/文件夹中。这可以让您导出模型,以便在不访问原始 Python 代码*的情况下使用它。因为优化器状态(optimizer-state)已经恢复,您可以从中断的位置恢复训练。
整个模型可以保存为两种不同的文件格式(SavedModel 和 HDF5)。TensorFlow SavedModel 格式是 TF2.x 中的默认文件格式。但是,模型能够以 HDF5 格式保存。下面详细介绍了如何以两种文件格式保存整个模型。
保存完整模型会非常有用——您可以在 TensorFlow.js(Saved Model, HDF5)加载它们,然后在 web 浏览器中训练和运行它们,或者使用 TensorFlow Lite 将它们转换为在移动设备上运行(Saved Model, HDF5)
自定义对象(例如,子类化模型或层)在保存和加载时需要特别注意。请参阅下面的保存自定义对象*部分
SavedModel 格式
SavedModel 格式是另一种序列化模型的方式。以这种格式保存的模型可以使用 tf.keras.models.load_model 恢复,并且与 TensorFlow Serving 兼容。SavedModel 指南详细介绍了如何应用/检查 SavedModel。以下部分说明了保存和恢复模型的步骤。
Step13: SavedModel 格式是一个包含 protobuf 二进制文件和 TensorFlow 检查点的目录。检查保存的模型目录:
Step14: 从保存的模型重新加载一个新的 Keras 模型:
Step15: 使用与原始模型相同的参数编译恢复的模型。尝试使用加载的模型运行评估和预测:
Step16: HDF5 格式
Keras使用 HDF5 标准提供了一种基本的保存格式。
Step17: 现在,从该文件重新创建模型:
Step18: 检查其准确率(accuracy): | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install pyyaml h5py # Required to save models in HDF5 format
import os
import tensorflow as tf
from tensorflow import keras
print(tf.version.VERSION)
Explanation: Save and restore models
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/keras/save_and_load"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 tensorflow.google.cn 上查看</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/save_and_load.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 运行</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/save_and_load.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 Github 上查看源代码</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/keras/save_and_load.ipynb" class="_active_edit_href"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载此 notebook</a> </td>
</table>
Model progress can be saved during and after training. This means a model can resume where it left off and avoid long training times. Saving also means you can share your model and others can recreate your work. When publishing research models and techniques, most machine learning practitioners share:
code to create the model, and
the trained weights, or parameters, for the model.
Sharing this data helps others understand how the model works and try it themselves with new data.
Caution: TensorFlow models are code and it is important to be careful with untrusted code. See Using TensorFlow Securely for details.
Options
There are different ways to save TensorFlow models depending on the API you're using. This guide uses tf.keras, a high-level API to build and train models in TensorFlow. For other approaches, see the TensorFlow Save and Restore guide or Saving in eager.
Setup
Install and import
Install and import TensorFlow and dependencies:
End of explanation
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
Explanation: Get an example dataset
To demonstrate how to save and load weights, you'll use the MNIST dataset. To speed up these runs, use the first 1000 examples:
End of explanation
# Define a simple sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation='relu', input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.metrics.SparseCategoricalAccuracy()])
return model
# Create a basic model instance
model = create_model()
# Display the model's architecture
model.summary()
Explanation: Define a model
Start by building a simple sequential model:
End of explanation
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
save_weights_only=True,
verbose=1)
# Train the model with the new callback
model.fit(train_images,
train_labels,
epochs=10,
validation_data=(test_images, test_labels),
callbacks=[cp_callback]) # Pass callback to training
# This may generate warnings related to saving the state of the optimizer.
# These warnings (and similar warnings throughout this notebook)
# are in place to discourage outdated usage, and can be ignored.
Explanation: Save checkpoints during training
You can use a trained model without having to retrain it, or pick up training where you left off in case the training process was interrupted. The tf.keras.callbacks.ModelCheckpoint callback allows you to continually save the model both during and at the end of training.
Checkpoint callback usage
Create a tf.keras.callbacks.ModelCheckpoint callback that saves weights only during training:
End of explanation
os.listdir(checkpoint_dir)
Explanation: This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
End of explanation
# Create a basic model instance
model = create_model()
# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100 * acc))
Explanation: As long as two models share the same architecture you can share weights between them. So, when restoring a model from weights-only, create a model with the same architecture as the original model and then set its weights.
Now rebuild a fresh, untrained model and evaluate it on the test set. An untrained model will perform at chance level (about 10% accuracy):
End of explanation
# Loads the weights
model.load_weights(checkpoint_path)
# Re-evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100 * acc))
Explanation: Then load the weights from the checkpoint and re-evaluate:
End of explanation
# Include the epoch in the file name (uses `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
batch_size = 32
# Create a callback that saves the model's weights every 5 epochs
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
save_freq=5*batch_size)
# Create a new model instance
model = create_model()
# Save the weights using the `checkpoint_path` format
model.save_weights(checkpoint_path.format(epoch=0))
# Train the model with the new callback
model.fit(train_images,
train_labels,
epochs=50,
batch_size=batch_size,
callbacks=[cp_callback],
validation_data=(test_images, test_labels),
verbose=0)
Explanation: Checkpoint callback options
The callback provides several options to give the checkpoints unique names and to adjust the checkpointing frequency.
Train a new model, and save uniquely named checkpoints once every five epochs:
End of explanation
os.listdir(checkpoint_dir)
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
Explanation: Now look at the resulting checkpoints and choose the latest one:
End of explanation
# Create a new model instance
model = create_model()
# Load the previously saved weights
model.load_weights(latest)
# Re-evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100 * acc))
Explanation: Note: the default TensorFlow format only saves the 5 most recent checkpoints.
To test, reset the model and load the latest checkpoint:
End of explanation
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Create a new model instance
model = create_model()
# Restore the weights
model.load_weights('./checkpoints/my_checkpoint')
# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100 * acc))
Explanation: What are these files?
The above code stores the weights to a collection of checkpoint-formatted files that contain only the trained weights in a binary format. Checkpoints contain:
One or more shards that contain your model's weights.
An index file that indicates which weights are stored in which shard.
If you are training a model on a single machine, you'll have one shard with the suffix: .data-00000-of-00001
Manually save weights
Manually save weights with the Model.save_weights method. By default, tf.keras (and save_weights in particular) uses the TensorFlow checkpoint format with a .ckpt extension (saving in HDF5 with a .h5 extension is covered in the Save and serialize models guide):
End of explanation
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model as a SavedModel.
!mkdir -p saved_model
model.save('saved_model/my_model')
Explanation: Save the entire model
Call model.save to save a model's architecture, weights, and training configuration in a single file/folder. This allows you to export a model so it can be used without access to the original Python code. Since the optimizer state is recovered, you can resume training from exactly where you left off.
An entire model can be saved in two different file formats (SavedModel and HDF5). The TensorFlow SavedModel format is the default file format in TF2.x. However, models can also be saved in HDF5 format. More details on saving entire models in the two file formats are given below.
Saving a fully functional model is very useful: you can load it in TensorFlow.js (Saved Model, HDF5) and then train and run it in web browsers, or convert it to run on mobile devices using TensorFlow Lite (Saved Model, HDF5).
Custom objects (for example, subclassed models or layers) require special attention when saving and loading. See the Saving custom objects section below.
SavedModel format
The SavedModel format is another way to serialize models. Models saved in this format can be restored using tf.keras.models.load_model and are compatible with TensorFlow Serving. The SavedModel guide goes into detail about how to serve/inspect a SavedModel. The section below illustrates the steps to save and restore the model.
End of explanation
# my_model directory
!ls saved_model
# Contains an assets folder, saved_model.pb, and variables folder.
!ls saved_model/my_model
Explanation: The SavedModel format is a directory containing a protobuf binary and a TensorFlow checkpoint. Inspect the saved model directory:
End of explanation
new_model = tf.keras.models.load_model('saved_model/my_model')
# Check its architecture
new_model.summary()
Explanation: Reload a fresh Keras model from the saved model:
End of explanation
# Evaluate the restored model
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100 * acc))
print(new_model.predict(test_images).shape)
Explanation: The restored model is compiled with the same arguments as the original model. Try running evaluate and predict with the loaded model:
End of explanation
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to a HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')
Explanation: HDF5 format
Keras provides a basic save format using the HDF5 standard.
End of explanation
# Recreate the exact same model, including its weights and the optimizer
new_model = tf.keras.models.load_model('my_model.h5')
# Show the model architecture
new_model.summary()
Explanation: Now, recreate the model from that file:
End of explanation
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100 * acc))
Explanation: Check its accuracy:
End of explanation |
9,395 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: The list of all the emails from Sara are in the from_sara list likewise for emails from Chris (from_chris).
The actual documents are in the Enron email dataset, which you downloaded/unpacked. The data is stored in lists and packed away in pickle files at the end.
Step2: temp_counter is a way to speed up the development--there are thousands of emails from Sara and Chris, so running over all of them can take a long time. temp_counter helps you only look at the first 200 emails in the list so you can iterate your modifications quicker
Step3: TfIdf
Tf Term Frequency
Idf Inverse document frequency
Step4: How many different words are there?
Step5: What is word number 34597 in your TfIdf? | Python Code:
from_sara = open('../text_learning/from_sara.txt', "r")
from_chris = open('../text_learning/from_chris.txt', "r")
from_data = []
word_data = []
from nltk.stem.snowball import SnowballStemmer
import string
filePath = '/Users/omojumiller/mycode/hiphopathy/HipHopDataExploration/JayZ/'
f = open(filePath+"JayZ_American Gangster_American Gangster.txt", "r")
f.seek(0) ### go back to beginning of file (annoying)
all_text = f.read()
content = all_text.split("X-FileName:")
words = ""
stemmer = SnowballStemmer("english")
text_string = content
for sentence in text_string:
words = sentence.split()
stemmed_words = [stemmer.stem(word) for word in words]
def parseOutText(f):
    """given an opened email file f, parse out all text below the
    metadata block at the top
    example use case:
    f = open("email_file_name.txt", "r")
    text = parseOutText(f)
    """
stemmer = SnowballStemmer("english")
f.seek(0) ### go back to beginning of file (annoying)
all_text = f.read()
### split off metadata
content = all_text.split("X-FileName:")
words = ""
if len(content) > 1:
### remove punctuation
text_string = content[1].translate(string.maketrans("", ""), string.punctuation)
### split the text string into individual words, stem each word,
### and append the stemmed word to words (make sure there's a single
### space between each stemmed word)
words = ' '.join([stemmer.stem(word) for word in text_string.split()])
return words
ff = open("../text_learning/test_email.txt", "r")
text = parseOutText(ff)
print text
Explanation: The list of all the emails from Sara are in the from_sara list likewise for emails from Chris (from_chris).
The actual documents are in the Enron email dataset, which you downloaded/unpacked. The data is stored in lists and packed away in pickle files at the end.
End of explanation
temp_counter = 1
for name, from_person in [("sara", from_sara), ("chris", from_chris)]:
for path in from_person:
### only look at first 200 emails when developing
### once everything is working, remove this line to run over full dataset
#temp_counter += 1
if temp_counter:
path = os.path.join('..', path[:-1])
#print path
email = open(path, "r")
### use parseOutText to extract the text from the opened email
text = parseOutText(email)
### use str.replace() to remove any instances of the words
replaceWords = ["sara", "shackleton", "chris", "germani"]
for word in replaceWords:
text = text.replace(word, '')
### append the text to word_data
word_data.append(text)
### append a 0 to from_data if email is from Sara, and 1 if email is from Chris
if name == "sara":
from_data.append(0)
else:
from_data.append(1)
email.close()
print "emails processed"
len(word_data)
Explanation: temp_counter is a way to speed up the development--there are thousands of emails from Sara and Chris, so running over all of them can take a long time. temp_counter helps you only look at the first 200 emails in the list so you can iterate your modifications quicker
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(stop_words="english",lowercase=True)
bag_of_words = vectorizer.fit(word_data)
Explanation: TfIdf
Tf Term Frequency
Idf Inverse document frequency
End of explanation
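As a quick sketch of what the vectorizer above computes (assuming scikit-learn's defaults, smooth_idf=True with l2 row normalization):
$$\mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \cdot \left( \ln\frac{1 + n}{1 + \mathrm{df}(t)} + 1 \right)$$
where $n$ is the number of documents and $\mathrm{df}(t)$ is the number of documents containing term $t$; each row is then scaled to unit length.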
len(vectorizer.get_feature_names())
Explanation: How many different words are there?
End of explanation
vectorizer.get_feature_names()[34597]
Explanation: What is word number 34597 in your TfIdf?
End of explanation |
9,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading ns-ALEX data from Photon-HDF5
In this notebook we show how to read a ns-ALEX smFRET measurement stored in *
Photon-HDF5 format
using python and a few common scientific libraries (numpy, pytables, matplotlib).
Specifically, we show how to load timestamps, detectors and nanotimes arrays
and how to plot a TCSPC histogram.*
For a µs-ALEX example see Reading µs-ALEX data from Photon-HDF5.
Step2: 1. Utility functions
Here we define a utility function to print HDF5 file contents
Step3: 2. Open the data file
Let's assume we have a Photon-HDF5 file at the following location
Step4: We can open the file, as a normal HDF5 file
Step5: The object h5file is a pytables file reference. The root group is accessed with h5file.root.
3. Print the content
Let's start by taking a look at the file content
Step6: We see the typical Photon-HDF5 structure. In particular the field description provides a short description of the measurement and acquisition_duration tells that the acquisition lasted 900 seconds.
As an example, let's take a look at the content of the sample group
Step7: Let's define a shortcut to the photon_data group to save some typing later
Step8: 4. Reading the data
First, we make sure the file contains the right type of measurement
Step9: OK, that's what we expect.
Now we can load all the photon_data arrays and their specs
Step10: We may want to check the excitation wavelengths used in the measurement. This information is found in the setup group
Step11: Now, let's load the definitions of donor/acceptor channel and excitation periods
Step12: These numbers define the donor and acceptor excitation periods as shown below
Step13: Next, we compute the histograms
Step14: And finally we plot the TCSPC histogram using matplotlib | Python Code:
from __future__ import division, print_function # only needed on py2
%matplotlib inline
import numpy as np
import tables
import matplotlib.pyplot as plt
Explanation: Reading ns-ALEX data from Photon-HDF5
In this notebook we show how to read a ns-ALEX smFRET measurement stored in *
Photon-HDF5 format
using python and a few common scientific libraries (numpy, pytables, matplotlib).
Specifically, we show how to load timestamps, detectors and nanotimes arrays
and how to plot a TCSPC histogram.*
For a µs-ALEX example see Reading µs-ALEX data from Photon-HDF5.
End of explanation
def print_children(group):
    """Print all the sub-groups in `group` and leaf-nodes children of `group`.

    Parameters:
        group (pytables group): the group to be printed.
    """
for name, value in group._v_children.items():
if isinstance(value, tables.Group):
content = '(Group)'
else:
content = value.read()
print(name)
print(' Content: %s' % content)
print(' Description: %s\n' % value._v_title.decode())
Explanation: 1. Utility functions
Here we define a utility function to print HDF5 file contents:
End of explanation
filename = '../data/Pre.hdf5'
Explanation: 2. Open the data file
Let's assume we have a Photon-HDF5 file at the following location:
End of explanation
h5file = tables.open_file(filename)
Explanation: We can open the file, as a normal HDF5 file
End of explanation
print_children(h5file.root)
Explanation: The object h5file is a pytables file reference. The root group is accessed with h5file.root.
3. Print the content
Let's start by taking a look at the file content:
End of explanation
print_children(h5file.root.sample)
Explanation: We see the typical Photon-HDF5 structure. In particular the field description provides a short description of the measurement and acquisition_duration tells that the acquisition lasted 900 seconds.
As an example, let's take a look at the content of the sample group:
End of explanation
photon_data = h5file.root.photon_data
Explanation: Let's define a shortcut to the photon_data group to save some typing later:
End of explanation
photon_data.measurement_specs.measurement_type.read().decode()
Explanation: 4. Reading the data
First, we make sure the file contains the right type of measurement:
End of explanation
timestamps = photon_data.timestamps.read()
timestamps_unit = photon_data.timestamps_specs.timestamps_unit.read()
detectors = photon_data.detectors.read()
nanotimes = photon_data.nanotimes.read()
tcspc_num_bins = photon_data.nanotimes_specs.tcspc_num_bins.read()
tcspc_unit = photon_data.nanotimes_specs.tcspc_unit.read()
print('Number of photons: %d' % timestamps.size)
print('Timestamps unit: %.2e seconds' % timestamps_unit)
print('TCSPC unit: %.2e seconds' % tcspc_unit)
print('TCSPC number of bins: %d' % tcspc_num_bins)
print('Detectors: %s' % np.unique(detectors))
Explanation: OK, that's what we expect.
Now we can load all the photon_data arrays and their specs:
End of explanation
h5file.root.setup.excitation_wavelengths.read()
Explanation: We may want to check the excitation wavelengths used in the measurement. This information is found in the setup group:
End of explanation
donor_ch = photon_data.measurement_specs.detectors_specs.spectral_ch1.read()
acceptor_ch = photon_data.measurement_specs.detectors_specs.spectral_ch2.read()
print('Donor CH: %d Acceptor CH: %d' % (donor_ch, acceptor_ch))
laser_rep_rate = photon_data.measurement_specs.laser_repetition_rate.read()
donor_period = photon_data.measurement_specs.alex_excitation_period1.read()
acceptor_period = photon_data.measurement_specs.alex_excitation_period2.read()
print('Laser repetition rate: %5.1f MHz \nDonor period: %s \nAcceptor period: %s' % \
(laser_rep_rate*1e-6, donor_period, acceptor_period))
Explanation: Now, let's load the definitions of donor/acceptor channel and excitation periods:
End of explanation
nanotimes_donor = nanotimes[detectors == donor_ch]
nanotimes_acceptor = nanotimes[detectors == acceptor_ch]
Explanation: These numbers define the donor and acceptor excitation periods as shown below:
$$150 < \widetilde{t} < 1500 \qquad \textrm{donor period}$$
$$1540 < \widetilde{t} < 3050 \qquad \textrm{acceptor period}$$
where $\widetilde{t}$ represents the nanotimes array.
For more information
please refer to the measurements_specs section
of the Reference Documentation.
5. Plotting the TCSPC histogram
Let's start by separating nanotimes from donor and acceptor channels:
End of explanation
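As a hedged aside, the nanotime windows quoted above can also be used directly as boolean masks (a minimal sketch reusing the donor_period array read earlier; the histograms below do not depend on it):
# Select photons whose nanotime falls inside the donor excitation period
d_period_mask = (nanotimes > donor_period[0]) & (nanotimes < donor_period[1])
print('Photons in the donor excitation period: %d' % d_period_mask.sum())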
bins = np.arange(0, tcspc_num_bins + 1)
hist_d, _ = np.histogram(nanotimes_donor, bins=bins)
hist_a, _ = np.histogram(nanotimes_acceptor, bins=bins)
Explanation: Next, we compute the histograms:
End of explanation
fig, ax = plt.subplots(figsize=(10, 4.5))
scale = tcspc_unit*1e9
ax.plot(bins[:-1]*scale, hist_d, color='green', label='donor')
ax.plot(bins[:-1]*scale, hist_a, color='red', label='acceptor')
ax.axvspan(donor_period[0]*scale, donor_period[1]*scale, alpha=0.3, color='green')
ax.axvspan(acceptor_period[0]*scale, acceptor_period[1]*scale, alpha=0.3, color='red')
ax.set_xlabel('TCSPC Nanotime (ns) ')
ax.set_title('TCSPC Histogram')
ax.set_yscale('log')
ax.set_ylim(10)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5));
#plt.close('all')
Explanation: And finally we plot the TCSPC histogram using matplotlib:
End of explanation |
9,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1"><a href="#Pandas-Python-Data-Analysis-Library"><span class="toc-item-num">1 - </span><a href="http
Step1: Pandas Python Data Analysis Library
If you find manipulating dataframes in R a bit too cumbersome, why don't you give Pandas a chance. On top of easy and efficient table management, plotting functionality is pretty great.
Data Structures in Pandas
Data alignment is intrinsic in pandas.
Series
One-dimensional labelled array which can hold any data type (even Python objects).
Step2: <span class="mark">Starting from version v0.8.0, pandas supports non-unique index values</span>
Series is ndarray-like, dict-like, supports vectorized operations and label alignment
Step3: DataFrame
DataFrame is a 2-dimensional labelled data structure, like a spreadsheet or SQL table or a dict of Series objects. Obviously, the most used data structure in Pandas and what we'll be discussing more often.
Step4: There are several other constructors for creating a DataFrame object
- pd.DataFrame.from_records
- pd.DataFrame.from_dict
- pd.DataFrame.from_items
Other Pandas data objects which we are not going to talk about are
Panels (3D, 4D, ND)
IO Tools
The Pandas I/O API is a set of nice reader functions which generally return a pandas object
pd.from_csv
Some important parameters
- sep - Delimiter
- index_col - Specifies which column to select as index
- usecols - Specify which columns to read when reading a file
- compression - Can handle gzip, bz2 compressed text files
- comment - Comment character
- names - If header=None, you can specify the names of columns
- iterator - Return an iterator TextFileReader object
Step5: Let's see the power of pandas. We'll use Gencode v24 to demonstrate and read the annotation file.
Step6: pd.DataFrame.to_csv
Dumps data to a csv file. A lot of optional parameters apply which will help you save the file just like you want.
python
iris.to_csv("iris_copy.csv")
pd.DataFrame.to_hdf
python
iris.to_hdf("iris_copy.h5", "df")
Creates a HDF5 file (binary indexed file for faster loading and index filtering during load times). Requires pytables as a dependency if you want to go full on with its functionality
Reshaping
Almost everyone will be familiar with how much you need to reshape the data to plot it properly. This functionality is also pretty well covered in pandas.
pd.melt
Step7: Indexing and Selecting Data
pd.DataFrame and pd.Series support basic array-like indexing. To get into detail, it's better to use .loc and .iloc
Step9: <span class="burk"><span class="girk">Almost forgot, HTML conditional formatting just made it into the latest release 0.17.1 and it's pretty awesome. Use a function to your liking or do it with a background gradient</span></span>
Step10: Group-by and apply
You can group data (on both axes) based on a criterion. It returns an iterator but you can directly apply a function without the need to iterate through.
Remember though, you'll get a new index based on what you group with if you directly apply the function without iterating over the groups.
pd.DataFrame.groupby
Step12: Applying a function
pd.DataFrame.apply
Step13: Filtering (Numeric and String)
There's always a need for that. Obviously you need the float & int filters, but there are exceptional string filtering options baked in as well... So much good stuff.
Inside the pd.DataFrame.loc, you can specify and (&), or (|), not (~) as logical operators. This stuff works and is tested ;)
- >, <, >=, <=
- str.contains, str.startswith, str.endswith | Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
from IPython.display import display, HTML
Explanation: Table of Contents
<p><div class="lev1"><a href="#Pandas-Python-Data-Analysis-Library"><span class="toc-item-num">1 - </span><a href="http://pandas.pydata.org" target="_blank">Pandas</a> Python Data Analysis Library</a></div><div class="lev2"><a href="#Data-Structures-in-Pandas"><span class="toc-item-num">1.1 - </span>Data Structures in Pandas</a></div><div class="lev3"><a href="#Series"><span class="toc-item-num">1.1.1 - </span>Series</a></div><div class="lev3"><a href="#DataFrame"><span class="toc-item-num">1.1.2 - </span>DataFrame</a></div><div class="lev3"><a href="#Panels-(3D,-4D,-ND)"><span class="toc-item-num">1.1.3 - </span>Panels (3D, 4D, ND)</a></div><div class="lev2"><a href="#IO-Tools"><span class="toc-item-num">1.2 - </span>IO Tools</a></div><div class="lev3"><a href="#pd.from_csv"><span class="toc-item-num">1.2.1 - </span>pd.from_csv</a></div><div class="lev3"><a href="#pd.DataFrame.to_csv"><span class="toc-item-num">1.2.2 - </span>pd.DataFrame.to_csv</a></div><div class="lev3"><a href="#pd.DataFrame.to_hdf"><span class="toc-item-num">1.2.3 - </span>pd.DataFrame.to_hdf</a></div><div class="lev2"><a href="#Reshaping"><span class="toc-item-num">1.3 - </span>Reshaping</a></div><div class="lev2"><a href="#Indexing-and-Selecting-Data"><span class="toc-item-num">1.4 - </span>Indexing and Selecting Data</a></div><div class="lev2"><a href="#Group-by-and-apply"><span class="toc-item-num">1.5 - </span>Group-by and apply</a></div><div class="lev3"><a href="#Applying-a-function"><span class="toc-item-num">1.5.1 - </span>Applying a function</a></div><div class="lev2"><a href="#Filtering-(Numeric-and-String)"><span class="toc-item-num">1.6 - </span>Filtering (Numeric and String)</a></div>
> `Usual stuff to import`
End of explanation
series_one = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
series_one
series_two = pd.Series({'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5})
series_two
Explanation: Pandas Python Data Analysis Library
If you find manipulating dataframes in R a bit too cumbersome, why don't you give Pandas a chance. On top of easy and efficient table management, plotting functionality is pretty great.
Data Structures in Pandas
Data alignment is intrinsic in pandas.
Series
One-dimensional labelled array which can hold any data type (even Python objects).
End of explanation
series_one[2:4]
series_one['a']
series_one + series_two
series_one * 3
Explanation: <span class="mark">Starting from version v0.8.0, pandas supports non-unique index values</span>
Series is ndarray-like, dict-like, supports vectorized operations and label alignment
End of explanation
df_one = pd.DataFrame({'one': pd.Series(np.random.rand(5),
index=['a', 'b', 'c', 'd' , 'e']),
'two': pd.Series(np.random.rand(4),
index=['a', 'b', 'c', 'e'])})
df_one
Explanation: DataFrame
DataFrame is a 2-dimensional labelled data structure, like a spreadsheet or SQL table or a dict of Series objects. Obviously, the most used data structure in Pandas and what we'll be discussing more often.
End of explanation
iris = pd.read_csv("iris.csv", index_col=0)
iris.head()
Explanation: There are several other constructors for creating a DataFrame object
- pd.DataFrame.from_records
- pd.DataFrame.from_dict
- pd.DataFrame.from_items
Other Pandas data objects which we are not going to talk about are
Panels (3D, 4D, ND)
IO Tools
The Pandas I/O API is a set of nice reader functions which generally return a pandas object
pd.from_csv
Some important parameters
- sep - Delimiter
- index_col - Specifies which column to select as index
- usecols - Specify which columns to read when reading a file
- compression - Can handle gzip, bz2 compressed text files
- comment - Comment character
- names - If header=None, you can specify the names of columns
- iterator - Return an iterator TextFileReader object
End of explanation
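As a side note on the alternative constructors listed above, here is a minimal hedged sketch (the column names and values are made up purely for illustration):
# pd.DataFrame.from_dict builds a DataFrame directly from a dict of columns
df_from_dict = pd.DataFrame.from_dict({'one': [1, 2, 3], 'two': [4.0, 5.0, 6.0]})
df_from_dict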
url = "ftp://ftp.sanger.ac.uk/pub/gencode/Gencode_human/release_24/gencode.v24.primary_assembly.annotation.gtf.gz"
gencode = pd.read_csv(url, compression="gzip", iterator=True, header=None,
sep="\t", comment="#", quoting=3,
usecols=[0, 1, 2, 3, 4, 6])
gencode.get_chunk(10)
Explanation: Let's see the power of pandas. We'll use Gencode v24 to demonstrate and read the annotation file.
End of explanation
planets = pd.read_csv("planets.csv", index_col=0)
planets.head()
planets_melt = pd.melt(planets, id_vars="method")
planets_melt.head()
Explanation: pd.DataFrame.to_csv
Dumps data to a csv file. A lot of optional parameters apply which will help you save the file just like you want.
python
iris.to_csv("iris_copy.csv")
pd.DataFrame.to_hdf
python
iris.to_hdf("iris_copy.h5", "df")
Creates a HDF5 file (binary indexed file for faster loading and index filtering during load times). Requires pytables as a dependency if you want to go full on with its functionality
Reshaping
Almost everyone will be familiar with how much you need to reshape the data to plot it properly. This functionality is also pretty well covered in pandas.
pd.melt
End of explanation
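The to_csv/to_hdf snippets above were not executed; as a runnable hedged sketch of the HDF5 round trip (this assumes pytables is installed in the environment):
# Write the iris frame to an HDF5 store, then read it back
iris.to_hdf("iris_copy.h5", "df")
iris_restored = pd.read_hdf("iris_copy.h5", "df")
iris_restored.head()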
heatmap = pd.read_csv("Heatmap.tsv", sep="\t", index_col=0)
heatmap.head(10)
heatmap.iloc[4:8]
heatmap.loc[['prisons', 'jacks', 'irons']]
Explanation: Indexing and Selecting Data
pd.DataFrame and pd.Series support basic array-like indexing. To get into detail, it's better to use .loc and .iloc
End of explanation
def color_negative_red(val):
    """Takes a scalar and returns a string with
    the css property `'color: red'` for negative
    strings, black otherwise.
    """
color = 'red' if val < 0 else 'black'
return 'color: %s' % color
# Apply the function like this
heatmap.head(10).style.applymap(color_negative_red)
heatmap.head(10).style.background_gradient(cmap="RdBu_r")
Explanation: <span class="burk"><span class="girk">Almost forgot, HTML conditional formatting just made it into the latest release 0.17.1 and it's pretty awesome. Use a function to your liking or do it with a background gradient</span></span>
End of explanation
# No need to iter through to apply mean based on species
iris_species_grouped = iris.groupby('species')
iris_species_grouped.mean()
# The previous iterator has reached its end, so re-initialize
iris_species_grouped = iris.groupby('species')
for species, group in iris_species_grouped:
display(HTML(species))
display(pd.DataFrame(group.mean(axis=0)).T)
Explanation: Group-by and apply
You can group data (on both axes) based on a criterion. It returns an iterator but you can directly apply a function without the need to iterate through.
Remember though, you'll get a new index based on what you group with if you directly apply the function without iterating over the groups.
pd.DataFrame.groupby
End of explanation
pd.DataFrame(iris[[0, 1, 2, 3]].apply(np.std, axis=0)).T
def add_length_width(x):
    """Adds up the length and width of the features and returns
    a pd.Series object so as to get a pd.DataFrame.
    """
sepal_sum = x['sepal_length'] + x['sepal_width']
petal_sum = x['petal_length'] + x['petal_width']
return pd.Series([sepal_sum, petal_sum, x['species']],
index=['sepal_sum', 'petal_sum', 'species'])
iris.apply(add_length_width, axis=1).head(5)
Explanation: Applying a function
pd.DataFrame.apply
End of explanation
iris.loc[iris.sepal_width > 3.5]
iris.loc[(iris.sepal_width > 3.5) & (iris.species == 'virginica')]
heatmap.loc[heatmap.index.str.contains("due|ver|ap")]
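# Hedged sketch of the other operators mentioned in the notes below: negation
# with ~ and prefix matching via the index's .str accessor.
heatmap.loc[~heatmap.index.str.startswith("a")].head()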
Explanation: Filtering (Numeric and String)
There's always a need for that. Obviously you need the float & int filters, but there are exceptional string filtering options baked in as well... So much good stuff.
Inside the pd.DataFrame.loc, you can specify and (&), or (|), not (~) as logical operators. This stuff works and is tested ;)
- >, <, >=, <=
- str.contains, str.startswith, str.endswith
End of explanation |
9,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Image Classification with TensorFlow
This notebook demonstrates how to implement a simple linear image model on MNIST using the tf.keras API. It builds the foundation for this <a href="https
Step1: Exploring the data
The MNIST dataset is already included in tensorflow through the keras datasets module. Let's load it and get a sense of the data.
Step2: Each image is 28 x 28 pixels and represents a digit from 0 to 9. These images are black and white, so each pixel is a value from 0 (white) to 255 (black). Raw numbers can be hard to interpret sometimes, so we can plot the values to see the handwritten digit as an image.
Step3: Define the model
Let's start with a very simple linear classifier. This was the first method to be tried on MNIST in 1998, and scored an 88% accuracy. Quite groundbreaking at the time!
We can build our linear classifier using the tf.keras API, so we don't have to define or initialize our weights and biases. This happens automatically for us in the background. We can also add a softmax layer to transform the logits into probabilities. Finally, we can compile the model using categorical cross entropy in order to strongly penalize high probability predictions that were incorrect.
When building more complex models such as DNNs and CNNs our code will be more readable by using the tf.keras API. Let's get one working so we can test it and use it as a benchmark.
Step5: Write Input Functions
As usual, we need to specify input functions for training and evaluating. We'll scale each pixel value so it's a decimal value between 0 and 1 as a way of normalizing the data.
TODO 1
Step6: Time to train the model! The original MNIST linear classifier had an error rate of 12%. Let's use that to sanity check that our model is learning.
Step7: Evaluating Predictions
Were you able to get an accuracy of over 90%? Not bad for a linear estimator! Let's make some predictions and see if we can find where the model has trouble. Change the range of values below to find incorrect predictions, and plot the corresponding images. What would you have guessed for these images?
TODO 2
Step8: It's understandable why the poor computer would have some trouble. Some of these images are difficult for even humans to read. In fact, we can see what the computer thinks each digit looks like.
Each of the 10 neurons in the dense layer of our model has 785 weights feeding into it. That's 1 weight for every pixel in the image + 1 for a bias term. These weights are flattened feeding into the model, but we can reshape them back into the original image dimensions to see what the computer sees.
TODO 3 | Python Code:
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.layers import Dense, Flatten, Softmax
print(tf.__version__)
!python3 -m pip freeze | grep 'tensorflow==2\|tensorflow-gpu==2' || \
python3 -m pip install tensorflow==2
Explanation: MNIST Image Classification with TensorFlow
This notebook demonstrates how to implement a simple linear image model on MNIST using the tf.keras API. It builds the foundation for this <a href="https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb">companion notebook</a>, which explores tackling the same problem with other types of models such as DNN and CNN.
Learning Objectives
Know how to read and display image data
Know how to find incorrect predictions to analyze the model
Visually see how computers see images
This notebook uses TF2.0
Please check your tensorflow version using the cell below. If it is not 2.0, please run the pip line below and restart the kernel.
End of explanation
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
HEIGHT, WIDTH = x_train[0].shape
NCLASSES = tf.size(tf.unique(y_train).y)
print("Image height x width is", HEIGHT, "x", WIDTH)
tf.print("There are", NCLASSES, "classes")
Explanation: Exploring the data
The MNIST dataset is already included in tensorflow through the keras datasets module. Let's load it and get a sense of the data.
End of explanation
IMGNO = 12
# Uncomment to see raw numerical values.
# print(x_test[IMGNO])
plt.imshow(x_test[IMGNO].reshape(HEIGHT, WIDTH));
print("The label for image number", IMGNO, "is", y_test[IMGNO])
Explanation: Each image is 28 x 28 pixels and represents a digit from 0 to 9. These images are black and white, so each pixel is a value from 0 (white) to 255 (black). Raw numbers can be hard to interpret sometimes, so we can plot the values to see the handwritten digit as an image.
End of explanation
def linear_model():
model = Sequential([
Flatten(),
Dense(NCLASSES),
Softmax()
])
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
Explanation: Define the model
Let's start with a very simple linear classifier. This was the first method to be tried on MNIST in 1998, and scored an 88% accuracy. Quite groundbreaking at the time!
We can build our linear classifier using the tf.keras API, so we don't have to define or initialize our weights and biases. This happens automatically for us in the background. We can also add a softmax layer to transform the logits into probabilities. Finally, we can compile the model using categorical cross entropy in order to strongly penalize high probability predictions that were incorrect.
When building more complex models such as DNNs and CNNs our code will be more readable by using the tf.keras API. Let's get one working so we can test it and use it as a benchmark.
End of explanation
BUFFER_SIZE = 5000
BATCH_SIZE = 100
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
def load_dataset(training=True):
    """Loads MNIST dataset into a tf.data.Dataset."""
(x_train, y_train), (x_test, y_test) = mnist
x = x_train if training else x_test
y = y_train if training else y_test
# One-hot encode the classes
y = tf.keras.utils.to_categorical(y, NCLASSES)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.map(scale).batch(BATCH_SIZE)
if training:
dataset = dataset.shuffle(BUFFER_SIZE).repeat()
return dataset
def create_shape_test(training):
dataset = load_dataset(training=training)
data_iter = dataset.__iter__()
(images, labels) = data_iter.get_next()
expected_image_shape = (BATCH_SIZE, HEIGHT, WIDTH)
expected_label_ndim = 2
assert(images.shape == expected_image_shape)
assert(labels.numpy().ndim == expected_label_ndim)
test_name = 'training' if training else 'eval'
print("Test for", test_name, "passed!")
create_shape_test(True)
create_shape_test(False)
Explanation: Write Input Functions
As usual, we need to specify input functions for training and evaluating. We'll scale each pixel value so it's a decimal value between 0 and 1 as a way of normalizing the data.
TODO 1: Define the scale function below and build the dataset
End of explanation
NUM_EPOCHS = 10
STEPS_PER_EPOCH = 100
model = linear_model()
train_data = load_dataset()
validation_data = load_dataset(training=False)
OUTDIR = "mnist_linear/"
checkpoint_callback = ModelCheckpoint(
OUTDIR, save_weights_only=True, verbose=1)
tensorboard_callback = TensorBoard(log_dir=OUTDIR)
history = model.fit(
train_data,
validation_data=validation_data,
epochs=NUM_EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
verbose=2,
callbacks=[checkpoint_callback, tensorboard_callback]
)
BENCHMARK_ERROR = .12
BENCHMARK_ACCURACY = 1 - BENCHMARK_ERROR
accuracy = history.history['accuracy']
val_accuracy = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
assert(accuracy[-1] > BENCHMARK_ACCURACY)
assert(val_accuracy[-1] > BENCHMARK_ACCURACY)
print("Test to beat benchmark accuracy passed!")
assert(accuracy[0] < accuracy[1])
assert(accuracy[1] < accuracy[-1])
assert(val_accuracy[0] < val_accuracy[1])
assert(val_accuracy[1] < val_accuracy[-1])
print("Test model accuracy is improving passed!")
assert(loss[0] > loss[1])
assert(loss[1] > loss[-1])
assert(val_loss[0] > val_loss[1])
assert(val_loss[1] > val_loss[-1])
print("Test loss is decreasing passed!")
Explanation: Time to train the model! The original MNIST linear classifier had an error rate of 12%. Let's use that to sanity check that our model is learning.
End of explanation
image_numbers = range(0, 10, 1) # Change me, please.
def load_prediction_dataset():
dataset = (x_test[image_numbers], y_test[image_numbers])
dataset = tf.data.Dataset.from_tensor_slices(dataset)
dataset = dataset.map(scale).batch(len(image_numbers))
return dataset
predicted_results = model.predict(load_prediction_dataset())
for index, prediction in enumerate(predicted_results):
predicted_value = np.argmax(prediction)
actual_value = y_test[image_numbers[index]]
if actual_value != predicted_value:
print("image number: " + str(image_numbers[index]))
print("the prediction was " + str(predicted_value))
print("the actual label is " + str(actual_value))
print("")
bad_image_number = 8
plt.imshow(x_test[bad_image_number].reshape(HEIGHT, WIDTH));
Explanation: Evaluating Predictions
Were you able to get an accuracy of over 90%? Not bad for a linear estimator! Let's make some predictions and see if we can find where the model has trouble. Change the range of values below to find incorrect predictions, and plot the corresponding images. What would you have guessed for these images?
TODO 2: Change the range below to find an incorrect prediction
End of explanation
DIGIT = 0 # Change me to be an integer from 0 to 9.
LAYER = 1 # Layer 0 flattens image, so no weights
WEIGHT_TYPE = 0 # 0 for variable weights, 1 for biases
dense_layer_weights = model.layers[LAYER].get_weights()
digit_weights = dense_layer_weights[WEIGHT_TYPE][:, DIGIT]
plt.imshow(digit_weights.reshape((HEIGHT, WIDTH)))
Explanation: It's understandable why the poor computer would have some trouble. Some of these images are difficult for even humans to read. In fact, we can see what the computer thinks each digit looks like.
Each of the 10 neurons in the dense layer of our model has 785 weights feeding into it. That's 1 weight for every pixel in the image + 1 for a bias term. These weights are flattened feeding into the model, but we can reshape them back into the original image dimensions to see what the computer sees.
TODO 3: Reshape the layer weights to be the shape of an input image and plot.
End of explanation |
9,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jupyter Notebooks
We are going to be using jupyter notebooks for this course and a version of python known as ipython in the notebooks.
The notebook is a live document that can contain a number of different types of content, including fragments of programs that can actually be run within the text.
You can use the notebooks as a scratchpad to process data, for example, and the workflow that you develop is recorded and can be revisited. It's like a lab notebook but for data manipulation and programming.
You can export the document you produce as a pdf to make a final version.
I am going to encourage you to use notebooks for the many data processing / recording tasks you will face during your degree.
Notebook cells
A notebook is made up of different cells which contain different types of content and which can be executed by selecting the cell and running it (see toolbar).
The results of running a cell depend on the content of the cell. It may produce an output cell beneath it with results, or it may reformat itself.
Try double-clicking on this cell and you will see the raw programming language underneath that produces the formatted text. If you run this cell it will display the text again.
The cells can have different types of content and will behave accordingly when run.
Formatted text
This uses an almost-already-formatted version of text known as markdown to make content that can be formatted to look similar to a word document, but which also tends to highlight the intended formatting even if not processed. It is quite a useful form to learn for taking notes.
If you look at the raw content you will see how to make
bullet points
nested bullet points as well as
text in bold
text in italics (or emphasised)
sh
echo "Code that can be highlighted for different languages"
ls -l
```python
print "including python, unix shell scripts, fortran"
for i in range(0,100)
Step1: When you run a code cell, it is just the same as typing all the code into the interpreter. If you run a cell twice it is the same as if you typed everything twice ... the notebook remembers ! You can run some of a notebook or all of it, and you can run cells out of order. This is great for experimenting and getting things right, but be careful, this can break things easily.
Step2: Hidden cells
In this notebook, I have added some extensions that mean some cells are hidden (I use this to hide some of the details when we learn something complicated). The hidden cells are small, blank lines that you can view by selecting them and hitting the carat button in the toolbar.
There is one hidden below ... see if you can find it !
Step3: Exercise cells
Some cells are intended to be used for class exercises and have a little plus sign which indicates you can expand the cell to see a worked example or answer to a question. These cells are not executed unless they are expanded and run individually. Try this one both ways.
Step4: A more interesting exercise
Why does this not work ?
python
print "Run number {}".format(b)
b += 1
and, more importantly, how would you fix this ? | Python Code:
## Example of a simple python code cell
print "Hello little world"
a = 1
## The last statement in a cell prints its value
a
## (this is sometimes a little confusing - add a pass statement to get rid of this !)
#pass
Explanation: Jupyter Notebooks
We are going to be using jupyter notebooks for this course and a version of python known as ipython in the notebooks.
The notebook is a live document that can contain a number of different types of content, including fragments of programs that can actually be run within the text.
You can use the notebooks as a scratchpad to process data, for example, and the workflow that you develop is recorded and can be revisited. It's like a lab notebook but for data manipulation and programming.
You can export the document you produce as a pdf to make a final version.
I am going to encourage you to use notebooks for the many data processing / recording tasks you will face during your degree.
Notebook cells
A notebook is made up of different cells which contain different types of content and which can be executed by selecting the cell and running it (see toolbar).
The results of running a cell depend on the content of the cell. It may produce an output cell beneath it with results, or it may reformat itself.
Try double-clicking on this cell and you will see the raw programming language underneath that produces the formatted text. If you run this cell it will display the text again.
The cells can have different types of content and will behave accordingly when run.
Formatted text
This uses an almost-already-formatted version of text known as markdown to make content that can be formatted to look similar to a word document, but which also tends to highlight the intended formatting even if not processed. It is quite a useful form to learn for taking notes.
If you look at the raw content you will see how to make
bullet points
nested bullet points as well as
text in bold
text in italics (or emphasised)
sh
echo "Code that can be highlighted for different languages"
ls -l
```python
print "including python, unix shell scripts, fortran"
for i in range(0,100):
print i
```
Mathematical symbols and formulae that can appear in the text like this "the circumference of a circle is $2\pi r$" or as equations like this:
$$
\begin{equation}
A = \pi r^2
\end{equation}
$$
BUT you will need to be able to parse the $\LaTeX$ language to write mathematics.
You can add links like this: also see the markdown cheatsheet
And images like this:
Running cells containing formatted code will replace the cell with the formatted version. This allows you to write a well-formatted page with interleaved code which also can be executed.
Python code
Cells which are defined to be code cells contain python statements which will be executed when the cell is run. If multiple cells have code, then they will be run in the order you choose (or if you "run all" or "run all above" they will be run in the order they are listed).
End of explanation
print "Run number {}".format(a)
a += 1
Explanation: When you run a code cell, it is just the same as typing all the code into the interpreter. If you run a cell twice it is the same as if you typed everything twice ... the notebook remembers ! You can run some of a notebook or all of it, and you can run cells out of order. This is great for experimenting and getting things right, but be careful, this can break things easily.
End of explanation
## The simplest possible python program
print "You can run but you can't hide"
## This is a hidden cell !
## You can't usually see it but it still runs if you execute the notebook
print "Yes you can !"
Explanation: Hidden cells
In this notebook, I have added some extensions that mean some cells are hidden (I use this to hide some of the details when we learn something complicated). The hidden cells are small, blank lines that you can view by selecting them and hitting the carat button in the toolbar.
There is one hidden below ... see if you can find it !
End of explanation
print "You can hide and you can't run !"
Explanation: Exercise cells
Some cells are intended to be used for class exercises and have a little plus sign which indicates you can expand the cell to see a worked example or answer to a question. These cells are not executed unless they are expanded and run individually. Try this one both ways.
End of explanation
# Try it !!
## Because b hasn't been defined.
try:
print "Run number {}".format(c)
except:
print "Run number 1"
c = 1
c += 1
Explanation: A more interesting exercise
Why does this not work ?
python
print "Run number {}".format(b)
b += 1
and, more importantly, how would you fix this ?
End of explanation |