Unnamed: 0 (int64, 0-16k) | text_prompt (string, lengths 110-62.1k) | code_prompt (string, lengths 37-152k)
---|---|---|
13,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="center">
<h1>Welcome to </h1>
<br>
</div>
<div align="center">
<img src='media/mapamundi-bilbao.jpg' width="100%" />
</div>
<div align="center">
<h1>Welcome to </h1>
</div>
<div >
<img src='media/05-Secondary Logo B.png' width=512 />
</div>
Who we are
Fabio Pliger
@fpliger
- EPS Board member
Oier Echaniz
@oiertwo
- ACPySS Chair (On-site team)
<div class="col-md-8">
<h1>Attendees evolution</h1>
</div>
<div class="col-md-4">
<img src='media/05-Secondary Logo B.png' width=512 />
</div>
Step1: Attendees evolution
IN BILBAO
Step2: Wifi information
- Password and SSID | Python Code:
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = [('year', 'location', 'attendees'),
(2002, 'Charleroi', 240),
(2003, 'Charleroi', 300),
(2004, 'Göteborg', 'nan'),
(2005, 'Göteborg', 'nan'),
(2006, 'Geneva', 'nan'),
(2007, 'Vilnius', 'nan'),
(2008, 'Vilnius', 206),
(2009, 'Birmingham', 410),
(2010, 'Birmingham', 446),
(2011, 'Florence', 670),
(2012, 'Florence', 760),
(2013, 'Florence', 870),
(2014, 'Berlin', 1250),
(2015, 'Bilbao', 1100),]
names = data[0]
eps = {name: [] for name in names}
for line in data[1:]:
for pos, name in enumerate(names):
eps[name].append(line[pos])
plt.plot(eps['year'], eps['attendees'])
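# Note (added): the 'nan' attendee counts above are strings mixed in with integers, which
# matplotlib cannot plot cleanly. A minimal clean-up converts everything to float so the
# missing years simply appear as gaps in the line:
attendees = [float(a) for a in eps['attendees']]
plt.plot(eps['year'], attendees)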
Explanation: <div align="center">
<h1>Welcome to </h1>
<br>
</div>
<div align="center">
<img src='media/mapamundi-bilbao.jpg' width="100%" />
</div>
<div align="center">
<h1>Welcome to </h1>
</div>
<div >
<img src='media/05-Secondary Logo B.png' width=512 />
</div>
Who we are
Fabio Pliger
@fpliger
- EPS Board member
Oier Echaniz
@oiertwo
- ACPySS Chair (On-site team)
<div class="col-md-8">
<h1>Attendees evolution</h1>
</div>
<div class="col-md-4">
<img src='media/05-Secondary Logo B.png' width=512 />
</div>
End of explanation
data = [('year', 'location', 'attendees'),
(2014, 'Bilbao', 0),
(2015, 'Bilbao', 1100)]
names = data[0]
eps = {name: [] for name in names}
for line in data[1:]:
for pos, name in enumerate(names):
eps[name].append(line[pos])
plt.plot(eps['year'], eps['attendees'])
Explanation: Attendees evolution
IN BILBAO
End of explanation
from IPython.display import Image
Image('media/coc-bsol.gif')
Explanation: Wifi information
- Password and SSID: europython2015
- If you have problems:
* Move through the venue to find an antenna with an empty spot
* There is nobody in the venue who can solve it for you
Cable (for emergency purposes)
- For speakers
- Only if the wifi is not working properly
- Available at the helpdesk and info desk in case of emergency
<div class="col-md-8">
<h1>Code of conduct </h1>
<ul>
<li>Available online</li>
<li>Violations will not be tolerated</li>
<li>Behave properly</li>
</ul>
<h1>Enjoy the conference </h1>
</div>
<div class= "col-md-4" >
<img src='media/coc-bsol.gif' width="100%" />
</div>
End of explanation |
13,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal 1
Ingest file as pure text
Step1: Import data as a list of lines
Step2: Import data as a data frame
Step3: Goal 2
Step4: Goal 3 | Python Code:
MovieTextFile = open("tmdb_5000_movies.csv")
# for line in MovieTextFile:
# print(line) # not quite right
# type(MovieTextFile)
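# A working way to ingest the file as pure text (a small sketch added for clarity; the
# context manager closes the file handle automatically):
with open("tmdb_5000_movies.csv", encoding="utf8") as raw_file:
    raw_lines = raw_file.readlines()
raw_lines[:2]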
Explanation: Goal 1
Ingest file as pure text
End of explanation
import csv
with open("tmdb_5000_movies.csv",encoding="utf8") as f:
reader = csv.reader(f)
MovieList = list(reader)
MovieList[:5]
Explanation: Import data as a list of lines
End of explanation
import pandas as pd
movies = pd.read_csv("tmdb_5000_movies.csv")
movieFrame = pd.DataFrame(movies)
movieFrame[:5]
Explanation: Import data as a data frame
End of explanation
#pull out genres array of JSON strings from data frame
genres = movieFrame['genres']
# genresFrame = pd.DataFrame(genres)
genres[:5]
# Pull out list of names for each row of the data frame
# Start with testing first row and iterating through JSON string
import json
genreList = []
genre = json.loads(genres[0])
for i,val in enumerate(genre):
genreList.append(genre[i]['name'])
genreList
# Iterate through indices of genre array to create a list of lists of genre names
import json
genresAll = []
for k,x in enumerate(genres):
genreList = []
genre = json.loads(genres[k])
for i,val in enumerate(genre):
genreList.append(genre[i]['name'])
genresAll.append(genreList)
genresAll[:10]
# collect the per-movie genre name lists into a single-column data frame
genreSeries = pd.Series(genresAll, index=movieFrame.index)
genreFrame = pd.DataFrame({'GenreList': genreSeries})
genreFrame[:5]
genreDummies = genreFrame.GenreList.astype(str).str.strip('[]').str.get_dummies(', ')
genreDummies[:10]
# append lists as a column at end of dataframe
movieGenreFrame = pd.merge(movieFrame,genreFrame,how='inner',left_index=True, right_index=True)
movieGenreFrame[:5]
wideMovieFrame = pd.merge(movieGenreFrame,genreDummies,how='inner',left_index=True,right_index=True)
wideMovieFrame[:5]
Explanation: Goal 2
End of explanation
longMovieFrame = pd.melt(wideMovieFrame, id_vars=movieGenreFrame.columns, value_vars=genreDummies.columns,
var_name='Genre',value_name="genre_present")
longMovieFrame[:10]
# test results with 'Avatar' example
longMovieFrame[longMovieFrame['title']=='Avatar']
# If only retaining "true" genres
longMovieFrameTrimmed = longMovieFrame[longMovieFrame['genre_present']==1]
longMovieFrameTrimmed[:5]
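# A quick sanity check (added): how many movies carry each genre in the trimmed long format.
longMovieFrameTrimmed['Genre'].value_counts()[:10]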
Explanation: Goal 3
End of explanation |
13,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Like Polynomial Terms
Remember in Algebra how you had to combine "like terms" to simplify problems?
You'd see expressions such as 60 + 2x^3 - 6x + x^3 + 17x in which there are 5 total terms but only 4 are "like terms".
2x^3 and x^3 are like, and -6x and 17x are like, while 60 doesn't have any like siblings.
Can we teach a model to predict that there are 4 like terms in the above expression?
Let's give it a shot using Mathy to generate math problems and thinc to build a regression model that outputs the number of like terms in each input problem.
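As a rough, hand-rolled illustration of the count we want the model to learn (this sketch does not use Mathy, and the term signatures for the expression above are written out by hand):
from collections import Counter
# each term reduces to a (variable, exponent) signature; the constant 60 gets its own
signatures = [(None, None), ("x", 3), ("x", 1), ("x", 3), ("x", 1)]  # 60, 2x^3, -6x, x^3, 17x
groups = Counter(signatures)
print(sum(count for count in groups.values() if count > 1))  # -> 4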
Step1: Sketch a Model
Before we get started it can be good to have an idea of what input/output shapes we want for our model.
We'll convert text math problems into lists of lists of integers, so our example (X) type can be represented using thinc's Ints2d type.
The model will predict how many like terms there are in each sequence, so our output (Y) type can represented with the Floats2d type.
Knowing the thinc types we want enables us to create an alias for our model, so we only have to type out the verbose generic signature once.
Step2: Encode Text Inputs
Mathy generates ascii-math problems and we have to encode them into integers that the model can process.
To do this we'll build a vocabulary of all the possible characters we'll see, and map each input character to its index in the list.
For math problems our vocabulary will include all the characters of the alphabet, numbers 0-9, and special characters like *, -, ., etc.
Step3: Try It
Let's try it out on some fixed data to be sure it works.
Step4: Generate Math Problems
We'll use Mathy to generate random polynomial problems with a variable number of like terms. The generated problems will act as training data for our model.
Step5: Try It
Step6: Count Like Terms
Now that we can generate input problems, we'll need a function that can count the like terms in each one and return the value for use as a label.
To accomplish this we'll use a few helpers from mathy to enumerate the terms and compare them to see if they're like.
Step7: Try It
Step8: Generate Problem/Answer pairs
Now that we can generate problems, count the number of like terms in them, and encode their text into integers, we have the pieces required to generate random problems and answers that we can train a neural network with.
Let's write a function that will return a tuple of: the problem text, its encoded example form, and the output label.
Step9: Try It
Step10: Build a Model
Now that we can generate X/Y values, let's define our model and verify that it can process a single input/output.
For this we'll use Thinc and the define_operators context manager to connect the pieces together using overloaded operators for chain and clone operations.
Step11: Try It
Let's pass an example through the model to make sure we have all the sizes right.
Step12: Generate Training Datasets
Now that we can generate examples and we have a model that can process them, let's generate random unique training and evaluation datasets.
For this we'll write another helper function that can generate (n) training examples and respects an exclude list to avoid letting examples from the training/test sets overlap.
Step13: Try It
Generate a small dataset to be sure everything is working as expected
Step14: Evaluate Model Performance
We're almost ready to train our model, we just need to write a function that will check a given trained model against a given dataset and return a 0-1 score of how accurate it was.
We'll use this function to print the score as training progresses and print final test predictions at the end of training.
Step15: Try It
Let's try it out with an untrained model and expect to see a really sad score.
Step16: Train/Evaluate a Model
The final helper function we need is one to train and evaluate a model given two input datasets.
This function does a few things
Step17: We'll generate the dataset first, so we can iterate on the model without having to spend time generating examples for each run. This also ensures we have the same dataset across different model runs, to make it easier to compare performance.
Step18: Finally, we can build, train, and evaluate our model!
Step19: Intermediate Exercise
The model we built can train up to ~80% given 100 or more epochs. Improve the model architecture so that it trains to a similar accuracy while requiring fewer epochs or a smaller dataset size.
Step22: Advanced Exercise
Rewrite the model to encode the whole expression with a BiLSTM, and then generate pairs of terms, using the BiLSTM vectors. Over each pair of terms, predict whether the terms are alike or unlike. | Python Code:
!pip install "thinc>=8.0.0" mathy_core
Explanation: Predicting Like Polynomial Terms
Remember in Algebra how you had to combine "like terms" to simplify problems?
You'd see expressions such as 60 + 2x^3 - 6x + x^3 + 17x in which there are 5 total terms but only 4 are "like terms".
2x^3 and x^3 are like, and -6x and 17x are like, while 60 doesn't have any like siblings.
Can we teach a model to predict that there are 4 like terms in the above expression?
Let's give it a shot using Mathy to generate math problems and thinc to build a regression model that outputs the number of like terms in each input problem.
End of explanation
from typing import List
from thinc.api import Model
from thinc.types import Ints2d, Floats1d
ModelX = Ints2d
ModelY = Floats1d
ModelT = Model[List[ModelX], ModelY]
Explanation: Sketch a Model
Before we get started it can be good to have an idea of what input/output shapes we want for our model.
We'll convert text math problems into lists of lists of integers, so our example (X) type can be represented using thinc's Ints2d type.
The model will predict how many like terms there are in each sequence, so our output (Y) type can represented with the Floats2d type.
Knowing the thinc types we want enables us to create an alias for our model, so we only have to type out the verbose generic signature once.
End of explanation
from typing import List
from thinc.api import Model
from thinc.types import Ints2d, Floats1d
from thinc.api import Ops, get_current_ops
vocab = " .+-/^*()[]-01234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
def encode_input(text: str) -> ModelX:
ops: Ops = get_current_ops()
indices: List[List[int]] = []
for c in text:
if c not in vocab:
raise ValueError(f"'{c}' missing from vocabulary in text: {text}")
indices.append([vocab.index(c)])
return ops.asarray2i(indices)
Explanation: Encode Text Inputs
Mathy generates ascii-math problems and we have to encode them into integers that the model can process.
To do this we'll build a vocabulary of all the possible characters we'll see, and map each input character to its index in the list.
For math problems our vocabulary will include all the characters of the alphabet, numbers 0-9, and special characters like *, -, ., etc.
End of explanation
outputs = encode_input("4+2")
assert outputs[0][0] == vocab.index("4")
assert outputs[1][0] == vocab.index("+")
assert outputs[2][0] == vocab.index("2")
print(outputs)
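# The encoding can be inverted for debugging (a small helper added here, not part of the
# original notebook): each index maps back to a character of the vocab string.
decoded = "".join(vocab[int(row[0])] for row in outputs)
assert decoded == "4+2"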
Explanation: Try It
Let's try it out on some fixed data to be sure it works.
End of explanation
from typing import List, Optional, Set
import random
from mathy_core.problems import gen_simplify_multiple_terms
def generate_problems(number: int, exclude: Optional[Set[str]] = None) -> List[str]:
if exclude is None:
exclude = set()
problems: List[str] = []
while len(problems) < number:
text, complexity = gen_simplify_multiple_terms(
random.randint(2, 6),
noise_probability=1.0,
noise_terms=random.randint(2, 10),
op=["+", "-"],
)
assert text not in exclude, "duplicate problem generated!"
exclude.add(text)
problems.append(text)
return problems
Explanation: Generate Math Problems
We'll use Mathy to generate random polynomial problems with a variable number of like terms. The generated problems will act as training data for our model.
End of explanation
generate_problems(10)
Explanation: Try It
End of explanation
from typing import Optional, List, Dict
from mathy_core import MathExpression, ExpressionParser, get_terms, get_term_ex, TermEx
from mathy_core.problems import mathy_term_string
parser = ExpressionParser()
def count_like_terms(input_problem: str) -> int:
expression: MathExpression = parser.parse(input_problem)
term_nodes: List[MathExpression] = get_terms(expression)
node_groups: Dict[str, List[MathExpression]] = {}
for term_node in term_nodes:
ex: Optional[TermEx] = get_term_ex(term_node)
assert ex is not None, f"invalid expression {term_node}"
key = mathy_term_string(variable=ex.variable, exponent=ex.exponent)
if key == "":
key = "const"
if key not in node_groups:
node_groups[key] = [term_node]
else:
node_groups[key].append(term_node)
like_terms = 0
for k, v in node_groups.items():
if len(v) <= 1:
continue
like_terms += len(v)
return like_terms
Explanation: Count Like Terms
Now that we can generate input problems, we'll need a function that can count the like terms in each one and return the value for use as a label.
To accomplish this we'll use a few helpers from mathy to enumerate the terms and compare them to see if they're like.
End of explanation
assert count_like_terms("4x - 2y + q") == 0
assert count_like_terms("x + x + z") == 2
assert count_like_terms("4x + 2x - x + 7") == 3
Explanation: Try It
End of explanation
from typing import Tuple
from thinc.api import Ops, get_current_ops
def to_example(input_problem: str) -> Tuple[str, ModelX, ModelY]:
ops: Ops = get_current_ops()
encoded_input = encode_input(input_problem)
like_terms = count_like_terms(input_problem)
return input_problem, encoded_input, ops.asarray1f([like_terms])
Explanation: Generate Problem/Answer pairs
Now that we can generate problems, count the number of like terms in them, and encode their text into integers, we have the pieces required to generate random problems and answers that we can train a neural network with.
Let's write a function that will return a tuple of: the problem text, its encoded example form, and the output label.
End of explanation
text, X, Y = to_example("x+2x")
assert text == "x+2x"
assert X[0] == vocab.index("x")
assert Y[0] == 2
print(text, X, Y)
Explanation: Try It
End of explanation
from typing import List
from thinc.model import Model
from thinc.api import concatenate, chain, clone, list2ragged
from thinc.api import reduce_sum, Mish, with_array, Embed, residual
def build_model(n_hidden: int, dropout: float = 0.1) -> ModelT:
with Model.define_operators({">>": chain, "|": concatenate, "**": clone}):
model = (
# Iterate over each element in the batch
with_array(
# Embed the vocab indices
Embed(n_hidden, len(vocab), column=0)
# Activate each batch of embedding sequences separately first
>> Mish(n_hidden, dropout=dropout)
)
# Convert to ragged so we can use the reduction layers
>> list2ragged()
# Sum the features for each batch input
>> reduce_sum()
# Process with a small resnet
>> residual(Mish(n_hidden, normalize=True)) ** 4
# Convert (batch_size, n_hidden) to (batch_size, 1)
>> Mish(1)
)
return model
Explanation: Build a Model
Now that we can generate X/Y values, let's define our model and verify that it can process a single input/output.
For this we'll use Thinc and the define_operators context manager to connect the pieces together using overloaded operators for chain and clone operations.
End of explanation
text, X, Y = to_example("14x + 2y - 3x + 7x")
m = build_model(12)
m.initialize([X], m.ops.asarray(Y, dtype="f"))
mY = m.predict([X])
print(mY.shape)
assert mY.shape == (1, 1)
Explanation: Try It
Let's pass an example through the model to make sure we have all the sizes right.
End of explanation
from typing import Tuple, Optional, Set, List
DatasetTuple = Tuple[List[str], List[ModelX], List[ModelY]]
def generate_dataset(
size: int,
exclude: Optional[Set[str]] = None,
) -> DatasetTuple:
ops: Ops = get_current_ops()
texts: List[str] = generate_problems(size, exclude=exclude)
examples: List[ModelX] = []
labels: List[ModelY] = []
for i, text in enumerate(texts):
text, x, y = to_example(text)
examples.append(x)
labels.append(y)
return texts, examples, labels
Explanation: Generate Training Datasets
Now that we can generate examples and we have a model that can process them, let's generate random unique training and evaluation datasets.
For this we'll write another helper function that can generate (n) training examples and respects an exclude list to avoid letting examples from the training/test sets overlap.
End of explanation
texts, x, y = generate_dataset(10)
assert len(texts) == 10
assert len(x) == 10
assert len(y) == 10
Explanation: Try It
Generate a small dataset to be sure everything is working as expected
End of explanation
from typing import List
from wasabi import msg
def evaluate_model(
model: ModelT,
*,
print_problems: bool = False,
texts: List[str],
X: List[ModelX],
Y: List[ModelY],
):
Yeval = model.predict(X)
correct_count = 0
print_n = 12
if print_problems:
msg.divider(f"eval samples max({print_n})")
for text, y_answer, y_guess in zip(texts, Y, Yeval):
y_guess = round(float(y_guess))
correct = y_guess == int(y_answer)
print_fn = msg.fail
if correct:
correct_count += 1
print_fn = msg.good
if print_problems and print_n > 0:
print_n -= 1
print_fn(f"Answer[{int(y_answer[0])}] Guess[{y_guess}] Text: {text}")
if print_problems:
print(f"Model predicted {correct_count} out of {len(X)} correctly.")
return correct_count / len(X)
Explanation: Evaluate Model Performance
We're almost ready to train our model, we just need to write a function that will check a given trained model against a given dataset and return a 0-1 score of how accurate it was.
We'll use this function to print the score as training progresses and print final test predictions at the end of training.
End of explanation
texts, X, Y = generate_dataset(128)
m = build_model(12)
m.initialize(X, m.ops.asarray(Y, dtype="f"))
# Assume the model should do so poorly as to round down to 0
assert round(evaluate_model(m, texts=texts, X=X, Y=Y)) == 0
Explanation: Try It
Let's try it out with an untrained model and expect to see a really sad score.
End of explanation
from thinc.api import Adam
from wasabi import msg
import numpy
from tqdm.auto import tqdm
def train_and_evaluate(
model: ModelT,
train_tuple: DatasetTuple,
eval_tuple: DatasetTuple,
*,
lr: float = 3e-3,
batch_size: int = 64,
epochs: int = 48,
) -> float:
(train_texts, train_X, train_y) = train_tuple
(eval_texts, eval_X, eval_y) = eval_tuple
msg.divider("Train and Evaluate Model")
msg.info(f"Batch size = {batch_size}\tEpochs = {epochs}\tLearning Rate = {lr}")
optimizer = Adam(lr)
best_score: float = 0.0
best_model: Optional[bytes] = None
for n in range(epochs):
loss = 0.0
batches = model.ops.multibatch(batch_size, train_X, train_y, shuffle=True)
for X, Y in tqdm(batches, leave=False, unit="batches"):
Y = model.ops.asarray(Y, dtype="float32")
Yh, backprop = model.begin_update(X)
err = Yh - Y
backprop(err)
loss += (err ** 2).sum()
model.finish_update(optimizer)
score = evaluate_model(model, texts=eval_texts, X=eval_X, Y=eval_y)
if score > best_score:
best_model = model.to_bytes()
best_score = score
print(f"{n}\t{score:.2f}\t{loss:.2f}")
if best_model is not None:
model.from_bytes(best_model)
print(f"Evaluating with best model")
score = evaluate_model(
model, texts=eval_texts, print_problems=True, X=eval_X, Y=eval_y
)
print(f"Final Score: {score}")
return score
Explanation: Train/Evaluate a Model
The final helper function we need is one to train and evaluate a model given two input datasets.
This function does a few things:
Create an Adam optimizer we can use for minimizing the model's prediction error.
Loop over the given training dataset (epoch) number of times.
For each epoch, make batches of (batch_size) examples. For each batch(X), predict the number of like terms (Yh) and subtract the known answers (Y) to get the prediction error. Update the model using the optimizer with the calculated error.
After each epoch, check the model performance against the evaluation dataset.
Save the model weights for the best score out of all the training epochs.
After all training is done, restore the best model and print results from the evaluation set.
End of explanation
train_size = 1024 * 8
test_size = 2048
seen_texts: Set[str] = set()
with msg.loading(f"Generating train dataset with {train_size} examples..."):
train_dataset = generate_dataset(train_size, seen_texts)
msg.good(f"Train set created with {train_size} examples.")
with msg.loading(f"Generating eval dataset with {test_size} examples..."):
eval_dataset = generate_dataset(test_size, seen_texts)
msg.good(f"Eval set created with {test_size} examples.")
init_x = train_dataset[1][:2]
init_y = train_dataset[2][:2]
Explanation: We'll generate the dataset first, so we can iterate on the model without having to spend time generating examples for each run. This also ensures we have the same dataset across different model runs, to make it easier to compare performance.
End of explanation
model = build_model(64)
model.initialize(init_x, init_y)
train_and_evaluate(
model, train_dataset, eval_dataset, lr=2e-3, batch_size=64, epochs=16
)
Explanation: Finally, we can build, train, and evaluate our model!
End of explanation
from typing import List
from thinc.model import Model
from thinc.types import Array2d, Array1d
from thinc.api import chain, clone, list2ragged, reduce_mean, Mish, with_array, Embed, residual
def custom_model(n_hidden: int, dropout: float = 0.1) -> Model[List[Array2d], Array2d]:
# Put your custom architecture here
return build_model(n_hidden, dropout)
model = custom_model(64)
model.initialize(init_x, init_y)
train_and_evaluate(
model, train_dataset, eval_dataset, lr=2e-3, batch_size=64, epochs=16
)
Explanation: Intermediate Exercise
The model we built can train up to ~80% given 100 or more epochs. Improve the model architecture so that it trains to a similar accuracy while requiring fewer epochs or a smaller dataset size.
End of explanation
from dataclasses import dataclass
from thinc.types import Array2d, Ragged
from thinc.model import Model
@dataclass
class Comparisons:
data: Array2d # Batch of vectors for each pair
indices: Array2d # Int array of shape (N, 3), showing the (batch, term1, term2) positions
def pairify() -> Model[Ragged, Comparisons]:
    """Create pair-wise comparisons for items in a sequence. For each sequence of N
    items, there will be (N**2-N)/2 comparisons.
    """
    ...

def predict_over_pairs(model: Model[Array2d, Array2d]) -> Model[Comparisons, Comparisons]:
    """Apply a prediction model over a batch of comparisons. Outputs a Comparisons
    object where the data is the scores. The prediction model should predict over
    two classes, True and False.
    """
    ...
Explanation: Advanced Exercise
Rewrite the model to encode the whole expression with a BiLSTM, and then generate pairs of terms, using the BiLSTM vectors. Over each pair of terms, predict whether the terms are alike or unlike.
End of explanation |
13,203 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Given two sets of points in n-dimensional space, how can one map points from one set to the other, such that each point is only used once and the total euclidean distance between the pairs of points is minimized? | Problem:
import numpy as np
import scipy.spatial
import scipy.optimize
points1 = np.array([(x, y) for x in np.linspace(-1,1,7) for y in np.linspace(-1,1,7)])
N = points1.shape[0]
points2 = 2*np.random.rand(N,2)-1
C = scipy.spatial.distance.cdist(points1, points2)
_, result = scipy.optimize.linear_sum_assignment(C) |
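# The minimised total distance for the matched pairs can then be read off (a short
# addition for illustration):
total_distance = C[np.arange(N), result].sum()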
13,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework for Introduction to Python - Lesson 2
1. Create a list of numbers consisting of 10 elements and assign it to the variable a.
Step1: 2. Do the same with a list of 100 elements and assign it to the variable b.
Step2: 3. Append the following string to both lists
Step3: 4. Delete this last entry from the list again
Step4: 5. Convert every number in lists a and b from int to str
Step5: 6. From list b, show only the last ten numbers
Step6: 7. From the following list, show the second-largest value
Step7: 8. Multiply every number in this list that is smaller than 100 by 100; if a number is between 100 and 1000, convert it to a string; and if it is greater than or equal to 1000, delete it.
Step8: 9. Write a list of dictionaries for the five largest Swiss cities, their population and the respective canton.
Step9: 10. Show only the population of the city of Geneva
Step10: 11. Print the total of the populations of all the cities
Step11: 12. Calculate each city's share of Switzerland's total population and "print" the results next to the city name
Step12: 13. Also add the cities Winterthur and Luzern
Step13: 14. Extend the city dictionary list with Winterthur and Luzern | Python Code:
a = list(range(10))
a
Explanation: Homework for Introduction to Python - Lesson 2
1. Create a list of numbers consisting of 10 elements and assign it to the variable a.
End of explanation
b = list(range(100))
b
Explanation: 2. Do the same with a list of 100 elements and assign it to the variable b.
End of explanation
a.append("ich bin keine Zahl")
a
b.append("ich bin keine Zahl")
Explanation: 3. Append the following string to both lists: 'ich bin keine Zahl'
End of explanation
a.pop() # or alternatively via the remove function: a.remove
a
b.pop()
b
Explanation: 4. Delete this last entry from the list again
End of explanation
a = list(map(str, a))
a
b = list(map(str, b))
b
# Barnaby's solution: for x in a:
a = range(10)
a
# Barnaby's solution: for loops! Very important! We have to go into the list and manipulate it.
empty_list = []
for element_in_the_list in a:
new_element = str(element_in_the_list)
empty_list.append(new_element)
empty_list
Explanation: 5. Convert every number in lists a and b from int to str
End of explanation
b = range(100)
str(b)
lange_leere_liste = []
for x in b:
neues_element = str(x)
lange_leere_liste.append(neues_element)
b[89:95]
Explanation: 6. From list b, show only the last ten numbers
End of explanation
# Step 1: sort; Step 2: take the second-to-last element with -2
c = range(100)
c = range(0,100)
c = list (range(100))
c
c[-2]
sorted(c, reverse=True) # sort and reverse the order
c[1]
c[::2] # every 2nd element
c[::3] # every 3rd element
d = [23, 333, 567, 888]
d
# Python pandas as a helper for lists. Lists are very practical.
Explanation: 7. From the following list, show the second-largest value
End of explanation
# convert back to integers
b = list(map(int, b))
# new list in which the modified data will be stored
b_new = []
# Step 1: multiply by 100 if the element is smaller than 100
for elem in b:
if elem < 100:
temp = elem*100
b_new.append(temp)
else:
b_new.append(elem)
continue
# Step 2: convert to a string if the list element is between 100 and 1000
for elem in b_new:
if elem > 100 and elem < 1000:
index = int(elem/100)
b_new[index]=str(elem)
if elem >= 1000:
for i in [i for i,x in enumerate(b_new) if x == elem]:
del b_new[i]
print(i)
else:
continue
b_new
# Question 8 according to Barnaby
e = list(range(10))
e
# Multiply every number in this list that is smaller than 100 by 100; if a number is between 100 and 1000, convert it to a string; and if it is greater than or equal to 1000, delete it.
e_neue = []
for elem in e:
if elem > 1000:
pass
elif elem > 100:
e_neue.append(str(elem))
else:
elem= elem *100
e_neue.append(elem)
e_neue
e_neue = []
index = 0
for elem in e:
if elem > 1000:
pass
elif elem > 100:
e_neue.append(str(elem))
else:
elem = elem * 100
e_neue.append(elem)
print(index)
index +=1
Explanation: 8. Multiply every number in this list that is smaller than 100 by 100; if a number is between 100 and 1000, convert it to a string; and if it is greater than or equal to 1000, delete it.
End of explanation
# Source: Wikipedia. The largest Swiss cities.
ZH = {'Stadt': 'Zurich', 'Bevölkerung': 396027, 'Kanton': 'ZH'}
GE = {'Stadt': 'Genf', 'Bevölkerung': 194565, 'Kanton': 'GE'}
BS = {'Stadt': 'Basel', 'Bevölkerung': 175131, 'Kanton': 'BS'}
BE = {'Stadt': 'Bern', 'Bevölkerung': 140634, 'Kanton': 'BE'}
LS = {'Stadt': 'Lausanne', 'Bevölkerung': 135629, 'Kanton': 'VD'}
dct_lst = [ZH,GE,BS,BE,LS]
dct_lst
Explanation: 9. Write a list of dictionaries for the five largest Swiss cities, their population and the respective canton.
End of explanation
dct_lst[1]['Bevölkerung']
b = 0
for elem in dct_lst:
    b = b + elem["Bevölkerung"] # entered using the Tab key
print(b)
b
Explanation: 10. Show only the population of the city of Geneva
End of explanation
a=0
for dic in dct_lst:
a = a+ dic['Bevölkerung']
a
Explanation: 11. Print the total of the populations of all the cities
End of explanation
gesamt = 8000000
for city in dct_lst:
prozent = (city['Bevölkerung']/gesamt*100)
    print(city['Stadt'], '::::: ', str(prozent)) # mind the parentheses!
Explanation: 12. Calculate each city's share of Switzerland's total population and "print" the results next to the city name
End of explanation
Winterthur = {'Stadt': 'Winterthur', 'Bevölkerung': 106778, 'Kanton': 'ZH'}
Luzern = {'Stadt': 'Luzern', 'Bevölkerung': 81284, 'Kanton': 'LU'}
dct_lst.append(Winterthur)
dct_lst.append(Luzern)
dct_lst
dct_lst.pop()
Explanation: 13. Also add the cities Winterthur and Luzern
End of explanation
# See above.
Explanation: 14. Extend the city dictionary list with Winterthur and Luzern
End of explanation |
13,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing Test
Consolidating the returned CSVs into one is relatively painless
Main issue is that for some reason the time is still in GMT, and needs 5 hours in milliseconds subtracted from the epoch
Validating against Weather Underground read from O'Hare
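For reference, the time shift can also be expressed with explicit time zones instead of a hard-coded 5-hour offset, which additionally handles daylight saving time (a small sketch, not part of the original notebook; it assumes the pandas tz API and the America/Chicago zone):
import pandas as pd
ts_ms = 1438617600000  # an example epoch value in milliseconds
local = pd.to_datetime(ts_ms, unit='ms', utc=True).tz_convert('America/Chicago')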
Step1: NEXRAD at O'Hare Zip 60666
Step2: Wunderground | Python Code:
s3_client = boto3.client('s3')
resource = boto3.resource('s3')
# Disable signing for anonymous requests to public bucket
resource.meta.client.meta.events.register('choose-signer.s3.*', disable_signing)
def file_list(client, bucket, prefix=''):
paginator = client.get_paginator('list_objects')
for result in client.list_objects(Bucket=bucket, Prefix=prefix, Delimiter='/')['Contents']:
yield result.get('Key')
gen_s3_files = list(file_list(s3_client, 'nexrad-etl', prefix='test-aug3/'))
for i, f in enumerate(gen_s3_files):
s3_client.download_file('nexrad-etl',f,'test-aug3/nexrad{}.csv'.format(i))
folder_files = os.listdir(os.path.join(os.getcwd(), 'test-aug3'))
nexrad_df_list = list()
for f in folder_files:
if f.endswith('.csv'):
try:
nexrad_df_list.append(pd.read_csv('test-aug3/{}'.format(f)))
except:
#print(f)
pass
print(len(nexrad_df_list))
merged_nexrad = pd.concat(nexrad_df_list)
merged_nexrad['timestamp'] = pd.to_datetime(((merged_nexrad['timestamp'] / 1000) - (5*3600*1000)), unit='ms')
#merged_nexrad['timestamp'] = pd.to_datetime(merged_nexrad['timestamp'] / 1000, unit='ms')
merged_nexrad = merged_nexrad.set_index(pd.DatetimeIndex(merged_nexrad['timestamp']))
merged_nexrad = merged_nexrad.sort_values('timestamp')
merged_nexrad = merged_nexrad.fillna(0.0)
# Get diff between previous two reads
merged_nexrad['diff'] = merged_nexrad['timestamp'].diff()
merged_nexrad = merged_nexrad[1:]
print(merged_nexrad.shape)
merged_nexrad.index.min()
merged_nexrad['diff'] = (merged_nexrad['diff'] / np.timedelta64(1, 'm')).astype(float) / 60
merged_nexrad.head()
aug_day_ohare = merged_nexrad['2016-08-12'][['timestamp','60666','diff']]
aug_day_ohare.head()
aug_day_ohare['60666'] = (aug_day_ohare['60666']*aug_day_ohare['diff'])/25.4
aug_day_ohare.head()
Explanation: Processing Test
Consolidating the returned CSVs into one is relatively painless
Main issue is that for some reason the time is still in GMT, and needs 5 hours in milliseconds subtracted from the epoch
Validating against Weather Underground read from O'Hare
End of explanation
# Checking against Weather Underground read for O'Hare on this day
print(aug_day_ohare['60666'].sum())
aug_day_ohare['60666'].plot()
Explanation: NEXRAD at O'Hare Zip 60666
End of explanation
wunderground = pd.read_csv('test-aug3/aug-12.csv')
wunderground['PrecipitationIn'] = wunderground['PrecipitationIn'].fillna(0.0)
wunderground['TimeCDT'] = pd.to_datetime(wunderground['TimeCDT'])
wunderground = wunderground.set_index(pd.DatetimeIndex(wunderground['TimeCDT']))
wund_hour = wunderground['PrecipitationIn'].resample('1H').max()
print(wund_hour.sum())
wund_hour.plot()
Explanation: Wunderground
End of explanation |
13,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What patterns do we see if we average a set of similar/related words(names) and find the words with highest cosine similarity with our average vector?
Step1: Let's see what we get
We will supply the names of all the children of Ned and Catelyn Stark and see what we get back as best avgs
Step2: And the top two best averages? Their parents
Step3: Spoilers
Step4: Model correctly predicted the relationship between the Two Families
Step5: Who's the usurper? a person who takes a position of power or importance illegally or by force.
Step6: Here we obtain words that are used in the same context as usurper or that have some similarity of usage with it. So the model is able to capture this kind of relationship as well.
Step7: Arya - Jon + Ghost = ?
Step8: Dimensionality reduction using tsne | Python Code:
def best_avgs(words, all_vecs,k=10):
from operator import itemgetter
## get word embeddings for the words in our input array
embs = np.array([thrones2vec[word] for word in words])
#calculate its average
avg = np.sum(embs,axis=0)/len(words)
# Cosine Similarity with every word vector in the corpus
denom = np.sqrt(np.sum(all_vecs*all_vecs,axis=1,keepdims=True)) \
* np.sqrt(np.sum(avg*avg))
similarity = all_vecs.dot(avg.T).reshape(all_vecs.shape[0],1) \
/ denom
similarity = similarity.reshape(1,all_vecs.shape[0])[0]
# Finding the 10 largest words with highest similarity
# Since we are averaging we might end up getting the input words themselves
# among the top values
# we need to make sure we get back len(words)+k closest words and then
# remove all input words we supplied
nClosest = k + len(words)
# Get indices of the most similar word vectors to our avgvector
ind = np.argpartition(similarity, -(nClosest))[-nClosest:]
names = [thrones2vec.index2word[indx] for indx in ind]
similarity = similarity[ind]
uniq = [(person,similar) for person,similar in zip(names,similarity) if person not in words]
return sorted(uniq,key=itemgetter(1),reverse=True)[:k]
Explanation: What patterns do we see if we average a set of similar/related words(names) and find the words with highest cosine similarity with our average vector?
End of explanation
children = ["Arya","Robb","Sansa","Bran","Jon"]
best_avgs(children, all_word_vecs, 10)
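# For comparison (added): gensim's built-in most_similar also averages the positive word
# vectors internally, so it should give a closely related ranking.
thrones2vec.most_similar(positive=children, topn=10)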
Explanation: Let's see what we get
We will supply the names of all the children of Ned and Catelyn Stark and see what we get back as best avgs
End of explanation
families = ["Lannister","Stark"]
best_avgs(families, all_word_vecs, 10)
Explanation: And the top two best averages? Their parents: Ned and Catelyn.
Math is Beautiful :)
See if we can get some context about two families from their best average vectors
End of explanation
families = ["Tully","Stark"]
best_avgs(families, all_word_vecs, 10)
Explanation: Spoilers
End of explanation
families = ["Lannister","Baratheon"]
best_avgs(families, all_word_vecs, 10)
Explanation: Model correctly predicted the relationship between the Two Families
End of explanation
thrones2vec.most_similar("usurper")
Explanation: Who's the usurper? a person who takes a position of power or importance illegally or by force.
End of explanation
thrones2vec.most_similar("Tyrion")
thrones2vec.most_similar("Dothraki")
def nearest_similarity_cosmul(start1, end1, end2):
similarities = thrones2vec.most_similar_cosmul(
positive=[end2, start1],
negative=[end1]
)
start2 = similarities[0][0]
print("{start1} is related to {end1}, as {start2} is related to {end2}".format(**locals()))
return start2
nearest_similarity_cosmul("woman","man","king")
nearest_similarity_cosmul("Jaime","Lannister","Stark")
thrones2vec.most_similar("Jaime")
Explanation: Here we obtain words that are used in the same context as usurper or that have some similarity of usage with it. So the model is able to capture this kind of relationship as well.
End of explanation
thrones2vec.most_similar(positive=['Ghost', 'Arya'], negative=['Jon'])
Explanation: Arya - Jon + Ghost = ?
End of explanation
Y = tsne(all_word_vecs.astype('float64'))
points = pd.DataFrame(
[
(word, coords[0], coords[1])
for word, coords in [
(word, Y[thrones2vec.vocab[word].index])
for word in thrones2vec.vocab
]
],
columns=["word", "x", "y"]
)
points.head(10)
sns.set_context("poster")
%pylab inline
points.plot.scatter("x", "y", s=10, figsize=(20, 12))
def plot_region(x_bounds, y_bounds):
slice = points[
(x_bounds[0] <= points.x) &
(points.x <= x_bounds[1]) &
(y_bounds[0] <= points.y) &
(points.y <= y_bounds[1])
]
inwords=[]
ax = slice.plot.scatter("x", "y", s=35, figsize=(10, 8))
for i, point in slice.iterrows():
inwords.append(point.word)
ax.text(point.x + 0.005, point.y + 0.005, point.word, fontsize=11)
print(", ".join(inwords))
plot_region(x_bounds=(-8.0,-6.0), y_bounds=(-29.0, -26.0))
points.loc[points["word"]=="Jaime",:]
plot_region(x_bounds=(28,34), y_bounds=(-5.0,-2.0))
def coords(word):
coord = points.loc[points["word"]==word,:].values[0]
return coord[1],coord[2]
coords("Jon")
def plot_close_to(word):
x,y = coords(word)
plot_region(x_bounds=(x-1.0,x+1.0), y_bounds=(y-1.0,y+1.0))
plot_close_to("apples")
plot_close_to("Winterfell")
plot_close_to("Payne")
for i in ["king","queen","man","woman"]:
print(coords(i))
plot_close_to("Needle")
Explanation: Dimensionality reduction using tsne
End of explanation |
13,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
Consider a binary classification problem. The data and target files are available online. The domain of the problem is chemoinformatics. Data is about toxicity of 4K small molecules.
The creation of a predictive system happens in 3 steps
Step1: load data and convert it to graphs
Step2: 2 Vectorization
setup the vectorizer
Step3: extract features and build data matrix
Step4: 3 Modelling
Induce a predictor and evaluate its performance | Python Code:
from eden.util import load_target
y = load_target( 'http://www.bioinf.uni-freiburg.de/~costa/bursi.target' )
Explanation: Classification
Consider a binary classification problem. The data and target files are available online. The domain of the problem is chemoinformatics. Data is about toxicity of 4K small molecules.
The creation of a predictive system happens in 3 steps:
data conversion: transform instances into a suitable graph format. This is done using specialized programs for each (domain, format) pair. In the example we have molecular graphs encoded using the gSpan format and we will therefore use the 'gspan' tool.
data vectorization: transform graphs into sparse vectors. This is done using the EDeN tool. The vectorizer accepts as parameters the (maximal) size of the fragments to be used as features, this is expressed as the pair 'radius' and the 'distance'. See for details: F. Costa, K. De Grave,''Fast Neighborhood Subgraph Pairwise Distance Kernel'', 27th International Conference on Machine Learning (ICML), 2010.
modelling: fit a predictive system and evaluate its performance. This is done using the tools offered by the scikit library. In the example we will use a Stochastic Gradient Descent linear classifier.
In the following cells there is the code for each step.
Install the library
pip install git+https://github.com/fabriziocosta/EDeN.git --user
1 Conversion
load a target file
End of explanation
from eden.converter.graph.gspan import gspan_to_eden
graphs = gspan_to_eden( 'http://www.bioinf.uni-freiburg.de/~costa/bursi.gspan' )
Explanation: load data and convert it to graphs
End of explanation
from eden.graph import Vectorizer
vectorizer = Vectorizer( r=2,d=5 )
Explanation: 2 Vectorization
setup the vectorizer
End of explanation
%%time
X = vectorizer.transform( graphs )
print 'Instances: %d Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
Explanation: extract features and build data matrix
End of explanation
%%time
#induce a predictive model
from sklearn.linear_model import SGDClassifier
predictor = SGDClassifier(average=True, class_weight='auto', shuffle=True, n_jobs=-1)
from sklearn import cross_validation
scores = cross_validation.cross_val_score(predictor, X, y, cv=10, scoring='roc_auc')
import numpy as np
print('AUC ROC: %.4f +- %.4f' % (np.mean(scores),np.std(scores)))
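# Note (added): in recent scikit-learn releases, class_weight='auto' was replaced by
# 'balanced' and cross_validation moved to sklearn.model_selection, so the lines above
# may need those substitutions on newer installs.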
Explanation: 3 Modelling
Induce a predictor and evaluate its performance
End of explanation |
13,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PROV-O Diagram Rendering Example
This example takes a PROV-O activity graph and uses the PROV Python library, which is an implementation of the Provenance Data Model by the World Wide Web Consortium, to create a graphical representations like PNG, SVG, PDF.
Prerequisites
python libraries - prov[dot]
jupyter
graphviz
Read a simple provenance document
We will use the Example 1 available on https://www.w3.org/TR/prov-o/ e.g. https://www.w3.org/TR/prov-o/#narrative-example-simple-1
Step1: Create some setup variables filename and basename which will be used for the encoding of the outputs
Step2: Use the prov library to deserialize the example document
Step3: Graphics export (PNG and PDF)
In addition to the PROV-N output (as above), the document can be exported into a graphical representation with the help of the GraphViz. It is provided as a software package in popular Linux distributions, or can be downloaded for Windows and Mac.
Once you have GraphViz installed and the dot command available in your operating system's paths, you can save the document we have so far into a PNG file as follows.
Step4: The above saves the PNG file as article-prov.png in your current folder. If you're runing this tutorial in Jupyter Notebook, you can see it here as well.
Step5: Similarly, the above saves the document into a PDF file in your current working folder. Graphviz supports a wide ranges of raster and vector outputs, to which you can export your provenance documents created by the library. To find out what formats are available from your version, run dot -T? at the command line.
PROV-JSON export
PROV-JSON is a JSON representation for PROV that was designed for the ease of accessing various PROV elements in a PROV document and to work well with web applications. The format is natively supported by the library and is its default serialisation format.
Step6: You can also serialize the document directly to a file by providing a filename (below) or a Python File object. | Python Code:
from prov.model import ProvDocument
import prov.model as pm
Explanation: PROV-O Diagram Rendering Example
This example takes a PROV-O activity graph and uses the PROV Python library, which is an implementation of the Provenance Data Model by the World Wide Web Consortium, to create graphical representations like PNG, SVG, or PDF.
Prerequisites
python libraries - prov[dot]
jupyter
graphviz
Read a simple provenance document
We will use the Example 1 available on https://www.w3.org/TR/prov-o/ e.g. https://www.w3.org/TR/prov-o/#narrative-example-simple-1
To create a provenance document (a package of provenance statements or assertions), import ProvDocument class from prov.model:
End of explanation
filename = "rdf/prov-ex1.ttl"
basename = "prov-ex1"
Explanation: Create some setup variables filename and basename which will be used for the encoding of the outputs
End of explanation
# Create a new provenance document
d1 = pm.ProvDocument.deserialize(filename, format="rdf")
Explanation: Use the prov library to deserialize the example document
End of explanation
# visualize the graph
from prov.dot import prov_to_dot
dot = prov_to_dot(d1)
dot.write_png(basename + '.png')
Explanation: Graphics export (PNG and PDF)
In addition to the PROV-N output (as above), the document can be exported into a graphical representation with the help of the GraphViz. It is provided as a software package in popular Linux distributions, or can be downloaded for Windows and Mac.
Once you have GraphViz installed and the dot command available in your operating system's paths, you can save the document we have so far into a PNG file as follows.
End of explanation
from IPython.display import Image
Image(basename + '.png')
# Or save to a PDF
dot.write_pdf(basename + '.pdf')
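# An SVG export works the same way (added; pydot exposes a write_<format> method for each
# Graphviz output format):
dot.write_svg(basename + '.svg')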
Explanation: The above saves the PNG file as article-prov.png in your current folder. If you're runing this tutorial in Jupyter Notebook, you can see it here as well.
End of explanation
print(d1.serialize(indent=2))
Explanation: Similarly, the above saves the document into a PDF file in your current working folder. Graphviz supports a wide ranges of raster and vector outputs, to which you can export your provenance documents created by the library. To find out what formats are available from your version, run dot -T? at the command line.
PROV-JSON export
PROV-JSON is a JSON representation for PROV that was designed for the ease of accessing various PROV elements in a PROV document and to work well with web applications. The format is natively supported by the library and is its default serialisation format.
End of explanation
d1.serialize(basename + '.json')
Explanation: You can also serialize the document directly to a file by providing a filename (below) or a Python File object.
End of explanation |
13,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
source_sentences = source_text.split('\n')
target_sentences = [sentence + ' <EOS>' for sentence in target_text.split('\n')]
source_ids = [
[source_vocab_to_int[word]for word in line.split()]
for line in source_sentences
]
target_ids = [
[target_vocab_to_int[word] for word in line.split()]
for line in target_sentences
]
return source_ids, target_ids
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
"""DON'T MODIFY ANYTHING IN THIS CELL"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""DON'T MODIFY ANYTHING IN THIS CELL"""
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""DON'T MODIFY ANYTHING IN THIS CELL"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate, keep probability)
    """
# TODO: Implement Function
return (
tf.placeholder(tf.int32, [None, None], name='input'),
tf.placeholder(tf.int32, [None, None], name='targets'),
tf.placeholder(tf.float32, name='learning_rate'),
tf.placeholder(tf.float32, name='keep_prob')
)
"""DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE"""
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple: (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# taking from here:
# https://github.com/udacity/deep-learning/blob/master/seq2seq/sequence_to_sequence_implementation.ipynb
go_id = target_vocab_to_int['<GO>']
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], go_id), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
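For intuition, here is a small NumPy sketch of the same transformation (toy word ids; assume the <GO> token maps to id 1):
import numpy as np
toy_targets = np.array([[4, 5, 6], [7, 8, 9]])
go_id = 1
toy_dec_input = np.concatenate(
    [np.full((toy_targets.shape[0], 1), go_id), toy_targets[:, :-1]], axis=1)
# toy_dec_input -> [[1, 4, 5], [1, 7, 8]]  (last id dropped, <GO> prepended)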
End of explanation
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
lstm = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
_, state = tf.nn.dynamic_rnn(
cell, rnn_inputs, dtype=tf.float32
)
return state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
# apply dropout on training
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope
)
return output_fn(train_pred)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn,
encoder_state,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
maximum_length,
vocab_size
)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell,
infer_decoder_fn,
scope=decoding_scope
)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# create RNN cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
rnn_cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
# create output function using lambda
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
with tf.variable_scope('decoding') as decoding_scope:
# use `decoding_layer_train` function to get the training logits
training_logits = decoding_layer_train(
encoder_state,
rnn_cell,
dec_embed_input,
sequence_length,
decoding_scope,
output_fn,
keep_prob
)
# Use `decoding_layer_infer` to get the inference logits
decoding_scope.reuse_variables()
inference_logits = decoding_layer_infer(
encoder_state,
rnn_cell,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
sequence_length,
vocab_size,
decoding_scope,
output_fn,
keep_prob
)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# apply embedding to the input data
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
# encode the input using `encoding_layer`
enc_layer = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
# process target data using `process_decoding_input`
dec_input = process_decoding_input(
target_data,
target_vocab_to_int,
batch_size
)
# apply embedding to the target data
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# decode the encoded input using the `decoding_layer`
train_logits, inf_logits = decoding_layer(
dec_embed_input,
dec_embeddings,
enc_layer,
target_vocab_size,
sequence_length,
rnn_size,
num_layers,
target_vocab_to_int,
keep_prob
)
return train_logits, inf_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 12
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# convert the sentence to lowercase
sentence_lower = sentence.lower()
# convert words into ids using vocab_to_int
unknown_id = vocab_to_int['<UNK>']
word_ids = [vocab_to_int.get(i, unknown_id) for i in sentence_lower.split()]
return word_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
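For example (toy vocabulary shown only for illustration; the real vocab_to_int comes from the preprocessed data):
toy_vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
sentence_to_seq('He saw a RED truck', toy_vocab_to_int)
# -> [1, 2, 3, 0, 4]   ('red' is not in the vocabulary, so it maps to the <UNK> id)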
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
13,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https://www.dataquest.io/mission/185/challenge-data-visualization/
Step1: 2
Step2: 3
Step3: 4
Step4: 5
Step5: 6 | Python Code:
# %sh
# wget https://raw.githubusercontent.com/jgoodall/cinevis/master/data/csvs/moviedata.csv
# ls -l
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
hollywood_movies = pd.read_csv('moviedata.csv')
print hollywood_movies.head()
print hollywood_movies['exclude'].value_counts()
hollywood_movies = hollywood_movies.drop('exclude', axis=1)
Explanation: https://www.dataquest.io/mission/185/challenge-data-visualization/
1: Introduction To The Data
In this challenge, you'll practice creating data visualizations using data on Hollywood movies that were released between 2007 and 2011. The goal is to better understand the underlying economics of Hollywood and explore the outlier nature of success of movies. The dataset was compiled by David McCandless and you can read about how the data was compiled here. You'll use a version of this dataset that was compiled by John Goodall, which can be downloaded from his Github repo here.
End of explanation
fig = plt.figure(figsize=(6, 10))
ax1 = fig.add_subplot(2, 1, 1)
ax1.scatter(hollywood_movies['Profitability'], hollywood_movies['Audience Rating'])
ax1.set(xlabel='Profitability', ylabel='Audience Rating', title='Hollywood Movies, 2007-2011')
ax2 = fig.add_subplot(2, 1, 2)
ax2.scatter(hollywood_movies['Audience Rating'], hollywood_movies['Profitability'])
ax2.set(xlabel='Audience Rating', ylabel='Profitability', title='Hollywood Movies, 2007-2011')
plt.show()
Explanation: 2: Scatter Plots - Profitability And Audience Ratings
Let's generate 2 scatter plots to better understand the relationship between the profitability of a movie and how an audience rated it.
End of explanation
from pandas.tools.plotting import scatter_matrix
normal_movies = hollywood_movies[hollywood_movies['Film'] != 'Paranormal Activity']
scatter_matrix(normal_movies[['Profitability', 'Audience Rating']], figsize=(6,6))
plt.show()
Explanation: 3: Scatter Matrix - Profitability And Critic Ratings
Both scatter plots in the previous step contained 1 outlier data point, which caused the scale of both plots to be incredibly lopsided to accommodate this one outlier. The movie in question is Paranormal Activity, which is widely known as the most profitable movie ever. The movie brought in $193.4 million in revenue with a budget of only $15,000. Let's filter out this movie so you can create useful visualizations with the rest of the data.
End of explanation
fig = plt.figure()
normal_movies.boxplot(['Critic Rating', 'Audience Rating'])
plt.show()
Explanation: 4: Box Plot - Audience And Critic Ratings
Let's use box plots to better understand the distributions of ratings by critics versus ratings by the audience.
Use the Pandas Dataframe method plot to generate boxplots for the Critic Rating and Audience Rating columns.
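On newer pandas versions, a roughly equivalent way to draw the same plot (illustrative only, not part of the original exercise) is:
normal_movies[['Critic Rating', 'Audience Rating']].plot(kind='box')
plt.show()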
End of explanation
normal_movies = normal_movies.sort(columns='Year')
fig = plt.figure(figsize=(8,4))
ax1 = fig.add_subplot(1, 2, 1)
sns.boxplot(x=normal_movies['Year'], y=normal_movies['Critic Rating'], ax=ax1)
ax2 = fig.add_subplot(1, 2, 2)
sns.boxplot(x=normal_movies['Year'], y=normal_movies['Audience Rating'], ax=ax2)
plt.show()
Explanation: 5: Box Plot - Critic Vs Audience Ratings Per Year
Now that you've visualized the total distribution of both the ratings columns, visualize how this distribution changed year to year.
End of explanation
def is_profitable(row):
if row["Profitability"] <= 1.0:
return False
return True
normal_movies["Profitable"] = normal_movies.apply(is_profitable, axis=1)
fig = plt.figure(figsize=(8,6))
ax1 = fig.add_subplot(1, 2, 1)
sns.boxplot(x=normal_movies['Profitable'], y=normal_movies['Audience Rating'], ax=ax1)
ax2 = fig.add_subplot(1, 2, 2)
sns.boxplot(x=normal_movies['Profitable'], y=normal_movies['Critic Rating'], ax=ax2)
plt.show()
Explanation: 6: Box Plots - Profitable Vs Unprofitable Movies
Many Hollywood movies aren't profitable and it's interesting to understand the role of ratings in a movie's profitability. You first need to separate the movies into those that were profitable and those that weren't.
We've created a new Boolean column called Profitable with the following specification:
False if the value for Profitability is less than or equal to 1.0. <br/>
True if the value for Profitability is greater than 1.0.
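A vectorized one-liner that produces the same column (a sketch, assuming there are no missing Profitability values) is:
normal_movies["Profitable"] = normal_movies["Profitability"] > 1.0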
End of explanation |
13,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning with TensorFlow
Credits
Step2: Download the data from the source website if necessary.
Step3: Read the data into a string.
Step4: Build the dictionary and replace rare words with UNK token.
Step5: Function to generate a training batch for the skip-gram model.
Step6: Train a skip-gram model. | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import urllib
import zipfile
from matplotlib import pylab
from sklearn.manifold import TSNE
Explanation: Deep Learning with TensorFlow
Credits: Forked from TensorFlow by Google
Setup
Refer to the setup instructions.
Exercise 5
The goal of this exercise is to train a skip-gram model over Text8 data.
End of explanation
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
Download a file if not present, and make sure it's the right size.
if not os.path.exists(filename):
filename, _ = urllib.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print 'Found and verified', filename
else:
print statinfo.st_size
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
Explanation: Download the data from the source website if necessary.
End of explanation
def read_data(filename):
f = zipfile.ZipFile(filename)
for name in f.namelist():
return f.read(name).split()
f.close()
words = read_data(filename)
print 'Data size', len(words)
Explanation: Read the data into a string.
End of explanation
vocabulary_size = 50000
def build_dataset(words):
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print 'Most common words (+UNK)', count[:5]
print 'Sample data', data[:10]
del words # Hint to reduce memory.
Explanation: Build the dictionary and replace rare words with UNK token.
End of explanation
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size / num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
print batch[i], '->', labels[i, 0]
print reverse_dictionary[batch[i]], '->', reverse_dictionary[labels[i, 0]]
Explanation: Function to generate a training batch for the skip-gram model.
End of explanation
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(xrange(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default():
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print "Initialized"
average_loss = 0
for step in xrange(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print "Average loss at step", step, ":", average_loss
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in xrange(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = "Nearest to %s:" % valid_word
for k in xrange(top_k):
close_word = reverse_dictionary[nearest[k]]
log = "%s %s," % (log, close_word)
print log
final_embeddings = normalized_embeddings.eval()
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in xrange(1, num_points+1)]
plot(two_d_embeddings, words)
Explanation: Train a skip-gram model.
End of explanation |
13,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Don't forget to delete the hdmi_out and hdmi_in when finished
Text Overlay Filter Example
In this notebook, we will demonstrate how to use the overlay filter. The overlay filter scrolls text across the video stream. The text, size and color can be controlled in the Jupyter notebook. In order to do this, we take advantage of many writable registers in the PYNQ.
<img src="data/text.png"/>
This filter has a font stored in BRAM. The filter grabs the characters needed based on the ASCII codes given. Once retrieved, the font's size and color can be changed. These are controlled by registers. A counter is then used to change the text location as it moves across the screen. The speed of the scroll is based on a register.
1. Download base overlay to the board
Ensure that the camera is not connected to the board. Run the following script to provide the PYNQ with its base overlay.
Step1: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port).
Step2: 3. Program the board and User interface
For this filter we use the user interface to program the board. The interface is explained below. Run the following script to start the interface.
Step3: 5. User interface instructions
Buttons | Python Code:
from pynq.drivers.video import HDMI
from pynq import Bitstream_Part
from pynq.board import Register
from pynq import Overlay
Overlay("demo.bit").download()
Explanation: Don't forget to delete the hdmi_out and hdmi_in when finished
Text Overlay Filter Example
In this notebook, we will demonstrate how to use the overlay filter. The overlay filter scrolls text across the video stream. The text, size and color can be controlled in the Jupyter notebook. In order to do this, we take advantage of many writable registers in the PYNQ.
<img src="data/text.png"/>
This filter has a font stored in BRAM. The filter grabs the characters needed based on the ASCII codes given. Once retrieved, the font's size and color can be changed. These are controlled by registers. A counter is then used to change the text location as it moves across the screen. The speed of the scroll is based on a register.
1. Download base overlay to the board
Ensure that the camera is not connected to the board. Run the following script to provide the PYNQ with its base overlay.
End of explanation
hdmi_in = HDMI('in')
hdmi_out = HDMI('out', frame_list=hdmi_in.frame_list)
hdmi_out.mode(2)
hdmi_out.start()
hdmi_in.start()
Explanation: 2. Connect camera
Physically connect the camera to the HDMI-in port of the PYNQ. Run the following code to instruct the PYNQ to capture the video from the camera and to begin streaming video to your monitor (connected to the HDMI-out port).
End of explanation
import ipywidgets as widgets
R0 =Register(0)
R1 =Register(1)
R2 =Register(2)
R3 =Register(3)
R4 =Register(4)
R5 =Register(5)
R6 =Register(6)
R7 =Register(7)
R8 =Register(8)
R9 =Register(9)
R10 =Register(10)
R11 =Register(11)
R12 =Register(12)
R13 =Register(13)
R14 =Register(14)
R15 =Register(15)
R16 =Register(16)
R17 =Register(17)
R18 =Register(18)
R19 =Register(19)
R20 =Register(20)
R21 =Register(21)
R22 =Register(22)
R23 =Register(23)
R24 =Register(24)
R0.write(0)
R1.write(150)
R2.write(1150)
R3.write(32)
R4.write(0)
R5.write(0)
R6.write(0)
R7.write(0)
R8.write(1)
R0_s = widgets.IntSlider(
value=0,
min=0,
max=1000,
step=1,
description='Y axis',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R1_s = widgets.IntSlider(
value=150,
min=130,
max=400,
step=1,
description='Left Limit',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R2_s = widgets.IntSlider(
value=1150,
min=500,
max=1200,
step=1,
description='Right Limit',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R3_s = widgets.IntSlider(
value=32,
min=0,
max=64,
step=1,
description='# Chars',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R4_s = widgets.IntSlider(
value=1,
min=0,
max=4,
step=1,
description='Text size',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R5_s = widgets.IntSlider(
value=0,
min=0,
max=255,
step=1,
description='Red',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='red'
)
R6_s = widgets.IntSlider(
value=0,
min=0,
max=255,
step=1,
description='Green',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='green'
)
R7_s = widgets.IntSlider(
value=0,
min=0,
max=255,
step=1,
description='Blue',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='blue'
)
R8_s = widgets.IntSlider(
value=1,
min=1,
max=3,
step=1,
description='Speed',
disabled=False,
continuous_update=True,
orientation='vertical',
readout=True,
readout_format='i',
slider_color='black'
)
R9_s = widgets.Textarea(
value='Hello World',
placeholder='Type something',
description='Text:',
disabled=False
)
def update_r0(*args):
R0.write(R0_s.value)
R0_s.observe(update_r0, 'value')
def update_r1(*args):
R1.write(R1_s.value)
R1_s.observe(update_r1, 'value')
def update_r2(*args):
R2.write(R2_s.value)
R2_s.observe(update_r2, 'value')
def update_r3(*args):
R3.write(R3_s.value)
R3_s.observe(update_r3, 'value')
def update_r4(*args):
R4.write(R4_s.value)
R4_s.observe(update_r4, 'value')
def update_r5(*args):
R5.write(R5_s.value)
R5_s.observe(update_r5, 'value')
def update_r6(*args):
R6.write(R6_s.value)
R6_s.observe(update_r6, 'value')
def update_r7(*args):
R7.write(R7_s.value)
R7_s.observe(update_r7, 'value')
def update_r8(*args):
if (R8_s.value == 3):
R8.write(4)
else :R8.write(R8_s.value)
R8_s.observe(update_r8, 'value')
def update_textRegisters(*args):
complete = R9_s.value
if (len(complete) > 64):
complete = complete[:(64-len(complete))]
for x in range(0,64-len(complete)):
complete = ' ' + complete
text = 0
for c in complete[:-60]:
text = text << 8
text += ord(c)
R9.write(text)
text = 0
for c in complete[4:-56]:
text = text << 8
text += ord(c)
R10.write(text)
text = 0
for c in complete[8:-52]:
text = text << 8
text += ord(c)
R11.write(text)
text = 0
for c in complete[12:-48]:
text = text << 8
text += ord(c)
R12.write(text)
text = 0
for c in complete[16:-44]:
text = text << 8
text += ord(c)
R13.write(text)
text = 0
for c in complete[20:-40]:
text = text << 8
text += ord(c)
R14.write(text)
text = 0
for c in complete[24:-36]:
text = text << 8
text += ord(c)
R15.write(text)
text = 0
for c in complete[28:-32]:
text = text << 8
text += ord(c)
    R16.write(text)  # fixed: this 4-character chunk belongs in register R16 (it was written to R15 twice)
text = 0
for c in complete[32:-28]:
text = text << 8
text += ord(c)
R17.write(text)
text = 0
for c in complete[36:-24]:
text = text << 8
text += ord(c)
R18.write(text)
text = 0
for c in complete[40:-20]:
text = text << 8
text += ord(c)
R19.write(text)
text = 0
for c in complete[44:-16]:
text = text << 8
text += ord(c)
R20.write(text)
text = 0
for c in complete[48:-12]:
text = text << 8
text += ord(c)
R21.write(text)
text = 0
for c in complete[52:-8]:
text = text << 8
text += ord(c)
R22.write(text)
text = 0
for c in complete[56:-4]:
text = text << 8
text += ord(c)
R23.write(text)
text = 0
for c in complete[60:]:
text = text << 8
text += ord(c)
R24.write(text)
R9_s.observe(update_textRegisters, 'value')
from IPython.display import clear_output
from ipywidgets import Button, HBox, VBox
words = ['HDMI Reset','Program']
items = [Button(description=w) for w in words]
def on_hdmi_clicked(b):
hdmi_out.stop()
hdmi_in.stop()
hdmi_out.start()
hdmi_in.start()
def on_program_clicked(b):
Bitstream_Part("text_p.bit").download()
olrd_str = "orld"
lo_W_str = "lo W"
hel_str = "Hel"
olrd = 0
for c in olrd_str:
olrd = olrd << 8
olrd += ord(c)
lo_W = 0
for c in lo_W_str:
lo_W = lo_W << 8
lo_W += ord(c)
hel = 0
for c in hel_str:
hel = hel << 8
hel += ord(c)
R0.write(0)
R1.write(150)
R2.write(1150)
R3.write(32)
R4.write(0)
R5.write(0)
R6.write(0)
R7.write(0)
R8.write(1)
R22.write(hel)
R23.write(lo_W)
R24.write(olrd)
items[0].on_click(on_hdmi_clicked)
items[1].on_click(on_program_clicked)
widgets.HBox([VBox([items[0], items[1]]),R0_s,R1_s, R2_s, R3_s, R4_s, R5_s, R6_s, R7_s, R8_s, R9_s])
Explanation: 3. Program the board and User interface
For this filter we use the user interface to program the board. The interface is explained below. Run the following script to start the interface.
End of explanation
hdmi_out.stop()
hdmi_in.stop()
del hdmi_out
del hdmi_in
Explanation: 5. User interface instructions
Buttons:
HDMI Reset - Resets the HDMI
Program - Programs and starts the text overlay
Sliders:
Y axis - Where the text appears vertically
Left Limit - Where the text leaves the screen on the x axis
Right Limit - Where the text enters the screen on the x axis
#Chars - The number of characters shown on the screen. The maximum is 64. If your text is shorter than this number, blank text will be added in front of the text. If your text is longer than this number, the extra text will be cut off.
Text Size - Size of the text; this increases the font size by a factor of 2.
Red, Green, Blue - Font color
Speed - The speed of the scrolling
Text box:
Text - The text to appear on the filter, max length of 64 characters.
(Note: to avoid any potential glitching, set the text size to 1 when changing the left and right limits or speed. If glitching does occur it will only happen during one iteration. If this is not the case and glitching continues, hit the program button again.)
5. Clean up
When you are done playing with the filter, run the following code to stop the video stream
End of explanation |
13,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Portfolio Optimization
“Modern Portfolio Theory (MPT), a hypothesis put forth by Harry Markowitz in his paper “Portfolio Selection,” (published in 1952 by the Journal of Finance) is an investment theory based on the idea that risk-averse investors can construct portfolios to optimize or maximize expected return based on a given level of market risk, emphasizing that risk is an inherent part of higher reward. It is one of the most important and influential economic theories dealing with finance and investment.
Monte Carlo Simulation for Optimization Search
We could randomly try to find the optimal portfolio balance using Monte Carlo simulation
Step1: Simulating Thousands of Possible Allocations
Step2: Log Returns vs Arithmetic Returns
We will now switch over to using log returns instead of arithmetic returns; for many of our use cases they are almost the same, but most technical analyses require detrending/normalizing the time series, and using log returns is a nice way to do that.
Log returns are convenient to work with in many of the algorithms we will encounter.
For a full analysis of why we use log returns, check this great article.
Step3: Single Run for Some Random Allocation
Step4: Great! Now we can just run this many times over!
Step5: Plotting the data
Step7: Mathematical Optimization
There are much better ways to find good allocation weights than just guess and check! We can use optimization functions to find the ideal weights mathematically!
Functionalize Return and SR operations
Step8: To fully understand all the parameters, check out
Step9: Optimization works as a minimization function, since we actually want to maximize the Sharpe Ratio, we will need to turn it negative so we can minimize the negative sharpe (same as maximizing the postive sharpe)
Step10: All Optimal Portfolios (Efficient Frontier)
The efficient frontier is the set of optimal portfolios that offers the highest expected return for a defined level of risk or the lowest risk for a given level of expected return. Portfolios that lie below the efficient frontier are sub-optimal, because they do not provide enough return for the level of risk. Portfolios that cluster to the right of the efficient frontier are also sub-optimal, because they have a higher level of risk for the defined rate of return.
Efficient Frontier http://www.investopedia.com/terms/e/efficientfrontier | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Download and get Daily Returns
aapl = pd.read_csv('AAPL_CLOSE',
index_col = 'Date',
parse_dates = True)
cisco = pd.read_csv('CISCO_CLOSE',
index_col = 'Date',
parse_dates = True)
ibm = pd.read_csv('IBM_CLOSE',
index_col = 'Date',
parse_dates = True)
amzn = pd.read_csv('AMZN_CLOSE',
index_col = 'Date',
parse_dates = True)
stocks = pd.concat([aapl, cisco, ibm, amzn],
axis = 1)
stocks.columns = ['aapl','cisco','ibm','amzn']
stocks.head()
mean_daily_ret = stocks.pct_change(1).mean()
mean_daily_ret
stocks.pct_change(1).corr()
Explanation: Portfolio Optimization
“Modern Portfolio Theory (MPT), a hypothesis put forth by Harry Markowitz in his paper “Portfolio Selection,” (published in 1952 by the Journal of Finance) is an investment theory based on the idea that risk-averse investors can construct portfolios to optimize or maximize expected return based on a given level of market risk, emphasizing that risk is an inherent part of higher reward. It is one of the most important and influential economic theories dealing with finance and investment.
Monte Carlo Simulation for Optimization Search
We could randomly try to find the optimal portfolio balance using Monte Carlo simulation
End of explanation
stocks.head()
stock_normed = stocks/stocks.iloc[0]
stock_normed.plot()
stock_daily_ret = stocks.pct_change(1)
stock_daily_ret.head()
Explanation: Simulating Thousands of Possible Allocations
End of explanation
log_ret = np.log(stocks / stocks.shift(1))
log_ret.head()
log_ret.hist(bins = 100,
figsize = (12, 6));
plt.tight_layout()
log_ret.describe().transpose()
log_ret.mean() * 252
# Compute pairwise covariance of columns
log_ret.cov()
log_ret.cov() * 252 # multiply by days
Explanation: Log Returns vs Arithmetic Returns
We will now switch over to using log returns instead of arithmetic returns; for many of our use cases they are almost the same, but most technical analyses require detrending/normalizing the time series, and using log returns is a nice way to do that.
Log returns are convenient to work with in many of the algorithms we will encounter.
For a full analysis of why we use log returns, check this great article.
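A quick numerical sanity check of the relationship (not part of the original notebook): log returns are simply the log of one plus the arithmetic returns.
arith_ret = stocks.pct_change(1)
print(np.allclose(np.log(1 + arith_ret).dropna(), log_ret.dropna()))  # expect True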
End of explanation
# Set seed (optional)
np.random.seed(101)
# Stock Columns
print('Stocks')
print(stocks.columns)
print('\n')
# Create Random Weights
print('Creating Random Weights')
weights = np.array(np.random.random(4))
print(weights)
print('\n')
# Rebalance Weights
print('Rebalance to sum to 1.0')
weights = weights / np.sum(weights)
print(weights)
print('\n')
# Expected Return
print('Expected Portfolio Return')
exp_ret = np.sum(log_ret.mean() * weights) *252
print(exp_ret)
print('\n')
# Expected Variance
print('Expected Volatility')
exp_vol = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
print(exp_vol)
print('\n')
# Sharpe Ratio
SR = exp_ret/exp_vol
print('Sharpe Ratio')
print(SR)
Explanation: Single Run for Some Random Allocation
End of explanation
num_ports = 15000
all_weights = np.zeros((num_ports, len(stocks.columns)))
ret_arr = np.zeros(num_ports)
vol_arr = np.zeros(num_ports)
sharpe_arr = np.zeros(num_ports)
for ind in range(num_ports):
# Create Random Weights
weights = np.array(np.random.random(4))
# Rebalance Weights
weights = weights / np.sum(weights)
# Save Weights
all_weights[ind,:] = weights
# Expected Return
ret_arr[ind] = np.sum((log_ret.mean() * weights) *252)
# Expected Variance
vol_arr[ind] = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
# Sharpe Ratio
sharpe_arr[ind] = ret_arr[ind] / vol_arr[ind]
sharpe_arr.max()
sharpe_arr.argmax()
all_weights[1419,:]
max_sr_ret = ret_arr[1419]
max_sr_vol = vol_arr[1419]
Explanation: Great! Now we can just run this many times over!
End of explanation
plt.figure(figsize = (12, 8))
plt.scatter(vol_arr,
ret_arr,
c = sharpe_arr,
cmap = 'plasma')
plt.colorbar(label = 'Sharpe Ratio')
plt.xlabel('Volatility')
plt.ylabel('Return')
# Add red dot for max SR
plt.scatter(max_sr_vol,
max_sr_ret,
c = 'red',
s = 50,
edgecolors = 'black')
Explanation: Plotting the data
End of explanation
def get_ret_vol_sr(weights):
Takes in weights, returns array or return,volatility, sharpe ratio
weights = np.array(weights)
ret = np.sum(log_ret.mean() * weights) * 252
vol = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
sr = ret/vol
return np.array([ret, vol, sr])
from scipy.optimize import minimize
Explanation: Mathematical Optimization
There are much better ways to find good allocation weights than just guess and check! We can use optimization functions to find the ideal weights mathematically!
Functionalize Return and SR operations
End of explanation
help(minimize)
Explanation: To fully understand all the parameters, check out:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html
End of explanation
def neg_sharpe(weights):
return get_ret_vol_sr(weights)[2] * -1
# Contraints
def check_sum(weights):
'''
Returns 0 if sum of weights is 1.0
'''
return np.sum(weights) - 1
# By convention of minimize function it should be a function that returns zero for conditions
cons = ({'type' : 'eq', 'fun': check_sum})
# 0-1 bounds for each weight
bounds = ((0, 1), (0, 1), (0, 1), (0, 1))
# Initial Guess (equal distribution)
init_guess = [0.25, 0.25, 0.25, 0.25]
# Sequential Least SQuares Programming (SLSQP).
opt_results = minimize(neg_sharpe,
init_guess,
method = 'SLSQP',
bounds = bounds,
constraints = cons)
opt_results
opt_results.x
get_ret_vol_sr(opt_results.x)
Explanation: Optimization works as a minimization function; since we actually want to maximize the Sharpe Ratio, we will need to turn it negative so we can minimize the negative Sharpe (same as maximizing the positive Sharpe)
End of explanation
# Our returns go from 0 to somewhere along 0.3
# Create a linspace number of points to calculate x on
frontier_y = np.linspace(0, 0.3, 100) # Change 100 to a lower number for slower computers!
def minimize_volatility(weights):
return get_ret_vol_sr(weights)[1]
frontier_volatility = []
for possible_return in frontier_y:
# function for return
cons = ({'type':'eq','fun': check_sum},
{'type':'eq','fun': lambda w: get_ret_vol_sr(w)[0] - possible_return})
result = minimize(minimize_volatility,
init_guess,
method = 'SLSQP',
bounds = bounds,
constraints = cons)
frontier_volatility.append(result['fun'])
plt.figure(figsize = (12, 8))
plt.scatter(vol_arr,
ret_arr,
c = sharpe_arr,
cmap = 'plasma')
plt.colorbar(label = 'Sharpe Ratio')
plt.xlabel('Volatility')
plt.ylabel('Return')
# Add frontier line
plt.plot(frontier_volatility,
frontier_y,
'g--',
linewidth = 3)
Explanation: All Optimal Portfolios (Efficient Frontier)
The efficient frontier is the set of optimal portfolios that offers the highest expected return for a defined level of risk or the lowest risk for a given level of expected return. Portfolios that lie below the efficient frontier are sub-optimal, because they do not provide enough return for the level of risk. Portfolios that cluster to the right of the efficient frontier are also sub-optimal, because they have a higher level of risk for the defined rate of return.
Efficient Frontier http://www.investopedia.com/terms/e/efficientfrontier
End of explanation |
13,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cvxpylayers tutorial
Step1: Parametrized convex optimization problem
$$
\begin{array}{ll} \mbox{minimize} & f_0(x;\theta)\
\mbox{subject to} & f_i(x;\theta) \leq 0, \quad i=1, \ldots, m\
& A(\theta)x=b(\theta),
\end{array}
$$
with variable $x \in \mathbf{R}^n$ and parameters $\theta\in\Theta\subseteq\mathbf{R}^p$
objective and inequality constraints $f_0, \ldots, f_m$ are convex in $x$ for each $\theta$, i.e., their graphs curve upward
equality constraints are linear
for a given value of $\theta$, find a value for $x$ that minimizes objective, while satisfying constraints
we can efficiently solve these globally with near total reliability
Solution map
Solution $x^\star$ is an implicit function of $\theta$
When unique, define solution map as function
$x^\star = \mathcal S(\theta)$
Need to call numerical solver to evaluate
This function is often differentiable
In a series of papers we showed how to analytically differentiate this function, using the implicit function theorem
Benefits of analytical differentiation
Step2: The gradient is simply
Step3: Median example
Finding the median of a vector
Step4: Elastic-net regression example
We are given training data $(x_i, y_i)_{i=1}^{N}$,
where $x_i\in\mathbf{R}$ are inputs and $y_i\in\mathbf{R}$ are outputs.
Suppose we fit a model for this regression problem by solving the elastic-net problem
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \frac{1}{N}\sum_{i=1}^N (ax_i + b - y_i)^2 + \lambda |a| + \alpha a^2,
\end{array}
\label{eq:trainlinear} | Python Code:
import cvxpy as cp
import matplotlib.pyplot as plt
import numpy as np
import torch
from cvxpylayers.torch import CvxpyLayer
torch.set_default_dtype(torch.double)
np.set_printoptions(precision=3, suppress=True)
Explanation: Cvxpylayers tutorial
End of explanation
n = 7
# Define variables & parameters
x = cp.Variable()
y = cp.Parameter(n)
# Define objective and constraints
objective = cp.sum_squares(y - x)
constraints = []
# Synthesize problem
prob = cp.Problem(cp.Minimize(objective), constraints)
# Set parameter values
y.value = np.random.randn(n)
# Solve problem in one line
prob.solve(requires_grad=True)
print("solution:", "%.3f" % x.value)
print("analytical solution:", "%.3f" % np.mean(y.value))
Explanation: Parametrized convex optimization problem
$$
\begin{array}{ll} \mbox{minimize} & f_0(x;\theta)\
\mbox{subject to} & f_i(x;\theta) \leq 0, \quad i=1, \ldots, m\
& A(\theta)x=b(\theta),
\end{array}
$$
with variable $x \in \mathbf{R}^n$ and parameters $\theta\in\Theta\subseteq\mathbf{R}^p$
objective and inequality constraints $f_0, \ldots, f_m$ are convex in $x$ for each $\theta$, i.e., their graphs curve upward
equality constraints are linear
for a given value of $\theta$, find a value for $x$ that minimizes objective, while satisfying constraints
we can efficiently solve these globally with near total reliability
Solution map
Solution $x^\star$ is an implicit function of $\theta$
When unique, define solution map as function
$x^\star = \mathcal S(\theta)$
Need to call numerical solver to evaluate
This function is often differentiable
In a series of papers we showed how to analytically differentiate this function, using the implicit function theorem
Benefits of analytical differentiation: works with nonsmooth objective/constraints, low memory usage, don't compound errors
CVXPY
High level domain-specific language (DSL) for convex optimization
Define variables, parameters, objective and constraints
Synthesize into problem object, then call solve method
We've added derivatives to CVXPY (forward and backward)
CVXPYlayers
* Convert CVXPY problems into callable, differentiable Pytorch and Tensorflow modules in one line
Applications
learning convex optimization models (structured prediction): https://stanford.edu/~boyd/papers/learning_copt_models.html
learning decision-making policies (reinforcement learning): https://stanford.edu/~boyd/papers/learning_cocps.html
machine learning hyper-parameter tuning and feature engineering: https://stanford.edu/~boyd/papers/lsat.html
repairing infeasible or unbounded optimization problems: https://stanford.edu/~boyd/papers/auto_repair_cvx.html
as protection layers in neural networks: http://physbam.stanford.edu/~fedkiw/papers/stanford2019-10.pdf
custom neural network layers (sparsemax, csoftmax, csparsemax, LML): https://locuslab.github.io/2019-10-28-cvxpylayers/
and many more...
Average example
Find the average of a vector:
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^n (y_i - x)^2
\end{array}
\end{equation}
Variable $x$, parameters $y\in\mathbf{R}^n$
The solution map is clearly:
$$x=\sum_{i=1}^n y_i / n$$
End of explanation
# Set gradient wrt x
x.gradient = np.array([1.])
# Differentiate in one line
prob.backward()
print("gradient:", y.gradient)
print("analytical gradient:", np.ones(y.size) / n)
Explanation: The gradient is simply:
$$\nabla_y x = (1/n)\mathbf{1}$$
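The same derivative is also available by wrapping the problem as a differentiable layer; a minimal sketch (not in the original flow, and assuming the average problem above is DPP-compliant, which it is since the parameter y only enters affinely):
avg_layer = CvxpyLayer(prob, parameters=[y], variables=[x])
y_tch = torch.randn(n, requires_grad=True)
x_star, = avg_layer(y_tch)       # forward pass calls the solver
x_star.sum().backward()          # differentiate through the solution map
print(y_tch.grad)                # expected: approximately (1/n) * ones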
End of explanation
n = 7
# Define variables & parameters
x = cp.Variable()
y = cp.Parameter(n)
# Define objective and constraints
objective = cp.norm1(y - x)
constraints = []
# Synthesize problem
prob = cp.Problem(cp.Minimize(objective), constraints)
# Set parameter values
y.value = np.random.randn(n)
# Solve problem in one line
prob.solve(requires_grad=True)
print("solution:", "%.3f" % x.value)
print("analytical solution:", "%.3f" % np.median(y.value))
# Set gradient wrt x
x.gradient = np.array([1.])
# Differentiate in one line
prob.backward()
print("gradient:", y.gradient)
g = np.zeros(y.size)
g[y.value == np.median(y.value)] = 1.
print("analytical gradient:", g)
Explanation: Median example
Finding the median of a vector:
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^n |y_i - x|,
\end{array}
\end{equation}
Variable $x$, parameters $y\in\mathbf{R}^n$
Solution:
$$x=\mathbf{median}(y)$$
Gradient (no duplicates):
$$(\nabla_y x)_i = \begin{cases}
1 & y_i = \mathbf{median}(y) \
0 & \text{otherwise}.
\end{cases}$$
End of explanation
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
torch.manual_seed(0)
np.random.seed(0)
n = 2
N = 60
X, y = make_blobs(N, n, centers=np.array([[2, 2], [-2, -2]]), cluster_std=3)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5)
Xtrain, Xtest, ytrain, ytest = map(
torch.from_numpy, [Xtrain, Xtest, ytrain, ytest])
Xtrain.requires_grad_(True)
m = Xtrain.shape[0]
a = cp.Variable((n, 1))
b = cp.Variable((1, 1))
X = cp.Parameter((m, n))
Y = ytrain.numpy()[:, np.newaxis]
log_likelihood = (1. / m) * cp.sum(
cp.multiply(Y, X @ a + b) - cp.logistic(X @ a + b)
)
regularization = - 0.1 * cp.norm(a, 1) - 0.1 * cp.sum_squares(a)
prob = cp.Problem(cp.Maximize(log_likelihood + regularization))
fit_logreg = CvxpyLayer(prob, [X], [a, b])
torch.manual_seed(0)
np.random.seed(0)
n = 1
N = 60
X = np.random.randn(N, n)
theta = np.random.randn(n)
y = X @ theta + .5 * np.random.randn(N)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=.5)
Xtrain, Xtest, ytrain, ytest = map(
torch.from_numpy, [Xtrain, Xtest, ytrain, ytest])
Xtrain.requires_grad_(True)
m = Xtrain.shape[0]
# set up variables and parameters
a = cp.Variable(n)
b = cp.Variable()
X = cp.Parameter((m, n))
Y = cp.Parameter(m)
lam = cp.Parameter(nonneg=True)
alpha = cp.Parameter(nonneg=True)
# set up objective
loss = (1/m)*cp.sum(cp.square(X @ a + b - Y))
reg = lam * cp.norm1(a) + alpha * cp.sum_squares(a)
objective = loss + reg
# set up constraints
constraints = []
prob = cp.Problem(cp.Minimize(objective), constraints)
# convert into pytorch layer in one line
fit_lr = CvxpyLayer(prob, [X, Y, lam, alpha], [a, b])
# this object is now callable with pytorch tensors
fit_lr(Xtrain, ytrain, torch.zeros(1), torch.zeros(1))
# sweep over values of alpha, holding lambda=0, evaluating the gradient along the way
alphas = np.logspace(-3, 2, 200)
test_losses = []
grads = []
for alpha_vals in alphas:
alpha_tch = torch.tensor([alpha_vals], requires_grad=True)
alpha_tch.grad = None
a_tch, b_tch = fit_lr(Xtrain, ytrain, torch.zeros(1), alpha_tch)
test_loss = (Xtest @ a_tch.flatten() + b_tch - ytest).pow(2).mean()
test_loss.backward()
test_losses.append(test_loss.item())
grads.append(alpha_tch.grad.item())
plt.semilogx()
plt.plot(alphas, test_losses, label='test loss')
plt.plot(alphas, grads, label='analytical gradient')
plt.plot(alphas[:-1], np.diff(test_losses) / np.diff(alphas), label='numerical gradient', linestyle='--')
plt.legend()
plt.xlabel("$\\alpha$")
plt.show()
# sweep over values of lambda, holding alpha=0, evaluating the gradient along the way
lams = np.logspace(-3, 2, 200)
test_losses = []
grads = []
for lam_vals in lams:
lam_tch = torch.tensor([lam_vals], requires_grad=True)
lam_tch.grad = None
a_tch, b_tch = fit_lr(Xtrain, ytrain, lam_tch, torch.zeros(1))
test_loss = (Xtest @ a_tch.flatten() + b_tch - ytest).pow(2).mean()
test_loss.backward()
test_losses.append(test_loss.item())
grads.append(lam_tch.grad.item())
plt.semilogx()
plt.plot(lams, test_losses, label='test loss')
plt.plot(lams, grads, label='analytical gradient')
plt.plot(lams[:-1], np.diff(test_losses) / np.diff(lams), label='numerical gradient', linestyle='--')
plt.legend()
plt.xlabel("$\\lambda$")
plt.show()
# compute the gradient of the test loss wrt all the training data points, and plot
plt.figure(figsize=(10, 6))
a_tch, b_tch = fit_lr(Xtrain, ytrain, torch.tensor([.05]), torch.tensor([.05]), solver_args={"eps": 1e-8})
test_loss = (Xtest @ a_tch.flatten() + b_tch - ytest).pow(2).mean()
test_loss.backward()
a_tch_test, b_tch_test = fit_lr(Xtest, ytest, torch.tensor([0.]), torch.tensor([0.]), solver_args={"eps": 1e-8})
plt.scatter(Xtrain.detach().numpy(), ytrain.numpy(), s=20)
plt.plot([-5, 5], [-3*a_tch.item() + b_tch.item(),3*a_tch.item() + b_tch.item()], label='train')
plt.plot([-5, 5], [-3*a_tch_test.item() + b_tch_test.item(), 3*a_tch_test.item() + b_tch_test.item()], label='test')
Xtrain_np = Xtrain.detach().numpy()
Xtrain_grad_np = Xtrain.grad.detach().numpy()
ytrain_np = ytrain.numpy()
for i in range(Xtrain_np.shape[0]):
plt.arrow(Xtrain_np[i], ytrain_np[i],
-.1 * Xtrain_grad_np[i][0], 0.)
plt.legend()
plt.show()
# move the training data points in the direction of their gradients, and see the train line get closer to the test line
plt.figure(figsize=(10, 6))
Xtrain_new = torch.from_numpy(Xtrain_np - .15 * Xtrain_grad_np)
a_tch, b_tch = fit_lr(Xtrain_new, ytrain, torch.tensor([.05]), torch.tensor([.05]), solver_args={"eps": 1e-8})
plt.scatter(Xtrain_new.detach().numpy(), ytrain.numpy(), s=20)
plt.plot([-5, 5], [-3*a_tch.item() + b_tch.item(),3*a_tch.item() + b_tch.item()], label='train')
plt.plot([-5, 5], [-3*a_tch_test.item() + b_tch_test.item(), 3*a_tch_test.item() + b_tch_test.item()], label='test')
plt.legend()
plt.show()
Explanation: Elastic-net regression example
We are given training data $(x_i, y_i)_{i=1}^{N}$,
where $x_i\in\mathbf{R}$ are inputs and $y_i\in\mathbf{R}$ are outputs.
Suppose we fit a model for this regression problem by solving the elastic-net problem
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \frac{1}{N}\sum_{i=1}^N (ax_i + b - y_i)^2 + \lambda |a| + \alpha a^2,
\end{array}
\label{eq:trainlinear}
\end{equation}
where $\lambda,\alpha>0$ are hyper-parameters.
We hope that the test loss $\mathcal{L}^{\mathrm{test}}(a,b) =
\frac{1}{M}\sum_{i=1}^M (a\tilde x_i + b - \tilde y_i)^2$ is small, where
$(\tilde x_i, \tilde y_i)_{i=1}^{M}$ is our test set.
First, we set up our problem, where $\{x_i, y_i\}_{i=1}^N$, $\lambda$, and $\alpha$ are our parameters.
End of explanation |
13,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
Tasks for those who "feel like a pro"
Step1: data containers
list
tuple
set
dictionary
for more details see docs
Step2: Indexing starts with zero.
General indexing rule (mind the brackets)
Step3: tuples
immutable type!
Step4: sets
Mutable type (frozenset is the immutable variant). Stores only unique elements.
Step5: dictionaries
Step7: Functions
general patterns
Step8: function is just another object (like almost everything in python)
Step9: Guidance on how to create a meaningful docstring
Step10: lambda evaluation
Step11: Don't assign lambda expressions to variables. If you need named instance - create standard function with def
Step12: vs
Step13: Numpy - scientific computing
Building matrices and vectors
Step14: Basic manipulations
matvec
Step15: broadcasting
Step16: forcing dtype
Step17: converting dtypes
Step18: shapes (singletons)
mind dimensionality!
Step19: adding new dimension
Step20: Indexing, slicing
Step21: Guess what is the output
Step22: Guess what is the output
Step23: Reshaping
Step24: reshape always returns view!
Step25: Boolean indexing
Step26: Useful numpy functions
eye, ones, zeros, diag
Example
Step27: reducers
Step28: numpy math functions
Step29: managing output
Step30: Meshes
linspace, meshgrid
Let's produce a function
$$
f(x, y) = sin(x+y)
$$
on some mesh.
Step31: Scipy - scientific computing 2
Building sparse matrix
Step32: How does scipy represent sparse matrix?
Step33: Sparse matrix stores only non-zero elements (and their indices)
Step34: Restoring full matrix
Step35: Popular (not sparse) matrices
Step36: Timing - measuring performance
Simplest way to measure time
Step37: You can also use %%timeit magic to measure run time of the whole cell
Step38: Storing timings in a separate variable
Avoid using time.time() or time.clock() directly as their behaviour's different depending on platform; default_timer makes the best choice for you. It measures wall time though, e.g. not very precise.
Step39: Let's make the code less redundant
Step40: timeit with -o parameter
more details on different parameters
Step41: Our new benchmark procedure
Step42: Matplotlib - plotting in python
don't forget to check
* http
Step43: %matplotlib inline ensures all graphs are plotted inside your notebook
Global controls
(more at http
Step44: Combined plot
Step45: Think, why
Step46: Even simpler way - also gives you granular control on plot objects
Step47: Plot formatting
matplotlib has a number of different options for styling your plot
Step48: Subplots
for advanced usage of subplots start here
* http
Step49: Manual control of subplots
Step50: Task
Step51: method 1
Step52: method 2
Step53: method 3
Step54: method 4
Step55: Task 2
Step56: Hankel matrix | Python Code:
greeting = 'Hello'
guest = "John"
my_string = 'Hello "John"'
named_greeting = 'Hello, {name}'.format(name=guest)
named_greeting2 = '{}, {}'.format(greeting, guest)
print named_greeting
print named_greeting2
Explanation: Table of Contents
Tasks for those who "feel like a pro":
Learning Resources
Online
Reading (in the future)
Programming in python
Writing code
Some anti-patterns
Python basics
Basic types
variables
strings
data containers
lists
tuples
sets
dictionaries
Functions
general patterns
functions as arguments
lambda evaluation
Numpy - scientific computing
Building matrices and vectors
Basic manipulations
matvec
broadcasting
forcing dtype
converting dtypes
shapes (singletons)
adding new dimension
Indexing, slicing
View vs Copy
Reshaping
Boolean indexing
Useful numpy functions
reducers: sum, mean, max, min, all, any
numpy math functions
managing output
Meshes
Scipy - scientific computing 2
Building sparse matrix
How does scipy represent sparse matrix?
Restoring full matrix
Popular (not sparse) matrices:
Timing - measuring performance
Simplest way to measure time
Storing timings in a separate variable
timeit with -o parameter
Matplotlib - plotting in python
Configuring matplotlib
Global controls
Combined plot
Combined plot "one-liner"
Plot formatting
Subplots
Iterating over subplots
Manual control of subplots
Other topics
Solutions
Tasks for those who "feel like a pro":
TASK 1
Write the code to enumerate items in the list:
* items are not ordered
* items are not unique
* don't use loops
* try to be as short as possible (not considering import statements)
Example:
Input
```
items = ['foo', 'bar', 'baz', 'foo', 'baz', 'bar']
```
Output
```
something like:
[0, 1, 2, 0, 2, 1]
```
TASK 2
For each element in a list [0, 1, 2, ..., N] build all possible pairs with other elements of that list.
exclude "self-pairing" (e.g. 0-0, 1-1, 2-2)
don't use loops
try to be as short as possible (not considering import statements)
Example:
Input:
[0, 1, 2, 3] or just 4
Output:
```
0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3
1, 2, 3, 0, 2, 3, 0, 1, 3, 0, 1, 2
```
Learning Resources
Online
The Hitchhiker’s Guide to Python
http://docs.python-guide.org/en/latest/
Hard way is easier http://learnpythonthehardway.org
Google python class
https://developers.google.com/edu/python/
Python tutorial
https://docs.python.org/2/tutorial/
Python Tutor - code visualizing (developed by MIT)
http://pythontutor.com/
If you feel lost: CodeAcademy https://www.codecademy.com/en/tracks/python
Learning by doing!
Reading (in the future)
Al Sweigart, "Automate the Boring Stuff with Python", https://automatetheboringstuff.com
Mark Lutz, "Python Pocket Reference" (250 pages)
Mark Lutz, "Learning Python" (1600 pages!)
Programming in python
Writing code
code should be readable first!
style guides
PEP8 (PEP = Python Enhancement Proposal) http://legacy.python.org/dev/peps/pep-0008/
writing idiomatic code http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html
Some anti-patterns (a few of these are illustrated in the short sketch after this list)
looping through dictionaries
http://docs.quantifiedcode.com/python-anti-patterns/performance/index.html
using wildcard imports (from ... import *)
http://docs.quantifiedcode.com/python-anti-patterns/maintainability/from_module_import_all_used.html
Using single letter to name your variables
http://docs.quantifiedcode.com/python-anti-patterns/maintainability/using_single_letter_as_variable_name.html
Comparing things to None the wrong way
http://docs.quantifiedcode.com/python-anti-patterns/readability/comparison_to_none.html
Comparing things to True the wrong way
http://docs.quantifiedcode.com/python-anti-patterns/readability/comparison_to_true.html
Using type() to compare types
http://docs.quantifiedcode.com/python-anti-patterns/readability/do_not_compare_types_use_isinstance.html
Using an unpythonic loop
http://docs.quantifiedcode.com/python-anti-patterns/readability/using_an_unpythonic_loop.html
Using CamelCase in function names
http://docs.quantifiedcode.com/python-anti-patterns/readability/using_camelcase_in_function_names.html
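A few of the anti-patterns above, contrasted with the idiomatic form. This is a minimal sketch; the names result, flag, value and items are made up for illustration:
```python
result = None
flag = True
value = {'a': 1}
items = ['spam', 'eggs']

# comparing to None / True: use "is" and plain truthiness, not "=="
if result is None:
    print 'no result yet'
if flag:
    print 'flag is set'

# checking types: prefer isinstance() over comparing type() objects
if isinstance(value, dict):
    print 'value is a dict'

# unpythonic loop: iterate directly, use enumerate when you need the index
for i, item in enumerate(items):
    print i, item
```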
Python basics
Verify your python version by running
python
python --version
This notebook is written in python 2.
Basic types
variables
```python
a = b = 3
c, d = 4, 5
c, d = d, c
```
strings
End of explanation
fruit_list = ['apple', 'orange', 'peach', 'mango', 'bananas', 'pineapple']
name_length = [len(fruit) for fruit in fruit_list]
print name_length
name_with_p = [fruit for fruit in fruit_list if fruit[0]=='p'] #even better: fruit.startswith('p')
numbered_fruits = []
for i, fruit in enumerate(fruit_list):
numbered_fruits.append('{}.{}'.format(i, fruit))
numbered_fruits
Explanation: data containers
list
tuple
set
dictionary
for more details see docs: https://docs.python.org/2/tutorial/datastructures.html
lists
End of explanation
numbered_fruits[0] = None
numbered_fruits[1:4]
numbered_fruits[1:-1:2]
numbered_fruits[::-1]
Explanation: Indexing starts with zero.
General indexing rule (mind the brackets): [start:stop:step]
End of explanation
p_fruits = (name_with_p[1], name_with_p[0])
p_fruits[1] = 'mango'
single_number_tuple = 3,
single_number_tuple
single_number_tuple + (2,) + (1, 0)
Explanation: tuples
immutable type!
End of explanation
set([0, 1, 2, 1, 1, 1, 3])
Explanation: sets
Mutable type (use frozenset for an immutable variant). Stores only unique elements.
End of explanation
fruit_list = ['apple', 'orange', 'mango', 'banana', 'pineapple']
quantities = [3, 5, 2, 3, 4]
order_fruits = {fruit: num \
for fruit, num in zip(fruit_list, quantities)}
order_fruits
order_fruits['pineapple'] = 2
order_fruits
print order_fruits.keys()
print order_fruits.values()
for fruit, amount in order_fruits.iteritems():
print 'Buy {num} {entity}s'.format(num=amount, entity=fruit)
Explanation: dictionaries
End of explanation
def my_func(var1, var2, default_var1=0, default_var2 = False):
    """This is a generic example of a python function.

    You can see this string when you call: my_func?
    """
#do something with vars
if not default_var2:
result = var1
elif default_var1 == 0:
result = var1
else:
result = var1 + var2
return result
Explanation: Functions
general patterns
End of explanation
print 'Function {} has the following docstring:\n{}'\
.format(my_func.func_name, my_func.func_doc)
Explanation: function is just another object (like almost everything in python)
End of explanation
def function_over_function(func, *args, **kwargs):
function_result = func(*args, **kwargs)
return function_result
function_over_function(my_func, 3, 5, default_var1=1, default_var2=True)
Explanation: Guidance on how to create a meaningful docstring:
https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt#docstring-standard
functions as arguments
End of explanation
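As an illustration of that standard, a short numpy-style docstring might look like the sketch below. The function scale is invented purely for the example and only a subset of the standard's sections is shown:
```python
def scale(values, factor=2.0):
    """Multiply every element of a list by a constant factor.

    Parameters
    ----------
    values : list of float
        Input values.
    factor : float, optional
        Multiplier applied to each element (default 2.0).

    Returns
    -------
    list of float
        The scaled values.
    """
    return [v * factor for v in values]
```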
function_over_function(lambda x, y, factor=10: (x+y)*factor, 1, 2, 5)
Explanation: lambda evaluation
End of explanation
my_simple_func = lambda x: x+1
Explanation: Don't assign lambda expressions to variables. If you need a named instance, create a standard function with def
End of explanation
def my_simple_func(x):
return x + 1
Explanation: vs
End of explanation
import numpy as np
matrix_from_list = np.array([[1, 3, 4],
[2, 0, 5],
[4, 4, 1],
[0, 1, 0]])
vector_from_list = np.array([2, 1, 3])
print 'The matrix is\n{matrix}\n\nthe vector is\n{vector}'\
.format(vector=vector_from_list, matrix=matrix_from_list)
Explanation: Numpy - scientific computing
Building matrices and vectors
End of explanation
matrix_from_list.dot(vector_from_list)
Explanation: Basic manipulations
matvec
End of explanation
matrix_from_list + vector_from_list
Explanation: broadcasting
End of explanation
single_precision_vector = np.array([1, 3, 5, 2], dtype=np.float32)
single_precision_vector.dtype
Explanation: forcing dtype
End of explanation
vector_from_list.dtype
vector_from_list.astype(np.int16)
Explanation: converting dtypes
End of explanation
row_vector = np.array([[1,2,3]])
print 'New vector {} has dimensionality {}'\
.format(row_vector, row_vector.shape)
print 'The dot-product is: ', matrix_from_list.dot(row_vector)
singleton_vector = row_vector.squeeze()
print 'Squeezed vector {} has shape {}'.format(singleton_vector, singleton_vector.shape)
matrix_from_list.dot(singleton_vector)
Explanation: shapes (singletons)
mind dimensionality!
End of explanation
print singleton_vector[:, np.newaxis]
mat = np.arange(12)
mat.reshape(-1, 4)
mat
print singleton_vector[:, None]
Explanation: adding new dimension
End of explanation
vector12 = np.arange(12)
vector12
Explanation: Indexing, slicing
End of explanation
matrix43 = vector12.reshape(4, 3)
matrix43
Explanation: Guess what is the output:
python
vector12[:3]
vector12[-1]
vector12[:-2]
vector12[3:7]
vector12[::2]
vector12[::-1]
End of explanation
matrix43_copy = matrix43.copy()  # note: matrix43[:] would return a view, not a copy
Explanation: Guess what is the output:
python
matrix43[:, 0]
matrix43[-1, :]
matrix43[::2, :]
matrix43[:3, :-1]
matrix43[3:, 1]
Unlike Matlab, numpy arrays are row-major (C-order) by default, not column-major (Fortran-order).
View vs Copy
Working with views is more efficient and is a preferred way.
view is returned whenever basic slicing is used
more details at http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
making copy is simple:
End of explanation
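A quick demonstration of the difference (a minimal sketch; the names original, view and real_copy are just for illustration): modifying a basic slice changes the original array, while modifying a real copy does not.
```python
import numpy as np

original = np.array([[1, 2, 3], [4, 5, 6]])
view = original[:]           # basic slicing -> a view on the same data
real_copy = original.copy()  # an independent copy
view[0, 0] = 100             # also changes `original`
real_copy[0, 0] = -1         # leaves `original` untouched
print original               # first element is now 100
print view.base is original  # True: the view shares memory with `original`
```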
matrix_to_reshape = np.random.randint(10, 99, size=(6, 4))
matrix_to_reshape
reshaped_matrix = matrix_to_reshape.reshape(8, 3)
reshaped_matrix
Explanation: Reshaping
End of explanation
reshaped_matrix[-1, 0] = 1
np.set_printoptions(formatter={'all':lambda x: '_{}_'.format(x) if x < 10 else str(x)})
matrix_to_reshape[:]
np.set_printoptions()
Explanation: reshape returns a view whenever it can (it only copies when it has to)!
End of explanation
idx = matrix43 > 4
matrix43[idx]
Explanation: Boolean indexing
End of explanation
def three_diagonal(N):
A = np.zeros((N, N), dtype=np.int)
for i in range(N):
A[i, i] = -2
if i > 0:
A[i, i-1] = 1
if i < N-1:
A[i, i+1] = 1
return A
print three_diagonal(5)
def numpy_three_diagonal(N):
main_diagonal = -2 * np.eye(N)
suddiag_value = np.ones(N-1,)
lower_subdiag = np.diag(suddiag_value, k=-1)
upper_subdiag = np.diag(suddiag_value, k=1)
result = main_diagonal + lower_subdiag + upper_subdiag
return result.astype(np.int)
numpy_three_diagonal(5)
Explanation: Useful numpy functions
eye, ones, zeros, diag
Example:
Build a three-diagonal matrix with -2's on the main diagonal and 1's on the subdiagonals
Is this code valid?
End of explanation
A = numpy_three_diagonal(5)
A[0, -1] = 5
A[-1, 0] = 3
print A
print A.sum()
print A.min()
print A.max(axis=0)
print A.sum(axis=0)
print A.mean(axis=1)
print (A > 4).any(axis=1)
Explanation: reducers: sum, mean, max, min, all, any
End of explanation
print np.pi
args = np.arange(0, 2.5*np.pi, 0.5*np.pi)
print np.sin(args)
print np.round(np.sin(args), decimals=2)
Explanation: numpy math functions
End of explanation
'{}, {:.1%}, {:e}, {:.2f}, {:.0f}'.format(*np.sin(args))
np.set_printoptions(formatter={'all':lambda x: '{:.2f}'.format(x)})
print np.sin(args)
np.set_printoptions()
Explanation: managing output
End of explanation
linear_index = np.linspace(0, np.pi, 10, endpoint=True)
mesh_x, mesh_y = np.meshgrid(linear_index, linear_index)
values_3D = np.sin(mesh_x + mesh_y)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
fig = plt.figure(figsize=(10,6))
ax = fig.gca(projection='3d')
ax.plot_wireframe(mesh_x, mesh_y, values_3D)
ax.view_init(azim=-45, elev=30)
plt.title('The plot of $f(x, y) = sin(x+y)$')
Explanation: Meshes
linspace, meshgrid
Let's produce a function
$$
f(x, y) = sin(x+y)
$$
on some mesh.
End of explanation
import scipy.sparse as sp
def scipy_three_diagonal(N):
main_diagonal = -2 * np.ones(N, )
suddiag_values = np.ones(N-1,)
diagonals = [main_diagonal, suddiag_values, suddiag_values]
# Another option: use sp.eye(N) and add subdiagonals
offsets = [0, 1, -1]
result = sp.diags(diagonals, offsets, shape=(N, N), format='coo')
return result
my_sparse_matrix = scipy_three_diagonal(5)
Explanation: Scipy - scientific computing 2
Building sparse matrix
End of explanation
my_sparse_matrix
Explanation: How does scipy represent sparse matrix?
End of explanation
print my_sparse_matrix
Explanation: Sparse matrix stores only non-zero elements (and their indices)
End of explanation
my_sparse_matrix.toarray()
my_sparse_matrix.A
Explanation: Restoring full matrix
End of explanation
from scipy.linalg import toeplitz, hankel
hankel(xrange(4), [-1, -2, -3, -4])
toeplitz(xrange(4))
Explanation: Popular (not sparse) matrices:
End of explanation
N = 1000
%timeit three_diagonal(N)
%timeit numpy_three_diagonal(N)
%timeit scipy_three_diagonal(N)
Explanation: Timing - measuring performance
Simplest way to measure time
End of explanation
%%timeit
N = 1000
calc = three_diagonal(N)
calc = scipy_three_diagonal(N)
del calc
Explanation: You can also use %%timeit magic to measure run time of the whole cell
End of explanation
from timeit import default_timer as timer
dims = [300, 1000, 3000, 10000]
bench_names = ['loop', 'numpy', 'scipy']
timings = {bench:[] for bench in bench_names}
for n in dims:
start_time = timer()
calc = three_diagonal(n)
time_delta = timer() - start_time
timings['loop'].append(time_delta)
start_time = timer()
calc = numpy_three_diagonal(n)
time_delta = timer() - start_time
timings['numpy'].append(time_delta)
start_time = timer()
calc = scipy_three_diagonal(n)
time_delta = timer() - start_time
timings['scipy'].append(time_delta)
Explanation: Storing timings in a separate variable
Avoid using time.time() or time.clock() directly, as their behaviour differs between platforms; default_timer makes the best choice for you. Note that it measures wall time, so it is not very precise.
End of explanation
dims = [300, 1000, 3000, 10000]
bench_names = ['loop', 'numpy', 'scipy']
timings = {bench_name: [] for bench_name in bench_names}
def timing_machine(func, *args, **kwargs):
start_time = timer()
result = func(*args, **kwargs)
time_delta = timer() - start_time
return time_delta
for n in dims:
timings['loop'].append(timing_machine(three_diagonal, n))
timings['numpy'].append(timing_machine(numpy_three_diagonal, n))
timings['scipy'].append(timing_machine(scipy_three_diagonal, n))
Explanation: Let's make the code less redundant
End of explanation
timeit_result = %timeit -q -r 5 -o three_diagonal(10)
print 'Best of {} runs: {:.8f}s'.format(timeit_result.repeat,
timeit_result.best)
Explanation: timeit with -o parameter
more details on different parameters:
https://ipython.org/ipython-doc/dev/interactive/magics.html#magic-timeit
End of explanation
dims = [300, 1000, 3000, 10000]
bench_names = ['loop', 'numpy', 'scipy']
bench_funcs = [three_diagonal, numpy_three_diagonal, scipy_three_diagonal]
timings_best = {bench_name: [] for bench_name in bench_names}
for bench_name, bench_func in zip(bench_names, bench_funcs):
print '\nMeasuring {}'.format(bench_func.func_name)
for n in dims:
print n,
time_result = %timeit -q -o bench_func(n)
timings_best[bench_name].append(time_result.best)
Explanation: Our new benchmark procedure
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Matplotlib - plotting in python
don't forget to check
* http://matplotlib.org/users/pyplot_tutorial.html
* http://matplotlib.org/gallery.html
* http://matplotlib.org/examples/index.html
Configuring matplotlib
End of explanation
# plt.rcParams.update({'axes.labelsize': 'large'})
plt.rcParams.update({'font.size': 14})
Explanation: %matplotlib inline ensures all graphs are plotted inside your notebook
Global controls
(more at http://matplotlib.org/users/customizing.html)
End of explanation
plt.figure(figsize=(10,8))
for bench_name, values in timings_best.iteritems():
plt.semilogy(dims, values, label=bench_name)
plt.legend(loc='best')
plt.title('Benchmarking results with best of timeit', y=1.03)
plt.xlabel('Matrix dimension size')
plt.ylabel('Time, s')
plt.figure(figsize=(10,8))
for bench_name, values in timings.iteritems():
plt.semilogy(dims, values, label=bench_name)
plt.legend(loc='best')
plt.title('Benchmarking results with default_timer', y=1.03)
plt.xlabel('Matrix dimension size')
plt.ylabel('Time, s')
Explanation: Combined plot
End of explanation
k = len(timings_best)
iter_xyf = [item for sublist in zip([dims]*k,
timings_best.values(),
list('rgb'))\
for item in sublist]
plt.figure(figsize=(10, 8))
plt.semilogy(*iter_xyf)
plt.legend(timings_best.keys(), loc=2, frameon=False)
plt.title('Benchmarking results - "one-liner"', y=1.03)
plt.xlabel('Matrix dimension size')
plt.ylabel('Time, s')
Explanation: Think, why:
* "loop" was faster then "numpy"
* "scipy" is almost constant
* results for default_timer and "best of timeit" are different
You might want to read the docs:
* https://docs.python.org/2/library/timeit.html#timeit.default_timer
* https://docs.python.org/2/library/time.html#time.clock and https://docs.python.org/2/library/time.html#time.time
Remark: starting from python 3.3 it's recommended to use time.perf_counter() and time.process_time()
https://docs.python.org/3/library/time.html#time.perf_counter
Also note, that for advanced benchmarking it's better to use profiling tools.
Combined plot "one-liner"
Use plt.plot? to get detailed info on function usage.
Task: given lists of x-values, y-values and plot format strings, plot all three graphs in one line.
Hint: use list comprehensions
End of explanation
plt.figure(figsize=(10, 8))
figs = [plt.semilogy(dims, values, label=bench_name)\
for bench_name, values in timings.iteritems()];
ax0, = figs[0]
ax0.set_dashes([5, 10, 20, 10, 5, 10])
ax1, = figs[1]
ax1.set_marker('s')
ax1.set_markerfacecolor('r')
ax2, = figs[2]
ax2.set_linewidth(6)
ax2.set_alpha(0.3)
ax2.set_color('m')
Explanation: Even simpler way - also gives you granular control on plot objects
End of explanation
all_markers = [
'.', # point
',', # pixel
'o', # circle
'v', # triangle down
'^', # triangle up
'<', # triangle_left
'>', # triangle_right
'1', # tri_down
'2', # tri_up
'3', # tri_left
'4', # tri_right
'8', # octagon
's', # square
'p', # pentagon
'*', # star
'h', # hexagon1
'H', # hexagon2
'+', # plus
'x', # x
'D', # diamond
'd', # thin_diamond
'|', # vline
]
all_linestyles = [
'-', # solid line style
'--', # dashed line style
'-.', # dash-dot line style
':', # dotted line style
'None'# no line
]
all_colors = [
'b', # blue
'g', # green
'r', # red
'c', # cyan
'm', # magenta
'y', # yellow
'k', # black
'w', # white
]
Explanation: Plot formatting
matplotlib has a number of different options for styling your plot
End of explanation
n = len(timings)
experiment_names = timings.keys()
fig, axes = plt.subplots(1, n, sharey=True, figsize=(16,4))
colors = np.random.choice(list('rgbcmyk'), n, replace=False)
markers = np.random.choice(all_markers, n, replace=False)
lines = np.random.choice(all_linestyles, n, replace=False)
for ax_num, ax in enumerate(axes):
key = experiment_names[ax_num]
ax.semilogy(dims, timings[key], label=key,
color=colors[ax_num],
marker=markers[ax_num],
markersize=8,
linestyle=lines[ax_num],
lw=3)
ax.set_xlabel('matrix dimension')
ax.set_title(key)
axes[0].set_ylabel('Time, s')
plt.suptitle('Benchmarking results', fontsize=16, y=1.03)
Explanation: Subplots
for advanced usage of subplots start here
* http://matplotlib.org/examples/pylab_examples/subplots_demo.html
* http://matplotlib.org/users/tight_layout_guide.html
* http://matplotlib.org/users/gridspec.html
Iterating over subplots
End of explanation
plt.figure()
plt.subplot(211)
plt.plot([1,2,3])
plt.subplot(212)
plt.plot([2,5,4])
Explanation: Manual control of subplots
End of explanation
items = ['foo', 'bar', 'baz', 'foo', 'baz', 'bar']
Explanation: Task: create subplot with 2 columns and 2 rows. Leave bottom left quarter empty. Scipy and numpy benchmarks should go into top row.
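One possible layout for this task, reusing the dims and timings computed in the benchmark above (a sketch only; which benchmark goes into which panel is a free choice):
```python
fig, axes = plt.subplots(2, 2, figsize=(10, 8))
axes[0, 0].semilogy(dims, timings['scipy'], label='scipy')
axes[0, 1].semilogy(dims, timings['numpy'], label='numpy')
axes[1, 1].semilogy(dims, timings['loop'], label='loop')
axes[1, 0].axis('off')       # leave the bottom-left quarter empty
for ax in (axes[0, 0], axes[0, 1], axes[1, 1]):
    ax.legend(loc='best')
```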
Other topics
function wrappers and decorators
installing packages
importing modules
ipython magic
qtconsole
environment
extensions
profiles (deprecated in jupyter)
profiling
debugging
cython, numba
openmp
OOP
python 2 vs python 3
plotting in python - palettes and colormaps, styles
pandas (presenting results)
numpy strides, contiguousness, vectorize function, broadcasting, saving output
magic functions (applied to line and to code cell)
jupyter configuration
Solutions
Task 1
End of explanation
from collections import defaultdict
item_ids = defaultdict(lambda: len(item_ids))
map(item_ids.__getitem__, items)
Explanation: method 1
End of explanation
import pandas as pd
pd.DataFrame({'items': items}).groupby('items', sort=False).grouper.group_info[0]
Explanation: method 2
End of explanation
import numpy as np
np.unique(items, return_inverse=True)[1]
Explanation: method 3
End of explanation
last = 0
counts = {}
result = []
for item in items:
try:
count = counts[item]
except KeyError:
counts[item] = count = last
last += 1
result.append(count)
result
Explanation: method 4
End of explanation
N = 1000
from itertools import permutations
%timeit list(permutations(xrange(N), 2))
Explanation: Task 2
End of explanation
import numpy as np
from scipy.linalg import hankel
def pairs_idx(n):
return np.vstack((np.repeat(xrange(n), n-1), hankel(xrange(1, n), xrange(-1, n-1)).ravel()))
%timeit pairs_idx(N)
Explanation: Hankel matrix: $a_{ij} = a_{i-1, j+1}$
End of explanation |
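To see why this works, the construction can be spelled out for a small n (a sketch; the printed arrays show the expected output):
```python
n = 4
print hankel(xrange(1, n), xrange(-1, n-1))
# [[ 1  2  3  0]
#  [ 2  3  0  1]
#  [ 3  0  1  2]]
print pairs_idx(n)
# [[0 0 0 1 1 1 2 2 2 3 3 3]
#  [1 2 3 0 2 3 0 1 3 0 1 2]]
```
Ravelling the hankel matrix row by row and pairing it with np.repeat(xrange(n), n-1) lists, for every element, all other elements exactly once.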
13,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of Preliminary Trial Data Pulled from Cavendish Balance
Step1: The weird behavior at the beginning occurred when we were making an alteration to the experimental setup itself (doing one of our many adjustments to try and zero out our $\theta_e$). We can ignore this as it is not indicative of our data and look at where it moves into a damped harmonic oscillation.
Step2: Trying out the scipy.optimize library to fit this to a decaying sinusoidal curve.
Step3: $$b = 1.41 \times 10 ^{-3} \frac{1}{s}$$ | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import math as m
from scipy.signal import argrelextrema as argex
plt.style.use('ggplot')
data_dir = '../data/'
trial_data = np.loadtxt(data_dir+'20171025_cavendish_new_wire_free_decay.txt', delimiter='\t')
plt.plot(trial_data[:,0], trial_data[:,1])
plt.title("Trial Data from Cavendish Balance")
plt.ylabel("Anglular Positon (mrads)")
plt.xlabel("Time (s)")
Explanation: Analysis of Preliminary Trial Data Pulled from Cavendish Balance
End of explanation
x_data = trial_data[:,0]
y_data = trial_data[:,1]
Explanation: The weird behavior at the beginning occurred when we were making an alteration to the experimental setup itself (doing one of our many adjustments to try and zero out our $\theta_e$). We can ignore this as it is not indicative of our data and look at where it moves into a damped harmonic oscillation.
End of explanation
from scipy.optimize import curve_fit
def decay(t, a, b, w, phi, theta_0):
return a*np.exp(-b*t)*np.cos(w*t + phi) + theta_0
popt, pcov = curve_fit(decay, x_data, y_data , p0 = (50, 1.3e-3, 3e-2, -6e-1, 0))
popt
plt.plot(x_data, y_data, 'r-', linewidth = 0.3, label = "Raw Data")
# plt.plot(x_data, decay(x_data, *popt), 'g-', label = 'Fit of a*np.exp(-b*t)*np.cos(w*t + phi) + theta_0')
plt.title("Free Oscillation Data, No Large Masses")
plt.ylabel("Angle, Not Calibrated (mrad)")
plt.xlabel("Time (s)")
plt.legend()
plt.plot(x_data[:800], y_data[:800], 'rp', linewidth = 5, label = "Raw Data")
plt.plot(x_data[:800], decay(x_data[:800], *popt), 'g-', label = 'Fit of a*np.exp(-b*t)*np.cos(w*t + phi) + theta_0')
plt.title("Free Oscillation Data, No Large Masses")
plt.ylabel("Angle, Not Calibrated (mrad)")
plt.xlabel("Time (s)")
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
round(popt[1],5)
Explanation: Trying out the scipy.optimize library to fit this to a decaying sinusoidal curve.
End of explanation
np.pi*1.57E11*(0.08E-3)**4/(32*52.4E-3)
np.pi*1.57E11*(25E-6)**4/(32*80E-3)
Explanation: $$b = 1.41 \times 10 ^{-3} \frac{1}{s}$$
End of explanation |
13,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Delayed yield
Introduction
Delayed yield is a phenomenon of well drawdown in a confined aquifer, which seems to follow two differnt Theis curves, the first corresponding to the Theis curve belonging to the situation with a confined aqufier, the second corresponding to the Theis curve belonging to the situation with phreatic water.
A set of numerically computed delayed yield curves that show the phenomenon is presented in the figure.
Figure
Step1: Figure
Step7: Next we define a class piezometer, that allows us to store information pertaining to individual piezometers, so that we can easily plot them, use their data together with their meta data. Defining such a class prevents clutter. It's always to good idea to define a class or a function when this prevents duplicating code. Duplicated code is always error-prone and very hard to update and maintain.
Step8: The function invK0 defined below is the inverse of the K0 bessel function. It takes the value of K0(z) and computes z. To do this, Newton Raphson iterations are used. Look up the Newton Raphson method on the internet (Wikipedia), if you don't know it. It's a basic numerical method.
Step9: The next function reads in the data for all piezometers and returns a list of piezometer objects, that is, objects of the class defined above.
Step10: Now read the data, returning the piezometers in a list, called "piezoms".
After that show the lines on a semilog graph, firrst as they are read and then together with the best linear approximation.
Step11: Intepretation of the pumping test
The data on double log scales should reveal a picture of the delayed type curves. The first part of the lines should mimic Theis, the mediate part should tend to a horizontal line as predicts the Hantush' type curves and the last part should also mimic the later part of a Theis curve. One sees however, that it is difficult draw exact conclusions. From the continuously, almost constant rising curves, it is obvious, on the other hand, that no Theis like curve could be fitted. The same is true for the Hantush curves. The only thing we observe that gives a hint towards delayed yield is the mid part of the green line that tends to horizontal but later on starts to rise again.
We also observe some increase of drawdown at the very end of the test, at least for the piezometers at 10 and 30 m. We don't know what this is, because non of our models will give such and upward decline. But it may very well be caused by wheather influences, other boundaries, and even bounce-back of an impervious boundary far away. Therefore, without any further information regarding the conditions of the test, we ignore this deviation for our analyis.
What could help is drawing the drawdown data on linear scale but keep the time data on logarithmic scale. From the behavior of Theis curves, we know that the drawdown will follow straight lines on such graphs. That is the drawdown per log cycle is the constant and the same for all piezometers, i.e. irrespective of their distance from the well. Clearly, differences may exist in this respect between individual piezometers, due to heterogeneity of the aquifer. But we expect those to be small at the scale of some tens of meters that pertain to these piezometers.
The best way it thus to fit straight parallel lines through all of the data. Keeping the lines parallel is based on the fact that they have to be parallel in a homogeneous system, and it minimizes errors where fitting such lines is uncertain. The result is given in the figure above.
From this we conclude that the data provide indeed information on the early and late Theis curves. The first thing to conclude is that the ratio of the specific yield and the elastic storage coefficient equals the shift between the two straight lines drawn throug the green points.
$$ \frac {S_y} {S_e} = \frac {3 \times 10^{1}} {2.2 \times 10^{0}} \approx 14 $$
We may then compute the transmissivity and the storage coefficient using the log approximation of the Theis curve.
The change of the drawdown per logcycle is also read from the straigh lines, it is is 0.115 m
$$ s_{10t} - s_{t} = \frac Q {4 \pi kD} \ln 10 $$
So that
$$ kD = \frac Q {4 \pi} \frac {\ln 10} {0.115} $$
With $Q = 53$ m3/h = 0.0147 m3/s, this yields
Step12: The essential difference between the results of Neuman (using the semilog method) and the indipendently derived figures, is the steady state drawdown, i.e. the Hantush case, that would pertain to the situation in which the water table would be fixed by continuous additional supply of water. That figure is difficult to obtain from the data. Given the curves a figure of 0.22 m for the r-10 m piezometer would seem valid, but 0.285 m is needed to make the results fit with those of Neuman.
Now that we have the data, we can plot both the Theis and Hantush curves together with the data to verify the match.
Step13: In conclusion, the curves analysed here deviate little from the measurements. We had to approximate the steady-state drawdown without delayed yield, to obtain a value for $ r/\lambda $. The value chosen was 0.22 m, whereas the value that follows from the results of Neuman would be .28 m, corresponding to the horizontal branch of the Hantush curves. Neuman's results do not seem to agree with the Hantush curve that is expected to match. It is, however, unclear what the reason for this difference is. It could be tested using a numerical model. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.special import expi, k0, k1 # exp. integral and two bessel functions
from wells import Wh # Hantush well function defined in module wells
def W(u): return -expi(-u) # Theis well function
Se = 1e-3
Sy = 1e-1
u = np.logspace(-5., 2., 81)
ue = u
uy = ue * Sy/Se
ax = plt.figure().add_subplot(111)
ax.set(xlabel="1/u", ylabel="W(u)", xscale="log", yscale="log",
ylim=(1e-6, 10),
title="$W(u_e)$ and $W(u_y)$")
ax.plot(1/u, W(ue), 'r', label="W(ue)")
ax.plot(1/u, W(uy), 'b', label="W(uy)")
ax.legend(loc='best')
plt.show()
Explanation: Delayed yield
Introduction
Delayed yield is a phenomenon of well drawdown in a confined aquifer, which seems to follow two different Theis curves, the first corresponding to the Theis curve belonging to the situation with a confined aquifer, the second corresponding to the Theis curve belonging to the situation with phreatic water.
A set of numerically computed delayed yield curves that show the phenomenon is presented in the figure.
Figure: Some numerically computed delayed yield drawdown curves.
The delayed yield is caused by vertical resistance in the aquifer (or of a covering layer), making the release of water from the water table decline slow relative to that from elastic storage. Due to this, the drawdown starts off and spreads as if the aquifer were confined, first like a Theis curve, and then like a Hantush curve seeking a steady-state value. However, this steady state is not reached, because the water table itself starts declining, as if the aquifer were phreatic. At later times the drawdown has become so slow that the water table decline can easily keep pace with it and no further delay is observed, causing the drawdown to follow the second Theis curve, the one that belongs to the specific yield instead of the elastic storage.
This phenomenon is important when pumping tests are carried out in water table aquifers or aquifers covered by a semi-confined top layer above which the water level is not maintained. Such tests, if short, may not show the delayed yield, which may lead to the hastily drawn conclusion that the Hantush-like drawdown that seems to establish itself soon after the start of the test is the final drawdown. However, had we continued the test longer, we would have clearly seen the drawdown increasing again and following the second Theis curve. The delayed drawdown can be substantially larger than the early-time elastic drawdown, and false conclusions may be drawn if this phenomenon is not anticipated by the designers of the pumping test.
Delayed yield: two Theis curves
The drawdown in a confined aquifer follows Theis. The same is true for that in an unconfined aquifer under the condition that the drawdown is not too great relative to the thickness of the aquifer, such that this thickness can be assumed constant. The difference between the two drawdowns (noting that the transmissivity is the same in both cases), is the delay of the unconfined drawdown relative to that in the confined aquifer.
Let's analyze this by starting with the Theis drawdown solution
$$ s(r,t) = \frac Q {4 \pi kD} W( u ) \,\,\,\, with \,\,\,\, u = \frac {r^2 S} {4 kD t} $$
The two cases differ by their storage coefficient, which is $S_e$ for the confined case and $S_y$, the specific yield, for the unconfined case. So we have
$$ u_{conf} = \frac {r^2 S_2} {4 kD t}\,\,\,\,\, and \,\,\,\,\, u_{unconf} = \frac {r^2 S_y} {4 kD t} $$
and, therefore
$$ \frac {u_{unconf}} {u_{conf}} = \frac {S_y} {S_e} $$
Given that $S_y$ is two orders of magnitude larger than $S_e$, we see that both curves are the same, except that the curve for the unconfined case is two orders of magnitude delayed with respect to the first. One can see this as the time in the unconfined case has to be $\frac {S_y} {S_e}$ times that of the confined case to get the same $W(u)$ value and, therefore the same drawdown.
We have also seen what the radius of influence was. We got it from the logarithmic approximation of the Theis drawdown, which was
$$ s(r, t) \approx \frac Q {4 \pi kD } \ln \left( \frac {2.25 kD t} {r^2 S} \right) $$
and realizing that s=0 when the argument of the $\ln(\cdots)$ equals 1, so:
$$ r = \sqrt {\frac {2.25 kD t} S } $$
If we draw the two curves on double log scale, as usual $W(u)$ versus $1/u$, then dividing $u$ by a factor, i.e. multiplying $1/u$ by that factor, implies shifting the drawdown curve to the right, but without changing its shape.
We see here that if $S_y$ is two orders of magnitude larger than $S_e$, the radius of influence in the confined case is one order of magnitude larger than in the unconfined case. So the radius of influence grows roughly 10 times faster in the confined case than it does in the unconfined case.
This leads to delayed yield in situations where the hydraulic vertical resistance in the aquifer is relatively high. In that case the immediate drawdown is dominated by the elastic Theis case and spreads rapidly from the well. But as soon as the elastic drawdown establishes itself, the free water table will start to decline and provide water to the flow in the aquifer. After some time, the free water table will have adapted to the elastic drawdown in the aquifer and the system will behave as if it were unconfined. So initially the head in the aquifer will behave like the Theis formula with the elastic storage coefficient, but in the long run it will behave like the Theis formula with the unconfined storage coefficient, the specific yield. Of course, there is a transition between the two curves.
End of explanation
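A quick numerical check of the two statements above, using the Se and Sy values defined in the code (a sketch): the unconfined curve is delayed by the factor Sy/Se, so at any given time the two radii of influence differ by sqrt(Sy/Se).
```python
print(Sy / Se)           # delay factor between the two Theis curves: 100
print(np.sqrt(Sy / Se))  # ratio of the radii of influence: 10
```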
import csv
import wells
W = wells.W # Theis
Wh = wells.Wh # Hantush
# Global info and preferences for this project
proj = "Pumping test at St. Pardon de Conques (1965)"
folder = './ptestDelayed'
p_names = 'ptst02', 'ptst10', 'ptst30'
l_names = 'semilog02_late', 'semilog10_late', 'semilog30_late', 'semilog10_early'
plotsettings = {'xlabel':'t [s]', 'ylabel':'s [m]', 'xscale':'log',
'xlim':(1e-1, 1e6), 'title':proj}
pltset0 = plotsettings.copy(); pltset0['yscale']='log'; pltset0['ylim'] =(1e-2, 1.0)
pltset1 = plotsettings.copy(); pltset1['yscale']='linear'; pltset1['ylim']=(0.,0.9)
Explanation: Figure: The two Theis curves caused by delayed yield
We see that the blue curve is equal to the red one but shifted exactly a factor $S_y/S_e = 100$ to the right, i.e. it is delayed by a factor $S_y/S_e$.
Therefore, in an unconfined aquifer with some vertical resistance, we expect the drawdown to initially follow the red curve, the elastic drawdown curve, and after some transition time follow the blue curve, the unconfined drawdown curve.
We already know the behavior of the Hantush drawdown. If a constant head is maintained above the aquifer and leakage occurs through a leaky confining layer towards the aquifer proportional to the difference between the maintained head above and the lowered head in the aquifer, then the drawdown will after some time become constant. The Hantush drawdown curves all deviate from their initial Theis start to become horizontal, the level of which depends on the distance from the well, $r$, and the characteristic length $\lambda = \sqrt{ kD c}$ of the aquifer system. The higher the resistance, that is, the larger the $\lambda$, the longer will the drawdown follow the Theis curve and the larger will the final Hantush drawdown be.
The delayed yield situation is similar to that of Hantush until the drawdown in the overlying layer, or of the water table, becomes substantial; its supply then diminishes and therefore the drawdown must match that of the delayed Theis curve, the curve that belongs to the specific yield $S_y$.
A (famous) pumping test
The pumping test was carried out in the Gironde Valley in France in 1965. Bonnet et al. [1970] published an analysis on the basis of Boulton's theory. The aquifer is clayey at shallow depth and sandy to gravelly at larger depths. The aquifer bottom is at 13.75 m and the initial water table at 5.51 m, so that the wet thickness b=8.24 m.
The well was screened between 7 and 13.5 m and has a diameter of 0.32 m. The pumping lasted for 48 h 50 min at a rate of 53 m3/h, oscillating between 51 and 54.6 m3/h. Drawdowns were monitored at 10 and 30 m from the well.
Due to the large penetration, the effectof partial penetration can be neglected at 10 and 30 m. Although not probable, due to lack of information about the observation wells, their screens were assumed to be perforated over the entire depth of the aquifer. The results were consistent with this assumption, which may also mean that the vertical resistance within the coarser part of the aquifer may be neglected as it may very well be that the screen was only perforated in the coarser part of the aquifer.
The drawdown data are as follows:
End of explanation
class Piezometer:
    """Piezometer definition"""
clrs ='brgkmcy'
styles = ['-',';','--','-.']
markers = ['o','s','^','v','p','+','x','.']
c_cnt = -1
l_cnt = -1
m_cnt = -1
def __init__(self, name="", t=np.array([]), s=np.array([]),
dim_t='s', dim_s='m', color=None, linestyle=None, marker=None):
self.name = name
self.t = t
self.s = s
self.dim_t = dim_t
self.dim_s = dim_s
self.color = color
self.linestyle = linestyle
self.marker = marker
self.P = type(self) # the class
if color is None:
self.color = self.nextClr()
else:
self.color = color
if linestyle is None:
self.linestyle = self.nextStyle()
else:
self.linestyle = linestyle
if marker is None:
self.marker = self.nextMarker()
else:
self.marker = marker
def plot(self):
        """Plot the piezometer"""
ax =plt.gca()
lspec = self.color + self.marker + self.linestyle
ax.plot(self.t, self.s, lspec)
def nextClr(self):
        """Remembers which color the previous piezometer used and chooses the next color"""
C = self.P
C.c_cnt += 1
if C.c_cnt == len(C.clrs):
C.c_cnt = 0
return C.clrs[C.c_cnt]
def nextStyle(self):
        """Remembers which line style the previous piezometer used and chooses the next style"""
C = self.P
C.l_cnt += 1
if C.l_cnt == len(C.styles):
C.l_cnt = 0
return C.styles[C.l_cnt]
def nextMarker(self):
        """Remembers which marker the previous piezometer used and chooses the next marker"""
C = self.P
C.m_cnt += 1
if C.m_cnt == len(C.markers):
C.m_cnt = 0
return C.markers[C.m_cnt]
Explanation: Next we define a class Piezometer that allows us to store information pertaining to individual piezometers, so that we can easily plot them and use their data together with their meta data. Defining such a class prevents clutter. It's always a good idea to define a class or a function when this prevents duplicating code. Duplicated code is error-prone and very hard to update and maintain.
End of explanation
def invK0(K0, tol=1e-6, verbose=False):
"Return x if K0(x) is given (Newton Raphson)"
x = tol
for i in range(100):
f = K0 - k0(x)
facc = k1(x)
x = x - f/facc
if verbose:
print("{:3d} {:10.3g} {:10.3g}".format(i, x, f/facc))
if np.abs(f/facc) < tol:
break
return x
Explanation: The function invK0 defined below is the inverse of the K0 bessel function. It takes the value of K0(z) and computes z. To do this, Newton Raphson iterations are used. Look up the Newton Raphson method on the internet (Wikipedia), if you don't know it. It's a basic numerical method.
End of explanation
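A quick round-trip check of the inverse (a sketch): feeding K0(x) back into invK0 should recover x.
```python
print(invK0(k0(0.5)))   # ~0.5
print(invK0(2.16))      # ~0.13, the value used in the interpretation further below
```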
def get_data(folder, names, linestyle=None, marker=None):
"Reads in the data, returns them as a list of class Piezometer"
piezoms = []
for name in names:
p = Piezometer(name=name)
fName = folder + '/' + name + '.csv'
with open(fName) as csvfile:
datareader = csv.reader(csvfile, delimiter=',', quotechar="|")
t = []
s = []
for j, row in enumerate(datareader):
if j>5: # skip first 5 lines
t.append(row[0])
s.append(row[1])
p = Piezometer(name=name, t=np.array(t), s=np.array(s), linestyle=linestyle, marker=marker)
piezoms.append(p)
return piezoms
Explanation: The next function reads in the data for all piezometers and returns a list of piezometer objects, that is, objects of the class defined above.
End of explanation
# reads piezometers
piezoms = get_data(folder, p_names, linestyle='', marker='o')
# get straight lines drawn through piezometer data
piezlines = get_data(folder, l_names, linestyle='-', marker='')
# Show the data
# Double log scale
ax1 = plt.figure().add_subplot(111)
ax1.set(**pltset0)
ax1.grid(True)
for p in piezoms:
lspec = p.color + p.linestyle + p.marker
ax1.plot(p.t , p.s, lspec, label=p.name)
ax1.legend(loc='best', fontsize='small')
# Semilog scale
ax2 = plt.figure().add_subplot(111)
ax2.set(**pltset1)
ax2.grid(True)
for p in piezoms:
lspec = p.color + p.linestyle + p.marker
ax2.plot(p.t, p.s, lspec, label=p.name)
for pl in piezlines:
lspec = pl.color + pl.linestyle + pl.marker
ax2.plot(pl.t, pl.s, lspec, label=pl.name)
ax2.legend(loc='best', fontsize='small')
plt.show()
Explanation: Now read the data, returning the piezometers in a list, called "piezoms".
After that, show the lines on a semilog graph, first as they are read and then together with the best linear approximation.
End of explanation
# compute it all
def all(author, proj, t0, s, ds, sig, D=8.24, Q=53., r=10.):
sph = 3600.
t0 = t0/sph # [h]
kD = Q / (4 * np.pi) * np.log(10) / ds
Sy = 2.25 * kD * t0 / r**2
Se = Sy / sig
besK0 = 2 * np.pi * kD * s / Q
Lam = r / invK0(besK0)
beta = (r/Lam)**2 # (r/lambda)**2
av = beta * (D/r)**2 # kz/kr
kr = kD/D
kz = kr * av
c = D/kz
print("\nUsing the data from {}:".format(author))
print("Q = {:10.3g} m3/h".format(Q))
print("kD = {:10.3g} m2/h".format(kD))
print("Sy = {:10.3g} [-]".format(Sy))
print("Se = {:10.3g} [-]".format(Se))
print("Sy/Se = {:10.3g} [-]".format(Sy/Se))
print("Se/Sy = {:10.3g} [-]".format(Se/Sy))
print("K0(r/lambda)= {:7.3g} [-]".format(besK0))
print("r/lambda = {:10.3g} [m]".format(r/Lam))
print("r = {:10.3g} [m]".format(r))
print("lambda = {:10.3g} [m]".format(Lam))
print("beta = {:10.3g} [-]".format(beta))
print("kz/kr = {:10.3g} [-]".format(kz/kr))
print("kr/kz = {:10.3g} [-]".format(kr/kz))
print("av = {:10.3g} [-]".format(av))
print("kr = {:10.3g} [m/h]".format(kr))
print("kz = {:10.3g} [m/h]".format(kz))
print("c = {:10.3g} [h]".format(c))
print()
return author, proj, Q/sph, r, D, kr/sph, kz/sph, c*sph, Sy, Se
# compute it all
# me (2016)
t0 = 30.# sec !
s = 0.22
ds = 0.115
sig = 14.
me = all('me', proj, t0, s, ds, sig)
# Neuman (1975)
t0 = 70 # sec !
s = 0.28
sig = 14.5
ds = 0.137
neuman = all('Neuman', proj, t0, s, ds, sig)
Explanation: Interpretation of the pumping test
The data on double log scales should reveal a picture of the delayed-yield type curves. The first part of the lines should mimic Theis, the intermediate part should tend to a horizontal line as predicted by the Hantush type curves, and the last part should again mimic the later part of a Theis curve. One sees, however, that it is difficult to draw exact conclusions. From the continuously, almost constantly rising curves it is obvious, on the other hand, that no Theis-like curve could be fitted. The same is true for the Hantush curves. The only thing we observe that gives a hint towards delayed yield is the mid part of the green line that tends to horizontal but later on starts to rise again.
We also observe some increase of drawdown at the very end of the test, at least for the piezometers at 10 and 30 m. We don't know what this is, because none of our models will give such an upward deviation. But it may very well be caused by weather influences, other boundaries, or even bounce-back from an impervious boundary far away. Therefore, without any further information regarding the conditions of the test, we ignore this deviation for our analysis.
What could help is drawing the drawdown data on a linear scale while keeping the time data on a logarithmic scale. From the behavior of Theis curves, we know that the drawdown will follow straight lines on such graphs. That is, the drawdown per log cycle is constant and the same for all piezometers, irrespective of their distance from the well. Clearly, differences may exist in this respect between individual piezometers, due to heterogeneity of the aquifer. But we expect those to be small at the scale of some tens of meters that pertains to these piezometers.
The best way is thus to fit straight parallel lines through all of the data. Keeping the lines parallel is based on the fact that they have to be parallel in a homogeneous system, and it minimizes errors where fitting such lines is uncertain. The result is given in the figure above.
From this we conclude that the data do indeed provide information on the early and late Theis curves. The first thing to conclude is that the ratio of the specific yield and the elastic storage coefficient equals the shift between the two straight lines drawn through the green points.
$$ \frac {S_y} {S_e} = \frac {3 \times 10^{1}} {2.2 \times 10^{0}} \approx 14 $$
We may then compute the transmissivity and the storage coefficient using the log approximation of the Theis curve.
The change of the drawdown per log cycle is also read from the straight lines; it is 0.115 m
$$ s_{10t} - s_{t} = \frac Q {4 \pi kD} \ln 10 $$
So that
$$ kD = \frac Q {4 \pi} \frac {\ln 10} {0.115} $$
With $Q = 53$ m3/h = 0.0147 m3/s, this yields:
$$ kD = 0.023 \,\, m2/s = 84.4 \,\, m2/h \approx 2025\,\, m2/d $$
and
$$ s = \frac Q {2 \pi kD} \ln \left( \frac {2.25 kD t} { r^2 S } \right) $$
setting $s=0$, that is the argument to 1, and filling in $t$ for $s=0$ and $r$, i.e. $t=30$ s and $r=10$ m for the green data points, yielding
$$ \frac {S_y} {kD} = \frac {2.25 \times 30} {10^2} = 0.675 $$
$$ S_y = 0.675 \, kD = 0.675 \times 0.023 = 1.6 \times 10^{-2} $$
and, therefore
$$ S_e = S_y / 14 = 1.13 \times 10^{-3} $$
The last property to estimate is the vertical resistance of the confining top of the aquifer. For this we need the horizontal branch of the best fitting Hantush type curve. We don't really see it in the data, but we could estimate it from the green datapoints at $s = 0.22$ m.
With this we may directly use the steady state solution for a well in a semi-confined aquifer
$$ s(r) = \frac Q {2 \pi kD} K_0 \frac r \lambda $$
$$ K_0 \frac r \lambda = 2 \pi kD \frac {s(r)} Q = 2 \pi 0.023 \times \frac {0.22} {0.0147} = 2.16 $$
By some trial and error this yields
$$ \frac r \lambda \approx 0.131 \rightarrow \lambda = \frac {10} {0.131} = 76.3 m $$
and, therefore
$$ kDc = \lambda^2 \rightarrow c = \frac {76.3^2} {0.023} = 253000 \,\,s = 70.4 \,\, h $$
Neuman uses the term $\beta = \frac {k_z} {k_r} \frac {r^2} {D^2}$ which can be converted as follows:
$$ \beta = \frac {k_z} {k_r} \frac {r^2} {D^2} = \frac {r^2} {k_r D} \frac {k_z} {D} = \frac {r^2} {k_r D c} = \frac {r^2} {\lambda^2} $$
$$ \beta = \left( \frac {10} {76.3} \right)^2 = 0.0172 $$
$$ \frac {k_z} {k_r} = \beta \frac {D^2} {r^2} = 0.0172 \times \frac {8.24^2} {10^2} = 0.0117 $$
$$ \frac {k_r} {k_z} = 85.7 $$
$$ k_r = \frac {kD} D = \frac {0.023} {8.24} = 2.8 \times 10^{-3} \,\, m/s = 10.0 \,\, m/h $$
$$ k_z = \frac {k_r} {85.7} = 3.27 \times 10^{-5} \,\, m/s = 0.12\,\, m/h $$
$$ c = \frac D {k_z} = \frac {8.24} {3.27 \times 10^{-5}} = 252000\,\, sec = 70.0 \,\, h$$
End of explanation
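The hand calculation in the interpretation above can be re-traced numerically in a few lines (a sketch that only mirrors the arithmetic with the rounded readings t0 = 30 s, ds = 0.115 m and s = 0.22 m; it should agree with the output of all('me', ...) up to rounding):
```python
Q = 53. / 3600.                                # pumping rate [m3/s]
kD = Q / (4 * np.pi) * np.log(10) / 0.115      # transmissivity [m2/s]
Sy = 2.25 * kD * 30. / 10.**2                  # from t0 = 30 s at r = 10 m
Se = Sy / 14.
lam = 10. / invK0(2 * np.pi * kD * 0.22 / Q)   # from s(r=10 m) = 0.22 m
c = lam**2 / kD                                # vertical resistance [s]
print(kD, Sy, Se, lam, c / 3600.)
```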
#def plotResult(author, proj, Q, r, Lam, c, kD, Sy, Se, t, plotset):
def plotResult(author, proj, Q, r, D, kr, kz, c, Sy, Se, t, plotset):
kD = kr * D
Lam = np.sqrt(kD * c)
ue = r**2 *Se /(4 * kD * t)
uy = r**2 *Sy /(4 * kD * t)
sh = Q/ (4 * np.pi * kD) * Wh(ue, r/Lam)
se = Q/ (4 * np.pi * kD) * W(ue)
sy = Q/ (4 * np.pi * kD) * W(uy)
ax = plt.figure().add_subplot(111)
ax.set(**plotset)
ax.set_title(proj + ", analyzed by " + author)
ax.grid(True)
for p in piezoms:
lspec = p.color + p.linestyle + p.marker
ax.plot(p.t , p.s, lspec, label=p.name)
ax.plot(t, sh.ravel(), label='Hantush')
ax.plot(t, se, label='Theis elastic')
ax.plot(t, sy, label='Theis spec. yld')
ax.legend(loc='best', fontsize='x-small')
plt.show()
return ax
sph = 3600. # seconds per hour
t = np.logspace(1, 6, 51) # seconds
ax3 = plotResult(*me, t, pltset1)
ax4 = plotResult(*neuman, t, pltset1)
Explanation: The essential difference between the results of Neuman (using the semilog method) and the independently derived figures is the steady-state drawdown, i.e. the Hantush case, that would pertain to the situation in which the water table is kept fixed by a continuous additional supply of water. That figure is difficult to obtain from the data. Given the curves, a figure of 0.22 m for the r=10 m piezometer would seem valid, but 0.285 m is needed to make the results fit with those of Neuman.
Now that we have the data, we can plot both the Theis and Hantush curves together with the data to verify the match.
End of explanation
modules = '/Users/Theo/GRWMODELS/Python_projects/mfpy/modules/'
import sys
if not modules in sys.path:
sys.path.insert(0, modules)
import mfgrid as grid
import fdm_t
# author, r, Q, D, kr, kz, c, Sy, Se
t = np.logspace(-1, 6, 71)
ax3 = plt.figure().add_subplot(111)
ax4 = plt.figure().add_subplot(111)
def numMdl(author, proj, Q, r, D, kr, kz, c, Sy, Se, t, ax, piezoms=None):
print(author)
x = np.logspace(-1, 4, 51)
y = np.array([-0.5, 0.5])
z = np.array([0.01, 0.0, -D])
gr = grid.Grid(x, y, z, axial=True)
Kr = gr.const(kr)
Kz = gr.const(kr); Kz[:,:,0] = gr.dz[0] / c / 2.
Ss = gr.const(Se/D)
Ss[:,:,0] = Sy/gr.dz[0]
IBOUND = gr.const(1)
FQ = gr.const(0.)
FH = gr.const(0.)
FQ[0,0,-1] = Q
out = fdm_t.fdm3t(gr, t, (Kr, Kr, Kz), Ss, FQ, FH, IBOUND)
# Get heads in lower layer
phi = out.Phi.reshape((len(t), gr.Nx, gr.Nz))
phi = out.Phi[:,0,:,-1]
# interpolate r on x-grid
up = np.interp(r, gr.xm, np.arange(gr.Nx))
u, iu = up - int(up), int(up)
# interpolate phi to get data exactly on x=r
phi_t = phi[:,iu] + u * (phi[:,iu+1] - phi[:,iu])
if not piezoms is None:
for p in piezoms:
lspec = p.color + p.linestyle + p.marker
ax.plot(p.t , p.s, lspec, label=p.name)
ax.plot(t, phi_t, 'b', linewidth=2, label='numerical')
ax.legend(loc='best', fontsize='small')
ax.set(xlabel='t [s]', ylabel='s [m]', xscale='log', yscale='log', ylim=(1e-2, 1.),
title = proj + ', analyzed by ' + author)
ax.grid(True)
return out
out1 = numMdl(*me , t, ax3, piezoms)
out2 = numMdl(*neuman, t, ax4, piezoms)
plt.show()
Explanation: In conclusion, the curves analysed here deviate little from the measurements. We had to approximate the steady-state drawdown without delayed yield to obtain a value for $ r/\lambda $. The value chosen was 0.22 m, whereas the value that follows from the results of Neuman would be 0.28 m, corresponding to the horizontal branch of the Hantush curves. Neuman's results do not seem to agree with the Hantush curve that is expected to match; it is, however, unclear what the reason for this difference is. It could be tested using a numerical model.
End of explanation |
13,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dataset
Initial data are read from an image, then n_data samples will be extracted from the data.
The image contains 200x200 = 40k pixels
We will extract 400k random points from the image and build a pandas.DataFrame
This mimics the sampling process of a spacecraft for example
Step1: Data Visualization
[It is a downsampled version of the dataset, the full version would take around 1 minute per plot to visualize...]
Does this dataset make sense for you? can you guess the original imgage? | Python Code:
image_df = pd.DataFrame(image.reshape(-1,image.shape[-1]),columns=['red','green','blue'])
image_df.describe()
n_data = image.reshape(-1,image.shape[-1]).shape[0]*10 # 10 times the original number of pixels : overkill!
x = np.random.random_sample(n_data)*image.shape[1]
y = np.random.random_sample(n_data)*image.shape[0]
data = pd.DataFrame({'x' : x, 'y' : y })
# extract the random points from the original image and add some noise
for index,name in zip(*(range(image.shape[-1]),['red','green','blue'])):
data[name] = image[data.y.astype(int),data.x.astype(int),index]+np.random.rand(n_data)*.1
data.describe().T
Explanation: Dataset
Initial data are read from an image, then n_data samples will be extracted from the data.
The image contains 200x200 = 40k pixels
We will extract 400k random points from the image and build a pandas.DataFrame
This mimics the sampling process of a spacecraft, for example: looking at a target (Earth or another body) and getting far more data points than you need to reconstruct a coherent representation.
Moreover, visualizing 400k x 3 columns of points is difficult, so we multibin the DataFrame into 200 bins in the x direction and 200 in the y direction, calculate the average for each bin, and return a 200x200 array of data as output.
The multibin.MultiBinnedDataFrame can generate as many dimensions as one likes; the 2D example here is just for ease of presentation.
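To see what the multibinning amounts to before using the dedicated object, here is a minimal stand-alone sketch with plain pandas/numpy (illustrative only; names such as demo and red_grid are not part of the multibin API):
# illustrative only: 2-D binning plus per-bin averaging with plain pandas
demo = pd.DataFrame({'x': np.random.rand(1000) * 200,
'y': np.random.rand(1000) * 200,
'red': np.random.rand(1000)})
demo['xbin'] = np.floor(demo['x']).astype(int)  # 200 unit-wide bins in x
demo['ybin'] = np.floor(demo['y']).astype(int)  # 200 unit-wide bins in y
red_grid = demo.groupby(['ybin', 'xbin'])['red'].mean().unstack()  # per-bin averages as a 2-D table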
End of explanation
pd.tools.plotting.scatter_matrix(data.sample(n=1000), alpha=0.5 , lw=0, figsize=(12, 12), diagonal='hist');
# Let's multibinning!
# functions we want to apply on the data in a single multidimensional bin:
aggregated_functions = {
'red' : {'elements' : len ,'average' : np.average},
'green' : {'average' : np.average},
'blue' : {'average' : np.average}
}
# the columns we want to have in output:
out_columns = ['red','green','blue']
# define the bins for the y and x coordinates
group_variables = collections.OrderedDict([
('y',mb.bingenerator({ 'start' : 0 ,'stop' : image.shape[0], 'n_bins' : image.shape[0]})),
('x',mb.bingenerator({ 'start' : 0 ,'stop' : image.shape[1], 'n_bins' : image.shape[1]}))
])
# I use OrderedDict to have fixed order, a normal dict is fine too.
# that is the object collecting all the data that define the multi binning
mbdf = mb.MultiBinnedDataFrame(binstocolumns = True,
dataframe = data,
group_variables = group_variables,
aggregated_functions = aggregated_functions,
out_columns = out_columns)
mbdf.MBDataFrame.describe().T
# reconstruct the multidimensional array defined by group_variables
outstring = []
for key,val in mbdf.group_variables.iteritems():
outstring.append('{} bins ({})'.format(val['n_bins'],key))
key = 'red_average'
print '{} array = {}'.format(key,' x '.join(outstring))
print
print mbdf.col_df_to_array(key)
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(figsize=[16,10], ncols=2, nrows=2)
cm = plt.get_cmap('jet')
key = 'red_elements'
imgplot = ax1.imshow(mbdf.col_df_to_array(key), cmap = cm,
interpolation='none',origin='lower')
plt.colorbar(imgplot, orientation='vertical', ax = ax1)
ax1.set_title('elements per bin')
ax1.grid(False)
key = 'red_average'
imgplot = ax2.imshow(mbdf.col_df_to_array(key), cmap = cm,
interpolation='none',origin='lower')
plt.colorbar(imgplot, orientation='vertical', ax = ax2)
ax2.set_title(key)
ax2.grid(False)
key = 'green_average'
imgplot = ax3.imshow(mbdf.col_df_to_array(key), cmap = cm,
interpolation='none',origin='lower')
plt.colorbar(imgplot, orientation='vertical', ax = ax3)
ax3.set_title(key)
ax3.grid(False)
key = 'blue_average'
imgplot = ax4.imshow(mbdf.col_df_to_array(key), cmap = cm,
interpolation='none',origin='lower')
plt.colorbar(imgplot, orientation='vertical', ax = ax4)
ax4.set_title(key)
ax4.grid(False)
rgb_image_dict = mbdf.all_df_to_array()
rgb_image = rgb_image_dict['red_average']
for name in ['green_average','blue_average']:
rgb_image = np.dstack((rgb_image,rgb_image_dict[name]))
fig, (ax1,ax2) = plt.subplots(figsize=[16,10], ncols=2)
ax1.imshow(255-rgb_image,interpolation='bicubic',origin='lower')
ax1.set_title('MultiBinnedDataFrame')
ax2.imshow(image ,interpolation='bicubic',origin='lower')
ax2.set_title('Original Image')
Explanation: Data Visualization
[It is a downsampled version of the dataset, the full version would take around 1 minute per plot to visualize...]
Does this dataset make sense to you? Can you guess the original image?
End of explanation |
13,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimal Example to Produce a Synthetic Light Curve
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Datasets
Now we'll create an empty lc dataset
Step3: Running Compute
Now we'll compute synthetics at the times provided using the default options
Step4: Plotting
Now we can simply plot the resulting synthetic light curve. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
Explanation: Minimal Example to Produce a Synthetic Light Curve
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,201), dataset='mylc')
Explanation: Adding Datasets
Now we'll create an empty lc dataset:
End of explanation
b.run_compute(irrad_method='none')
Explanation: Running Compute
Now we'll compute synthetics at the times provided using the default options
End of explanation
afig, mplfig = b['mylc@model'].plot(show=True)
afig, mplfig = b['mylc@model'].plot(x='phases', show=True)
Explanation: Plotting
Now we can simply plot the resulting synthetic light curve.
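If you also want the synthetic arrays themselves (e.g. to save them to disk), they can be pulled out of the model context - a small sketch, assuming the default twig names created by run_compute:
# access the synthetic arrays directly (sketch; twigs assumed from the model above)
times = b.get_value('times', dataset='mylc', context='model')
fluxes = b.get_value('fluxes', dataset='mylc', context='model')
print(times[:5], fluxes[:5])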
End of explanation |
13,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image features exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
Step1: Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
Step2: Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The hog_feature and color_histogram_hsv functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
Step3: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
Step4: Inline question 1 | Python Code:
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
from __future__ import print_function
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Image features exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.
All of your work for this exercise will be done in this notebook.
End of explanation
from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
Explanation: Load data
Similar to previous exercises, we will load CIFAR-10 data from disk.
End of explanation
from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
Explanation: Extract Features
For each image we will compute a Histogram of Oriented
Gradients (HOG) as well as a color histogram using the hue channel in HSV
color space. We form our final feature vector for each image by concatenating
the HOG and color histogram feature vectors.
Roughly speaking, HOG should capture the texture of the image while ignoring
color information, and the color histogram represents the color of the input
image while ignoring texture. As a result, we expect that using both together
ought to work better than using either alone. Verifying this assumption would
be a good thing to try for the bonus section.
The hog_feature and color_histogram_hsv functions both operate on a single
image and return a feature vector for that image. The extract_features
function takes a set of images and a list of feature functions and evaluates
each feature function on each image, storing the results in a matrix where
each column is the concatenation of all feature vectors for a single image.
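To make the color-histogram half of this concrete, here is a rough stand-alone sketch of a hue histogram (illustrative only - the course's color_histogram_hsv in cs231n/features.py is what is actually used below):
# illustrative sketch of a hue-histogram feature for one image; not the cs231n implementation
import matplotlib.colors as mcolors
hsv = mcolors.rgb_to_hsv(X_train[0] / 255.0)  # one CIFAR-10 image, scaled to 0..1
hue_hist, _ = np.histogram(hsv[..., 0], bins=10, range=(0, 1))  # bin the hue channel
hue_hist = hue_hist.astype(float) / hue_hist.sum()  # normalized bin counts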
End of explanation
# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]
results = {}
best_val = -1
best_svm = None
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
for lr in learning_rates:
for rg in regularization_strengths:
svm = LinearSVM()
svm.train(X_train_feats, y_train, learning_rate=lr, reg=rg,
num_iters=3000, verbose=False)
y_train_pred = svm.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = svm.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
if val_acc > best_val:
best_val = val_acc
best_svm = svm
results[(lr, rg)] = (train_acc, val_acc)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print(test_accuracy)
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
Explanation: Train SVM on features
Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
End of explanation
print(X_train_feats.shape)
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 1536
num_classes = 10
learning_rates = [5e-1]
regularization_strengths = [1e-3]
best_net = None
best_val = -1
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
for lr in learning_rates:
for rg in regularization_strengths:
nn = TwoLayerNet(input_dim, hidden_dim, num_classes)
nn.train(X_train_feats, y_train, X_val_feats, y_val,
num_iters=5000, batch_size=512,
learning_rate=lr, learning_rate_decay=0.9,
reg=rg, verbose=False)
y_train_pred = nn.predict(X_train_feats)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = nn.predict(X_val_feats)
val_acc = np.mean(y_val == y_val_pred)
if val_acc > best_val:
best_val = val_acc
best_net = nn
results[(lr, rg)] = (train_acc, val_acc)
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, rg, train_acc, val_acc))
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print(test_acc)
Explanation: Inline question 1:
Describe the misclassification results that you see. Do they make sense?
It seems that the misclassified pictures are similar to their predicted class in color, which makes sense given that part of the feature vector is a color histogram.
Neural Network on image features
Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels.
For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
End of explanation |
13,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI Pipelines
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the latest GA version of google-cloud-pipeline-components library as well.
Step3: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step4: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
Step5: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step6: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step7: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step8: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step9: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
Step13: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
Step14: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step15: Vertex AI Pipelines constants
Setup up the following constants for Vertex AI Pipelines
Step16: Additional imports.
Step17: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step18: Define custom model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
The experimental.run_as_aiplatform_custom_job method takes as arguments the previously defined component, and the list of worker_pool_specs— in this case one— with which the custom training job is configured.
Then, google_cloud_pipeline_components components are used to define the rest of the pipeline
Step19: Compile the pipeline
Next, compile the pipeline.
Step20: Run the pipeline
Next, run the pipeline.
Step21: Click on the generated link to see your run in the Cloud Console.
<!-- It should look something like this as it is running | Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI Pipelines: model train, upload, and deploy using google-cloud-pipeline-components
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_model_train_upload_deploy.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/blob/master/official/pipelines/google_cloud_pipeline_components_model_train_upload_deploy.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/blob/master/official/pipelines/google_cloud_pipeline_components_model_train_upload_deploy.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This notebook shows how to use the components defined in google_cloud_pipeline_components in conjunction with an experimental run_as_aiplatform_custom_job method, to build a Vertex AI Pipelines workflow that trains a custom model, uploads the model as a Model resource, creates an Endpoint resource, and deploys the Model resource to the Endpoint resource.
Dataset
The dataset used for this tutorial is Cloud Public Dataset Program London Bikes Rental combined with NOAA weather data
The dataset predicts the duration of the bike rental.
Objective
In this tutorial, you create a custom model using a pipeline with components from google_cloud_pipeline_components and a custom pipeline component you build.
In addition, you'll use the kfp.v2.google.experimental.run_as_aiplatform_custom_job method to train a custom model.
The steps performed include:
Trains a custom model.
Uploads the trained model as a Model resource.
Creates an Endpoint resource.
Deploys the Model resource to the Endpoint resource.
The components are documented here.
(From that page, see also the CustomPythonPackageTrainingJobRunOp and CustomContainerTrainingJobRunOp components, which similarly run 'custom' training, but as with the related google.cloud.aiplatform.CustomContainerTrainingJob and google.cloud.aiplatform.CustomPythonPackageTrainingJob methods from the Vertex AI SDK, also upload the trained model).
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.
Run jupyter notebook on the command line in a terminal shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
! pip3 install $USER kfp google-cloud-pipeline-components --upgrade
Explanation: Install the latest GA version of google-cloud-pipeline-components library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
! python3 -c "import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))"
Explanation: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
PIPELINE_ROOT = "{}/pipeline_root/bikes_weather".format(BUCKET_NAME)
Explanation: Vertex AI Pipelines constants
Setup up the following constants for Vertex AI Pipelines:
End of explanation
import kfp
from kfp.v2.dsl import component
Explanation: Additional imports.
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
hp_dict: str = '{"num_hidden_layers": 3, "hidden_size": 32, "learning_rate": 0.01, "epochs": 1, "steps_per_epoch": -1}'
data_dir: str = "gs://aju-dev-demos-codelabs/bikes_weather/"
TRAINER_ARGS = ["--data-dir", data_dir, "--hptune-dict", hp_dict]
# create working dir to pass to job spec
WORKING_DIR = f"{PIPELINE_ROOT}/{TIMESTAMP}"
MODEL_DISPLAY_NAME = f"train_deploy{TIMESTAMP}"
print(TRAINER_ARGS, WORKING_DIR, MODEL_DISPLAY_NAME)
@kfp.dsl.pipeline(name="train-endpoint-deploy" + TIMESTAMP)
def pipeline(
project: str = PROJECT_ID,
model_display_name: str = MODEL_DISPLAY_NAME,
serving_container_image_uri: str = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-3:latest",
):
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.custom_job import \
CustomTrainingJobOp
from google_cloud_pipeline_components.v1.endpoint import (EndpointCreateOp,
ModelDeployOp)
from google_cloud_pipeline_components.v1.model import ModelUploadOp
from kfp.v2.components import importer_node
custom_job_task = CustomTrainingJobOp(
project=project,
display_name="model-training",
worker_pool_specs=[
{
"containerSpec": {
"args": TRAINER_ARGS,
"env": [{"name": "AIP_MODEL_DIR", "value": WORKING_DIR}],
"imageUri": "gcr.io/google-samples/bw-cc-train:latest",
},
"replicaCount": "1",
"machineSpec": {
"machineType": "n1-standard-16",
"accelerator_type": aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
"accelerator_count": 2,
},
}
],
)
import_unmanaged_model_task = importer_node.importer(
artifact_uri=WORKING_DIR,
artifact_class=artifact_types.UnmanagedContainerModel,
metadata={
"containerSpec": {
"imageUri": "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-3:latest",
},
},
).after(custom_job_task)
model_upload_op = ModelUploadOp(
project=project,
display_name=model_display_name,
unmanaged_container_model=import_unmanaged_model_task.outputs["artifact"],
)
model_upload_op.after(import_unmanaged_model_task)
endpoint_create_op = EndpointCreateOp(
project=project,
display_name="pipelines-created-endpoint",
)
ModelDeployOp(
endpoint=endpoint_create_op.outputs["endpoint"],
model=model_upload_op.outputs["model"],
deployed_model_display_name=model_display_name,
dedicated_resources_machine_type="n1-standard-16",
dedicated_resources_min_replica_count=1,
dedicated_resources_max_replica_count=1,
)
Explanation: Define custom model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
The experimental.run_as_aiplatform_custom_job method takes as arguments the previously defined component, and the list of worker_pool_specs— in this case one— with which the custom training job is configured.
Then, google_cloud_pipeline_components components are used to define the rest of the pipeline: upload the model, create an endpoint, and deploy the model to the endpoint.
Note: While not shown in this example, the model deploy will create an endpoint if one is not provided.
End of explanation
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="tabular regression_pipeline.json".replace(" ", "_"),
)
Explanation: Compile the pipeline
Next, compile the pipeline.
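If you want to sanity-check the compiled job before submitting it, you can peek at the generated JSON (a minimal sketch using only the standard library; the exact keys depend on your KFP version):
import json
spec = json.load(open("tabular_regression_pipeline.json"))
print(list(spec.keys()))  # top-level keys of the compiled pipeline spec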
End of explanation
DISPLAY_NAME = "bikes_weather_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="tabular regression_pipeline.json".replace(" ", "_"),
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
job.run()
! rm tabular_regression_pipeline.json
Explanation: Run the pipeline
Next, run the pipeline.
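job.run() blocks until the pipeline finishes. If you prefer to submit and poll instead, recent releases of google-cloud-aiplatform also expose a non-blocking variant (left commented here as a sketch; availability depends on the installed SDK version):
# non-blocking alternative (recent google-cloud-aiplatform releases):
# job.submit()       # returns immediately instead of blocking
# print(job.state)   # poll the current pipeline state
# job.wait()         # block later, once you are ready to wait for completion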
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
try:
if delete_model and "DISPLAY_NAME" in globals():
models = aip.Model.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
model = models[0]
aip.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
try:
if delete_endpoint and "DISPLAY_NAME" in globals():
endpoints = aip.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
aip.Endpoint.delete(endpoint.resource_name)
print("Deleted endpoint:", endpoint)
except Exception as e:
print(e)
if delete_dataset and "DISPLAY_NAME" in globals():
if "tabular" == "tabular":
try:
datasets = aip.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TabularDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "image":
try:
datasets = aip.ImageDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.ImageDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "text":
try:
datasets = aip.TextDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TextDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "video":
try:
datasets = aip.VideoDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.VideoDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Click on the generated link to see your run in the Cloud Console.
<!-- It should look something like this as it is running:
<a href="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" width="40%"/></a> -->
In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).
<a href="https://storage.googleapis.com/amy-jo/images/mp/train_endpoint_deploy.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/train_endpoint_deploy.png" width="75%"/></a>
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial -- Note: this is auto-generated and not all resources may be applicable for this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
13,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finite Time of Integration (fti)
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually.
Step3: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary.
Step4: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute().
Step5: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5.
Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled.
Step6: Influence on Light Curves
Step7: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Finite Time of Integration (fti)
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
print(b['exptime'])
Explanation: Relevant Parameters
An 'exptime' parameter exists for each lc dataset and is set to 0.0 by default. This defines the exposure time that should be used when fti is enabled. As stated in its description, the time stamp of each datapoint is defined to be the time of mid-exposure. Note that the exptime applies to all times in the dataset - if times have different exposure-times, then they must be split into separate datasets manually.
End of explanation
b['exptime'] = 1, 'hr'
Explanation: Let's set the exposure time to 1 hr to make the convolution obvious in our 1-day default binary.
End of explanation
print(b['fti_method'])
b['fti_method'] = 'oversample'
Explanation: An 'fti_method' parameter exists for each set of compute options and each lc dataset. By default this is set to 'none' - meaning that the exposure times are ignored during b.run_compute().
End of explanation
print(b['fti_oversample'])
Explanation: Once we set fti_method to be 'oversample', the corresponding 'fti_oversample' parameter(s) become visible. This option defines how many different time-points PHOEBE should sample over the width of the exposure time and then average to return a single flux point. By default this is set to 5.
Note that increasing this number will result in better accuracy of the convolution caused by the exposure time - but increases the computation time essentially linearly. By setting to 5, our computation time will already be almost 5 times that when fti is disabled.
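For a final high-accuracy run you can override the oversampling at compute time, mirroring the override pattern used for fti_method in the compute calls that follow (the value 11 is purely illustrative, and the line is left commented to avoid an extra expensive run):
# b.run_compute(fti_method='oversample', fti_oversample=11, irrad_method='none', model='fti_fine')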
End of explanation
b.run_compute(fti_method='none', irrad_method='none', model='fti_off')
b.run_compute(fti_method='oversample', irrad_method='none', model='fti_on')
Explanation: Influence on Light Curves
End of explanation
afig, mplfig = b.plot(show=True, legend=True)
Explanation: The phase-smearing (convolution) caused by the exposure time is most evident in areas of the light curve with sharp derivatives, where the flux changes significantly over the course of the single exposure. Here we can see that the 1-hr exposure time significantly changes the observed shapes of ingress and egress as well as the observed depth of the eclipse.
End of explanation |
13,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Multiple Risk Factors
The example is based on multiple, correlated risk factors, all (for ease of exposition) geometric_brownian_motion objects.
Step2: Using 2,500 paths and monthly discretization for the example.
Step3: Options Modeling
We model a certain number of derivative instruments with the following major assumptions.
Step4: Portfolio Modeling
The derivatives_portfolio object we compose consists of multiple derivatives positions. Each option differs with respect to the strike and the risk factor it is dependent on.
Step5: Portfolio Valuation
First, the derivatives portfolio with sequential valuation.
Step6: The call of the get_values method to value all instruments.
Step7: Risk Analysis
Full distribution of portfolio present values illustrated via histogram.
Step8: Some statistics via pandas.
Step9: The delta risk report.
Step10: The vega risk report.
Step11: Visualization of Results
Selected results visualized.
Step12: Sample paths for three underlyings. | Python Code:
from dx import *
import time
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
%matplotlib inline
np.random.seed(10000)
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Quite Complex Portfolios
This part illustrates that you can model, value and risk manage quite complex derivatives portfolios with DX Analytics.
End of explanation
mer = market_environment(name='me', pricing_date=dt.datetime(2015, 1, 1))
mer.add_constant('initial_value', 0.01)
mer.add_constant('volatility', 0.1)
mer.add_constant('kappa', 2.0)
mer.add_constant('theta', 0.05)
mer.add_constant('paths', 100) # dummy
mer.add_constant('frequency', 'M') # dummy
mer.add_constant('starting_date', mer.pricing_date)
mer.add_constant('final_date', dt.datetime(2015, 12, 31)) # dummy
ssr = stochastic_short_rate('ssr', mer)
plt.figure(figsize=(10, 6))
plt.plot(ssr.process.time_grid, ssr.process.get_instrument_values()[:, :10]);
plt.gcf().autofmt_xdate()
# market environments
me = market_environment('gbm', dt.datetime(2015, 1, 1))
# geometric Brownian motion
me.add_constant('initial_value', 36.)
me.add_constant('volatility', 0.2)
me.add_constant('currency', 'EUR')
# jump diffusion
me.add_constant('lambda', 0.4)
me.add_constant('mu', -0.4)
me.add_constant('delta', 0.2)
# stochastic volatility
me.add_constant('kappa', 2.0)
me.add_constant('theta', 0.3)
me.add_constant('vol_vol', 0.5)
me.add_constant('rho', -0.5)
Explanation: Multiple Risk Factors
The example is based on multiple, correlated risk factors, all (for ease of exposition) geometric_brownian_motion objects.
End of explanation
# valuation environment
val_env = market_environment('val_env', dt.datetime(2015, 1, 1))
val_env.add_constant('paths', 2500)
val_env.add_constant('frequency', 'M')
val_env.add_curve('discount_curve', ssr)
val_env.add_constant('starting_date', dt.datetime(2015, 1, 1))
val_env.add_constant('final_date', dt.datetime(2016, 12, 31))
# add valuation environment to market environments
me.add_environment(val_env)
no = 50 # 50 different risk factors in total
risk_factors = {}
for rf in range(no):
# random model choice
sm = np.random.choice(['gbm', 'jd', 'sv'])
key = '%3d_%s' % (rf + 1, sm)
risk_factors[key] = market_environment(key, me.pricing_date)
risk_factors[key].add_environment(me)
# random initial_value
risk_factors[key].add_constant('initial_value',
np.random.random() * 40. + 20.)
# random volatility
risk_factors[key].add_constant('volatility',
np.random.random() * 0.6 + 0.05)
# the simulation model to choose
risk_factors[key].add_constant('model', sm)
correlations = []
keys = sorted(risk_factors.keys())
for key in keys[1:]:
correlations.append([keys[0], key, np.random.choice([-0.1, 0.0, 0.1])])
correlations[:3]
Explanation: Using 2,500 paths and monthly discretization for the example.
End of explanation
me_option = market_environment('option', me.pricing_date)
# choose from a set of maturity dates (month ends)
maturities = pd.date_range(start=me.pricing_date,
end=val_env.get_constant('final_date'),
freq='M').to_pydatetime()
me_option.add_constant('maturity', np.random.choice(maturities))
me_option.add_constant('currency', 'EUR')
me_option.add_environment(val_env)
Explanation: Options Modeling
We model a certain number of derivative instruments with the following major assumptions.
End of explanation
# 5 times the number of risk factors
# as portfolio positions/instruments
pos = 5 * no
positions = {}
for i in range(pos):
ot = np.random.choice(['am_put', 'eur_call'])
if ot == 'am_put':
otype = 'American single'
payoff_func = 'np.maximum(%5.3f - instrument_values, 0)'
else:
otype = 'European single'
payoff_func = 'np.maximum(maturity_value - %5.3f, 0)'
# random strike
strike = np.random.randint(36, 40)
underlying = sorted(risk_factors.keys())[(i + no) % no]
name = '%d_option_pos_%d' % (i, strike)
positions[name] = derivatives_position(
name=name,
quantity=np.random.randint(1, 10),
underlyings=[underlying],
mar_env=me_option,
otype=otype,
payoff_func=payoff_func % strike)
# number of derivatives positions
len(positions)
Explanation: Portfolio Modeling
The derivatives_portfolio object we compose consists of multiple derivatives positions. Each option differs with respect to the strike and the risk factor it is dependent on.
End of explanation
port = derivatives_portfolio(
name='portfolio',
positions=positions,
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
parallel=False) # sequential calculation
port.val_env.get_list('cholesky_matrix')
Explanation: Portfolio Valuation
First, the derivatives portfolio with sequential valuation.
End of explanation
%time res = port.get_statistics(fixed_seed=True)
res.set_index('position', inplace=False)
Explanation: The call of the get_values method to value all instruments.
End of explanation
%time pvs = port.get_present_values()
plt.figure(figsize=(10, 6))
plt.hist(pvs, bins=30);
plt.xlabel('portfolio present values')
plt.ylabel('frequency')
Explanation: Risk Analysis
Full distribution of portfolio present values illustrated via histogram.
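Beyond the histogram, simple tail statistics can be read off the same array (a minimal sketch; the 5% level is an arbitrary illustrative quantile):
# simple tail statistics of the simulated portfolio present values
pv_mean = np.mean(pvs)
pv_5pct = np.percentile(pvs, 5)  # 5% quantile of the present value distribution
print(pv_mean, pv_5pct, pv_mean - pv_5pct)  # mean, 5% quantile, distance to the mean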
End of explanation
pdf = pd.DataFrame(pvs)
pdf.describe()
Explanation: Some statistics via pandas.
End of explanation
%%time
deltas, benchmark = port.get_port_risk(Greek='Delta', fixed_seed=True, step=0.2,
risk_factors=risk_factors.keys()[:4])
risk_report(deltas)
Explanation: The delta risk report.
End of explanation
%%time
vegas, benchmark = port.get_port_risk(Greek='Vega', fixed_seed=True, step=0.2,
risk_factors=risk_factors.keys()[:3])
risk_report(vegas)
Explanation: The vega risk report.
End of explanation
res[['pos_value', 'pos_delta', 'pos_vega']].hist(bins=30, figsize=(9, 6))
plt.ylabel('frequency')
Explanation: Visualization of Results
Selected results visualized.
End of explanation
paths_0 = port.underlying_objects.values()[0]
paths_0.generate_paths()
paths_1 = port.underlying_objects.values()[1]
paths_1.generate_paths()
paths_2 = port.underlying_objects.values()[2]
paths_2.generate_paths()
pa = 5
plt.figure(figsize=(10, 6))
plt.plot(port.time_grid, paths_0.instrument_values[:, :pa], 'b');
print 'Paths for %s (blue)' % paths_0.name
plt.plot(port.time_grid, paths_1.instrument_values[:, :pa], 'r.-');
print 'Paths for %s (red)' % paths_1.name
plt.plot(port.time_grid, paths_2.instrument_values[:, :pa], 'g-.', lw=2.5);
print 'Paths for %s (green)' % paths_2.name
plt.ylabel('risk factor level')
plt.gcf().autofmt_xdate()
Explanation: Sample paths for three underlyings.
End of explanation |
13,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flax Imagenet Example
<a href="https
Step3: Imports / Helpers
Step7: Dataset
Step8: Training from scratch
Step9: Load pre-trained model
Step10: Inference | Python Code:
# Install ml-collections & latest Flax version from Github.
!pip install -q clu ml-collections git+https://github.com/google/flax
example_directory = 'examples/imagenet'
editor_relpaths = ('configs/default.py', 'input_pipeline.py', 'models.py', 'train.py')
repo, branch = 'https://github.com/google/flax', 'main'
# (If you run this code in Jupyter[lab], then you're already in the
# example directory and nothing needs to be done.)
#@markdown **Fetch newest Flax, copy example code**
#@markdown
#@markdown **If you select no** below, then the files will be stored on the
#@markdown *ephemeral* Colab VM. **After some time of inactivity, this VM will
#@markdown be restarted an any changes are lost**.
#@markdown
#@markdown **If you select yes** below, then you will be asked for your
#@markdown credentials to mount your personal Google Drive. In this case, all
#@markdown changes you make will be *persisted*, and even if you re-run the
#@markdown Colab later on, the files will still be the same (you can of course
#@markdown remove directories inside your Drive's `flax/` root if you want to
#@markdown manually revert these files).
if 'google.colab' in str(get_ipython()):
import os
os.chdir('/content')
# Download Flax repo from Github.
if not os.path.isdir('flaxrepo'):
!git clone --depth=1 -b $branch $repo flaxrepo
# Copy example files & change directory.
mount_gdrive = 'no' #@param ['yes', 'no']
if mount_gdrive == 'yes':
DISCLAIMER = 'Note : Editing in your Google Drive, changes will persist.'
from google.colab import drive
drive.mount('/content/gdrive')
example_root_path = f'/content/gdrive/My Drive/flax/{example_directory}'
else:
DISCLAIMER = 'WARNING : Editing in VM - changes lost after reboot!!'
example_root_path = f'/content/{example_directory}'
from IPython import display
display.display(display.HTML(
f'<h1 style="color:red;" class="blink">{DISCLAIMER}</h1>'))
if not os.path.isdir(example_root_path):
os.makedirs(example_root_path)
!cp -r flaxrepo/$example_directory/* "$example_root_path"
os.chdir(example_root_path)
from google.colab import files
for relpath in editor_relpaths:
s = open(f'{example_root_path}/{relpath}').read()
open(f'{example_root_path}/{relpath}', 'w').write(
f'## {DISCLAIMER}\n' + '#' * (len(DISCLAIMER) + 3) + '\n\n' + s)
files.view(f'{example_root_path}/{relpath}')
# Note : In Colab, the above cell changed the working directory.
!pwd
Explanation: Flax Imagenet Example
<a href="https://colab.research.google.com/github/google/flax/blob/main/examples/imagenet/imagenet.ipynb" ><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Demonstration notebook for
https://github.com/google/flax/tree/main/examples/imagenet
The Flax Notebook Workflow:
Run the entire notebook end-to-end and check out the outputs.
This will open Python files in the right-hand editor!
You'll be able to interactively explore metrics in TensorBoard.
Change config and train for different hyperparameters. Check out the
updated TensorBoard plots.
Update the code in train.py. Thanks to %autoreload, any changes you
make in the file will automatically appear in the notebook. Some ideas to
get you started:
Change the model.
Log some per-batch metrics during training.
Add new hyperparameters to configs/default.py and use them in
train.py.
At any time, feel free to paste code from train.py into the notebook
and modify it directly there!
Setup
End of explanation
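As a quick illustration of the last point, the hyperparameters for this example live in an ml_collections.ConfigDict. The sketch below is only a minimal, hypothetical pattern (the field my_new_param does not exist in the real config); the actual fields consumed by train.py are defined in configs/default.py.
import ml_collections

def get_sketch_config():
    # Minimal sketch of a config object; the real one is built in configs/default.py.
    config = ml_collections.ConfigDict()
    config.learning_rate = 0.1
    config.my_new_param = 0.5  # hypothetical new hyperparameter, for illustration only
    return config

cfg = get_sketch_config()
print(cfg.learning_rate, cfg.my_new_param)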
# TPU setup : Boilerplate for connecting JAX to TPU.
import os
if 'google.colab' in str(get_ipython()) and 'COLAB_TPU_ADDR' in os.environ:
# Make sure the Colab Runtime is set to Accelerator: TPU.
import requests
if 'TPU_DRIVER_MODE' not in globals():
url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206'
resp = requests.post(url)
TPU_DRIVER_MODE = 1
# The following is required to use TPU Driver as JAX's backend.
from jax.config import config
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR']
print('Registered TPU:', config.FLAGS.jax_backend_target)
else:
print('No TPU detected. Can be changed under "Runtime/Change runtime type".')
import json
from absl import logging
import flax
import jax
import jax.numpy as jnp
from matplotlib import pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
logging.set_verbosity(logging.INFO)
assert len(jax.devices()) == 8, f'Expected 8 TPU cores : {jax.devices()}'
# Helper functions for images.
def show_img(img, ax=None, title=None):
  """Shows a single image."""
if ax is None:
ax = plt.gca()
img *= tf.constant(input_pipeline.STDDEV_RGB, shape=[1, 1, 3], dtype=img.dtype)
img += tf.constant(input_pipeline.MEAN_RGB, shape=[1, 1, 3], dtype=img.dtype)
img = np.clip(img.numpy().astype(int), 0, 255)
ax.imshow(img)
ax.set_xticks([])
ax.set_yticks([])
if title:
ax.set_title(title)
def show_img_grid(imgs, titles):
  """Shows a grid of images."""
n = int(np.ceil(len(imgs)**.5))
_, axs = plt.subplots(n, n, figsize=(3 * n, 3 * n))
for i, (img, title) in enumerate(zip(imgs, titles)):
show_img(img, axs[i // n][i % n], title)
# Local imports from current directory - auto reload.
# Any changes you make to train.py will appear automatically.
%load_ext autoreload
%autoreload 2
import input_pipeline
import models
import train
from configs import default as config_lib
Explanation: Imports / Helpers
End of explanation
# We load "imagenette" that has similar pictures to "imagenet2012" but can be
# downloaded automatically and is much smaller.
dataset_builder = tfds.builder('imagenette')
dataset_builder.download_and_prepare()
ds = dataset_builder.as_dataset('train')
dataset_builder.info
# Utilities to help with Imagenette labels.
![ ! -f mapping_imagenet.json ] && wget --no-check-certificate https://raw.githubusercontent.com/ozendelait/wordnet-to-json/master/mapping_imagenet.json
with open('mapping_imagenet.json') as f:
mapping_imagenet = json.load(f)
# Mapping imagenette label name to imagenet label index.
imagenette_labels = {
d['v3p0']: d['label']
for d in mapping_imagenet
}
# Mapping imagenette label name to human-readable label.
imagenette_idx = {
d['v3p0']: idx
for idx, d in enumerate(mapping_imagenet)
}
def imagenette_label(idx):
  """Returns a short human-readable string for provided imagenette index."""
net = dataset_builder.info.features['label'].int2str(idx)
return imagenette_labels[net].split(',')[0]
def imagenette_imagenet2012(idx):
  """Returns the imagenet2012 index for provided imagenette index."""
net = dataset_builder.info.features['label'].int2str(idx)
return imagenette_idx[net]
def imagenet2012_label(idx):
  """Returns a short human-readable string for provided imagenet2012 index."""
return mapping_imagenet[idx]['label'].split(',')[0]
train_ds = input_pipeline.create_split(
dataset_builder, 128, train=True,
)
eval_ds = input_pipeline.create_split(
dataset_builder, 128, train=False,
)
train_batch = next(iter(train_ds))
{k: (v.shape, v.dtype) for k, v in train_batch.items()}
Explanation: Dataset
End of explanation
# Get a live update during training - use the "refresh" button!
# (In Jupyter[lab] start "tensorboard" in the local directory instead.)
if 'google.colab' in str(get_ipython()):
%load_ext tensorboard
%tensorboard --logdir=.
config = config_lib.get_config()
config.dataset = 'imagenette'
config.model = 'ResNet18'
config.half_precision = True
batch_size = 512
config.learning_rate *= batch_size / config.batch_size
config.batch_size = batch_size
config
# Regenerate datasets with updated batch_size.
train_ds = input_pipeline.create_split(
dataset_builder, config.batch_size, train=True,
)
eval_ds = input_pipeline.create_split(
dataset_builder, config.batch_size, train=False,
)
# Takes ~1.5 min / epoch.
for num_epochs in (5, 10):
config.num_epochs = num_epochs
config.warmup_epochs = config.num_epochs / 10
name = f'{config.model}_{config.learning_rate}_{config.num_epochs}'
print(f'\n\n{name}')
state = train.train_and_evaluate(config, workdir=f'./models/{name}')
if 'google.colab' in str(get_ipython()):
  #@markdown You can upload the training results directly to https://tensorboard.dev
#@markdown
  #@markdown Note that everybody with the link will be able to see the data.
upload_data = 'no' #@param ['yes', 'no']
if upload_data == 'yes':
    !tensorboard dev upload --one_shot --logdir ./models --name 'Flax examples/imagenet'
Explanation: Training from scratch
End of explanation
# Load model checkpoint from cloud.
from flax.training import checkpoints
config_name = 'v100_x8'
pretrained_path = f'gs://flax_public/examples/imagenet/{config_name}'
latest_checkpoint = checkpoints.natural_sort(
    tf.io.gfile.glob(f'{pretrained_path}/checkpoint_*'))[-1]
if not os.path.exists(os.path.basename(latest_checkpoint)):
tf.io.gfile.copy(latest_checkpoint, os.path.basename(latest_checkpoint))
!ls -lh checkpoint_*
# Load config that was used to train checkpoint.
import importlib
config = importlib.import_module(f'configs.{config_name}').get_config()
# Load models & state (takes ~1 min to load the model).
model_cls = getattr(models, config.model)
model = train.create_model(
model_cls=model_cls, half_precision=config.half_precision)
base_learning_rate = config.learning_rate * config.batch_size / 256.
steps_per_epoch = (
dataset_builder.info.splits['train'].num_examples // config.batch_size
)
learning_rate_fn = train.create_learning_rate_fn(
config, base_learning_rate, steps_per_epoch)
state = train.create_train_state(
jax.random.PRNGKey(0), config, model, image_size=input_pipeline.IMAGE_SIZE,
learning_rate_fn=learning_rate_fn)
state = train.restore_checkpoint(state, './')
Explanation: Load pre-trained model
End of explanation
# Load batch from imagenette eval set.
batch = next(iter(eval_ds))
{k: v.shape for k, v in batch.items()}
# Evaluate using model trained on imagenet.
logits = model.apply({'params': state.params, 'batch_stats': state.batch_stats}, batch['image'][:128], train=False)
# Find classification mistakes.
preds_labels = list(zip(logits.argmax(axis=-1), map(imagenette_imagenet2012, batch['label'])))
error_idxs = [idx for idx, (pred, label) in enumerate(preds_labels) if pred != label]
error_idxs
# The mistakes look all quite reasonable.
show_img_grid(
[batch['image'][idx] for idx in error_idxs[:9]],
[f'pred: {imagenet2012_label(preds_labels[idx][0])}\n'
f'label: {imagenet2012_label(preds_labels[idx][1])}'
for idx in error_idxs[:9]],
)
plt.tight_layout()
# Define parallelized inference function in separate cell so the the cached
# compilation can be used if below cell is executed multiple times.
@jax.pmap
def p_get_logits(images):
return model.apply({'params': state.params, 'batch_stats': state.batch_stats},
images, train=False)
eval_iter = train.create_input_iter(dataset_builder, config.batch_size,
input_pipeline.IMAGE_SIZE, tf.float32,
train=False, cache=True)
# Compute accuracy.
eval_steps = dataset_builder.info.splits['validation'].num_examples // config.batch_size
count = correct = 0
for step, batch in zip(range(eval_steps), eval_iter):
labels = [imagenette_imagenet2012(label) for label in batch['label'].flatten()]
logits = p_get_logits(batch['image'])
logits = logits.reshape([-1, logits.shape[-1]])
print(f'Step {step+1}/{eval_steps}...')
count += len(labels)
correct += (logits.argmax(axis=-1) == jnp.array(labels)).sum()
correct / count
Explanation: Inference
End of explanation |
13,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Note
Step2: Lesson
Step3: Project 1
Step4: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
Step5: TODO
Step6: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
Step7: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO
Step8: Examine the ratios you've calculated for a few words
Step9: Looking closely at the values you just calculated, we see the following
Step10: Examine the new ratios you've calculated for the same words from before
Step11: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with postive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
Step12: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
Step13: Project 2
Step14: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
Step15: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
Step16: TODO
Step17: Run the following cell. It should display (1, 74074)
Step18: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
Step20: TODO
Step21: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
Step23: TODO
Step24: Run the following two cells. They should print out'POSITIVE' and 1, respectively.
Step25: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
Step29: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3
Step30: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
Step31: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
Step32: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
Step33: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
Step34: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
Step35: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step36: Project 4
Step37: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
Step38: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
Step42: Project 5
Step43: Run the following cell to recreate the network and train it once again.
Step44: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
Step45: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
Step46: Project 6
Step47: Run the following cell to train your network with a small polarity cutoff.
Step48: And run the following cell to test its performance. It should be
Step49: Run the following cell to train your network with a much larger polarity cutoff.
Step50: And run the following cell to test its performance.
Step51: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem" (this lesson)
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network (video only - nothing in notebook)
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset<a id='lesson_1'></a>
The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything.
End of explanation
len(reviews)
reviews[0]
labels[0]
Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
End of explanation
from collections import Counter
import numpy as np
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
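If Counter is new to you, here is a tiny self-contained illustration of the two operations the cells below rely on: incrementing counts (missing keys default to 0) and listing the most common entries. The toy sentence is made up purely for this example.
from collections import Counter

toy_counts = Counter()
for word in "the movie was great and the acting was great".split(" "):
    toy_counts[word] += 1           # Counter returns 0 for unseen keys, so += just works
print(toy_counts.most_common(3))    # e.g. [('the', 2), ('was', 2), ('great', 2)]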
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
Explanation: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
End of explanation
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for word in total_counts.keys():
if(total_counts[word]>100):
pos_neg_ratios[word]=positive_counts[word]/ float(negative_counts[word]+1)
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
End of explanation
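A quick worked example of the hint above, using made-up counts rather than the real ones from the dataset:
pos_count, neg_count = 120, 29              # pretend usage counts for one word
ratio = pos_count / float(neg_count + 1)    # the +1 guards against division by zero
print(ratio)                                # 4.0 -> the word leans strongly positive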
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
# TODO: Convert ratios to logs
for word in pos_neg_ratios:
pos_neg_ratios[word]=np.log(pos_neg_ratios[word])
Explanation: Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
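To see why the logarithm helps, compare a word used 4 times more often in positive reviews with one used 4 times more often in negative reviews (illustrative numbers only):
import numpy as np

print(np.log(4.0))     # ~ 1.386, a positive-leaning word
print(np.log(0.25))    # ~ -1.386, a negative-leaning word with the same magnitude
print(np.log(1.0))     # 0.0, a perfectly neutral word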
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios.
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything.
End of explanation
# TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = set(total_counts.keys())
Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary.
End of explanation
vocab_size = len(vocab)
print(vocab_size)
Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074
End of explanation
from IPython.display import Image
Image(filename='sentiment_network_2.png')
Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer.
End of explanation
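In terms of array shapes, the three layers in that image correspond to something like the sketch below (the hidden size of 10 is just the default used later; the real arrays are created in the following cells):
import numpy as np

sketch_vocab_size, sketch_hidden_nodes = 74074, 10
sketch_layer_0 = np.zeros((1, sketch_vocab_size))                        # input: one slot per vocabulary word
sketch_weights_0_1 = np.zeros((sketch_vocab_size, sketch_hidden_nodes))  # input -> hidden
sketch_weights_1_2 = np.zeros((sketch_hidden_nodes, 1))                  # hidden -> single output node
print(sketch_layer_0.shape, sketch_weights_0_1.shape, sketch_weights_1_2.shape)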
# TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = np.zeros((1,vocab_size))
Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns.
End of explanation
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
Explanation: Run the following cell. It should display (1, 74074)
End of explanation
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
End of explanation
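The same enumerate pattern on a toy vocabulary, just to make the direction of the lookup clear (word to column index, not the other way around):
toy_vocab = ['the', 'movie', 'was', 'great']
toy_word2index = {word: i for i, word in enumerate(toy_vocab)}
print(toy_word2index['great'])   # 3, i.e. the column of the input layer that 'great' would use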
def update_input_layer(review):
    """Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]]+=1
# TODO: count how many times each word is used in the given review and store the results in layer_0
Explanation: TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
End of explanation
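On a toy vocabulary the counting behaviour you are asked to implement looks like this; the real function writes into the global layer_0 using word2index instead of these toy stand-ins.
import numpy as np

toy_vocab = ['the', 'movie', 'was', 'great']
toy_index = {w: i for i, w in enumerate(toy_vocab)}
toy_layer = np.zeros((1, len(toy_vocab)))
for word in "the movie was great the great".split(" "):
    toy_layer[0][toy_index[word]] += 1
print(toy_layer)   # [[2. 1. 1. 2.]]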
update_input_layer(reviews[0])
layer_0
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
def get_target_for_label(label):
    """Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
# TODO: Your code here
if(label=='POSITIVE'):
return 1
else:
return 0
Explanation: TODO: Complete the implementation of get_target_for_label. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
labels[0]
get_target_for_label(labels[0])
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
labels[1]
get_target_for_label(labels[1])
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
End of explanation
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set(label for label in labels)
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
        for i, word in enumerate(self.review_vocab):
            self.word2index[word] = i
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
# Create a dictionary of labels mapped to index positions
self.label2index = {}
        for i, label in enumerate(self.label_vocab):
            self.label2index[label] = i
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes,hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
self.layer_0 *= 0
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
for word in review.split(" "):
if(word in self.word2index.keys()):
                self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
        # TODO: Return the derivative of the sigmoid activation function,
        #       where "output" is the original output from the sigmoid function
return output*(1-output)
def train(self, training_reviews, training_labels):
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review=training_reviews[i]
label=training_labels[i]
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
hidden_input=np.dot(self.layer_0,self.weights_0_1)
finalLayer_input=np.dot(hidden_input,self.weights_1_2)
finalLayer_output=self.sigmoid(finalLayer_input)
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
            error = finalLayer_output - self.get_target_for_label(label)  # output minus target, so the -= updates below perform gradient descent
output_error_term=error*self.sigmoid_output_2_derivative(finalLayer_output)
hidden_error_term=output_error_term*self.weights_1_2
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
#self.weights_1_2 += self.learning_rate*(output_error_term*hidden_input)
self.weights_1_2 -= self.learning_rate*(output_error_term*hidden_input.T)
self.weights_0_1 -= self.learning_rate*(hidden_error_term*self.layer_0).T
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
if(finalLayer_output >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(finalLayer_output < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
        """Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
        """Returns a POSITIVE or NEGATIVE prediction for the given review."""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
#hidden_input=self.layer_0.dot(self.weights_0_1)
hidden_input=np.dot(self.layer_0,self.weights_0_1)
finalLayer_input=np.dot(hidden_input,self.weights_1_2)
finalLayer_output=self.sigmoid(finalLayer_input)
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
if(finalLayer_output[0]>0.5):
return 'POSITIVE'
else:
return 'NEGATIVE'
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following (a minimal forward-pass sketch appears after this cell):
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
End of explanation
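For reference while working through the list above, here is a minimal sketch of the forward pass the class is meant to implement: no activation on the hidden layer and a sigmoid on the single output node. The shapes assume one review at a time; the class stores the same matrices as self.weights_0_1 and self.weights_1_2.
import numpy as np

def sketch_forward_pass(layer_0, weights_0_1, weights_1_2):
    # layer_0: (1, vocab_size), weights_0_1: (vocab_size, hidden), weights_1_2: (hidden, 1)
    layer_1 = layer_0.dot(weights_0_1)                     # hidden layer, no non-linearity
    layer_2 = 1 / (1 + np.exp(-layer_1.dot(weights_1_2)))  # sigmoid output in (0, 1)
    return layer_1, layer_2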
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
End of explanation
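The change described above amounts to one line. It is shown here on the earlier global-variable version of the function for clarity (hypothetical name update_input_layer_binary); the project asks you to make the equivalent change inside the class instead.
def update_input_layer_binary(review):
    global layer_0
    layer_0 *= 0
    for word in review.split(" "):
        layer_0[0][word2index[word]] = 1   # record presence/absence instead of a count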
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
Explanation: End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
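A small check that ties the two cells above together: the full dot product with the mostly-zero layer_0 and the shortcut of adding only the selected weight rows produce the same hidden layer (this reuses layer_0, weights_0_1 and indices from the cell above).
full = layer_0.dot(weights_0_1)        # dense multiplication, touches every row
shortcut = np.zeros(5)
for index in indices:                  # indices = [4, 9] from the cell above
    shortcut += weights_0_1[index]     # only the non-zero rows contribute
print(np.allclose(full, shortcut))     # True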
# TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews_raw, training_labels):
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews_raw) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews_raw)):
# Get the next review and its correct label
review = training_reviews_raw[i]
label = training_labels[i]
training_review=set()
for word in review.split(" "):
training_review.add(self.word2index[word])
training_review=list(training_review)
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
# Hidden layer
self.layer_1 *= 0
for indices in training_review:
self.layer_1 += self.weights_0_1[indices]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate
# update hidden-to-output weights with gradient descent step
for indices in training_review:
self.weights_0_1[indices] -= self.learning_rate *layer_1_delta[0]
#self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate
# update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews_raw)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def train1(self, training_reviews_raw, training_labels):
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
        """Attempts to predict the labels for the given testing_reviews,
        and uses the test_labels to calculate the accuracy of those predictions.
        """
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
        """Returns a POSITIVE or NEGATIVE prediction for the given review."""
'''
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.layer_1 *= 0
reviewIndices=set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
reviewIndices.add(self.word2index[word])
# Hidden layer
for indices in reviewIndices:
self.layer_1 += self.weights_0_1[indices]
# Output layer
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values above greater-than-or-equal-to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
'''
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values above greater-than-or-equal-to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
  * You no longer need a separate input layer, so remove any mention of self.layer_0
  * You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
* Modify train:
  * Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
  * At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review. (A short sketch of this preprocessing step follows this list.)
  * Remove the call to update_input_layer
  * Use self's layer_1 instead of a local layer_1 object.
  * In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
  * When updating weights_0_1, only update the individual weights that were used in the forward pass.
* Modify run:
  * Remove the call to update_input_layer
  * Use self's layer_1 instead of a local layer_1 object.
  * Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
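For reference, here is one possible shape for that preprocessing step. This is a minimal sketch, not the official solution; the helper name reviews_to_index_lists is made up for illustration, while word2index is the class's existing vocabulary lookup.
```python
def reviews_to_index_lists(training_reviews_raw, word2index):
    # Convert each raw review into the list of vocabulary indices it contains,
    # so the forward pass can just sum those rows of weights_0_1 instead of
    # multiplying by a mostly-zero input layer.
    training_reviews = []
    for review in training_reviews_raw:
        indices = set()
        for word in review.lower().split(" "):
            if word in word2index:          # ignore words outside the vocabulary
                indices.add(word2index[word])
        training_reviews.append(list(indices))
    return training_reviews
```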
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
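# Count how many words occur exactly cnt times, to get a picture of how heavy the tail of rare words is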
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
  * Add two additional parameters: min_count and polarity_cutoff
  * Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
  * Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
  * Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
  * Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff. (A short sketch of this filtering follows this list.)
* Modify __init__:
  * Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
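For reference, a standalone sketch of that filtering is shown below. It is one possible implementation, not the required solution; the function name build_filtered_vocab and the +1 smoothing inside the log ratio are illustrative choices.
```python
import numpy as np
from collections import Counter

def build_filtered_vocab(reviews, labels, min_count=10, polarity_cutoff=0.1):
    positive_counts, negative_counts, total_counts = Counter(), Counter(), Counter()
    for review, label in zip(reviews, labels):
        for word in review.lower().split(" "):
            total_counts[word] += 1
            if label == 'POSITIVE':
                positive_counts[word] += 1
            else:
                negative_counts[word] += 1
    vocab = set()
    for word, count in total_counts.items():
        if count > min_count:
            # +1 smoothing keeps the log ratio finite for words seen on only one side
            ratio = np.log((positive_counts[word] + 1) / float(negative_counts[word] + 1))
            if abs(ratio) >= polarity_cutoff:
                vocab.add(word)
    return vocab
```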
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance.
End of explanation
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
End of explanation
mlp.test(reviews[-1000:],labels[-1000:])
Explanation: And run the following cell to test its performance.
End of explanation
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
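# Rank every vocabulary word by the dot product between its learned weights_0_1 vector and the focus word's vector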
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
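# Collect each visualized word's learned weight vector, colored green if its positive/negative ratio is positive and black otherwise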
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
Explanation: End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a>
End of explanation |
13,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create a classifier to predict the wine color from wine quality attributes using this dataset
Step1: Split the data into features (x) and target (y, the last column in the table)
Remember you can cast the results into an numpy array and then slice out what you want
Step2: Create a decision tree with the data
Step3: Run 10-fold cross validation on the model
Step4: If you have time, calculate the feature importance and graph based on the code in the slides from last class
Use this tip for getting the column names from your cursor object | Python Code:
import pg8000
conn = pg8000.connect(host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com', database="training", port=5432, user='dot_student', password='qgis')
import pandas as pd
df = pd.read_sql("select * from winequality", conn)
df.head()
import numpy as np
data = df.as_matrix()
len(data)
Explanation: Create a classifier to predict the wine color from wine quality attributes using this dataset: http://archive.ics.uci.edu/ml/datasets/Wine+Quality
The data is in the database we've been using
host='training.c1erymiua9dx.us-east-1.rds.amazonaws.com'
database='training'
port=5432
user='dot_student'
password='qgis'
table name = 'winequality'
Query for the data and create a numpy array
End of explanation
lastColIndex = len(data[0])-1
x = [i[:lastColIndex] for i in data]
y = [i[lastColIndex] for i in data] # red or white
Explanation: Split the data into features (x) and target (y, the last column in the table)
Remember you can cast the results into an numpy array and then slice out what you want
End of explanation
from sklearn import tree
dt = tree.DecisionTreeClassifier()
dt.fit(x, y)
Explanation: Create a decision tree with the data
End of explanation
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(dt,x,y,cv=10)
scores
Explanation: Run 10-fold cross validation on the model
End of explanation
len(dt.feature_importances_)
column_names = list(df.columns)
column_names.pop(11)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(dt.feature_importances_, 'o')
plt.xticks(range(data.shape[1]),column_names, rotation=90)
plt.ylim(0,1)
Explanation: If you have time, calculate the feature importance and graph based on the code in the slides from last class
Use this tip for getting the column names from your cursor object
End of explanation |
13,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix and Covariance
The mat_handler.py module contains matrix class, which is the backbone of pyemu. The matrix class overloads all common mathematical operators and also uses an "auto-align" functionality to line up matrix objects for multiplication, addition, etc.
Step1: Here is the most basic instantiation of the matrix class
Step2: Here we will generate a matrix object with a random ndarray
Step3: File I/O with matrix
matrix supports several PEST-compatible I/O routines as well as some others
Step4: Matrix also implements a to_dataframe() and a to_sparse, which return pandas dataframe and a scipy.sparse (compressed sparse row) objects, respectively
Step5: Convience methods of Matrix
several cool things are implemented in Matrix and accessed through @property decorated methods. For example, the SVD components of a Matrix object are simply accessed by name. The SVD routine is called on demand and the components are cast to Matrix objects, all opaque to the user
Step6: The Matrix inverse operation is accessed the same way, but requires a square matrix
Step7: Manipulating Matrix shape
Matrix has lots of functionality to support getting submatrices by row and col names
Step8: extract() calls get() then drop()
Step9: Operator overloading
The operator overloading uses the auto-align functionality as well as the isdiagonal flag for super easy linear algebra. The "inner join" of the two objects is found and the rows and cols are aligned appropriately
Step10: The Cov derived type
The Cov type is designed specifically to handle covariance matrices. It makes some assumptions, such as the symmetry (and accordingly that row_names == col_names).
Step11: The Cov class supports several additional I/O routines, including the PEST uncertainty file (.unc)
Step12: We can also build cov objects implied by pest control file parameter bounds or observation weights | Python Code:
from __future__ import print_function
import os
import numpy as np
from pyemu import Matrix, Cov
Explanation: Matrix and Covariance
The mat_handler.py module contains matrix class, which is the backbone of pyemu. The matrix class overloads all common mathematical operators and also uses an "auto-align" functionality to line up matrix objects for multiplication, addition, etc.
End of explanation
m = Matrix()
Explanation: Here is the most basic instantiation of the matrix class:
End of explanation
a = np.random.random((5, 5))
row_names = []
[row_names.append("row_{0:02d}".format(i)) for i in range(5)]
col_names = []
[col_names.append("col_{0:02d}".format(i)) for i in range(5)]
m = Matrix(x=a, row_names=row_names, col_names=col_names)
print(m)
Explanation: Here we will generate a matrix object with a random ndarray
End of explanation
ascii_name = "mat_test.mat"
m.to_ascii(ascii_name)
m2 = Matrix.from_ascii(ascii_name)
print(m2)
bin_name = "mat_test.bin"
m.to_binary(bin_name)
m3 = Matrix.from_binary(bin_name)
print(m3)
Explanation: File I/O with matrix
matrix supports several PEST-compatible I/O routines as well as some others:
End of explanation
print(type(m.to_dataframe()))
m.to_dataframe() #looks really nice in the notebook!
Explanation: Matrix also implements a to_dataframe() and a to_sparse, which return pandas dataframe and a scipy.sparse (compressed sparse row) objects, respectively:
End of explanation
print(m.s) #the singular values of m cast into a matrix object. the SVD() is called on demand
m.s.to_ascii("test_sv.mat") #save the singular values to a PEST-compatible ASCII file
m.v.to_ascii("test_v.mat") #the right singular vectors of m.
m.u.to_dataframe()# a data frame of the left singular vectors of m
Explanation: Convience methods of Matrix
several cool things are implemented in Matrix and accessed through @property decorated methods. For example, the SVD components of a Matrix object are simply accessed by name. The SVD routine is called on demand and the components are cast to Matrix objects, all opaque to the user:
End of explanation
m.inv.to_dataframe()
Explanation: The Matrix inverse operation is accessed the same way, but requires a square matrix:
End of explanation
print(m.get(row_names="row_00",col_names=["col_01","col_03"]))
Explanation: Manipulating Matrix shape
Matrix has lots of functionality to support getting submatrices by row and col names:
End of explanation
from copy import deepcopy
m_copy = deepcopy(m)
sub_m = m_copy.extract(row_names="row_00",col_names=["col_01","col_03"])
m_copy.to_dataframe()
sub_m.to_dataframe()
Explanation: extract() calls get() then drop():
End of explanation
#a new matrix object that is not "aligned" with m
row_names = ["row_03","row_02","row_00"]
col_names = ["col_01","col_10","col_100"]
m_mix = Matrix(x=np.random.random((3,3)),row_names=row_names,col_names=col_names)
m_mix.to_dataframe()
m.to_dataframe()
prod = m * m_mix.T
prod.to_dataframe()
prod2 = m_mix.T * m
prod2.to_dataframe()
(m_mix + m).to_dataframe()
Explanation: Operator overloading
The operator overloading uses the auto-align functionality as well as the isdiagonal flag for super easy linear algebra. The "inner join" of the two objects is found and the rows and cols are aligned appropriately:
End of explanation
c = Cov(m.newx,m.row_names)
Explanation: The Cov derived type
The Cov type is designed specifically to handle covariance matrices. It makes some assumptions, such as the symmetry (and accordingly that row_names == col_names).
End of explanation
c.to_uncfile("test.unc")
c1 = Cov.from_uncfile("test.unc")
print(c1)
Explanation: The Cov class supports several additional I/O routines, including the PEST uncertainty file (.unc):
End of explanation
parcov = Cov.from_parbounds(os.path.join("henry","pest.pst"))
obscov = Cov.from_obsweights(os.path.join("henry","pest.pst"))
#to_dataframe for diagonal types builds a full matrix dataframe - can be costly
parcov.to_dataframe().head()
# notice the zero-weight obs have been assigned a really large uncertainty
obscov.to_dataframe().head()
Explanation: We can also build cov objects implied by pest control file parameter bounds or observation weights:
End of explanation |
13,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
d = {}
d['.'] = "||Period||"
d[','] = "||Comma||"
d['"'] = "||Quotation_Mark||"
d[';'] = "||Semicolon||"
d['!'] = "||Exclamation_Mark||"
d['?'] = "||Question_Mark||"
d['('] = "||Left_Parentheses||"
d[')'] = "||Right_Parentheses||"
d['--'] = "||Dash||"
d['\n'] = "||Return||"
return d
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
input_data = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
lr = tf.placeholder(tf.float32, name='lr')
return (input_data, targets, lr)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
lstm_layer_count = 2
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layer_count)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return (cell, initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return (outputs, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
input_data = get_embed(input_data, vocab_size, embed_dim)
cell, final_state = build_rnn(cell, input_data)
logits = tf.contrib.layers.fully_connected(cell, vocab_size,
weights_initializer=tf.random_uniform_initializer(0, 1),
biases_initializer=tf.zeros_initializer(), activation_fn=None)
return (logits , final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
n_batches = int(len(int_text) / (batch_size * seq_length))
# Drop the last few characters to make only full batches
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.array(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
# Number of Epochs
num_epochs = 36
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 128
# Sequence Length
seq_length = 12
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 64
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
input_data = loaded_graph.get_tensor_by_name("input:0")
initial_state = loaded_graph.get_tensor_by_name("initial_state:0")
final_state = loaded_graph.get_tensor_by_name("final_state:0")
probabilities = loaded_graph.get_tensor_by_name("probs:0")
return (input_data, initial_state, final_state, probabilities)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
return np.random.choice(list(int_to_vocab.values()), 1, p=probabilities)[0]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
13,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SYDE 556/750
Step1: Some sort of mapping between neural activity and a state in the world
my location
head tilt
image
remembered location
Intuitively, we call this "representation"
In neuroscience, people talk about the 'neural code'
To formalize this notion, the NEF uses information theory (or coding theory)
Representation formalism
Value being represented
Step2: Rectified Linear Neuron
Step3: Leaky integrate-and-fire neuron
$ a = {1 \over {\tau_{ref}-\tau_{RC}ln(1-{1 \over J})}}$
Step4: Response functions
These are called "response functions"
How much neural firing changes with change in current
Similar for many classes of cells (e.g. pyramidal cells - most of cortex)
This is the $G_i$ function in the NEF
Step5: For mapping #1, the NEF uses a linear map
Step6: But that's not how people normally plot it
It might not make sense to sample every possible x
Instead they might do some subset
For example, what if we just plot the points around the unit circle?
Step7: That starts looking a lot more like the real data.
Notation
Encoding
$a_i = G_i[\alpha_i x \cdot e_i + J^{bias}_i]$
Decoding
$\hat{x} = \sum_i a_i d_i$
The textbook uses $\phi$ for $d$ and $\tilde \phi$ for $e$
We're switching to $d$ (for decoder) and $e$ (for encoder)
Decoder
But where do we get $d_i$ from?
$\hat{x}=\sum a_i d_i$
Find the optimal $d_i$
How?
Math
Solving for $d$
Minimize the average error over all $x$, i.e.,
$ E = \frac{1}{2}\int_{-1}^1 (x-\hat{x})^2 \; dx $
Substitute for $\hat{x}$
Step8: What happens to the error with more neurons?
Noise
Neurons aren't perfect
Axonal jitter
Neurotransmitter vesicle release failure (~80%)
Amount of neurotransmitter per vesicle
Thermal noise
Ion channel noise (# of channels open and closed)
Network effects
More information
Step9: What if we just increase the number of neurons? Will it help?
Taking noise into account
Include noise while solving for decoders
Introduce noise term $\eta$
$
\begin{align}
\hat{x} &= \sum_i(a_i+\eta)d_i \
E &= {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 \;dx d\eta\
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i(a_i+\eta)d_i\right)^2 \;dx d\eta\
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i - \sum \eta d_i \right)^2 \;dx d\eta
\end{align}
$
- Assume noise is gaussian, independent, mean zero, and has the same variance for each neuron
- $\eta = \mathcal{N}(0, \sigma)$
- All the noise cross-terms disappear (independent)
$
\begin{align}
E &= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sum_{i,j} d_i d_j <\eta_i \eta_j>\eta \
&= {1 \over 2} \int{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sum_{i} d_i d_i <\eta_i \eta_i>_\eta
\end{align}
$
Since the average of $\eta_i \eta_i$ noise is its variance (since the mean is zero), $\sigma^2$, we get
$
\begin{align}
E = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sigma^2 \sum_i d_i^2
\end{align}
$
The practical result is that, when computing the decoder, we get
$
\begin{align}
\Gamma_{ij} = \sum_x a_i a_j / S + \sigma^2 \delta_{ij}
\end{align}
$
Where $\delta_{ij}$ is the Kronecker delta
Step10: Number of neurons
What happens to the error with more neurons?
Note that the error has two parts
Step11: How good is the representation?
Step12: Possible questions
How many neurons do we need for a particular level of accuracy?
What happens with different firing rates?
What happens with different distributions of x-intercepts?
Example 2 | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('KE952yueVLA', width=720, height=400, loop=1, autoplay=0)
from IPython.display import YouTubeVideo
YouTubeVideo('lfNVv0A8QvI', width=720, height=400, loop=1, autoplay=0)
Explanation: SYDE 556/750: Simulating Neurobiological Systems
Accompanying Readings: Chapter 2
NEF Principle 1 - Representation
Activity of neurons change over time
<img src=files/lecture2/spikes.jpg width=800px>
This probably means something
Sometimes it seems pretty clear what it means
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('hxdPdKbqm_I', width=720, height=400, loop=1, autoplay=0)
Explanation: Some sort of mapping between neural activity and a state in the world
my location
head tilt
image
remembered location
Intuitively, we call this "representation"
In neuroscience, people talk about the 'neural code'
To formalize this notion, the NEF uses information theory (or coding theory)
Representation formalism
Value being represented: $x$
Neural activity: $a$
Neuron index: $i$
Encoding and decoding
Have to define both to define a code
Lossless code (e.g. Morse Code):
encoding: $a = f(x)$
decodng: $x = f^{-1}(a)$
Lossy code:
encoding: $a = f(x)$
decoding: $\hat{x} = g(a) \approx x$
Distributed representation
Not just one neuron per $x$ value (or per $x$)
Many different $a$ values for a single $x$
Encoding: $a_i = f_i(x)$
Decoding: $\hat{x} = g(a_0, a_1, a_2, a_3, ...)$
Example: binary representation
Encoding (nonlinear):
$$
a_i = \begin{cases}
1 &\mbox{if } x \mod {2^{i}} > 2^{i-1} \
0 &\mbox{otherwise}
\end{cases}
$$
Decoding (linear):
$$
\hat{x} = \sum_i a_i 2^{i-1}
$$
Suppose: $x = 13$
Encoding:
$a_1 = 1$, $a_2 = 0$, $a_3 = 1$, $a_4 = 1$
Decoding:
$\hat{x} = 11+02+14+18 = 13$
Linear decoding
Write decoder as $\hat{x} = \sum_ia_i d_i$
Linear decoding is nice and simple
Works fine with non-linear encoding (!)
The NEF uses linear decoding, but what about the encoding?
Neuron encoding
$a_i = f_i(x)$
What do we know about neurons?
<img src=files/lecture1/NeuronStructure.jpg>
Firing rate goes up as total input current goes up
$a_i = G_i(J)$
What is $G_i$?
depends on how detailed a neuron model we want.
End of explanation
# Rectified linear neuron
%pylab inline
import numpy
import nengo
n = nengo.neurons.RectifiedLinear()
J = numpy.linspace(-1,1,100)
plot(J, n.rates(J, gain=10, bias=-5))
xlabel('J (current)')
ylabel('$a$ (Hz)');
Explanation: Rectified Linear Neuron
End of explanation
#assume this has been run
#%pylab inline
# Leaky integrate and fire
import numpy
import nengo
n = nengo.neurons.LIFRate(tau_rc=0.02, tau_ref=0.002) #n is a Nengo LIF neuron, these are defaults
J = numpy.linspace(-1,10,100)
plot(J, n.rates(J, gain=1, bias=-3))
xlabel('J (current)')
ylabel('$a$ (Hz)');
Explanation: Leaky integrate-and-fire neuron
$ a = {1 \over {\tau_{ref}-\tau_{RC}ln(1-{1 \over J})}}$
End of explanation
#assume this has been run
#%pylab inline
import numpy
import nengo
n = nengo.neurons.LIFRate() #n is a Nengo LIF neuron
x = numpy.linspace(-100,0,100)
plot(x, n.rates(x, gain=1, bias=50), 'b') # x*1+50
plot(x, n.rates(x, gain=0.1, bias=10), 'r') # x*0.1+10
plot(x, n.rates(x, gain=0.5, bias=5), 'g') # x*0.05+5
plot(x, n.rates(x, gain=0.1, bias=4), 'c') #x*0.1+4))
xlabel('x')
ylabel('a');
Explanation: Response functions
These are called "response functions"
How much neural firing changes with change in current
Similar for many classes of cells (e.g. pyramidal cells - most of cortex)
This is the $G_i$ function in the NEF: it can be pretty much anything
Tuning Curves
Neurons seem to be sensitive to particular values of $x$
How are neurons 'tuned' to a representation? or...
What's the mapping between $x$ and $a$?
Recall 'place cells', and 'edge detectors'
Sometimes they are fairly straight forward:
<img src=files/lecture2/tuning_curve_auditory.gif>
But not often:
<img src=files/lecture2/tuning_curve.jpg>
<img src=files/lecture2/orientation_tuning.png>
Is there a general form?
Tuning curves (cont.)
The NEF suggests that there is...
Something generic and simple
That covers all the above cases (and more)
Let's start with the simpler case:
<img src=files/lecture2/tuning_curve_auditory.gif>
Note that the experimenters are graphing $a$, as a function of $x$
$x$ is much easier to measure than $J$
So, there are two mappings of interest:
$x$->$J$
$J$->$a$ (response function)
Together these give the tuning curve
$x$ is the volume of the sound in this case
Any ideas?
End of explanation
#assume this has been run
#%pylab inline
import numpy
import nengo
n = nengo.neurons.LIFRate()
e = numpy.array([1.0, 1.0])
e = e/numpy.linalg.norm(e)
a = numpy.linspace(-1,1,50)
b = numpy.linspace(-1,1,50)
X,Y = numpy.meshgrid(a, b)
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_surface(X, Y, n.rates((X*e[0]+Y*e[1]), gain=1, bias=1.5),
linewidth=0, cstride=1, rstride=1, cmap=pylab.cm.jet)
Explanation: For mapping #1, the NEF uses a linear map:
$ J = \alpha x + J^{bias} $
But what about type (c) in this graph?
<img src=files/lecture2/tuning_curve.jpg>
Easy enough:
$ J = - \alpha x + J^{bias} $
But what about type(b)? Or these ones?
<img src=files/lecture2/orientation_tuning.png>
There's usually some $x$ which gives a maximum firing rate
...and thus a maximum $J$
Firing rate (and $J$) decrease as you get farther from the preferred $x$ value
So something like $J = \alpha [sim(x, x_{pref})] + J^{bias}$
What sort of similarity measure?
Let's think about $x$ for a moment
$x$ can be anything... scalar, vector, etc.
Does thinking of it as a vector help?
The Encoding Equation (i.e. Tuning Curves)
Here is the general form we use for everything (it has both 'mappings' in it)
$a_i = G_i[\alpha_i x \cdot e_i + J_i^{bias}] $
$\alpha$ is a gain term (constrained to always be positive)
$J^{bias}$ is a constant bias term
$e$ is the encoder, or the preferred direction vector
$G$ is the neuron model
$i$ indexes the neuron
To simplify life, we always assume $e$ is of unit length
Otherwise we could combine $\alpha$ and $e$
In the 1D case, $e$ is either +1 or -1
In higher dimensions, what happens?
End of explanation
import nengo
import numpy
n = nengo.neurons.LIFRate()
theta = numpy.linspace(0, 2*numpy.pi, 100)
x = numpy.array([numpy.cos(theta), numpy.sin(theta)])
plot(x[0],x[1])
axis('equal')
e = numpy.array([1.0, 1.0])
e = e/numpy.linalg.norm(e)
plot([0,e[0]], [0,e[1]],'r')
gain = 1
bias = 2.5
figure()
plot(theta, n.rates(numpy.dot(x.T, e), gain=gain, bias=bias))
plot([numpy.arctan2(e[1],e[0])],0,'rv')
xlabel('angle')
ylabel('firing rate')
xlim(0, 2*numpy.pi);
Explanation: But that's not how people normally plot it
It might not make sense to sample every possible x
Instead they might do some subset
For example, what if we just plot the points around the unit circle?
End of explanation
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform
N = 10
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(N, dimensions=1,
max_rates=Uniform(100,200)) #Defaults to LIF neurons,
#with random gains and biases for
#neurons between 100-200hz over -1,1
connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders
solver=nengo.solvers.LstsqL2(reg=0)) #reg=0 means ignore noise
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
xhat = numpy.dot(A, d)
pyplot.plot(x, A)
xlabel('x')
ylabel('firing rate (Hz)')
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)
figure()
plot(x, xhat-x)
xlabel('$x$')
ylabel('$\hat{x}-x$')
xlim(-1, 1)
print 'RMSE', np.sqrt(np.average((x-xhat)**2))
Explanation: That starts looking a lot more like the real data.
Notation
Encoding
$a_i = G_i[\alpha_i x \cdot e_i + J^{bias}_i]$
Decoding
$\hat{x} = \sum_i a_i d_i$
The textbook uses $\phi$ for $d$ and $\tilde \phi$ for $e$
We're switching to $d$ (for decoder) and $e$ (for encoder)
Decoder
But where do we get $d_i$ from?
$\hat{x}=\sum a_i d_i$
Find the optimal $d_i$
How?
Math
Solving for $d$
Minimize the average error over all $x$, i.e.,
$ E = \frac{1}{2}\int_{-1}^1 (x-\hat{x})^2 \; dx $
Substitute for $\hat{x}$:
$
\begin{align}
E = \frac{1}{2}\int_{-1}^1 \left(x-\sum_i^N a_i d_i \right)^2 \; dx
\end{align}
$
Take the derivative with respect to $d_i$:
$
\begin{align}
{{\partial E} \over {\partial d_i}} &= {1 \over 2} \int_{-1}^1 2 \left[ x-\sum_j a_j d_j \right] (-a_i) \; dx \
{{\partial E} \over {\partial d_i}} &= - \int_{-1}^1 a_i x \; dx + \int_{-1}^1 \sum_j a_j d_j a_i \; dx
\end{align}
$
At the minimum (i.e. smallest error), $ {{\partial E} \over {\partial d_i}} = 0$
$
\begin{align}
\int_{-1}^1 a_i x \; dx &= \int_{-1}^1 \sum_j(a_j d_j a_i) \; dx \
\int_{-1}^1 a_i x \; dx &= \sum_j \left(\int_{-1}^1 a_i a_j \; dx\right)d_j
\end{align}
$
That's a system of $N$ equations and $N$ unknowns
In fact, we can rewrite this in matrix form
$ \Upsilon = \Gamma d $
where
$
\begin{align}
\Upsilon_i &= {1 \over 2} \int_{-1}^1 a_i x \;dx\
\Gamma_{ij} &= {1 \over 2} \int_{-1}^1 a_i a_j \;dx
\end{align}
$
Do we have to do the integral over all $x$?
Approximate the integral by sampling over $x$
$S$ is the number of $x$ values to use ($S$ for samples)
$
\begin{align}
\sum_x a_i x / S &= \sum_j \left(\sum_x a_i a_j /S \right)d_j \
\Upsilon &= \Gamma d
\end{align}
$
where
$
\begin{align}
\Upsilon_i &= \sum_x a_i x / S \
\Gamma_{ij} &= \sum_x a_i a_j / S
\end{align}
$
Notice that if $A$ is the matrix of activities (the firing rate for each neuron for each $x$ value), then $\Gamma=A^T A / S$ and $\Upsilon=A^T x / S$
So given
$ \Upsilon = \Gamma d $
then
$ d = \Gamma^{-1} \Upsilon $
or, equivalently
$ d_i = \sum_j \Gamma^{-1}_{ij} \Upsilon_j $
End of explanation
#Have to run previous python cell first
A_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)
pyplot.plot(x, A_noisy)
xlabel('x')
ylabel('firing rate (Hz)')
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)
print 'RMSE', np.sqrt(np.average((x-xhat)**2))
Explanation: What happens to the error with more neurons?
Noise
Neurons aren't perfect
Axonal jitter
Neurotransmitter vesicle release failure (~80%)
Amount of neurotransmitter per vesicle
Thermal noise
Ion channel noise (# of channels open and closed)
Network effects
More information: http://icwww.epfl.ch/~gerstner/SPNM/node33.html
How do we include this noise as well?
Make the neuron model more complicated
Simple approach: add gaussian random noise to $a_i$
Set noise standard deviation $\sigma$ to 20% of maximum firing rate
Each $a_i$ value for each $x$ value gets a different noise value added to it
What effect does this have on decoding?
End of explanation
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform
N = 100
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(N, dimensions=1,
max_rates=Uniform(100,200)) #Defaults to LIF neurons,
#with random gains and biases for
#neurons between 100-200hz over -1,1
connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders
solver=nengo.solvers.LstsqNoise(noise=0.2)) #Add noise ###NEW
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
A_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)
pyplot.plot(x, A_noisy)
xlabel('x')
ylabel('firing rate (Hz)')
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)
print 'RMSE', np.sqrt(np.average((x-xhat)**2))
Explanation: What if we just increase the number of neurons? Will it help?
Taking noise into account
Include noise while solving for decoders
Introduce noise term $\eta$
$
\begin{align}
\hat{x} &= \sum_i(a_i+\eta)d_i \
E &= {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 \;dx d\eta\
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i(a_i+\eta)d_i\right)^2 \;dx d\eta\
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i - \sum \eta d_i \right)^2 \;dx d\eta
\end{align}
$
- Assume noise is gaussian, independent, mean zero, and has the same variance for each neuron
- $\eta = \mathcal{N}(0, \sigma)$
- All the noise cross-terms disappear (independent)
$
\begin{align}
E &= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sum_{i,j} d_i d_j <\eta_i \eta_j>\eta \
&= {1 \over 2} \int{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sum_{i} d_i d_i <\eta_i \eta_i>_\eta
\end{align}
$
Since the average of $\eta_i \eta_i$ noise is its variance (since the mean is zero), $\sigma^2$, we get
$
\begin{align}
E = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sigma^2 \sum_i d_i^2
\end{align}
$
The practical result is that, when computing the decoder, we get
$
\begin{align}
\Gamma_{ij} = \sum_x a_i a_j / S + \sigma^2 \delta_{ij}
\end{align}
$
Where $\delta_{ij}$ is the Kronecker delta: http://en.wikipedia.org/wiki/Kronecker_delta
To simplfy computing this using matrices, this can be written as $\Gamma=A^T A /S + \sigma^2 I$
End of explanation
#%pylab inline
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform
N = 40
tau_rc = .2
tau_ref = .001
lif_model = nengo.LIFRate(tau_rc=tau_rc, tau_ref=tau_ref)
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(N, dimensions=1,
max_rates = Uniform(250,300),
neuron_type = lif_model)
sim = nengo.Simulator(model)
x, A = tuning_curves(neurons, sim)
plot(x, A)
xlabel('x')
ylabel('firing rate (Hz)');
Explanation: Number of neurons
What happens to the error with more neurons?
Note that the error has two parts:
$
\begin{align}
E = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sigma^2 \sum_i d_i^2
\end{align}
$
Error due to static distortion (i.e. the error introduced by the decoders themselves)
This is present regardless of noise
$
\begin{align}
E_{distortion} = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 dx
\end{align}
$
Error due to noise
$
\begin{align}
E_{noise} = \sigma^2 \sum_i d_i^2
\end{align}
$
What do these look like as number of neurons $N$ increases?
<img src="files/lecture2/repn_noise.png">
- Noise error is proportional to $1/N$
- Distortion error is proportional to $1/N^2$
- Remember this error $E$ is defined as
$ E = {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 dx $
So that's actually a squared error term
Also, as number of neurons is greater than 100 or so, the error is dominated by the noise term ($1/N$).
Examples
Methodology for building models with the Neural Engineering Framework (outlined in Chapter 1)
System Description: Describe the system of interest in terms of the neural data, architecture, computations, representations, etc. (e.g. response functions, tuning curves, etc.)
Design Specification: Add additional performance constraints (e.g. bandwidth, noise, SNR, dynamic range, stability, etc.)
Implement the model: Employ the NEF principles given the System Description and Design Specification
Example 1: Horizontal Eye Control (1D)
From http://www.nature.com/nrn/journal/v3/n12/full/nrn986.html
<img src="files/lecture2/horizontal_eye.jpg">
There are also neurons whose response goes the other way. All of the neurons are directly connected to the muscle controlling the horizontal direction of the eye, and that's the only thing that muscle does, so we're pretty sure this is what's being represented.
System Description
We've only done the first NEF principle, so that's all we'll worry about
What is being represented?
$x$ is the horizontal position
Tuning curves: extremely linear (high $\tau_{RC}$, low $\tau_{ref}$)
some have $e=1$, some have $e=-1$
these are often called "on" and "off" neurons, respectively
Firing rates of up to 300Hz
Design Specification
Range of values for $x$: -60 degrees to +60 degrees
Normal levels of noise: $\sigma$ is 20% of maximum firing rate
the book goes a bit higher, with $\sigma^2=0.1$, meaning that $\sigma = \sqrt{0.1} \approx 0.32$ times the maximum firing rate
Implementation
Examine the tuning curves
Then use principle 1
End of explanation
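# A standalone sketch (not part of the nengo implementation below) that sweeps the number
# of neurons and reports the two error terms separately. With these assumed rectified-linear
# curves the noise term should fall roughly as 1/N and the distortion term roughly as 1/N^2,
# as described above.
import numpy as np

def error_terms(N, S=500, sigma_rel=0.2, seed=0):
    rng = np.random.RandomState(seed)
    x = np.linspace(-1, 1, S)
    e = rng.choice([-1, 1], N)
    alpha = rng.uniform(50, 100, N)
    b = rng.uniform(-50, 50, N)
    A = np.maximum(0, alpha * (x[:, None] * e) + b)
    sigma = sigma_rel * np.max(A)
    d = np.linalg.solve(A.T @ A / S + sigma**2 * np.eye(N), A.T @ x / S)
    E_dist = np.mean((x - A @ d)**2)     # approximates (1/2) * integral over [-1,1] of (x - xhat)^2 dx
    E_noise = sigma**2 * np.sum(d**2)    # noise term
    return E_dist, E_noise

for N in [10, 50, 100, 500, 1000]:
    E_dist, E_noise = error_terms(N)
    print('N=%4d  distortion=%.2e  noise=%.2e' % (N, E_dist, E_noise))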
#Have to run previous code cell first
noise = 0.2
with model:
connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders
solver=nengo.solvers.LstsqNoise(noise=0.2)) #Add noise ###NEW
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
A_noisy = A + numpy.random.normal(scale=noise*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)
print('RMSE with %d neurons is %g' % (N, np.sqrt(np.average((x-xhat)**2))))
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1);
Explanation: How good is the representation?
End of explanation
import numpy
import nengo
n = nengo.neurons.LIFRate()
theta = numpy.linspace(-numpy.pi, numpy.pi, 100)
x = numpy.array([numpy.sin(theta), numpy.cos(theta)])
e = numpy.array([1.0, 0])
plot(theta*180/numpy.pi, n.rates(numpy.dot(x.T, e), bias=1, gain=0.2)) #bias 1->1.5
xlabel('angle')
ylabel('firing rate')
xlim(-180, 180)
show()
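# The cell above uses a single preferred direction e = (1, 0). The arm-movement example
# discussed next assumes many encoders scattered around the unit circle; a quick sketch of
# that, reusing n, theta and x from the cell above. The gains and biases are assumptions,
# not values from the lecture.
numpy.random.seed(0)
enc_angles = numpy.random.uniform(-numpy.pi, numpy.pi, 8)   # 8 random preferred directions
for phi in enc_angles:
    e = numpy.array([numpy.sin(phi), numpy.cos(phi)])
    plot(theta*180/numpy.pi, n.rates(numpy.dot(x.T, e), bias=1.5, gain=0.2))
xlabel('angle')
ylabel('firing rate')
xlim(-180, 180)
show()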
Explanation: Possible questions
How many neurons do we need for a particular level of accuracy?
What happens with different firing rates?
What happens with different distributions of x-intercepts?
Example 2: Arm Movements (2D)
Georgopoulos et al., 1982. "On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex."
<img src="files/lecture2/armmovement1.jpg">
<img src="files/lecture2/armmovement2.png">
<img src="files/lecture2/armtuningcurve.png">
System Description
What is being represented?
$x$ is the hand position
Note that this is different from what Georgopoulos talks about in this initial paper
Initial paper only looks at those 8 positions, so it only talks about direction of movement (angle but not magnitude)
More recent work in the same area shows the cells do respond to both (Fu et al, 1993; Messier and Kalaska, 2000)
Bell-shaped tuning curves
Encoders: randomly distributed around the unit circle
Firing rates of up to 60Hz
Design Specification
Range of values for $x$: Anywhere within a unit circle (or perhaps some other radius)
Normal levels of noise: $\sigma$ is 20% of maximum firing rate
the book goes a bit higher, with $\sigma^2=0.1$, meaning that $\sigma = \sqrt{0.1} \approx 0.32$ times the maximum
Implementation
Examine the tuning curves
End of explanation |
13,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistiques Wikipedia - énoncé
On s'instéresse aux statistiques de consultations de Wikipédia
Step1: Récupération des données
Les statistiques sont disponibles pour chaque heure et chaque jour. Compressés, cela représente environ 60Mo. On regarde un fichier.
Step2: Ca va prend un petit peu de temps et d'espace de télécharger ces données.
Exercice 1 | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Statistiques Wikipedia - énoncé
On s'instéresse aux statistiques de consultations de Wikipédia : pageviews. Ce TD commence par récupération des données avant de s'intéresser aux séries temporelles.
End of explanation
import os
folder = "wikipv"
if not os.path.exists(folder):
os.mkdir(folder)
from mlstatpy.data.wikipedia import download_pageviews
import os
from datetime import datetime
%timeit -n1 -r1 download_pageviews(datetime(2016,9,1), folder=folder)
%load_ext pyensae
%head wikipv/pageviews-20160901-000000
os.stat("wikipv/pageviews-20160901-000000").st_size / 2**20, "Mo"
Explanation: Getting the data
The statistics are available for each hour of each day. Compressed, this represents about 60 MB. Let's look at one file.
End of explanation
from mlstatpy.data.wikipedia import download_pageviews
from datetime import datetime
folder = "wikipv"
for h in range(0, 24): # loop over the 24 hours of the day
    dt = datetime(2016,9,1,h)
    print("downloading", dt, "started at", datetime.now())
download_pageviews(dt, folder=folder)
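# The exercise described just below asks for a parallel version of this loop. A minimal
# sketch (an assumption on my part, not the course's official solution) using the standard
# multiprocessing module; in a notebook you may need to move the worker into a module so it
# can be pickled on some platforms.
from multiprocessing import Pool

def download_hour(h, folder="wikipv"):
    # each worker downloads the dump for one hour of 2016-09-01
    download_pageviews(datetime(2016, 9, 1, h), folder=folder)
    return h

if __name__ == "__main__":
    with Pool(4) as pool:                              # 4 downloads in parallel
        for h in pool.imap_unordered(download_hour, range(24)):
            print("done", h, datetime.now())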
Explanation: Downloading this data takes a little time and disk space.
Exercise 1: parallelising the download
Look at the multiprocessing module and implement a parallel version of the program above. multiprocessing is in the standard library, but there are many alternatives: ParallelProcessing, joblib.
End of explanation |
13,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Examples
One more time, I'll load the data from the NSFG.
Step2: And compute the distribution of birth weight for first babies and others.
Step3: We can plot the PMFs on the same scale, but it is hard to see if there is a difference.
Step4: PercentileRank computes the fraction of scores less than or equal to your_score.
Step5: If this is the list of scores.
Step6: And you got the 88, your percentile rank is 80.
Step7: Percentile takes a percentile rank and computes the corresponding percentile.
Step8: The median is the 50th percentile, which is 77.
Step9: Here's a more efficient way to compute percentiles.
Step10: Let's hope we get the same answer.
Step11: The Cumulative Distribution Function (CDF) is almost the same as PercentileRank. The only difference is that the result is 0-1 instead of 0-100.
Step12: In this list
Step13: We can evaluate the CDF for various values
Step14: Here's an example using real data, the distribution of pregnancy length for live births.
Step15: Cdf provides Prob, which evaluates the CDF; that is, it computes the fraction of values less than or equal to the given value. For example, 94% of pregnancy lengths are less than or equal to 41.
Step16: Value evaluates the inverse CDF; given a fraction, it computes the corresponding value. For example, the median is the value that corresponds to 0.5.
Step17: In general, CDFs are a good way to visualize distributions. They are not as noisy as PMFs, and if you plot several CDFs on the same axes, any differences between them are apparent.
Step18: In this example, we can see that first babies are slightly, but consistently, lighter than others.
We can use the CDF of birth weight to compute percentile-based statistics.
Step19: Again, the median is the 50th percentile.
Step20: The interquartile range is the interval from the 25th to 75th percentile.
Step21: We can use the CDF to look up the percentile rank of a particular value. For example, my second daughter was 10.2 pounds at birth, which is near the 99th percentile.
Step22: If we draw a random sample from the observed weights and map each weigh to its percentile rank.
Step23: The resulting list of ranks should be approximately uniform from 0-1.
Step24: That observation is the basis of Cdf.Sample, which generates a random sample from a Cdf. Here's an example.
Step25: This confirms that the random sample has the same distribution as the original data.
Exercises
Exercise
Step26: Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
live, firsts, others = first.MakeFrames()
Explanation: Examples
One more time, I'll load the data from the NSFG.
End of explanation
first_wgt = firsts.totalwgt_lb
first_wgt_dropna = first_wgt.dropna()
print('Firsts', len(first_wgt), len(first_wgt_dropna))
other_wgt = others.totalwgt_lb
other_wgt_dropna = other_wgt.dropna()
print('Others', len(other_wgt), len(other_wgt_dropna))
first_pmf = thinkstats2.Pmf(first_wgt_dropna, label='first')
other_pmf = thinkstats2.Pmf(other_wgt_dropna, label='other')
Explanation: And compute the distribution of birth weight for first babies and others.
End of explanation
width = 0.4 / 16
# plot PMFs of birth weights for first babies and others
thinkplot.PrePlot(2)
thinkplot.Hist(first_pmf, align='right', width=width)
thinkplot.Hist(other_pmf, align='left', width=width)
thinkplot.Config(xlabel='Weight (pounds)', ylabel='PMF')
Explanation: We can plot the PMFs on the same scale, but it is hard to see if there is a difference.
End of explanation
def PercentileRank(scores, your_score):
count = 0
for score in scores:
if score <= your_score:
count += 1
percentile_rank = 100.0 * count / len(scores)
return percentile_rank
Explanation: PercentileRank computes the fraction of scores less than or equal to your_score.
End of explanation
t = [55, 66, 77, 88, 99]
Explanation: If this is the list of scores.
End of explanation
PercentileRank(t, 88)
Explanation: And you got the 88, your percentile rank is 80.
End of explanation
def Percentile(scores, percentile_rank):
scores.sort()
for score in scores:
if PercentileRank(scores, score) >= percentile_rank:
return score
Explanation: Percentile takes a percentile rank and computes the corresponding percentile.
End of explanation
Percentile(t, 50)
Explanation: The median is the 50th percentile, which is 77.
End of explanation
def Percentile2(scores, percentile_rank):
scores.sort()
index = percentile_rank * (len(scores)-1) // 100
return scores[index]
Explanation: Here's a more efficient way to compute percentiles.
End of explanation
Percentile2(t, 50)
Explanation: Let's hope we get the same answer.
End of explanation
def EvalCdf(sample, x):
count = 0.0
for value in sample:
if value <= x:
count += 1
prob = count / len(sample)
return prob
Explanation: The Cumulative Distribution Function (CDF) is almost the same as PercentileRank. The only difference is that the result is 0-1 instead of 0-100.
End of explanation
t = [1, 2, 2, 3, 5]
Explanation: In this list
End of explanation
EvalCdf(t, 0), EvalCdf(t, 1), EvalCdf(t, 2), EvalCdf(t, 3), EvalCdf(t, 4), EvalCdf(t, 5)
Explanation: We can evaluate the CDF for various values:
End of explanation
cdf = thinkstats2.Cdf(live.prglngth, label='prglngth')
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='CDF', loc='upper left')
Explanation: Here's an example using real data, the distribution of pregnancy length for live births.
End of explanation
cdf.Prob(41)
Explanation: Cdf provides Prob, which evaluates the CDF; that is, it computes the fraction of values less than or equal to the given value. For example, 94% of pregnancy lengths are less than or equal to 41.
End of explanation
cdf.Value(0.5)
Explanation: Value evaluates the inverse CDF; given a fraction, it computes the corresponding value. For example, the median is the value that corresponds to 0.5.
End of explanation
first_cdf = thinkstats2.Cdf(firsts.totalwgt_lb, label='first')
other_cdf = thinkstats2.Cdf(others.totalwgt_lb, label='other')
thinkplot.PrePlot(2)
thinkplot.Cdfs([first_cdf, other_cdf])
thinkplot.Config(xlabel='Weight (pounds)', ylabel='CDF')
Explanation: In general, CDFs are a good way to visualize distributions. They are not as noisy as PMFs, and if you plot several CDFs on the same axes, any differences between them are apparent.
End of explanation
weights = live.totalwgt_lb
live_cdf = thinkstats2.Cdf(weights, label='live')
Explanation: In this example, we can see that first babies are slightly, but consistently, lighter than others.
We can use the CDF of birth weight to compute percentile-based statistics.
End of explanation
median = live_cdf.Percentile(50)
median
Explanation: Again, the median is the 50th percentile.
End of explanation
iqr = (live_cdf.Percentile(25), live_cdf.Percentile(75))
iqr
Explanation: The interquartile range is the interval from the 25th to 75th percentile.
End of explanation
live_cdf.PercentileRank(10.2)
Explanation: We can use the CDF to look up the percentile rank of a particular value. For example, my second daughter was 10.2 pounds at birth, which is near the 99th percentile.
End of explanation
sample = np.random.choice(weights, 100, replace=True)
ranks = [live_cdf.PercentileRank(x) for x in sample]
Explanation: If we draw a random sample from the observed weights and map each weight to its percentile rank.
End of explanation
rank_cdf = thinkstats2.Cdf(ranks)
thinkplot.Cdf(rank_cdf)
thinkplot.Config(xlabel='Percentile rank', ylabel='CDF')
Explanation: The resulting list of ranks should be approximately uniform from 0-1.
End of explanation
resample = live_cdf.Sample(1000)
thinkplot.Cdf(live_cdf)
thinkplot.Cdf(thinkstats2.Cdf(resample, label='resample'))
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='CDF')
Explanation: That observation is the basis of Cdf.Sample, which generates a random sample from a Cdf. Here's an example.
End of explanation
# Solution
cdf.PercentileRank(8.5)
# Solution
other_cdf.PercentileRank(8.5)
Explanation: This confirms that the random sample has the same distribution as the original data.
Exercises
Exercise: How much did you weigh at birth? If you don’t know, call your mother or someone else who knows. Using the NSFG data (all live births), compute the distribution of birth weights and use it to find your percentile rank. If you were a first baby, find your percentile rank in the distribution for first babies. Otherwise use the distribution for others. If you are in the 90th percentile or higher, call your mother back and apologize.
End of explanation
# Solution
t = np.random.random(1000)
# Solution
pmf = thinkstats2.Pmf(t)
thinkplot.Pmf(pmf, linewidth=0.1)
thinkplot.Config(xlabel='Random variate', ylabel='PMF')
# Solution
cdf = thinkstats2.Cdf(t)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Random variate', ylabel='CDF')
Explanation: Exercise: The numbers generated by numpy.random.random are supposed to be uniform between 0 and 1; that is, every value in the range should have the same probability.
Generate 1000 numbers from numpy.random.random and plot their PMF. What goes wrong?
Now plot the CDF. Is the distribution uniform?
End of explanation |
13,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro
You will think about and calculate permutation importance with a sample of data from the Taxi Fare Prediction competition.
We won't focus on data exploration or model building for now. You can just run the cell below to
- Load the data
- Divide the data into training and validation
- Build a model that predicts taxi fares
- Print a few rows for you to review
Step1: The following two cells may also be useful to understand the values in the training data
Step2: Question 1
The first model uses the following features
- pickup_longitude
- pickup_latitude
- dropoff_longitude
- dropoff_latitude
- passenger_count
Before running any code... which variables seem potentially useful for predicting taxi fares? Do you think permutation importance will necessarily identify these features as important?
Once you've thought about it, run q_1.solution() below to see how you might think about this before running the code.
Step3: Question 2
Create a PermutationImportance object called perm to show the importances from first_model. Fit it with the appropriate data and show the weights.
For your convenience, the code from the tutorial has been copied into a comment in this code cell.
Step4: Uncomment the lines below for a hint or to see the solution.
Step5: Question 3
Before seeing these results, we might have expected each of the 4 directional features to be equally important.
But, on average, the latitude features matter more than the longititude features. Can you come up with any hypotheses for this?
After you've thought about it, check here for some possible explanations
Step6: Question 4
Without detailed knowledge of New York City, it's difficult to rule out most hypotheses about why latitude features matter more than longitude.
A good next step is to disentangle the effect of being in certain parts of the city from the effect of total distance traveled.
The code below creates new features for longitudinal and latitudinal distance. It then builds a model that adds these new features to those you already had.
Fill in two lines of code to calculate and show the importance weights with this new set of features. As usual, you can uncomment lines below to check your code, see a hint or get the solution.
Step7: How would you interpret these importance scores? Distance traveled seems far more important than any location effects.
But the location still affects model predictions, and dropoff location now matters slightly more than pickup location. Do you have any hypotheses for why this might be? The techniques in the next lessons will help you dive into this more.
Step8: Question 5
A colleague observes that the values for abs_lon_change and abs_lat_change are pretty small (all values are between -0.1 and 0.1), whereas other variables have larger values. Do you think this could explain why those coordinates had larger permutation importance values in this case?
Consider an alternative where you created and used a feature that was 100X as large for these features, and used that larger feature for training and importance calculations. Would this change the outputted permutation importance values?
Why or why not?
After you have thought about your answer, either try this experiment or look up the answer in the cell below.
Step9: Question 6
You've seen that the feature importance for latitudinal distance is greater than the importance of longitudinal distance. From this, can we conclude whether travelling a fixed latitudinal distance tends to be more expensive than traveling the same longitudinal distance?
Why or why not? Check your answer below. | Python Code:
# Loading data, dividing, modeling and EDA below
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
data = pd.read_csv('../input/new-york-city-taxi-fare-prediction/train.csv', nrows=50000)
# Remove data with extreme outlier coordinates or negative fares
data = data.query('pickup_latitude > 40.7 and pickup_latitude < 40.8 and ' +
'dropoff_latitude > 40.7 and dropoff_latitude < 40.8 and ' +
'pickup_longitude > -74 and pickup_longitude < -73.9 and ' +
'dropoff_longitude > -74 and dropoff_longitude < -73.9 and ' +
'fare_amount > 0'
)
y = data.fare_amount
base_features = ['pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count']
X = data[base_features]
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
first_model = RandomForestRegressor(n_estimators=50, random_state=1).fit(train_X, train_y)
# Environment Set-Up for feedback system.
from learntools.core import binder
binder.bind(globals())
from learntools.ml_explainability.ex2 import *
print("Setup Complete")
# show data
print("Data sample:")
data.head()
Explanation: Intro
You will think about and calculate permutation importance with a sample of data from the Taxi Fare Prediction competition.
We won't focus on data exploration or model building for now. You can just run the cell below to
- Load the data
- Divide the data into training and validation
- Build a model that predicts taxi fares
- Print a few rows for you to review
End of explanation
train_X.describe()
train_y.describe()
Explanation: The following two cells may also be useful to understand the values in the training data:
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_1.solution()
Explanation: Question 1
The first model uses the following features
- pickup_longitude
- pickup_latitude
- dropoff_longitude
- dropoff_latitude
- passenger_count
Before running any code... which variables seem potentially useful for predicting taxi fares? Do you think permutation importance will necessarily identify these features as important?
Once you've thought about it, run q_1.solution() below to see how you might think about this before running the code.
End of explanation
import eli5
from eli5.sklearn import PermutationImportance
# Make a small change to the code below to use in this problem.
# perm = PermutationImportance(my_model, random_state=1).fit(val_X, val_y)
# Check your answer
q_2.check()
# uncomment the following line to visualize your results
# eli5.show_weights(perm, feature_names = val_X.columns.tolist())
#%%RM_IF(PROD)%%
import eli5
from eli5.sklearn import PermutationImportance
perm = PermutationImportance(first_model, random_state=1).fit(val_X, val_y)
eli5.show_weights(perm, feature_names = base_features)
q_2.check()
Explanation: Question 2
Create a PermutationImportance object called perm to show the importances from first_model. Fit it with the appropriate data and show the weights.
For your convenience, the code from the tutorial has been copied into a comment in this code cell.
End of explanation
# q_2.hint()
# q_2.solution()
Explanation: Uncomment the lines below for a hint or to see the solution.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_3.solution()
Explanation: Question 3
Before seeing these results, we might have expected each of the 4 directional features to be equally important.
But, on average, the latitude features matter more than the longititude features. Can you come up with any hypotheses for this?
After you've thought about it, check here for some possible explanations:
End of explanation
# create new features
data['abs_lon_change'] = abs(data.dropoff_longitude - data.pickup_longitude)
data['abs_lat_change'] = abs(data.dropoff_latitude - data.pickup_latitude)
features_2 = ['pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'abs_lat_change',
'abs_lon_change']
X = data[features_2]
new_train_X, new_val_X, new_train_y, new_val_y = train_test_split(X, y, random_state=1)
second_model = RandomForestRegressor(n_estimators=30, random_state=1).fit(new_train_X, new_train_y)
# Create a PermutationImportance object on second_model and fit it to new_val_X and new_val_y
# Use a random_state of 1 for reproducible results that match the expected solution.
perm2 = ____
# show the weights for the permutation importance you just calculated
____
# Check your answer
q_4.check()
Explanation: Question 4
Without detailed knowledge of New York City, it's difficult to rule out most hypotheses about why latitude features matter more than longitude.
A good next step is to disentangle the effect of being in certain parts of the city from the effect of total distance traveled.
The code below creates new features for longitudinal and latitudinal distance. It then builds a model that adds these new features to those you already had.
Fill in two lines of code to calculate and show the importance weights with this new set of features. As usual, you can uncomment lines below to check your code, see a hint or get the solution.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_4.solution()
Explanation: How would you interpret these importance scores? Distance traveled seems far more important than any location effects.
But the location still affects model predictions, and dropoff location now matters slightly more than pickup location. Do you have any hypotheses for why this might be? The techniques in the next lessons will help you dive into this more.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_5.solution()
Explanation: Question 5
A colleague observes that the values for abs_lon_change and abs_lat_change are pretty small (all values are between -0.1 and 0.1), whereas other variables have larger values. Do you think this could explain why those coordinates had larger permutation importance values in this case?
Consider an alternative where you created and used a feature that was 100X as large for these features, and used that larger feature for training and importance calculations. Would this change the outputted permutation importance values?
Why or why not?
After you have thought about your answer, either try this experiment or look up the answer in the cell below.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_6.solution()
Explanation: Question 6
You've seen that the feature importance for latitudinal distance is greater than the importance of longitudinal distance. From this, can we conclude whether travelling a fixed latitudinal distance tends to be more expensive than traveling the same longitudinal distance?
Why or why not? Check your answer below.
End of explanation |
13,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Go down for licence and other metadata about this presentation
Licence
Unless stated otherwise all content is released under a [CC0]+BY licence. I'd appreciate it if you reference this but it is not necessary.
Using Ipython for presentations
I've created a short video showing how to use Ipython for presentations
Step1: Key activities
Inline things
Exporting
as html5
static - and locally served
Step2: Background
You need to install the RISE Ipython Library from Damián Avila for dynamic presentations
To convert and run this as a static presentation run the following command
Step5: To close this instances press control 'c' in the ipython notebook terminal console
Static presentations allow the presenter to see speakers notes (use the 's' key)
If running dynamically run the scripts below
Pre load some useful libraries | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('F4rFuIb1Ie4')
Explanation: Go down for licence and other metadata about this presentation
Licence
Unless stated otherwise all content is released under a [CC0]+BY licence. I'd appreciate it if you reference this but it is not necessary.
Using Ipython for presentations
I've created a short video showing how to use Ipython for presentations
End of explanation
%install_ext https://raw.githubusercontent.com/rasbt/python_reference/master/ipython_magic/watermark.py
%load_ext watermark
%watermark -a "Anthony Beck" -d -v -m -g
#List of installed conda packages
!conda list
#List of installed pip packages
!pip list
Explanation: Key activities
Inline things
Exporting
as html5
static - and locally served: - !ipython nbconvert Template.ipynb --to slides --post serve
SPEAKER NOTES work in the above
static - !ipython nbconvert Template.ipynb --to slides
as pdf
hosting:
on slideshare
I believe I should use the pdf
what about slideviewer[http://slideviewer.herokuapp.com/]
on github:
just create a git project with these presentations in them :-)
PDF output - hack from Damian
cd in the directory where your slideshow lives
add this custom.css file: https://gist.github.com/damianavila/6211198
run this little snippet: https://gist.github.com/damianavila/6211211
run python -m SimpleHTTPServer 8001
open Mozilla Firefox browser and point to localhost:8001
add ?print.pdf to the end of the url (ie, http://127.0.0.1:8001/your-ipynb.slides.html?print-pdf)
print to pdf (use Landscape orientation)
The environment
In order to replicate my environment you need to know what I have installed!
Set up watermark
This describes the versions of software used during the creation.
Please note that critical libraries can also be watermarked as follows:
python
%watermark -v -m -p numpy,scipy
End of explanation
!ipython nbconvert Template.ipynb --to slides --post serve
Explanation: Background
You need to install the RISE Ipython Library from Damián Avila for dynamic presentations
To convert and run this as a static presentation run the following command:
End of explanation
#Future proof python 2
from __future__ import print_function #For python3 print syntax
from __future__ import division
# def
import IPython.core.display
# A function to collect user input - ipynb_input(varname='username', prompt='What is your username')
def ipynb_input(varname, prompt=''):
    """Prompt user for input and assign string val to given variable name."""
    js_code = ("""
        var value = prompt("{prompt}","");
        var py_code = "{varname} = '" + value + "'";
        IPython.notebook.kernel.execute(py_code);
    """).format(prompt=prompt, varname=varname)
return IPython.core.display.Javascript(js_code)
# inline
%pylab inline
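# A quick usage example for the helper defined above; it only does something when run in a
# live notebook (it relies on notebook-side JavaScript) and should be the last expression in
# a cell so the returned Javascript object is displayed and executed.
ipynb_input('username', 'What is your username')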
Explanation: To close this instance, press Ctrl-C in the IPython notebook terminal console.
Static presentations allow the presenter to see speaker's notes (use the 's' key).
If running dynamically, run the scripts below.
Pre load some useful libraries
End of explanation |
13,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We start by adding some data from an hdf file.
You will also need the python package h5py.
The data object keeps the loaded data, electrode geometry, ground truth, and offers some pre-processing functions.
Step1: Next we define the inverse problem parameters as a dictionary. Parameter's not specified will be filled with default values.
Step2: Now we are ready to use different source localization methods.
Step3: And visualize the results
Step4: Alternatively we can use the optimization methods built on top of CasADi package. And initialize the problem.
Step5: And use the optimization method as described in the thesis "Source localization for high-density microelectrode arrays" by Cem Uran. | Python Code:
# Data path/filename
t_ind = 38
data_path = '../data/'
file_name = data_path + 'data_sim_low.hdf5'
data_options = {'flag_cell': True, 'flag_electode': False}
data = data_in(file_name, **data_options)
Explanation: We start by adding some data from an hdf file.
You will also need the python package h5py.
The data object keeps the loaded data, electrode geometry, ground truth, and offers some pre-processing functions.
End of explanation
localization_options = {'p_vres':20, 'p_jlen':0, 'p_erad': 5, 't_ind': 38, 'flag_depthweighted': False}
loc = data_out(data, **localization_options)
Explanation: Next we define the inverse problem parameters as a dictionary. Parameter's not specified will be filled with default values.
End of explanation
loc.cmp_sloreta()
Explanation: Now we are ready to use different source localization methods.
End of explanation
loc.xres = loc.res[:, t_ind]
vis = visualize(data=data, loc=loc)
vis.show_snapshot()
Explanation: And visualize the results
End of explanation
optimization_options = {'p_vres':10, 'p_jlen':0, 'p_erad': 10,
'solver': p_solver,
'hessian': p_hessian,
'linsol': p_linsol,
'method': p_method,
't_ind': 35, 't_int': 1,
'sigma': float(p_sparse),
'flag_depthweighted': bool(int(p_norm)),
'flag_parallel': False,
'datafile_name': 'output_file',
'flag_lift_mask': False,
'flag_data_mask': True,
'flag_callback': True,
'flag_callback_plot': True,
'callback_steps': 40,
'p_dyn': float(p_dynamic)
}
opt = opt_out(data, **optimization_options)
Explanation: Alternatively we can use the optimization methods built on top of CasADi package. And initialize the problem.
End of explanation
opt.set_optimization_variables_thesis()
Explanation: And use the optimization method as described in the thesis "Source localization for high-density microelectrode arrays" by Cem Uran.
End of explanation |
13,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Classes
Point class
We will write a class for a point in a two dimensional Euclidian space ($\mathbb{R}^2$).
We start with the class definition (def) and the constructor (__init__) which defines the creation of a new class instance.
Note
Step2: Notice that when we send a Point to the console we get
Step4: Which is not useful, so we will define how Point is represented in the console using __repr__.
Step6: Next up we define a method to add two points. Addition is by elements - $(x_1, y_1) + (x_2, y_2) = (x_1+x_2, y_1+y_2)$.
We also allow to add an int, in which case we add the point to a another point with both coordinates equal to the argument value.
Step8: A nicer way to do it is to overload the addition operator + by defining the addition method name to a name Python reserves for addition - __add__ (those are double underscores)
Step9: We want to be a able to compare Points
Step11: So == checks by identity and > is not defined. Let us overload both these operators
Step12: First we check if two points are equal
Step13: Then if one is strictly smaller than the other
Step15: The addition operator + returns a new instance.
Next we will write a method that instead of returning a new instance, changes the current instance
Step17: We now write a method that given many points, checks if the current point is more extreme than the other points.
Note that the argument *points means that more than one argument may be given.
Step18: We can also use the method via the class instead of the instance, and give the instance of interest (the one that we want to know if it is the extreme) as the first argument self. Much like this, we can either do 'hi'.upper() or str.upper('hi').
Step21: Rectangle class
We will implement two classes for rectangles, and compare the two implementations.
First implementation - two points
The first implementation defines a rectangle by its lower left and upper right vertices.
Step23: Second implementation - point and dimensions
The second implementation defines a rectangle by the lower left point, the height and the width.
We define the exact same methods as in Rectangle1, with the same input and output, but different inner representation / implementation. | Python Code:
class Point():
    """Holds on a point (x,y) in the plane"""
def __init__(self, x=0, y=0):
assert isinstance(x, (int, float)) and isinstance(y, (int, float))
self.x = float(x)
self.y = float(y)
p = Point(1,2)
print("point", p.x, p.y)
origin = Point()
print("origin", origin.x, origin.y)
Explanation: Classes
Point class
We will write a class for a point in a two dimensional Euclidean space ($\mathbb{R}^2$).
We start with the class definition (def) and the constructor (__init__) which defines the creation of a new class instance.
Note:
The first argument to class methods (class functions) is always self, a reference to the instance.
The other arguments to __init__ have default values of 0.
We assert that the __init__ arguments are numbers.
End of explanation
p
Explanation: Notice that when we send a Point to the console we get:
End of explanation
class Point():
Holds on a point (x,y) in the plane
def __init__(self, x=0, y=0):
assert isinstance(x, (int, float)) and isinstance(y, (int, float))
self.x = float(x)
self.y = float(y)
def __repr__(self):
return "Point(" + str(self.x) + ", " + str(self.y) + ")"
Point(1,2)
Explanation: Which is not useful, so we will define how Point is represented in the console using __repr__.
End of explanation
class Point():
Holds on a point (x,y) in the plane
def __init__(self, x=0, y=0):
assert isinstance(x, (int, float)) and isinstance(y, (int, float))
self.x = float(x)
self.y = float(y)
def __repr__(self):
return "Point(" + str(self.x) + ", " + str(self.y) + ")"
def add(self, other):
assert isinstance(other, (int, Point))
if isinstance(other, Point):
return Point(self.x + other.x , self.y + other.y)
else: # other is int, taken as (int, int)
return Point(self.x + other , self.y + other)
Point(1,1).add(Point(2,2))
Point(1,1).add(2)
Explanation: Next up we define a method to add two points. Addition is by elements - $(x_1, y_1) + (x_2, y_2) = (x_1+x_2, y_1+y_2)$.
We also allow to add an int, in which case we add the point to a another point with both coordinates equal to the argument value.
End of explanation
class Point():
Holds on a point (x,y) in the plane
def __init__(self, x=0, y=0):
assert isinstance(x, (int, float)) and isinstance(y, (int, float))
self.x = float(x)
self.y = float(y)
def __repr__(self):
return "Point(" + str(self.x) + ", " + str(self.y) + ")"
def __add__(self, other):
assert isinstance(other, (int, Point))
if isinstance(other, Point):
return Point(self.x + other.x , self.y + other.y)
else: # other is int, taken as (int, int)
return Point(self.x + other , self.y + other)
Point(1,1) + Point(2,2)
Point(1,1) + 2
Explanation: A nicer way to do it is to overload the addition operator + by defining the addition method name to a name Python reserves for addition - __add__ (those are double underscores):
End of explanation
Point(1,2) == Point(2,1)
Point(1,2) == Point(1,2)
p = Point()
p == p
Point(1,2) > Point(2,1)
Explanation: We want to be a able to compare Points:
End of explanation
class Point():
Holds on a point (x,y) in the plane
def __init__(self, x=0, y=0):
assert isinstance(x, (int, float)) and isinstance(y, (int, float))
self.x = float(x)
self.y = float(y)
def __repr__(self):
return "Point(" + str(self.x) + ", " + str(self.y) + ")"
def __add__(self, other):
assert isinstance(other, (int, Point))
if isinstance(other, Point):
return Point(self.x + other.x , self.y + other.y)
else: # other is int, taken as (int, int)
return Point(self.x + other , self.y + other)
def __eq__(self, other):
return (self.x, self.y) == (other.x, other.y)
def __gt__(self, other):
return (self.x > other.x and self.y > other.y)
Explanation: So == checks by identity and > is not defined. Let us overload both these operators:
End of explanation
Point(1,0) == Point(1,2)
Point(1,0) == Point(1,0)
Explanation: First we check if two points are equal:
End of explanation
Point(1,0) > Point(1,2)
Explanation: Then if one is strictly smaller than the other:
End of explanation
class Point():
    """Holds on a point (x,y) in the plane"""
def __init__(self, x=0, y=0):
assert isinstance(x, (int, float)) and isinstance(y, (int, float))
self.x = float(x)
self.y = float(y)
def __repr__(self):
return "Point(" + str(self.x) + ", " + str(self.y) + ")"
def __eq__(self, other):
return (self.x, self.y) == (other.x, other.y)
def __gt__(self, other):
return (self.x > other.x and self.y > other.y)
def __add__(self, other):
assert isinstance(other, (int, Point))
if isinstance(other, Point):
return Point(self.x + other.x , self.y + other.y)
else: # other is int, taken as (int, int)
return Point(self.x + other , self.y + other)
def increment(self, other):
'''this method changes self (add "inplace")'''
assert isinstance(other,Point)
self.x += other.x
self.y += other.y
p = Point(6.5, 7)
p + Point(1,2)
print(p)
p.increment(Point(1,2))
print(p)
Point(5,6) > Point(1,2)
Explanation: The addition operator + returns a new instance.
Next we will write a method that instead of returning a new instance, changes the current instance:
End of explanation
class Point():
Holds on a point (x,y) in the plane
def __init__(self, x=0, y=0):
assert isinstance(x, (int, float)) and isinstance(y, (int, float))
self.x = float(x)
self.y = float(y)
def __repr__(self):
return "Point(" + str(self.x) + ", " + str(self.y) + ")"
def __eq__(self, other):
return (self.x, self.y) == (other.x, other.y)
def __lt__(self, other):
return (self.x < other.x and self.y < other.y)
def __add__(self, other):
assert isinstance(other, (int, Point))
if isinstance(other, Point):
return Point(self.x + other.x , self.y + other.y)
else: # other is int, taken as (int, int)
return Point(self.x + other , self.y + other)
def increment(self, other):
'''this method changes self (add "inplace")'''
assert isinstance(other,Point)
self.x += other.x
self.y += other.y
def is_extreme(self, *points):
for point in points:
if not self > point:
return False
return True
p = Point(5, 6)
p.is_extreme(Point(1,1))
p.is_extreme(Point(1,1), Point(2,5), Point(6,2))
Explanation: We now write a method that given many points, checks if the current point is more extreme than the other points.
Note that the argument *points means that more than one argument may be given.
End of explanation
Point.is_extreme(Point(7,8), Point(1,1), Point(4,5), Point(2,3))
Explanation: We can also use the method via the class instead of the instance, and give the instance of interest (the one that we want to know if it is the extreme) as the first argument self. Much like this, we can either do 'hi'.upper() or str.upper('hi').
End of explanation
class Rectangle1():
    """Holds a parallel-axes rectangle by storing two points:
    lower left vertex - llv
    upper right vertex - urv
    """
def __init__(self, lower_left_vertex, upper_right_vertex):
assert isinstance(lower_left_vertex, Point)
assert isinstance(upper_right_vertex, Point)
assert lower_left_vertex < upper_right_vertex
self.llv = lower_left_vertex
self.urv = upper_right_vertex
def __repr__(self):
representation = "Rectangle with lower left {0} and upper right {1}"
return representation.format(self.llv, self.urv)
def dimensions(self):
height = self.urv.y - self.llv.y
width = self.urv.x - self.llv.x
return height, width
def area(self):
height, width = self.dimensions()
area = height * width
return area
def transpose(self):
        """Reflection with regard to the line passing through lower left vertex with angle 315 (-45) degrees"""
height, width = self.dimensions()
self.urv = self.llv
self.llv = Point(self.urv.x - height, self.urv.y - width)
rec = Rectangle1(Point(), Point(2,1))
print(rec)
print("Area:", rec.area())
print("Dimensions:", rec.dimensions())
rec.transpose()
print("Transposed:", rec)
Explanation: Rectangle class
We will implement two classes for rectangles, and compare the two implementations.
First implementation - two points
The first implementation defines a rectangle by its lower left and upper right vertices.
End of explanation
class Rectangle2():
    """Holds a parallel-axes rectangle by storing lower left point, height and width"""
def __init__(self, point, height, width):
assert isinstance(point, Point)
assert isinstance(height, (int,float))
assert isinstance(width, (int,float))
assert height > 0
assert width > 0
self.point = point
self.height = float(height)
self.width = float(width)
def __repr__(self):
representation = "Rectangle with lower left {0} and upper right {1}"
return representation.format(self.point, Point(self.point.x + self.width, self.point.y + self.height))
def dimensions(self):
return self.height, self.width
def area(self):
area = self.height * self.width
return area
def transpose(self):
self.point = Point(self.point.x - self.height , self.point.y - self.width)
self.height, self.width = self.width, self.height
rec = Rectangle2(Point(), 1, 2)
print(rec)
print("Area:", rec.area())
print("Dimensions:", rec.dimensions())
rec.transpose()
print("Transposed:", rec)
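# A small check (not in the original notebook) that the two implementations agree on the
# derived quantities, which is the point of keeping the same method interface.
r1 = Rectangle1(Point(1, 1), Point(3, 2))
r2 = Rectangle2(Point(1, 1), 1, 2)           # same rectangle: height 1, width 2
print(r1.area() == r2.area())                # True
print(r1.dimensions() == r2.dimensions())    # (1.0, 2.0) for both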
Explanation: Second implementation - point and dimensions
The second implementation defines a rectangle by the lower left point, the height and the width.
We define the exact same methods as in Rectangle1, with the same input and output, but different inner representation / implementation.
End of explanation |
13,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Part 1
Step2: 1.2) Finding Features
1.2.1) Find Candidate Features
Now that we have the DCIDs of all counties for your state of interest, let's figure out what features to use!
First, let's query Data Commons for data we're interested in using the build_multivariate_dataframe() method.
We'll start out with the following features
Step3: 1.2.1) Curate Features
Now that we've got a list of candidate features, we'll need to narrow it down to the features that will be most useful for our analysis.
How do you know which features are useful? One helpful tool is to use a correlation matrix. You can think of a correlation as a table of values with each features in the rows and columns. The further away from zero the value in any particular cell is, the more highly correlated the feature corresponding to its row and column are.
Run the following code block to generate a correlation matrix of the features you've chosen. A larger size/darker color denotes stronger correlation.
Step4: 1.2B) Why does the diagonal from top left to bottom right have such strong correlations?
1.2C) Which features correlate the most? Which features correlate the least?
1.2D) Which features do you think will be most useful for predicting life expectancy? Why?
1.2E) Do any features (life expectancy is not counted as a feature) correlate strongly with each other?
1.2F) If two features correlate very strongly with each other, would you want to include them both in your analysis? Why or why not?
1.2G) Using your answers for 1.2C - 1.2F, fill in the code box below with a filtered list of statistical variables that you think are best to use for our model. The code box will generate a new dataframe containing only our selected useful features.
Step5: Those DCID row names are not very accessible. Let's replace them with their human-readable names, by using the get_property_values() method to get a mapping of each county's dcid to its name.
Step6: 1.3) Data Visualization
Now that we have our features, it's time to explore the data more in depth! This step is extremely important. The more familiar we are with our data, the better models we can build, and the better equiped we will be to troubleshoot when something goes wrong.
1.2) For each feature, generate a plot or otherwise write code to answer each of the following
Step7: 1.3) Data Cleaning
Before proceeding with our model, we first need to clean our data. Sometimes our data comes to us incomplete, with missing values, or in a different unit than we were expecting. It's always best practice to look through your data to make sure there are no corrupt, inaccurate, or missing records. If we do find such entries, we need to replace, motify, or remove that data. The process of correcting or removing bad data is known as data cleaning.
Some common things to look out for
Step8: Part 2
Step9: 2.2) Feature Representations
If any of your data is discrete, getting a good encoding of discrete features is particularly important. You want to create “opportunities” for your model to find the underlying regularities.
2.2A) For each of the following encodings, name an example of data the encoding would work well on, as well as an example of data it would not work as well for. Explain your answers.
Numeric Assign each of these values a number, say 1.0/k, 2.0/k, . . . , 1.0.
Thermometer code Use a vector of length k binary variables, where we convert discrete input value $0 < j < k$ into a vector in which the first j values are 1.0 and the rest are 0.0.
Factored code If your discrete values can sensibly be decomposed into two parts, then it’s best to treat those as two separate features (choosing a separate encoding scheme for each).
One-hot code Use a vector of length k, where we convert discrete input value $0 < j < k$ into a vector in which all values are 0.0, except for the $j$-th, which is 1.0.
2.2B) Write a function that creates a one-hot encoding.
Step10: 2.3) Standardization
It is typically useful to scale numeric data, so that it tends to be in the range [−1, +1]. Without performing this transformation, if you have
one feature with much larger values than another, it will take the learning algorithm a lot of work to find parameters that can put them on an equal basis.
Typically, we use the transformation
$$ \phi(x) = \frac{\bar{x} − x}{\sigma} $$
where $\bar{x}$ is the average of the $x_i$, and $\sigma$ is the standard
deviation of the $x_i$.
The resulting feature values will have mean 0 and standard deviation 1. This transformation is sometimes called standardizing a variable.
2.3) Write code to standardize each of the features in your dataframe.
Step11: Part 3
Step12: Let's now see how accurate our model was on our test set. | Python Code:
# We need to install the Data Commons API, since they don't ship natively with
# most python installations.
# In Colab, we'll be installing the Data Commons python and pandas APIs through pip.
!pip install datacommons --upgrade --quiet
!pip install datacommons_pandas --upgrade --quiet
# We'll also install some nice libraries for some pretty plots
# Import the two methods from heatmap library to make pretty correlation plots
!pip install heatmapz --upgrade --quiet
# Imports
# Data Commons Python and Pandas APIs
import datacommons as dc
import datacommons_pandas as dcp
# For manipulating data
import pandas as pd
# For creating a model
from sklearn import linear_model
from sklearn.metrics import mean_squared_error as mse
from sklearn.model_selection import train_test_split
# For visualizations
import matplotlib.pyplot as plt
from heatmap import heatmap, corrplot
Explanation: <a href="https://colab.research.google.com/github/datacommonsorg/api-python/blob/master/notebooks/intro_data_science/Feature_Engineering.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2022 Google LLC.
SPDX-License-Identifier: Apache-2.0
Exploring Feature Engineering
Welcome! In this lesson, we'll be exploring various techniques for feature engineering. We'll be walking through the steps one takes to set up your data for your machine learning models, starting with acquiring and exploring the data, working through different transformations and feature representation choices, and analyzing how those design decisions affect our model's results.
Learning Objectives:
In this lesson, we'll be covering
* Tools for data exploration and visualization
* Useful feature representations
* Useful feature transformations
* Why is feature engineering important?
Need extra help?
If you're new to Google Colab, take a look at this getting started tutorial.
To build more familiarity with the Data Commons API, check out these Data Commons Tutorials.
And for help with Pandas and manipulating data frames, take a look at the Pandas Documentation.
We'll be using the scikit-learn library for implementing our models today. Documentation can be found here.
As usual, if you have any other questions, please reach out to your course staff!
Part 0: Introduction and Setup
As a result of the COVID-19 pandemic, we have very detailed statistics on the number of COVID-19 cases across the United States. Many studies have been done on the medical bases of the disease, but it is widely known that societal factors, like public policy, can greatly affect case numbers. Today, we'll take advantage of Data Commons, an open-source project that allows us to easily combine data from multiple different sources, to analyze the impact of social factors on COVID-19 cases. While public policy is hard to quantify into a data point, perhaps we can find other social factors that correlate with the number of COVID-19 cases.
Our data science question: How do various social factors (median income, household size, etc.) affect the cummulative number of COVID-19 cases?
Run the following codeboxes to install and load the packages required.
End of explanation
# Choose your state:
your_state_dcid = "geoId/06" # Using California as an example # YOUR DCID HERE
# Get a list of all DCIDs for counties in that state.
county_dcids = dc.get_places_in([your_state_dcid], "County")[your_state_dcid]
print(county_dcids)
Explanation: Part 1: Acquiring Data
1.1) Setting the Scope
As a starting point, we'll keep the scope of our analysis to the United States. Your job will be to select a state of interest and query for data at the county level, across all counties for your state of choice.
In Data Commons, every concept has a unique identifier, called a DCID, that's needed when querying for data. First, let's grab the DCIDs of all counties for your state of choice. We can use the get_places_in() method to list the DCIDs for all counties in your state of choice easily!
1.1) Choose a US state to analyze. Use the Data Commons Graph Browser to find the DCID for your state of choice, then fill in the code box below with the DCID.
End of explanation
# Create a pandas dataframe containing data for each of the features
# listed above. Your dataframe should have the states going along the rows,
# with one column per feature.
stat_vars_to_query = [
"CumulativeCount_MedicalTest_ConditionCOVID_19_Positive",
"Count_Person",
"Count_Person_MarriedAndNotSeparated",
"Median_Income_Person",
"Count_Household_With4OrMorePerson"
]
raw_df = dcp.build_multivariate_dataframe(county_dcids, stat_vars_to_query)
display(raw_df)
Explanation: 1.2) Finding Features
1.2.1) Find Candidate Features
Now that we have the DCIDs of all counties for your state of interest, let's figure out what features to use!
First, let's query Data Commons for data we're interested in using the build_multivariate_dataframe() method.
We'll start out with the following features:
Population
Median Income
Number of Households with 4 or more people
And of course, since we're analyzing COVID-19 cases, we'll query for that too.
At this stage, we're just loading in data that is potentially interesting to include in our analysis.
1.2A) Take a look at this list of all Statistical Variables. Find at least 3 more variables to add to your analysis.
Note: not all variables are available for all locations. If you notice the dataframe has some missing columns, try a different variable!
End of explanation
# Generate a correlation matrix plot
# We'll use the heatmapz package to draw a nice one.
plt.figure(figsize=(8, 8))
corrplot(raw_df.corr(), size_scale=300);
Explanation: 1.2.1) Curate Features
Now that we've got a list of candidate features, we'll need to narrow it down to the features that will be most useful for our analysis.
How do you know which features are useful? One helpful tool is to use a correlation matrix. You can think of a correlation as a table of values with each features in the rows and columns. The further away from zero the value in any particular cell is, the more highly correlated the feature corresponding to its row and column are.
Run the following code block to generate a correlation matrix of the features you've chosen. A larger size/darker color denotes stronger correlation.
End of explanation
filtered_stat_vars_to_query = [
"LifeExpectancy_Person",
"Count_Person",
"Count_Person_MarriedAndNotSeparated",
"Median_Income_Person",
"Count_Household_With4OrMorePerson"
]
# Get data from Data Commons
filtered_df = dcp.build_multivariate_dataframe(county_dcids, stat_vars_to_query)
display(filtered_df)
Explanation: 1.2B) Why does the diagonal from top left to bottom right have such strong correlations?
1.2C) Which features correlate the most? Which features correlate the least?
1.2D) Which features do you think will be most useful for predicting life expectancy? Why?
1.2E) Do any features (life expectancy is not counted as a feature) correlate strongly with each other?
1.2F) If two features correlate very strongly with each other, would you want to include them both in your analysis? Why or why not?
1.2G) Using your answers for 1.2C - 1.2F, fill in the code box below with a filtered list of statistical variables that you think are best to use for our model. The code box will generate a new dataframe containing only our selected useful features.
End of explanation
# Produce a dictionary mapping dcids to county names
# e.g. {'dcid' : ['County Name']}
county_name_dict = dc.get_property_values(county_dcids, 'name')
# Replace DCIDs with human readable names
# Make Row Names More Readable
# --- First, we'll copy the dcids into their own column
# --- Next, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
# --- Finally, we'll set this column as the new index
df = filtered_df.copy(deep=True)
df['DCID'] = df.index
county_name_dict = {key:value[0] for key, value in county_name_dict.items()}
df['County'] = pd.Series(county_name_dict)
df.set_index('County', inplace=True)
display(df)
Explanation: Those DCID row names are not very accessible. Let's replace them with their human-readable names, by using the get_property_values() method to get a mapping of each county's dcid to its name.
End of explanation
# Use this space to create scatter plots, histograms, etc.
# to answer the questions above.
# YOUR CODE HERE
# Example Solution:
# Get some basic statistics
display(df.describe())
# Plot histograms to get an idea of data spread
display(df["Median_Income_Person"].plot.hist(bins=100))
display(df["Count_Household_With4OrMorePerson"].plot.hist(bins=100))
display(df["CumulativeCount_MedicalTest_ConditionCOVID_19_Positive"].plot.hist(bins=10))
Explanation: 1.3) Data Visualization
Now that we have our features, it's time to explore the data more in depth! This step is extremely important. The more familiar we are with our data, the better models we can build, and the better equipped we will be to troubleshoot when something goes wrong.
1.2) For each feature, generate a plot or otherwise write code to answer each of the following:
- What is the maximum value of the feature?
- What is the minimum value of the feature?
- What is the distribution of values for this feature?
- Is the data complete? Are there any NaN or empty values?
- Are there any strange outliers?
End of explanation
# Use this code box to implement any imputation and data cleaning.
# YOUR CODE HERE
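# Added example sketch (one possible answer, not the official solution). It assumes the
# dataframe from the previous cells is named df.
cleaned_df = df.copy()
# Drop any columns that are entirely empty.
cleaned_df = cleaned_df.dropna(axis=1, how="all")
# Impute the remaining missing numeric values with each column's median.
cleaned_df = cleaned_df.fillna(cleaned_df.median(numeric_only=True))
# Verify that no missing values remain.
display(cleaned_df.isna().sum())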
Explanation: 1.3) Data Cleaning
Before proceeding with our model, we first need to clean our data. Sometimes our data comes to us incomplete, with missing values, or in a different unit than we were expecting. It's always best practice to look through your data to make sure there are no corrupt, inaccurate, or missing records. If we do find such entries, we need to replace, modify, or remove that data. The process of correcting or removing bad data is known as data cleaning.
Some common things to look out for:
* Data can be missing (e.g. an empty cell in a column). Depending on your application and context, sometimes there's a clear "default" value that can be filled in.
* Duplicate rows or columns. You will need to delete the extras.
* The format of the data you're provided is incorrect. This can include strange naming conventions, typos, strange capitalization, or inconsistencies (e.g. having both "N/A" and "Not Applicable" appear).
1.3A) Why bother replacing/modifying/removing "dirty" data in the first place? What do you think would happen if we found "dirty" data, but trained a model on such data without data cleaning first?
1.3B) How would you approach handling any NaN or empty values in a dataframe? Should we remove that row? Remove the feature? Or should we replace NaNs with a particular value (and if so, how do you decide what value that should be)?
1.3C) Take a look at the dataframe outputted by the code box above from section 1.2. Are there any values that need to be cleaned? If so, write code to implement the answers to the above questions using the code box below.
Hint: If you're struggling, check out the Pandas documentation for methods you can use to manipulate the data in the data frame.
End of explanation
# Try different transformations on your features like taking the log, binning, etc.
# YOUR CODE HERE
# Example solution:
# Data Commons often contains data already in "per capita" form, but as an exercise
# it'll be good for students to realize they should compare features like household counts
# normalized by population and do this themselves.
# Normalizing Count_HouseholdWith4OrMorePerson by population Count
household_count_percapita = df['Count_Household_With4OrMorePerson']/df['Count_Person']
household_df = household_count_percapita.to_frame()
household_df.index.name = 'place'
household_df = household_df.rename({0:'Household4orMore_percapita'}, axis=1)
features_df_new = pd.concat([household_df,df], axis=1)
display(features_df_new)
Explanation: Part 2: Building Features
Now that we've selected and explored some features, we need to decide exactly how to encode our data into a feature vector to feed into our model.
2.1) Feature Transformations
Sometimes transforming the data can reveal interesting combinations, or better scale our data. Here are some things to look out for:
If your data has a skewed distribution or large changes in magnitude, it may be helpful to take the $log()$ of your data to bring it closer to normal.
Other times it may be helpful to bin close values together (for example, creating groupings by age: 0-10, 11-20, 21-30, etc.)
When working with population or demographic data, it's often also prudent to consider whether the features you are using should be scaled by population.
2.1) Choose a feature transformation to implement, and use it to transform at least one of your features.
End of explanation
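# Added sketch of the other transformations mentioned above (log transform and binning).
# It works on a copy so the features_df_new used by later cells is left unchanged.
import numpy as np
transformed_example = features_df_new.copy()
transformed_example["LogMedianIncome"] = np.log1p(transformed_example["Median_Income_Person"])
transformed_example["IncomeBin"] = pd.cut(transformed_example["Median_Income_Person"], bins=5)
display(transformed_example.head())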
# Write a function that implements one-hot encoding.
# YOUR CODE HERE
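# Added example sketch of one possible answer (a pandas alternative is pd.get_dummies):
import numpy as np
def one_hot_encode(value, k):
    # Return a length-k vector that is 0.0 everywhere except position `value`, which is 1.0.
    encoding = np.zeros(k)
    encoding[value] = 1.0
    return encoding
# e.g. one_hot_encode(2, 5) -> array([0., 0., 1., 0., 0.])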
Explanation: 2.2) Feature Representations
If any of your data is discrete, getting a good encoding of discrete features is particularly important. You want to create “opportunities” for your model to find the underlying regularities.
2.2A) For each of the following encodings, name an example of data the encoding would work well on, as well as an example of data it would not work as well for. Explain your answers.
Numeric Assign each of these values a number, say 1.0/k, 2.0/k, . . . , 1.0.
Thermometer code Use a vector of length k binary variables, where we convert discrete input value $0 < j < k$ into a vector in which the first j values are 1.0 and the rest are 0.0.
Factored code If your discrete values can sensibly be decomposed into two parts, then it’s best to treat those as two separate features (choosing a separate encoding scheme for each).
One-hot code Use a vector of length k, where we convert discrete input value $0 < j < k$ into a vector in which all values are 0.0, except for the $j$-th, which is 1.0.
2.2B) Write a function that creates a one-hot encoding.
End of explanation
# Create a new dataframe with each of the features standardized.
# YOUR CODE HERE
# Solution:
standardized_df = (features_df_new - features_df_new.mean())/features_df_new.std()
display(standardized_df)
Explanation: 2.3) Standardization
It is typically useful to scale numeric data, so that it tends to be in the range [−1, +1]. Without performing this transformation, if you have
one feature with much larger values than another, it will take the learning algorithm a lot of work to find parameters that can put them on an equal basis.
Typically, we use the transformation
$$ \phi(x) = \frac{x - \bar{x}}{\sigma} $$
where $\bar{x}$ is the average of the $x_i$, and $\sigma$ is the standard
deviation of the $x_i$.
The resulting feature values will have mean 0 and standard deviation 1. This transformation is sometimes called standardizing a variable.
2.3) Write code to standardize each of the features in your dataframe.
End of explanation
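# Added side note: scikit-learn's StandardScaler applies the same (x - mean) / std
# transformation, which is convenient inside sklearn pipelines.
from sklearn.preprocessing import StandardScaler
sklearn_standardized = StandardScaler().fit_transform(features_df_new)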
# Run me!
# Convert Dataframes into data and labels for the model
target_df = standardized_df
X = target_df[['Household4orMore_percapita','Median_Income_Person']]
Y = target_df[['CumulativeCount_MedicalTest_ConditionCOVID_19_Positive']]
# Split into training and test sets
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2)
# Fit an OLS linear regression model
model = linear_model.LinearRegression(fit_intercept=True)
model.fit(x_train, y_train)
print('Model Intercept: {}'.format(model.intercept_))
print('Model Coefficients: {}'.format(model.coef_))
Explanation: Part 3: Testing Your Features
Now, let's see how well a simple linear regression model can learn to predict the cumulative number of COVID-19 cases, given our features.
End of explanation
train_pred = model.predict(x_train)
test_pred = model.predict(x_test)
print('Training Error: {}'.format(mse(train_pred, y_train)))
print('Test Error: {}'.format(mse(test_pred, y_test)))
Explanation: Let's now see how accurate our model was on our test set.
End of explanation |
13,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Setup
Change to GPU runtime
Step2: The jaxlib version must correspond to the version of the existing CUDA installation you want to use, with cuda110 for CUDA 11.0, cuda102 for CUDA 10.2, cuda101 for CUDA 10.1, and cuda100 for CUDA 10.0.
Step3: Load data
Step4: Train neural XC functional with KS regularizer
Step7:
Step8: Visualize the model prediction on H$_2$ over training
Step9: Visualize the optimal checkpoint in paper
Here we use the neural XC functional trained with Kohn-Sham regularizer in
Kohn-Sham equations as regularizer
Step10: Solve one H2 separation
Step11: Neural XC
Step12: Local density approximation (LDA)
As a comparison, we show the solution on the same molecule using LDA functional | Python Code:
#@title Default title text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google-research/google-research/blob/master/jax_dft/examples/training_neural_xc_functional.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# Check cuda version
!nvcc --version
Explanation: Setup
Change to GPU runtime: Runtime -> Change runtime type -> Hardware accelerator -> GPU
End of explanation
# For GPU runtime
!pip install --upgrade jax jaxlib==0.1.62+cuda110 -f https://storage.googleapis.com/jax-releases/jax_releases.html
# Install jax-dft
!git clone https://github.com/google-research/google-research.git
!pip install google-research/jax_dft
!rm -rf h2
!wget https://github.com/google-research/google-research/raw/master/jax_dft/data/h2.zip
!unzip h2.zip
!wget https://github.com/google-research/google-research/raw/master/jax_dft/data/h2_optimal.pkl
import os
os.environ['XLA_FLAGS'] = '--xla_gpu_deterministic_reductions'
import glob
import pickle
import time
import jax
from jax import random
from jax import tree_util
from jax.config import config
import jax.numpy as jnp
from jax_dft import datasets
from jax_dft import jit_scf
from jax_dft import losses
from jax_dft import neural_xc
from jax_dft import np_utils
from jax_dft import scf
from jax_dft import utils
from jax_dft import xc
import matplotlib.pyplot as plt
import numpy as np
import scipy
# Set the default dtype as float64
config.update('jax_enable_x64', True)
print(f'JAX devices: {jax.devices()}')
Explanation: The jaxlib version must correspond to the version of the existing CUDA installation you want to use, with cuda110 for CUDA 11.0, cuda102 for CUDA 10.2, cuda101 for CUDA 10.1, and cuda100 for CUDA 10.0.
End of explanation
train_distances = [128, 384] #@param
dataset = datasets.Dataset(path='h2/', num_grids=513)
grids = dataset.grids
train_set = dataset.get_molecules(train_distances)
#@title Check distances are symmetric
if not np.all(utils.location_center_at_grids_center_point(
train_set.locations, grids)):
raise ValueError(
'Training set contains examples '
'not centered at the center of the grids.')
#@title Initial density
initial_density = scf.get_initial_density(train_set, method='noninteracting')
Explanation: Load data
End of explanation
#@title Initialize network
network = neural_xc.build_global_local_conv_net(
num_global_filters=16,
num_local_filters=16,
num_local_conv_layers=2,
activation='swish',
grids=grids,
minval=0.1,
maxval=2.385345,
downsample_factor=0)
network = neural_xc.wrap_network_with_self_interaction_layer(
network, grids=grids, interaction_fn=utils.exponential_coulomb)
init_fn, neural_xc_energy_density_fn = neural_xc.global_functional(
network, grids=grids)
init_params = init_fn(random.PRNGKey(0))
initial_checkpoint_index = 0
spec, flatten_init_params = np_utils.flatten(init_params)
print(f'number of parameters: {len(flatten_init_params)}')
Explanation: Train neural XC functional with KS regularizer
End of explanation
#@markdown The number of Kohn-Sham iterations in training.
num_iterations = 15 #@param{'type': 'integer'}
#@markdown The density linear mixing factor.
alpha = 0.5 #@param{'type': 'number'}
#@markdown Decay factor of density linear mixing factor.
alpha_decay = 0.9 #@param{'type': 'number'}
#@markdown The number of density differences in the previous iterations to mix the
#@markdown density. Linear mixing is num_mixing_iterations = 1.
num_mixing_iterations = 1 #@param{'type': 'integer'}
#@markdown The stopping criteria of Kohn-Sham iteration on density.
density_mse_converge_tolerance = -1. #@param{'type': 'number'}
#@markdown Apply stop gradient on the output state of this step and all steps
#@markdown before. The first KS step is indexed as 0. Default -1, no stop gradient
#@markdown is applied.
stop_gradient_step=-1 #@param{'type': 'integer'}
def _kohn_sham(flatten_params, locations, nuclear_charges, initial_density):
return jit_scf.kohn_sham(
locations=locations,
nuclear_charges=nuclear_charges,
num_electrons=dataset.num_electrons,
num_iterations=num_iterations,
grids=grids,
xc_energy_density_fn=tree_util.Partial(
neural_xc_energy_density_fn,
params=np_utils.unflatten(spec, flatten_params)),
interaction_fn=utils.exponential_coulomb,
# The initial density of KS self-consistent calculations.
initial_density=initial_density,
alpha=alpha,
alpha_decay=alpha_decay,
enforce_reflection_symmetry=True,
num_mixing_iterations=num_mixing_iterations,
density_mse_converge_tolerance=density_mse_converge_tolerance,
stop_gradient_step=stop_gradient_step)
_batch_jit_kohn_sham = jax.vmap(_kohn_sham, in_axes=(None, 0, 0, 0))
grids_integration_factor = utils.get_dx(grids) * len(grids)
def loss_fn(
flatten_params, locations, nuclear_charges,
initial_density, target_energy, target_density):
"""Get losses."""
states = _batch_jit_kohn_sham(
flatten_params, locations, nuclear_charges, initial_density)
# Energy loss
loss_value = losses.trajectory_mse(
target=target_energy,
predict=states.total_energy[
# The starting states have larger errors. Ignore the number of
# starting states (here 10) in loss.
:, 10:],
# The discount factor in the trajectory loss.
discount=0.9) / dataset.num_electrons
# Density loss
loss_value += losses.mean_square_error(
target=target_density, predict=states.density[:, -1, :]
) * grids_integration_factor / dataset.num_electrons
return loss_value
value_and_grad_fn = jax.jit(jax.value_and_grad(loss_fn))
#@markdown The frequency of saving checkpoints.
save_every_n = 20 #@param{'type': 'integer'}
loss_record = []
def np_value_and_grad_fn(flatten_params):
"""Gets loss value and gradient of parameters as float and numpy array."""
start_time = time.time()
# Automatic differentiation.
train_set_loss, train_set_gradient = value_and_grad_fn(
flatten_params,
locations=train_set.locations,
nuclear_charges=train_set.nuclear_charges,
initial_density=initial_density,
target_energy=train_set.total_energy,
target_density=train_set.density)
step_time = time.time() - start_time
step = initial_checkpoint_index + len(loss_record)
print(f'step {step}, loss {train_set_loss} in {step_time} sec')
# Save checkpoints.
if len(loss_record) % save_every_n == 0:
checkpoint_path = f'ckpt-{step:05d}'
print(f'Save checkpoint {checkpoint_path}')
with open(checkpoint_path, 'wb') as handle:
pickle.dump(np_utils.unflatten(spec, flatten_params), handle)
loss_record.append(train_set_loss)
return train_set_loss, np.array(train_set_gradient)
#@title Use L-BFGS optimizer to update neural network functional
#@markdown This cell trains the model. Each step takes about 1.6s.
max_train_steps = 200 #@param{'type': 'integer'}
_, _, info = scipy.optimize.fmin_l_bfgs_b(
np_value_and_grad_fn,
x0=np.array(flatten_init_params),
# Maximum number of function evaluations.
maxfun=max_train_steps,
factr=1,
m=20,
pgtol=1e-14)
print(info)
#@title loss curve
plt.plot(np.minimum.accumulate(loss_record))
plt.yscale('log')
plt.ylabel('loss')
plt.xlabel('training steps')
plt.show()
Explanation:
End of explanation
#@title Helper functions
plot_distances = [40, 56, 72, 88, 104, 120, 136, 152, 184, 200, 216, 232, 248, 264, 280, 312, 328, 344, 360, 376, 392, 408, 424, 456, 472, 488, 504, 520, 536, 568, 584, 600] #@param
plot_set = dataset.get_molecules(plot_distances)
plot_initial_density = scf.get_initial_density(
plot_set, method='noninteracting')
nuclear_energy = utils.get_nuclear_interaction_energy_batch(
plot_set.locations,
plot_set.nuclear_charges,
interaction_fn=utils.exponential_coulomb)
def kohn_sham(
params, locations, nuclear_charges, initial_density=None, use_lda=False):
return scf.kohn_sham(
locations=locations,
nuclear_charges=nuclear_charges,
num_electrons=dataset.num_electrons,
num_iterations=num_iterations,
grids=grids,
xc_energy_density_fn=tree_util.Partial(
xc.get_lda_xc_energy_density_fn() if use_lda else neural_xc_energy_density_fn,
params=params),
interaction_fn=utils.exponential_coulomb,
# The initial density of KS self-consistent calculations.
initial_density=initial_density,
alpha=alpha,
alpha_decay=alpha_decay,
enforce_reflection_symmetry=True,
num_mixing_iterations=num_mixing_iterations,
density_mse_converge_tolerance=density_mse_converge_tolerance)
def get_states(ckpt_path):
print(f'Load {ckpt_path}')
with open(ckpt_path, 'rb') as handle:
params = pickle.load(handle)
states = []
for i in range(len(plot_distances)):
states.append(kohn_sham(
params,
locations=plot_set.locations[i],
nuclear_charges=plot_set.nuclear_charges[i],
initial_density=plot_initial_density[i]))
return tree_util.tree_multimap(lambda *x: jnp.stack(x), *states)
#@title Distribution of the model trained with Kohn-Sham regularizer
#@markdown Runtime ~20 minutes for 11 checkpoints.
#@markdown To speed up the calculation, you can reduce the number of
#@markdown separations to compute in
#@markdown `Helper functions -> plot_distances`
ckpt_list = sorted(glob.glob('ckpt-?????'))
num_ckpts = len(ckpt_list)
ckpt_states = []
for ckpt_path in ckpt_list:
ckpt_states.append(get_states(ckpt_path))
for i, (states, ckpt_path) in enumerate(zip(ckpt_states, ckpt_list)):
plt.plot(
np.array(plot_distances) / 100,
nuclear_energy + states.total_energy[:, -1],
color=str(0.1 + 0.85 * (num_ckpts - i) / num_ckpts),
label=ckpt_path)
plt.plot(
np.array(plot_distances) / 100,
nuclear_energy + plot_set.total_energy,
c='r', dashes=(10, 8), label='exact')
plt.xlabel(r'$R\,\,\mathrm{(Bohr)}$')
plt.ylabel(r'$E+E_\mathrm{nn}\,\,\mathsf{(Hartree)}$')
plt.legend(bbox_to_anchor=(1.4, 0.8), framealpha=0.5)
plt.show()
Explanation: Visualize the model prediction on H$_2$ over training
End of explanation
states = get_states('h2_optimal.pkl')
plt.plot(
np.array(plot_distances) / 100,
nuclear_energy + states.total_energy[:, -1], lw=2.5, label='KSR')
plt.plot(
np.array(plot_distances) / 100,
nuclear_energy + plot_set.total_energy,
c='r', dashes=(10, 8), label='exact')
plt.xlabel(r'$R\,\,\mathrm{(Bohr)}$')
plt.ylabel(r'$E+E_\mathrm{nn}\,\,\mathsf{(Hartree)}$')
plt.legend(loc=0)
plt.show()
Explanation: Visualize the optimal checkpoint in paper
Here we use the neural XC functional trained with Kohn-Sham regularizer in
Kohn-Sham equations as regularizer: building prior knowledge into machine-learned physics
Li Li, Stephan Hoyer, Ryan Pederson, Ruoxi Sun, Ekin D. Cubuk, Patrick Riley, and Kieron Burke
https://arxiv.org/abs/2009.08551
H2 dissociation curve
End of explanation
distance_x100 = 400 #@param{'type': 'integer'}
#@markdown Plot range on x axis
x_min = -10 #@param{'type': 'number'}
x_max = 10 #@param{'type': 'number'}
with open('h2_optimal.pkl', 'rb') as handle:
params = pickle.load(handle)
test = dataset.get_molecules([distance_x100])
Explanation: Solve one H2 separation
End of explanation
solution = kohn_sham(
params,
locations=test.locations[0],
nuclear_charges=test.nuclear_charges[0])
# Density and XC energy density
_, axs = plt.subplots(
nrows=3,
ncols=num_iterations // 3,
figsize=(2.5 * (num_iterations // 3), 6), sharex=True, sharey=True)
axs[2][2].set_xlabel('x')
for i, ax in enumerate(axs.ravel()):
ax.set_title(f'KS iter {i + 1}')
ax.plot(grids, solution.density[i], label=r'$n$')
ax.plot(grids, test.density[0], 'k--', label=r'exact $n$')
ax.plot(grids, solution.xc_energy_density[i], label=r'$\epsilon_\mathrm{XC}$')
ax.set_xlim(x_min, x_max)
axs[2][-1].legend(bbox_to_anchor=(1.2, 0.8))
axs[1][0].set_ylabel('Neural XC')
plt.show()
plt.plot(
1 + np.arange(num_iterations), solution.total_energy,
label='KS')
truth = test.total_energy[0]
plt.axhline(y=truth, ls='--', color='k', label='exact')
plt.axhspan(
truth - 0.0016, truth + 0.0016, color='0.9', label='chemical accuracy')
plt.xlabel('KS iterations')
plt.ylabel('Energy')
plt.legend()
plt.show()
Explanation: Neural XC
End of explanation
lda = kohn_sham(
None,
locations=test.locations[0],
nuclear_charges=test.nuclear_charges[0],
use_lda=True)
_, axs = plt.subplots(
nrows=3,
ncols=num_iterations // 3,
figsize=(2.5 * (num_iterations // 3), 6), sharex=True, sharey=True)
axs[2][2].set_xlabel('x')
for i, ax in enumerate(axs.ravel()):
ax.set_title(f'KS iter {i + 1}')
ax.plot(grids, lda.density[i], label=r'$n$')
ax.plot(grids, test.density[0], 'k--', label=r'exact $n$')
ax.plot(grids, lda.xc_energy_density[i], label=r'$\epsilon_\mathrm{XC}$')
ax.set_xlim(x_min, x_max)
axs[2][-1].legend(bbox_to_anchor=(1.2, 0.8))
axs[1][0].set_ylabel('LDA')
plt.show()
plt.plot(
1 + np.arange(num_iterations), lda.total_energy,
label='KS')
truth = test.total_energy[0]
plt.axhline(y=truth, ls='--', color='k', label='exact')
plt.axhspan(
truth - 0.0016, truth + 0.0016, color='0.9', label='chemical accuracy')
plt.xlabel('KS iterations')
plt.ylabel('Energy')
plt.legend()
plt.show()
Explanation: Local density approximation (LDA)
As a comparison, we show the solution on the same molecule using LDA functional:
One-dimensional mimicking of electronic structure: The case for exponentials
Thomas E. Baker, E. Miles Stoudenmire, Lucas O. Wagner, Kieron Burke, and Steven R. White
Physical Review B 91.23 (2015): 235141.
The LDA functional is implemented in the jax_dft.xc module.
End of explanation |
13,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
scipy.stats - computational statistics
T-test and ANOVA (one way)
linear regression, curve fitting and parameter estimation
statistical enrichment analysis (GO enrichment, fisher test)
differential expression (error estimation on a synthetic poisson model)
T-test
One of the basic applications of statistics is testing whether a null hypothesis is true. In the previous application of Pandas for gene expression studies we observed that the data was normalized, with its averages matching closely. Suppose we want to know if, in statistical terms, these averages match sufficiently closely. A simple way to check this is to compute all pairwise t-tests and see if all the obtained P-values fall under the 0.005 bar.
http
Step1: ANOVA
But, in order not to get incidentally murdered by a reviewer with a background in statistics, one should use a single test to rule them all, popularly called ANOVA.
Step2: Linear regression
Ever felt the need to fit a straight trendline to a number of points in a scatter plot? This is called linear regression. Here is how you can do LR with numpy. As a useful exercise, let us generate our own dataset.
How is the line fitted? The most basic methodology is least-squares minimization of the sum of squared residuals. One can do least-squares fitting directly in numpy or scikit-learn to obtain the same results.
Step4: Curve fitting and parameter estimation
TODO
Step6: Statistical enrichment analysis
What is enrichment?
Before we move on, we should get back to something that is always obsessing biologists with data
Step9: Perform the statistical test
|               | In Set | Outside Set |
|---------------|--------|-------------|
| GOid ann.     | b      | a           |
| Not GOid ann. | c      | d           |
Read the docs
Step10: Now we want to choose a set of genes to test enrichment, so we output a few terms together with their annotated genes
Step11: Differential Expression
As biologists, DE is finally a statistical concept that I feel no need of explaining. But I have a personal remark | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from scipy import stats
df = pd.read_csv('data/gex.txt', sep = '\t', index_col = 0)
print(df.head(4))
#df.iloc[:,1:].astype(float)
#print df.dtypes
#print type(df.GSM21712.values)
pmatrix = np.zeros((6,6))# the P-value matrix
i = -1
for ci in df.columns[1:]:
i += 1
a = df[ci].values
a = a[~np.isnan(a)]#Removing undefined elements
#a = df[ci].values.astype(float)
#print a.dtype, type(a[0])
j = -1
for cj in df.columns[1:]:
j += 1
if ci == cj: continue
b = df[cj].values
b = b[~np.isnan(b)]
t, p = stats.ttest_ind(a, b, equal_var = False)
#print np.isnan(a).any(), np.isnan(b).any()
pmatrix[i,j]=p
np.set_printoptions(linewidth=200)
np.set_printoptions(precision=2)
print(pmatrix)
df.boxplot()
Explanation: scipy.stats - computational statistics
T-test and ANOVA (one way)
linear regression, curve fitting and parameter estimation
statistical enrichment analysis (GO enrichment, fisher test)
differential expression (error estimation on a synthetic poisson model)
T-test
One of the basic applications of statistics is testing whether a null hypothesis is true. In the previous application of Pandas for gene expression studies we observed that the data was normalized, with its averages matching closely. Suppose we want to know if, in statistical terms, these averages match sufficiently closely. A simple way to check this is to compute all pairwise t-tests and see if all the obtained P-values fall under the 0.005 bar.
http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.ttest_ind.html
End of explanation
import pandas as pd
import numpy as np
from scipy import stats
df = pd.read_csv('data/gex.txt', sep = '\t', index_col = 0)
sl = [] #sample list
for ci in df.columns[1:]:
a = df[ci].values
a = a[~np.isnan(a)]#Removing undefined elements
sl.append(a)
print(stats.f_oneway(*sl))
Explanation: ANOVA
But, in order not to get incidentally murdered by a reviewer with a background in statistics, one should use a single test to rule them all, popularly called ANOVA.
End of explanation
%matplotlib inline
import numpy as np
import pylab as plt
from scipy import stats
nsamp = 30
x = np.linspace(0, 3, nsamp)
y = -0.5*x + 1 #next line randomizes these aligned datapoints
yr = y + .5*np.random.normal(size=nsamp)
slope, intercept, r_value, p_value, std_err = stats.linregress(x,yr)
line = slope*x+intercept
plt.plot(x,line,'r-', x,yr,'o', x,y,'b-')
Explanation: Linear regression
Ever felt the need to fit a straight trendline to a number of points in a scatter plot? This is called linear regression. Here is how you can do LR with numpy. As a useful exercise, let us generate our own dataset.
How is the line fitted? The most basic methodology is least-squares minimization of the sum of squared residuals. One can do least-squares fitting directly in numpy or scikit-learn to obtain the same results.
End of explanation
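# Added note: the same straight-line fit can be done directly with numpy's least-squares
# polynomial fit, using the x, yr, slope and intercept defined in the cell above.
np_slope, np_intercept = np.polyfit(x, yr, 1)
print("numpy polyfit:", np_slope, np_intercept, "scipy linregress:", slope, intercept)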
%matplotlib inline
import numpy as np
import pylab as plt
from scipy import optimize
nsamp = 30
x = np.linspace(0,1,nsamp)
y = -0.5*x**2 + 7*np.sin(x)
# This is what we try to fit against. Suppose we know our function is generated
# by this law and want to find the (-0.5, 7) parameters. Alternatively we might
# not know anything about this dataset but just want to fit this curve to it.
f = lambda p, x: p[0]*x*x + p[1]*np.sin(x)
#Exercise: define a normal Python function f() instead!
testp = (1, 20)
y = f(testp,x)
yr = y + .5*np.random.normal(size=nsamp)
e = lambda p, x, y: (abs((f(p,x)-y))).sum()
p0 = (5, 20) # initial parameter value
p = optimize.fmin(e, p0, args=(x,yr))
yp = f(p,x) # predicted target
print("estimated vs real", p, testp)
plt.title("estimated(red) vs real(blue)")
plt.plot(x,yp,'r-', x,yr,'o', x,y,'b-')
Explanation: Curve fitting and parameter estimation
TODO: moved to the new optimization chapter!
When the data points are multidimensional you will use more complex multivariate regression techniques, but we will discuss that at more length in the machine learning chapter. For the moment, let us use a similar exercise as before, but fit a curve instead. While not strictly statistics related, this exercise can be useful for example if we want to decide how a probability distribution fits our data. We will use the least-square again, through the optimization module of scipy.
End of explanation
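# Added side note: scipy.optimize.curve_fit performs the least-squares fit directly and
# should recover similar parameters from the x and yr arrays generated above.
from scipy.optimize import curve_fit
popt, pcov = curve_fit(lambda xv, a, b: a*xv*xv + b*np.sin(xv), x, yr)
print("curve_fit estimate:", popt)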
%run ./lib/svgoutput.py
# GO enrichment figure displayed using IPython's native SVG frontend
scene = SVGScene(500,600)
scene.text((50,50),"GO enrichment schema")
scene.circle((170,200),35,(100,100,100), 0.5)
scene.text((190,160),"G: genes annotated to GO_id",size = 11)
scene.circle((200,200),80,(200,200,200), 0.1)
scene.text((190,110),"X: all annotated genes",size = 11)
scene.circle((100,200),50,(100,100,100), 0.5)
scene.text((50,140),"T: test gene set",size = 11)
scene.text((170,200),"a", size = 11)
scene.text((140,200),"b", size = 11)
scene.text((125,200),"c", size = 11)
scene.text((250,200),"d", size = 11)
scene
Explanation: Statistical enrichment analysis
What is enrichment?
Before we move on, we should get back to something that always obsesses biologists working with data: putting a P-value on your findings. If all you have is a series of numbers (or a series of such series), T-tests and ANOVA may be quite good, but what if you have categorical data, as so often happens in biology? What if the question is "I have a bag of vegetables; which of the following vegetable racks does my bag belong to most?". If the answer also takes into account how many items each of the vegetable racks is holding, then you can solve it with statistical enrichment testing. Of course no one at this course will argue with your shopping habits if you don't think the number of veggies on a store's racks is important! As there are many categorical sets of functional annotations in biology, we will focus only on Gene Ontology.
Gene Ontology
To use the GO annotations we typically need to know if the genes/protein of a specific organism are annotated with a certain GO label (GO id), while concurrent labels are possible for every gene/protein since each can have multiple roles in biology. The annotations are structured in a tree, with the branches having inherited all annotations from the leafs. You can read more about it here.
Extra task: Download raw annotation files and program your own Python GO module. One key aspect is that annotations must be inherited through the tree. The elegant way to achieve the tree traversal is to use a recursive function. This is a hard task that needs hours to complete, but it can be very instructive.
For the purpose of this course, in order to parse the GO annotations we will use a Python library called Orange, which has other useful modules as well. Other alternatives are calling an R package from within Python (recommending topGO for enrichment and GO.db for interrogation) or, if you are a Perl senior, calling a BIO::Perl module. BioPython is also preparing a GO module, so be sure to check their package once every couple of years.
Enrichment test
In the figure below you can see that there are four sets forming when we intersect our custom set of genes with the annotation database. The purpose of the enrichment test is to tell how likely the observed overlap of the four sets is. To answer this, a hypergeometric test is commonly used, known as Fisher's exact test. Let us put it into Python!
http://en.wikipedia.org/wiki/Fisher%27s_exact_test
Note that multiple testing corrections like the Bonferroni correction must also be executed, but obtaining the raw P-values is sufficient for the purpose of this course.
TODO: use GOATOOLS
- https://www.nature.com/articles/s41598-018-28948-z
- https://github.com/tanghaibao/goatools
End of explanation
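# Added tiny illustration of the test itself (the counts here are made up): rows are
# "annotated to the GO id" / "not annotated", columns are "in the test set" / "outside it".
from scipy import stats
demo_oddsratio, demo_pvalue = stats.fisher_exact([[8, 2], [1, 5]])
print(demo_oddsratio, demo_pvalue)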
# Using R to get the annotations and perform GO enrichment
# TODO: not finished, only for reference
# See here something similar:
# http://bcb.io/2009/10/18/gene-ontology-analysis-with-python-and-bioconductor/
import rpy2.robjects as robjects
def get_go_children(go_term, go_term_type):
robjects.r('''
library(GO.db)
''')
child_map = robjects.r["GO%sCHILDREN" % (go_term_type)]
children = []
to_check = [go_term]
while len(to_check) > 0:
new_children = []
for check_term in to_check:
new_children.extend(list(robjects.r.get(check_term, child_map)))
new_children = list(set([c for c in new_children if c]))
children.extend(new_children)
to_check = new_children
children = list(set(children))
return children
# Using Orange to get annotations
from orangecontrib.bio import go
import sys
ontology = go.Ontology()
# Print names and definitions of all terms with "apoptosis" in the name
terms = [term for term in ontology.terms.values() if "apoptosis" in term.name.lower()]
annotations = go.Annotations("sgd", ontology=ontology)
go = {}
for term in ontology.terms.values():
ants = annotations.get_all_annotations(term.id)
gs = set([a.gene_name for a in ants])
if len(gs)>0: go[term.id] = gs
print(len(go))
Explanation: Perform the statistical test
|               | In Set | Outside Set |
|---------------|--------|-------------|
| GOid ann.     | b      | a           |
| Not GOid ann. | c      | d           |
Read the docs:
http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.fisher_exact.html
End of explanation
testset = set(['KAR2', 'EGD1', 'EGD2', 'YBR137W', 'MDY2', 'SEC72', 'SGT2', 'SEC66', 'SEC61', 'SEC62', 'SEC63', 'LHS1', 'SSS1', 'BTT1', 'GET3', 'GET4'])
testGO = 'GO:0006620'# posttranslational protein targeting to membrane set
import numpy as np
from scipy import stats
T = testset
X = set()
for goid in go:
X = X | go[goid]
print "Total number of annotated genes:", len(X)
rec = []  # will hold the result tuples
for goid in go:
G = go[goid]
a = G - T
b = T & G
c = (X & T) - G
d = X - T - G
oddsratio, pvalue = stats.fisher_exact([[len(b), len(a)], [len(c), len(d)]])
rec.append((goid, ontology[goid].name, pvalue))
df = pd.DataFrame(rec)
df.sort_values(by=2)
Explanation: Now we want to choose a set of genes to test enrichment, so we output a few terms together with their annotated genes:
End of explanation
from scipy import stats
import numpy as np
np.random.seed(1) # for repeatability
gex = [stats.poisson(5.0).rvs(50) for i in range(0,2000)]
E1 = stats.poisson(10.0).rvs(25) # Poisson sampling of average 10. expression
E2 = stats.poisson(5.0).rvs(25)
gex.append(np.concatenate((E1, E2), axis = None))
X = np.stack(gex)
from scipy.stats.distributions import norm
C1 = list(range(0,25))
C2 = list(range(25,50))
MC1 = X[:,C1].mean(axis=1)
MC2 = X[:,C2].mean(axis=1)
VC1 = X[:,C1].var(axis=1)
VC2 = X[:,C2].var(axis=1)
nC1 = len(C1)
nC2 = len(C2)
zscores = (MC1 - MC2) / np.sqrt(VC1/nC1 + VC2/nC2)
print(zscores.mean(), zscores.std())
pvalues = 2*norm.cdf(-np.abs(zscores))
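# Added check: in this synthetic design only the appended gene (index 2000) has a shifted
# mean in the first sample group, so it should carry by far the smallest P-value.
print(np.argmin(pvalues), pvalues.min())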
Explanation: Differential Expression
As biologists, DE is finally a statistical concept that I feel no need of explaining. But I have a personal remark: one of the most important concepts in biological data science is also one of the least understood. This is mainly because of the tendency to create black hole programs or function calls that have no transparent model fitting. This is not a pythonic approach!
Here is a simple T-test-style application for determining which genes have a differential expression level. Suppose that the machine cannot determine very accurately the number of reads for a given gene. We estimate the machine error using a Poisson model with set medians. We are interested in genes having a remarkably different signal in the first set of samples.
End of explanation |
13,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex Training
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Step2: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step3: Otherwise, set your project ID here.
Step4: Set project ID
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Cloud SDK, you will need to provide a staging bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Import libraries and define constants
Step11: Write Dockerfile
The first step in containerizing your code is to create a Dockerfile. In the Dockerfile, you'll include all the commands needed to run the image such as installing the necessary libraries and setting up the entry point for the training code.
This Dockerfile uses the Deep Learning Container TensorFlow Enterprise 2.5 GPU Docker image. The Deep Learning Containers on Google Cloud come with many common ML and data science frameworks pre-installed. After downloading that image, this Dockerfile installs the CloudML Hypertune library and sets up the entrypoint for the training code.
Step12: Create training application code
Next, you create a trainer directory with a task.py script that contains the code for your training application.
Step17: In the next cell, you write the contents of the training script, task.py. This file downloads the horses or humans dataset from TensorFlow datasets and trains a tf.keras functional model using MirroredStrategy from the tf.distribute module.
There are a few components that are specific to using the hyperparameter tuning service
Step18: Build the Container
In the next cells, you build the container and push it to Google Container Registry.
Step19: Create and run hyperparameter tuning job on Vertex AI
Once your container is pushed to Google Container Registry, you use the Vertex SDK to create and run the hyperparameter tuning job.
You define the following specifications
Step20: Create a CustomJob.
Step21: Then, create and run a HyperparameterTuningJob.
There are a few arguments to note
Step22: Click on the generated link in the output to see your run in the Cloud Console. When the job completes, you will see the results of the tuning trials.
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform
Explanation: Vertex Training: Distributed Hyperparameter Tuning
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb"">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td> <td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/hyperparameter_tuning/distributed-hyperparameter-tuning.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
Overview
This notebook demonstrates how to run a hyperparameter tuning job with Vertex Training to discover optimal hyperparameter values for an ML model. To speed up the training process, MirroredStrategy from the tf.distribute module is used to distribute training across multiple GPUs on a single machine.
Dataset
The dataset used for this tutorial is the horses or humans dataset from TensorFlow Datasets. The trained model predicts if an image is of a horse or a human.
Objective
In this notebook, you create a custom-trained model from a Python script in a Docker container. You learn how to modify training application code for hyperparameter tuning and submit a Vertex Training hyperparameter tuning job with the Python SDK.
The steps performed include:
Create a Vertex AI custom job for training a model.
Launch hyperparameter tuning job with the Python SDK.
Cleanup resources.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the
command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Install additional packages
Install the latest version of Vertex SDK for Python.
End of explanation
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI API and Compute Engine API.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
! gcloud config set project $PROJECT_ID
Explanation: Set project ID
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key
page.
Click Create service account.
In the Service account name field, enter a name, and
click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI"
into the filter box, and select
Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "us-central1" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
print(BUCKET_URI)
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a custom training job using the Cloud SDK, you will need to provide a staging bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_URI
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_URI
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import os
import sys
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt
Explanation: Import libraries and define constants
End of explanation
%%writefile Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-gpu.2-5
WORKDIR /
# Installs hypertune library
RUN pip install cloudml-hypertune
# Copies the trainer code to the docker image.
COPY trainer /trainer
# Sets up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
Explanation: Write Dockerfile
The first step in containerizing your code is to create a Dockerfile. In the Dockerfile, you'll include all the commands needed to run the image such as installing the necessary libraries and setting up the entry point for the training code.
This Dockerfile uses the Deep Learning Container TensorFlow Enterprise 2.5 GPU Docker image. The Deep Learning Containers on Google Cloud come with many common ML and data science frameworks pre-installed. After downloading that image, this Dockerfile installs the CloudML Hypertune library and sets up the entrypoint for the training code.
End of explanation
# Create trainer directory
! mkdir trainer
Explanation: Create training application code
Next, you create a trainer directory with a task.py script that contains the code for your training application.
End of explanation
%%writefile trainer/task.py
import argparse
import hypertune
import tensorflow as tf
import tensorflow_datasets as tfds
def get_args():
Parses args. Must include all hyperparameters you want to tune.
parser = argparse.ArgumentParser()
parser.add_argument(
'--learning_rate', required=True, type=float, help='learning rate')
parser.add_argument(
'--momentum', required=True, type=float, help='SGD momentum value')
parser.add_argument(
'--units',
required=True,
type=int,
help='number of units in last hidden layer')
parser.add_argument(
'--epochs',
required=False,
type=int,
default=10,
help='number of training epochs')
args = parser.parse_args()
return args
def preprocess_data(image, label):
Resizes and scales images.
image = tf.image.resize(image, (150, 150))
return tf.cast(image, tf.float32) / 255., label
def create_dataset(batch_size):
Loads Horses Or Humans dataset and preprocesses data.
data, info = tfds.load(
name='horses_or_humans', as_supervised=True, with_info=True)
# Create train dataset
train_data = data['train'].map(preprocess_data)
train_data = train_data.shuffle(1000)
train_data = train_data.batch(batch_size)
# Create validation dataset
validation_data = data['test'].map(preprocess_data)
validation_data = validation_data.batch(64)
return train_data, validation_data
def create_model(units, learning_rate, momentum):
Defines and compiles model.
inputs = tf.keras.Input(shape=(150, 150, 3))
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(inputs)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Conv2D(64, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(units, activation='relu')(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs, outputs)
model.compile(
loss='binary_crossentropy',
optimizer=tf.keras.optimizers.SGD(
learning_rate=learning_rate, momentum=momentum),
metrics=['accuracy'])
return model
def main():
args = get_args()
# Create Strategy
strategy = tf.distribute.MirroredStrategy()
# Scale batch size
GLOBAL_BATCH_SIZE = 64 * strategy.num_replicas_in_sync
train_data, validation_data = create_dataset(GLOBAL_BATCH_SIZE)
# Wrap model variables within scope
with strategy.scope():
model = create_model(args.units, args.learning_rate, args.momentum)
# Train model
history = model.fit(
train_data, epochs=args.epochs, validation_data=validation_data)
# Define Metric
hp_metric = history.history['val_accuracy'][-1]
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=hp_metric,
global_step=args.epochs)
if __name__ == '__main__':
main()
Explanation: In the next cell, you write the contents of the training script, task.py. This file downloads the horses or humans dataset from TensorFlow datasets and trains a tf.keras functional model using MirroredStrategy from the tf.distribute module.
There are a few components that are specific to using the hyperparameter tuning service:
The script imports the hypertune library. Note that the Dockerfile included instructions to pip install the hypertune library.
The function get_args() defines a command-line argument for each hyperparameter you want to tune. In this example, the hyperparameters that will be tuned are the learning rate, the momentum value in the optimizer, and the number of units in the last hidden layer of the model. The value passed in those arguments is then used to set the corresponding hyperparameter in the code.
At the end of the main() function, the hypertune library is used to define the metric to optimize. In this example, the metric that will be optimized is the the validation accuracy. This metric is passed to an instance of HyperTune.
End of explanation
# Set the IMAGE_URI
IMAGE_URI = f"gcr.io/{PROJECT_ID}/horse-human:hypertune"
# Build the docker image
! docker build -f Dockerfile -t $IMAGE_URI ./
# Push it to Google Container Registry:
! docker push $IMAGE_URI
Explanation: Build the Container
In the next cells, you build the container and push it to Google Container Registry.
End of explanation
worker_pool_specs = [
{
"machine_spec": {
"machine_type": "n1-standard-4",
"accelerator_type": "NVIDIA_TESLA_T4",
"accelerator_count": 2,
},
"replica_count": 1,
"container_spec": {"image_uri": IMAGE_URI},
}
]
metric_spec = {"accuracy": "maximize"}
parameter_spec = {
"learning_rate": hpt.DoubleParameterSpec(min=0.001, max=1, scale="log"),
"momentum": hpt.DoubleParameterSpec(min=0, max=1, scale="linear"),
"units": hpt.DiscreteParameterSpec(values=[64, 128, 512], scale=None),
}
Explanation: Create and run hyperparameter tuning job on Vertex AI
Once your container is pushed to Google Container Registry, you use the Vertex SDK to create and run the hyperparameter tuning job.
You define the following specifications:
* worker_pool_specs: Dictionary specifying the machine type and Docker image. This example defines a single node cluster with one n1-standard-4 machine with two NVIDIA_TESLA_T4 GPUs.
* parameter_spec: Dictionary specifying the parameters to optimize. The dictionary key is the string assigned to the command line argument for each hyperparameter in your training application code, and the dictionary value is the parameter specification. The parameter specification includes the type, min/max values, and scale for the hyperparameter.
* metric_spec: Dictionary specifying the metric to optimize. The dictionary key is the hyperparameter_metric_tag that you set in your training application code, and the value is the optimization goal.
End of explanation
print(BUCKET_URI)
# Create a CustomJob
JOB_NAME = "horses-humans-hyperparam-job" + TIMESTAMP
my_custom_job = aiplatform.CustomJob(
display_name=JOB_NAME,
project=PROJECT_ID,
worker_pool_specs=worker_pool_specs,
staging_bucket=BUCKET_URI,
)
Explanation: Create a CustomJob.
End of explanation
# Create and run HyperparameterTuningJob
hp_job = aiplatform.HyperparameterTuningJob(
display_name=JOB_NAME,
custom_job=my_custom_job,
metric_spec=metric_spec,
parameter_spec=parameter_spec,
max_trial_count=15,
parallel_trial_count=3,
project=PROJECT_ID,
search_algorithm=None,
)
hp_job.run()
Explanation: Then, create and run a HyperparameterTuningJob.
There are a few arguments to note:
max_trial_count: Sets an upper bound on the number of trials the service will run. The recommended practice is to start with a smaller number of trials and get a sense of how impactful your chosen hyperparameters are before scaling up.
parallel_trial_count: If you use parallel trials, the service provisions multiple training processing clusters. The worker pool spec that you specify when creating the job is used for each individual training cluster. Increasing the number of parallel trials reduces the amount of time the hyperparameter tuning job takes to run; however, it can reduce the effectiveness of the job overall. This is because the default tuning strategy uses results of previous trials to inform the assignment of values in subsequent trials.
search_algorithm: The available search algorithms are grid, random, or default (None). The default option applies Bayesian optimization to search the space of possible hyperparameter values and is the recommended algorithm.
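For example, requesting random search instead of the default Bayesian optimization is a one-argument change (a sketch, not run here):
# aiplatform.HyperparameterTuningJob(..., search_algorithm="random")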
End of explanation
# Set this to true only if you'd like to delete your bucket
delete_bucket = False
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -r $BUCKET_URI
Explanation: Click on the generated link in the output to see your run in the Cloud Console. When the job completes, you will see the results of the tuning trials.
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation |
13,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jacobian
$$\frac{dx}{dt}=a_{1}x-b_{1}x^{2}+c_{1}xy$$
$$\frac{dy}{dt}=a_{2}y-b_{2}y^{2}+c_{2}xy$$
$$\frac{dx}{dt}=(1-x-y)x$$
$$\frac{dy}{dt}=(4-7x-3y)y$$
Step1: Equilibria
Step2: Jacobian
Step3: Evaluated at an equilibrium point | Python Code:
import numpy as np
# import plotting libraries
import matplotlib
import matplotlib.pyplot as plt
# display plots inline in the notebook
%matplotlib inline
# for symbolic computation
from sympy import *
init_printing()
x, y = symbols('x y')
f = (1-x-y)*x
f
g = (4-7*x-3*y)*y
g
Explanation: Jacobian
$$\frac{dx}{dt}=a_{1}x-b_{1}x^{2}+c_{1}xy$$
$$\frac{dy}{dt}=a_{2}y-b_{2}y^{2}+c_{2}xy$$
$$\frac{dx}{dt}=(1-x-y)x$$
$$\frac{dy}{dt}=(4-7x-3y)y$$
End of explanation
solve(f, x)
solve(g, y)
Y = solve(g, y)[1]
solve(f.subs(y, Y),x)
solve(g.subs(x, -y + 1), y)
Explanation: Equilibria
End of explanation
J = symbols("J")
J = Matrix([[diff(f, x), diff(f, y)],
[diff(g, x), diff(g, y)]])
J
Explanation: Jacobian
End of explanation
J = J.subs({x: 1/4, y:3/4})
J
J.det(), J.trace()
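# As a quick sketch: the eigenvalues of J classify this equilibrium directly
# (the same information as the determinant/trace pair above).
J.eigenvals()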
Explanation: Evaluated at an equilibrium point
End of explanation |
13,241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 6 – Decision Trees
This notebook contains all the sample code and solutions to the exercises in chapter 6.
Step1: Training and visualizing
Step2: Predicting classes and class probabilities
Step3: Sensitivity to training set details
Step4: Regression trees
Step5: Exercise solutions
1. to 6.
See appendix A.
7.
Exercise
Step6: b. Split it into a training set and a test set using train_test_split().
Step7: c. Use grid search with cross-validation (with the help of the GridSearchCV class) to find good hyperparameter values for a DecisionTreeClassifier. Hint
Step8: d. Train it on the full training set using these hyperparameters, and measure your model's performance on the test set. You should get roughly 85% to 87% accuracy.
By default, GridSearchCV trains the best model found on the whole training set (you can change this by setting refit=False), so we don't need to do it again. We can simply evaluate the model's accuracy
Step9: 8.
Exercise
Step10: b. Train one Decision Tree on each subset, using the best hyperparameter values found above. Evaluate these 1,000 Decision Trees on the test set. Since they were trained on smaller sets, these Decision Trees will likely perform worse than the first Decision Tree, achieving only about 80% accuracy.
Step11: c. Now comes the magic. For each test set instance, generate the predictions of the 1,000 Decision Trees, and keep only the most frequent prediction (you can use SciPy's mode() function for this). This gives you majority-vote predictions over the test set.
Step12: d. Evaluate these predictions on the test set | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "decision_trees"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
Explanation: Chapter 6 – Decision Trees
This notebook contains all the sample code and solutions to the exercises in chapter 6.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml/blob/master/06_decision_trees.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
Warning: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
iris = load_iris()
X = iris.data[:, 2:] # petal length and width
y = iris.target
tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42)
tree_clf.fit(X, y)
from sklearn.tree import export_graphviz
def image_path(fig_id):
return os.path.join(IMAGES_PATH, fig_id)
export_graphviz(
tree_clf,
out_file=image_path("iris_tree.dot"),
feature_names=iris.feature_names[2:],
class_names=iris.target_names,
rounded=True,
filled=True
)
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if not iris:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
if plot_training:
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
plt.plot(X[:, 0][y==2], X[:, 1][y==2], "g^", label="Iris-Virginica")
plt.axis(axes)
if iris:
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
else:
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
if legend:
plt.legend(loc="lower right", fontsize=14)
plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf, X, y)
plt.plot([2.45, 2.45], [0, 3], "k-", linewidth=2)
plt.plot([2.45, 7.5], [1.75, 1.75], "k--", linewidth=2)
plt.plot([4.95, 4.95], [0, 1.75], "k:", linewidth=2)
plt.plot([4.85, 4.85], [1.75, 3], "k:", linewidth=2)
plt.text(1.40, 1.0, "Depth=0", fontsize=15)
plt.text(3.2, 1.80, "Depth=1", fontsize=13)
plt.text(4.05, 0.5, "(Depth=2)", fontsize=11)
save_fig("decision_tree_decision_boundaries_plot")
plt.show()
Explanation: Training and visualizing
End of explanation
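To actually view the exported tree, the .dot file can be rendered to PNG with the graphviz command-line tool (a sketch; assumes graphviz is installed):
# !dot -Tpng images/decision_trees/iris_tree.dot -o images/decision_trees/iris_tree.png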
tree_clf.predict_proba([[5, 1.5]])
tree_clf.predict([[5, 1.5]])
Explanation: Predicting classes and class probabilities
End of explanation
X[(X[:, 1]==X[:, 1][y==1].max()) & (y==1)] # widest Iris-Versicolor flower
not_widest_versicolor = (X[:, 1]!=1.8) | (y==2)
X_tweaked = X[not_widest_versicolor]
y_tweaked = y[not_widest_versicolor]
tree_clf_tweaked = DecisionTreeClassifier(max_depth=2, random_state=40)
tree_clf_tweaked.fit(X_tweaked, y_tweaked)
plt.figure(figsize=(8, 4))
plot_decision_boundary(tree_clf_tweaked, X_tweaked, y_tweaked, legend=False)
plt.plot([0, 7.5], [0.8, 0.8], "k-", linewidth=2)
plt.plot([0, 7.5], [1.75, 1.75], "k--", linewidth=2)
plt.text(1.0, 0.9, "Depth=0", fontsize=15)
plt.text(1.0, 1.80, "Depth=1", fontsize=13)
save_fig("decision_tree_instability_plot")
plt.show()
from sklearn.datasets import make_moons
Xm, ym = make_moons(n_samples=100, noise=0.25, random_state=53)
deep_tree_clf1 = DecisionTreeClassifier(random_state=42)
deep_tree_clf2 = DecisionTreeClassifier(min_samples_leaf=4, random_state=42)
deep_tree_clf1.fit(Xm, ym)
deep_tree_clf2.fit(Xm, ym)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_decision_boundary(deep_tree_clf1, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("No restrictions", fontsize=16)
plt.subplot(122)
plot_decision_boundary(deep_tree_clf2, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False)
plt.title("min_samples_leaf = {}".format(deep_tree_clf2.min_samples_leaf), fontsize=14)
save_fig("min_samples_leaf_plot")
plt.show()
angle = np.pi / 180 * 20
rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
Xr = X.dot(rotation_matrix)
tree_clf_r = DecisionTreeClassifier(random_state=42)
tree_clf_r.fit(Xr, y)
plt.figure(figsize=(8, 3))
plot_decision_boundary(tree_clf_r, Xr, y, axes=[0.5, 7.5, -1.0, 1], iris=False)
plt.show()
np.random.seed(6)
Xs = np.random.rand(100, 2) - 0.5
ys = (Xs[:, 0] > 0).astype(np.float32) * 2
angle = np.pi / 4
rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
Xsr = Xs.dot(rotation_matrix)
tree_clf_s = DecisionTreeClassifier(random_state=42)
tree_clf_s.fit(Xs, ys)
tree_clf_sr = DecisionTreeClassifier(random_state=42)
tree_clf_sr.fit(Xsr, ys)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_decision_boundary(tree_clf_s, Xs, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False)
plt.subplot(122)
plot_decision_boundary(tree_clf_sr, Xsr, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False)
save_fig("sensitivity_to_rotation_plot")
plt.show()
Explanation: Sensitivity to training set details
End of explanation
# Quadratic training set + noise
np.random.seed(42)
m = 200
X = np.random.rand(m, 1)
y = 4 * (X - 0.5) ** 2
y = y + np.random.randn(m, 1) / 10
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg.fit(X, y)
from sklearn.tree import DecisionTreeRegressor
tree_reg1 = DecisionTreeRegressor(random_state=42, max_depth=2)
tree_reg2 = DecisionTreeRegressor(random_state=42, max_depth=3)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
def plot_regression_predictions(tree_reg, X, y, axes=[0, 1, -0.2, 1], ylabel="$y$"):
x1 = np.linspace(axes[0], axes[1], 500).reshape(-1, 1)
y_pred = tree_reg.predict(x1)
plt.axis(axes)
plt.xlabel("$x_1$", fontsize=18)
if ylabel:
plt.ylabel(ylabel, fontsize=18, rotation=0)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_regression_predictions(tree_reg1, X, y)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
plt.text(0.21, 0.65, "Depth=0", fontsize=15)
plt.text(0.01, 0.2, "Depth=1", fontsize=13)
plt.text(0.65, 0.8, "Depth=1", fontsize=13)
plt.legend(loc="upper center", fontsize=18)
plt.title("max_depth=2", fontsize=14)
plt.subplot(122)
plot_regression_predictions(tree_reg2, X, y, ylabel=None)
for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")):
plt.plot([split, split], [-0.2, 1], style, linewidth=2)
for split in (0.0458, 0.1298, 0.2873, 0.9040):
plt.plot([split, split], [-0.2, 1], "k:", linewidth=1)
plt.text(0.3, 0.5, "Depth=2", fontsize=13)
plt.title("max_depth=3", fontsize=14)
save_fig("tree_regression_plot")
plt.show()
export_graphviz(
tree_reg1,
out_file=image_path("regression_tree.dot"),
feature_names=["x1"],
rounded=True,
filled=True
)
tree_reg1 = DecisionTreeRegressor(random_state=42)
tree_reg2 = DecisionTreeRegressor(random_state=42, min_samples_leaf=10)
tree_reg1.fit(X, y)
tree_reg2.fit(X, y)
x1 = np.linspace(0, 1, 500).reshape(-1, 1)
y_pred1 = tree_reg1.predict(x1)
y_pred2 = tree_reg2.predict(x1)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred1, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", fontsize=18, rotation=0)
plt.legend(loc="upper center", fontsize=18)
plt.title("No restrictions", fontsize=14)
plt.subplot(122)
plt.plot(X, y, "b.")
plt.plot(x1, y_pred2, "r.-", linewidth=2, label=r"$\hat{y}$")
plt.axis([0, 1, -0.2, 1.1])
plt.xlabel("$x_1$", fontsize=18)
plt.title("min_samples_leaf={}".format(tree_reg2.min_samples_leaf), fontsize=14)
save_fig("tree_regression_regularization_plot")
plt.show()
Explanation: Regression trees
End of explanation
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=10000, noise=0.4, random_state=42)
Explanation: Exercise solutions
1. to 6.
See appendix A.
7.
Exercise: train and fine-tune a Decision Tree for the moons dataset.
a. Generate a moons dataset using make_moons(n_samples=10000, noise=0.4).
Adding random_state=42 to make this notebook's output constant:
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
Explanation: b. Split it into a training set and a test set using train_test_split().
End of explanation
from sklearn.model_selection import GridSearchCV
params = {'max_leaf_nodes': list(range(2, 100)), 'min_samples_split': [2, 3, 4]}
grid_search_cv = GridSearchCV(DecisionTreeClassifier(random_state=42), params, n_jobs=-1, verbose=1, cv=3)
grid_search_cv.fit(X_train, y_train)
grid_search_cv.best_estimator_
Explanation: c. Use grid search with cross-validation (with the help of the GridSearchCV class) to find good hyperparameter values for a DecisionTreeClassifier. Hint: try various values for max_leaf_nodes.
End of explanation
from sklearn.metrics import accuracy_score
y_pred = grid_search_cv.predict(X_test)
accuracy_score(y_test, y_pred)
Explanation: d. Train it on the full training set using these hyperparameters, and measure your model's performance on the test set. You should get roughly 85% to 87% accuracy.
By default, GridSearchCV trains the best model found on the whole training set (you can change this by setting refit=False), so we don't need to do it again. We can simply evaluate the model's accuracy:
End of explanation
from sklearn.model_selection import ShuffleSplit
n_trees = 1000
n_instances = 100
mini_sets = []
rs = ShuffleSplit(n_splits=n_trees, test_size=len(X_train) - n_instances, random_state=42)
for mini_train_index, mini_test_index in rs.split(X_train):
X_mini_train = X_train[mini_train_index]
y_mini_train = y_train[mini_train_index]
mini_sets.append((X_mini_train, y_mini_train))
Explanation: 8.
Exercise: Grow a forest.
a. Continuing the previous exercise, generate 1,000 subsets of the training set, each containing 100 instances selected randomly. Hint: you can use Scikit-Learn's ShuffleSplit class for this.
End of explanation
from sklearn.base import clone
forest = [clone(grid_search_cv.best_estimator_) for _ in range(n_trees)]
accuracy_scores = []
for tree, (X_mini_train, y_mini_train) in zip(forest, mini_sets):
tree.fit(X_mini_train, y_mini_train)
y_pred = tree.predict(X_test)
accuracy_scores.append(accuracy_score(y_test, y_pred))
np.mean(accuracy_scores)
Explanation: b. Train one Decision Tree on each subset, using the best hyperparameter values found above. Evaluate these 1,000 Decision Trees on the test set. Since they were trained on smaller sets, these Decision Trees will likely perform worse than the first Decision Tree, achieving only about 80% accuracy.
End of explanation
Y_pred = np.empty([n_trees, len(X_test)], dtype=np.uint8)
for tree_index, tree in enumerate(forest):
Y_pred[tree_index] = tree.predict(X_test)
from scipy.stats import mode
y_pred_majority_votes, n_votes = mode(Y_pred, axis=0)
Explanation: c. Now comes the magic. For each test set instance, generate the predictions of the 1,000 Decision Trees, and keep only the most frequent prediction (you can use SciPy's mode() function for this). This gives you majority-vote predictions over the test set.
End of explanation
accuracy_score(y_test, y_pred_majority_votes.reshape([-1]))
Explanation: d. Evaluate these predictions on the test set: you should obtain a slightly higher accuracy than your first model (about 0.5 to 1.5% higher). Congratulations, you have trained a Random Forest classifier!
End of explanation |
13,242 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
While nan == nan is always False, in many cases people want to treat them as equal, and this is enshrined in pandas.DataFrame.equals: | Problem:
import pandas as pd
import numpy as np
np.random.seed(10)
df = pd.DataFrame(np.random.randint(0, 20, (10, 10)).astype(float), columns=["c%d"%d for d in range(10)])
df.where(np.random.randint(0,2, df.shape).astype(bool), np.nan, inplace=True)
def g(df):
return df.columns[df.iloc[0,:].fillna('Nan') == df.iloc[8,:].fillna('Nan')]
result = g(df.copy()) |
13,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
analysing tabular data
Step1: variables
Step2: this is 60 by 40
Step3: let's get the first 10 columns for the first 4 rows
print(data[0:4, 0:10])
Step4: we don't need to start slicing at 0
Step5: we don't even need to include upper and lower limits
Step6: arithmetic on arrays
Step7: get a set of data for the first station
this is shorthand for "all the columns"
Step8: we don't need to create "temporary" array slices
we can refer to what we call array axes
Step9: axis=0 gets the mean down each column
axis=1 gets the mean across each row so the mean temp
for each station for all periods
see above
do some simple visualisations
Step10: let's look at the average temp over time
Step11: create a wide figure to hold sub plots
Step12: create placeholders for plots
Step13: this is fine for small numbers of datasets, but what if we have hundreds or thousands? we need more automation
loops
Step14: see above; note the difference between square and normal brackets
Step15: reading filenames
get a list of all the filenames from disk
Step16: glob is short for "global"
Step17: putting it all together
Step18: didn't print "done" due to a break in the indentation sequence
Step19: elif equals "else if"; it is always good to finish a chain with an else
Step20: something went wrong with the above
Step21: using functions
Step24: unfinished | Python Code:
import numpy
numpy.loadtxt
numpy.loadtxt(fname='data/weather-01.csv' delimiter = ',')
numpy.loadtxt(fname='data/weather-01.csv'delimiter=',')
numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
Explanation: analysing tabular data
End of explanation
weight_kg=55
print (weight_kg)
print('weight in pounds:',weight_kg*2.2)
numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
%whos
data=numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
%whos
%whos
print(data.dtype)
print(data.shape)
Explanation: variables
End of explanation
print ("first value in data:",data [0,0])
print ('A middle value:',data[30,20])
Explanation: this is 60 by 40
End of explanation
print (data[0:4, 0:10])
Explanation: let's get the first 10 columns for the first 4 rows
print(data[0:4, 0:10])
start at index 0 and go up to but not including index 4
End of explanation
print (data[5:10,7:15])
Explanation: we don't need to start slicing at 0
End of explanation
smallchunk=data[:3,36:]
print(smallchunk)
Explanation: we don't even need to include upper and lower limits
End of explanation
doublesmallchunk=smallchunk*2.0
print(doublesmallchunk)
triplesmallchunk=smallchunk+doublesmallchunk
print(triplesmallchunk)
print(numpy.mean(data))
print (numpy.max(data))
print (numpy.min(data))
Explanation: arithmetic on arrays
End of explanation
station_0=data[0,:]
print(numpy.max(station_0))
Explanation: get a set of data for the first station
this is shorthand for "all the columns"
End of explanation
print(numpy.mean(data, axis=0))
print(numpy.mean(data, axis=1))
Explanation: we don't need to create "temporary" array slices
we can refer to what we call array axes
End of explanation
import matplotlib.pyplot
%matplotlib inline
image=matplotlib.pyplot.imshow(data)
Explanation: axis=0 gets the mean down each column
axis=1 gets the mean across each row so the mean temp
for each station for all periods
see above
do some simple visualisations
End of explanation
avg_temperature=numpy.mean(data,axis=0)
avg_plot=matplotlib.pyplot.plot(avg_temperature)
import numpy
import matplotlib.pyplot
%matplotlib inline
data=numpy.loadtxt(fname='data/weather-01.csv',delimiter=',')
Explanation: let's look at the average temp over time
End of explanation
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
Explanation: create a wide figure to hold sub plots
End of explanation
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
Explanation: create placeholders for plots
End of explanation
word='notebook'
print (word[4])
Explanation: this is fine for small numbers of datasets, but what if we have hundreds or thousands? we need more automation
loops
End of explanation
for char in word:
# the colon at the end of the line and the indentation are very important
#indent is 4 spaces
for char in word:
print (char)
Explanation: see above; note the difference between square and normal brackets
End of explanation
import glob
Explanation: reading filenames
get a list of all the filenames from disk
End of explanation
print(glob.glob('data/weather*.csv'))
Explanation: glob is short for "global"
End of explanation
filenames=sorted(glob.glob('data/weather*.csv'))
filenames=filenames[0:3]
for f in filenames:
print (f)
data=numpy.loadtxt(fname=f, delimiter=',')
#next bits need indenting
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show
num=37
if num>100:
print('greater')
else:
print('not greater')
print ('done')
num=107
if num>100:
print('greater')
else:
print('not greater')
print ('done')
Explanation: putting it all together
End of explanation
num=-3
if num>0:
print (num, "is positive")
elif num ==0:
print (num, "is zero")
else:
print (num, "is negative")
Explanation: didn't print "done" due to a break in the indentation sequence
End of explanation
filenames=sorted(glob.glob('data/weather*.csv'))
filenames=sorted(glob.glob('data/weather*.csv'))
filenames=filenames[0:3]
for f in filenames:
print (f)
data=numpy.loadtxt(fname=f, delimiter=',') == 0
if numpy.max (data, axis=0)[0] ==0 and numpy.max (data, axis=0)[20] ==20:
print ('suspicious looking maxima')
elif numpy.sum(numpy.min(data, axis=0)) ==0:
print ('minimum adds to zero')
else:
print ('data looks ok')
#next bits need indenting
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show
Explanation: elif equals "else if"; it is always good to finish a chain with an else
End of explanation
def fahr_to_kelvin(temp):
return((temp-32)*(5/9)+ 273.15)
print ('freezing point of water:', fahr_to_kelvin(32))
print ('boiling point of water:', fahr_to_kelvin(212))
Explanation: something went wrong with the above: the stray == 0 after numpy.loadtxt most likely turned the data into booleans, so the checks and plots don't behave as expected
End of explanation
def analyse (filename):
data=numpy.loadtxt(fname=filename,)......
Explanation: using functions
End of explanation
def detect_problems (filename):
data=numpy.loadtxt(fname=filename, delimiter=',')
if numpy.max (data, axis=0)[0] ==0 and numpy.max (data, axis=0)[20] ==20:
print ('suspicious looking maxima')
elif numpy.sum(numpy.min(data, axis=0)) ==0:
print ('minimum adds to zero')
else:
print ('data looks ok')
for f in filenames [0:5]:
print (f)
analyse (f)
detect_problems (f)
def analyse (filename):
data=numpy.loadtxt(fname=filename,delimiter=',')
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show
for f in filenames [0:5]:
print (f)
analyse (f)
detect_problems (f)
help(numpy.loadtxt)
help(detect_problems)
def detect_problems (filename):
    """Some of our temperature files have problems; check for these.
    This function reads a file and reports odd-looking maxima and minima that add to zero.
    The function does not return any data.
    """
data=numpy.loadtxt(fname=filename, delimiter=',')
if numpy.max (data, axis=0)[0] ==0 and numpy.max (data, axis=0)[20] ==20:
print ('suspicious looking maxima')
elif numpy.sum(numpy.min(data, axis=0)) ==0:
print ('minimum adds to zero')
else:
print ('data looks ok')
def analyse (filename):
    """This function analyses a dataset and outputs plots for max, min and average."""
    data=numpy.loadtxt(fname=filename,delimiter=',')
fig=matplotlib.pyplot.figure (figsize=(10.0,3.0))
subplot1=fig.add_subplot (1,3,1)
subplot2=fig.add_subplot (1,3,2)
subplot3=fig.add_subplot (1,3,3)
subplot1.set_ylabel('average')
subplot1.plot(numpy.mean(data, axis=0))
subplot2.set_ylabel('minimum')
subplot2.plot(numpy.min(data, axis=0))
subplot3.set_ylabel('maximum')
subplot3.plot(numpy.max(data, axis=0))
fig.tight_layout()
matplotlib.pyplot.show
Explanation: unfinished
End of explanation |
13,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BGS Morphological Properties
The goal of this notebook is to quantify the average morphological properties of the BGS sample. Specifically, the DESI-Data simulations require knowledge of the mean half-light radii of the bulge and disk components of the sample, as well as the average light fraction between the two components.
These measurements require a detailed study by the BGS, Targeting, and Galaxy & Quasar Physics Working Groups. However, as a quick hack we can use the expectation that the BGS galaxy sample will have very similar properties as the SDSS/Main sample. (BGS will target galaxies to r=20, reaching a median redshift of z=0.2, whereas SDSS/Main targeted galaxies to r=17.7 and reached a median redshift of z=0.1. Although there exists a small amount of luminosity evolution and evolution in the size-mass relation between these two epochs, the amount of evolution is significantly smaller than the scatter in galaxy properties at fixed redshift and stellar mass.)
Fortunately, A. Meert and collaborators have carried out a detailed 2D morphological analysis of galaxies in the SDSS/Main sample and publicly released their catalog. Some of the relevant papers include
Step1: Read the parent CAST catalog.
Read the parent SDSS (CAST) catalog which defines the sample.
Step4: Read the g-band model fitting results and select a "good" sample.
Read the g-band model fitting results and select a clean sample using the "finalflag" bit (see Section 2.2 of the data_tables.pdf documentation).
Step5: Identify the subset of galaxies with good 1- and 2-component fits.
Step6: Generate some plots. | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import fitsio
from astropy.table import Table
from corner import corner
plt.style.use('seaborn-talk')
%matplotlib inline
basicdir = os.path.join(os.getenv('IM_DATA_DIR'), 'upenn-photdec', 'basic-catalog', 'v2')
adddir = os.path.join(os.getenv('IM_DATA_DIR'), 'upenn-photdec', 'additional-catalogs')
Explanation: BGS Morphological Properties
The goal of this notebook is to quantify the average morphological properties of the BGS sample. Specifically, the DESI-Data simulations require knowledge of the mean half-light radii of the bulge and disk components of the sample, as well as the average light fraction between the two components.
These measurements require a detailed study by the BGS, Targeting, and Galaxy & Quasar Physics Working Groups. However, as a quick hack we can use the expectation that the BGS galaxy sample will have very similar properties as the SDSS/Main sample. (BGS will target galaxies to r=20, reaching a median redshift of z=0.2, whereas SDSS/Main targeted galaxies to r=17.7 and reached a median redshift of z=0.1. Although there exists a small amount of luminosity evolution and evolution in the size-mass relation between these two epochs, the amount of evolution is significantly smaller than the scatter in galaxy properties at fixed redshift and stellar mass.)
Fortunately, A. Meert and collaborators have carried out a detailed 2D morphological analysis of galaxies in the SDSS/Main sample and publicly released their catalog. Some of the relevant papers include:
Vikram et al. 2010, PyMorph: Automated Galaxy Structural Parameter Estimation using Python
Meert et al. 2013, Simulations of single- and two-component galaxy decompositions for spectroscopically selected galaxies from the SDSS
Meert et al. 2015, A catalogue of 2D photometric decompositions in the SDSS-DR7 spectroscopic main galaxy sample: preferred models and systematics
Meert et al. 2016, A catalogue of 2D photometric decompositions in the SDSS-DR7 spectroscopic main galaxy sample: extension to g and i bands
Here we focus on the fits to the g-band SDSS imaging, as this will most closely resemble the r-band selection of the BGS sample.
Imports and paths--
End of explanation
castfile = os.path.join(basicdir, 'UPenn_PhotDec_CAST.fits')
castinfo = fitsio.FITS(castfile)
castinfo[1]
allcast = castinfo[1].read()
Explanation: Read the parent CAST catalog.
Read the parent SDSS (CAST) catalog which defines the sample.
End of explanation
thisband = 'gband'
def photdec_select(finalflag, bit):
    """Select subsets of the catalog using the finalflag bitmask.
    1 - good bulge-only galaxy
    4 - good disk-only galaxy
    10 - good two-component fit (logical_or of flags 11, 12, and 13)
    20 - bad total magnitude and size
    """
return finalflag & np.power(2, bit) != 0
def select_meert(modelcat, onecomp=False, twocomp=False):
    """Select various (good) subsets of galaxies.
    Args:
      modelcat: 'UPenn_PhotDec_Models_[g,r,i]band.fits' catalog.
      onecomp (bool): galaxies fitted with single-Sersic model.
      twocomp (bool): galaxies fitted with Sersic-exponential model.
    Notes:
      * Flag 10 is a logical_or of 11, 12, 13.
      * Flag 1, 4, and 10 are mutually exclusive.
      * If Flag 1 or 4 are set then n_disk,r_disk are -999.
    """
finalflag = modelcat['finalflag']
smalln = modelcat['n_bulge'] < 8
goodr = modelcat['r_bulge'] > 0 # Moustakas hack
two = photdec_select(finalflag, 10)
two = np.logical_and( two, smalln )
two = np.logical_and( two, goodr )
if twocomp:
return two
one = np.logical_or( photdec_select(finalflag, 1), photdec_select(finalflag, 4) )
one = np.logical_and( one, smalln )
if onecomp:
return one
both = np.logical_or( one, two )
return both
measfile = os.path.join(basicdir, 'UPenn_PhotDec_nonParam_{}.fits'.format(thisband))
measinfo = fitsio.FITS(measfile)
fitfile = os.path.join(basicdir, 'UPenn_PhotDec_Models_{}.fits'.format(thisband))
fitinfo = fitsio.FITS(fitfile)
print(measinfo[1], fitinfo[1])
_fit = fitinfo[1].read(columns=['finalflag', 'n_bulge', 'r_bulge'])
good = select_meert(_fit)
goodindx = np.where(good)[0]
nobj = len(goodindx)
print('Selected {}/{} good targets.'.format(nobj, len(_fit)))
fit, meas = [], []
fitfile = os.path.join(basicdir, 'UPenn_PhotDec_Models_{}.fits'.format(thisband))
measfile = os.path.join(basicdir, 'UPenn_PhotDec_NonParam_{}.fits'.format(thisband))
gfit = fitsio.read(fitfile, ext=1, rows=goodindx)
gmeas = fitsio.read(measfile, ext=1, rows=goodindx)
cast = allcast[goodindx]
Explanation: Read the g-band model fitting results and select a "good" sample.
Read the g-band model fitting results and select a clean sample using the "finalflag" bit (see Section 2.2 of the data_tables.pdf documentation).
End of explanation
one = select_meert(gfit, onecomp=True)
two = select_meert(gfit, twocomp=True)
Explanation: Identify the subset of galaxies with good 1- and 2-component fits.
End of explanation
print('g-band range = {:.3f} - {:.3f}'.format(gfit['m_tot'].min(), gfit['m_tot'].max()))
print('Redshift range = {:.4f} - {:.4f}'.format(cast['z'].min(), cast['z'].max()))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
_ = ax1.hist(cast['z'], bins=100, range=(0, 0.4), alpha=0.5, label='All Galaxies')
_ = ax1.hist(cast['z'][two], bins=100, range=(0, 0.4), alpha=0.5, label='Two-Component Fits')
ax1.legend(loc='upper right')
ax1.set_xlabel('Redshift')
ax1.set_ylabel('Number of Galaxies')
hb = ax2.hexbin(cast['ra'], cast['dec'], C=cast['z'], vmin=0, vmax=0.3,
cmap=plt.cm.get_cmap('RdYlBu'))
cb = plt.colorbar(hb)
cb.set_label('Redshift')
ax2.set_xlabel('RA')
ax2.set_ylabel('Dec')
labels = [r'$g_{tot}$', r'B/T ($g$-band)', r'Bulge $n$ ($g$-band)',
r'Bulge $r_{50, g}$', r'Disk $r_{50, g}$']
data = np.array([
gfit['m_tot'][two],
gfit['BT'][two],
gfit['n_bulge'][two],
np.log10(gfit['r_bulge'][two]),
np.log10(gfit['r_disk'][two])
]).T
data.shape
_ = corner(data, quantiles=[0.25, 0.50, 0.75], labels=labels,
range=np.repeat(0.9999, len(labels)), verbose=True)
Explanation: Generate some plots.
End of explanation |
13,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Further Python Basics
Step1: Magics!
% and %% magics
interact
embed image
embed links, youtube
link notebooks
Check out http://matplotlib.org/gallery.html select your favorite.
Step3: Numpy
If you have arrays of numbers, use numpy or pandas (built on numpy) to represent the data. Tons of very fast underlying code.
Step4: Matplotlib and Numpy | Python Code:
names = ['alice', 'jonathan', 'bobby']
ages = [24, 32, 45]
ranks = ['kinda cool', 'really cool', 'insanely cool']
for (name, age, rank) in zip(names, ages, ranks):
print(name, age, rank)
for index, (name, age, rank) in enumerate(zip(names, ages, ranks)):
print(index, name, age, rank)
# return, esc, shift+enter, ctrl+enter
# text keyboard shortcuts -- cmd > (right), < left,
# option delete (deletes words)
# keyboard shortcuts
# - a, b, y, m, dd, h, ctrl+shift+-
%matplotlib inline
%config InlineBackend.figure_format='retina'
import matplotlib.pyplot as plt
# no pylab
import seaborn as sns
sns.set_context('talk')
sns.set_style('darkgrid')
plt.rcParams['figure.figsize'] = 12, 8 # plotsize
import numpy as np
# don't do `from numpy import *`
import pandas as pd
# If you have a specific function that you'd like to import
from numpy.random import randn
x = np.arange(100)
y = np.sin(x)
plt.plot(x, y);
%matplotlib notebook
x = np.arange(10)
y = np.sin(x)
plt.plot(x, y)#;
Explanation: Further Python Basics
End of explanation
%%bash
for num in {1..5}
do
for infile in *;
do
echo $num $infile
done
wc $infile
done
print("hi")
!pwd
!ping google.com
this_is_magic = "Can you believe you can pass variables and strings like this?"
!echo $this_is_magic
hey
Explanation: Magics!
% and %% magics
interact
embed image
embed links, youtube
link notebooks
Check out http://matplotlib.org/gallery.html select your favorite.
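A minimal sketch of the items above that are not shown in these cells (the image path and video id are placeholders):
from IPython.display import Image, YouTubeVideo
# Image('figures/example.png')            # embed an image from disk
# YouTubeVideo('VIDEO_ID')                # embed a YouTube video by its id
from ipywidgets import interact
# interact(lambda x: x ** 2, x=(0, 10))   # simple interactive slider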
End of explanation
x = np.arange(10000)
print(x) # smart printing
print(x[0]) # first element
print(x[-1]) # last element
print(x[0:5]) # first 5 elements (also x[:5])
print(x[:]) # "Everything"
print(x[-5:]) # last five elements
print(x[-5:-2])
print(x[-5:-1]) # not final value -- not inclusive on right
x = np.random.randint(5, 5000, (3, 5))
x
np.sum(x)
x.sum()
np.sum(x)
np.sum(x, axis=0)
np.sum(x, axis=1)
x.sum(axis=1)
# Multi dimension array slice with a comma
x[:, 2]
y = np.linspace(10, 20, 11)
y
np.linspace?
np.linspace()
# shift-tab; shift-tab-tab
np.
def does_it(first=x, second=y):
    """This is my doc"""
pass
y[[3, 5, 7]]
does_it()
num = 3000
x = np.linspace(1.0, 300.0, num)
y = np.random.rand(num)
z = np.sin(x)
np.savetxt("example.txt", np.transpose((x, y, z)))
%less example.txt
!wc example.txt
!head example.txt
#Not a good idea
a = []
b = []
for line in open("example.txt", 'r'):
a.append(line[0])
b.append(line[2])
a[:10] # Whoops!
a = []
b = []
for line in open("example.txt", 'r'):
line = line.split()
a.append(line[0])
b.append(line[2])
a[:10] # Strings!
a = []
b = []
for line in open("example.txt", 'r'):
line = line.split()
a.append(float(line[0]))
b.append(float(line[2]))
a[:10] # Lists!
# Do this!
a, b = np.loadtxt("example.txt", unpack=True, usecols=(0,2))
a
Explanation: Numpy
If you have arrays of numbers, use numpy or pandas (built on numpy) to represent the data. Tons of very fast underlying code.
End of explanation
from numpy.random import randn
num = 50
x = np.linspace(2.5, 300, num)
y = randn(num)
plt.scatter(x, y)
y > 1
y[y > 1]
y[(y < 1) & (y > -1)]
plt.scatter(x, y, c='b', s=50)
plt.scatter(x[(y < 1) & (y > -1)], y[(y < 1) & (y > -1)], c='r', s=50)
y[~((y < 1) & (y > -1))] = 1.0
plt.scatter(x, y, c='b')
plt.scatter(x, np.clip(y, -0.5, 0.5), color='red')
num = 350
slope = 0.3
x = randn(num) * 50. + 150.0
y = randn(num) * 5 + x * slope
plt.scatter(x, y, c='b')
# plt.scatter(x[(y < 1) & (y > -1)], y[(y < 1) & (y > -1)], c='r')
# np.argsort, np.sort, complicated index slicing
dframe = pd.DataFrame({'x': x, 'y': y})
g = sns.jointplot('x', 'y', data=dframe, kind="reg")
Explanation: Matplotlib and Numpy
End of explanation |
13,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Square Wave Generator
A square wave is a periodic waveform that alternates between two discrete values.
Here's an example square wave that is generated using a simple Python function.
Step1: To implement our square wave in magma, we start by importing the IceStick module from loam. We instance the IceStick and turn on the Clock and J3[0] (configured as an output).
Now we'll use magma and mantle to implement a square wave generator.
Since our square wave just toggles between 0 and 1 using a fixed period, we can use any bit in a synchronous counter to implement it (choosing a certain counter bit will change the period).
Step2: Compile and build the circuit. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.arange(0, 100)
def square(x):
return (x % 50) < 25
plt.plot(x, square(x))
import magma as m
m.set_mantle_target("ice40")
Explanation: Square Wave Generator
A square wave is a periodic waveform that alternates between two discrete values.
Here's an example square wave that is generated using a simple Python function.
End of explanation
import mantle
from loam.boards.icestick import IceStick
icestick = IceStick()
icestick.Clock.on()
icestick.J3[0].output().on()
main = icestick.main()
counter = mantle.Counter(32)
square = counter.O[9]
m.wire( square, main.J3 )
Explanation: To implement our square wave in magma, we start by importing the IceStick module from loam. We instance the IceStick and turn on the Clock and J3[0] (configured as an output).
Now we'll use magma and mantle to implement a square wave generator.
Since our square wave just toggles between 0 and 1 using a fixed period, we can use any bit in a synchronous counter to implement it (choosing a certain counter bit will change the period).
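As a rough check on the period (a sketch; assumes the IceStick's 12 MHz clock and counter bit 9 as used above):
clk_hz = 12e6
bit = 9
print(clk_hz / 2 ** (bit + 1), "Hz")  # roughly an 11.7 kHz square wave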
End of explanation
m.compile('build/square', main)
%%bash
cd build
cat square.pcf
yosys -q -p 'synth_ice40 -top main -blif square.blif' square.v
arachne-pnr -q -d 1k -o square.txt -p square.pcf square.blif
icepack square.txt square.bin
iceprog square.bin
Explanation: Compile and build the circuit.
End of explanation |
13,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stokes solver for asymptotic flow
Import away.
Step1: Load a frame from a real simulation.
Step2: Load the governing properties from the frame.
Step3: Load the last midplane slice of the scalar field and manipulate it into a periodic box.
Step4: Make sure it looks OK.
Step5: Stokes is linear and we have periodic boundaries, so we can solve it directly using Fourier transforms and the frequency-space Green's function (which is diagonal).
Step6: Look ok?
Step7: Now we want to turn this into something Darcy-Weisbach-esque. We don't have uniform forcing, so we take an average.
Step8: Rayleigh-Taylor researchers like the Froude number, which doesn't really make sense here
Step9: Instead, we normalize "the right way" using the viscosity
Step10: Print everything out. | Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 16.0)
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from scipy import fftpack
from numpy import fft
import json
from functools import partial
class Foo: pass
from chest import Chest
from slict import CachedSlict
from glopen import glopen, glopen_many
Explanation: Stokes solver for asymptotic flow
Import away.
End of explanation
name = "HighAspect/HA_viscosity_4.0E-4/HA_viscosity_4.0E-4"
arch = "alcf#dtn_mira/projects/alpha-nek/experiments"
c = Chest(path="{:s}-results".format(name),
open=partial(glopen, endpoint=arch),
open_many = partial(glopen_many, endpoint=arch))
sc = CachedSlict(c)
c.prefetch(sc[:,'t_xy'].full_keys())
with glopen(
"{:s}.json".format(name), mode='r',
endpoint = arch,
) as f:
p = json.load(f)
Explanation: Load a frame from a real simulation.
End of explanation
L = 1./p["kmin"]
Atwood = p["atwood"]
g = p["g"]
viscosity = p["viscosity"]
Explanation: Load the governing properties from the frame.
End of explanation
T_end = sc[:,'t_xy'].keys()[-1]
phi_raw = sc[T_end, 't_xy']
phi_raw = np.concatenate((phi_raw, np.flipud(phi_raw)), axis=0)
phi_raw = np.concatenate((phi_raw, np.flipud(phi_raw)), axis=0)
phi_raw = np.concatenate((phi_raw, np.fliplr(phi_raw)), axis=1)
phi_raw = np.concatenate((phi_raw, np.fliplr(phi_raw)), axis=1)
raw_shape = phi_raw.shape
nx = raw_shape[0]
ny = raw_shape[0]
phi = phi_raw[nx/8:5*nx/8, ny/8:5*ny/8]
nx = phi.shape[0]
ny = phi.shape[1]
Explanation: Load the last midplane slice of the scalar field and manipulate it into a periodic box.
End of explanation
plt.figure()
plt.imshow(phi)
plt.colorbar();
Explanation: Make sure it looks OK.
End of explanation
# Setup the frequencies
dx = L / ny
X = np.tile(np.linspace(0, L, nx), (ny, 1))
Y = np.tile(np.linspace(0, L, ny), (nx, 1)).transpose()
rfreqs = fft.rfftfreq(nx, dx) * 2 * np.pi;
cfreqs = fft.fftfreq(nx, dx)* 2 * np.pi;
rones = np.ones(rfreqs.shape[0]);
cones = np.ones(cfreqs.shape[0]);
# RHS comes from the forcing
F = phi * Atwood * g / viscosity
# Transform forward
p1 = fft.rfftn(F)
# Green's function
p1 = p1 / (np.square(np.outer(cfreqs, rones)) + np.square(np.outer(cones, rfreqs)))
p1[0,0] = 0
# Transform back
w = fft.irfftn(p1)
Explanation: Stokes is linear and we have periodic boundaries, so we can solve it directly using Fourier transforms and the frequency-space Green's function (which is diagonal).
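In Fourier space the problem reduces to a pointwise division (a sketch of what the cell above implements):
$$ \hat{w}(\mathbf{k}) = \frac{\hat{F}(\mathbf{k})}{k_x^{2}+k_y^{2}}, \qquad \hat{w}(\mathbf{0}) = 0 $$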
End of explanation
plt.figure()
plt.imshow(w)
plt.colorbar();
Explanation: Look ok?
End of explanation
A_tilde = np.sum(np.abs(phi))/ (nx * ny)
Explanation: Now we want to turn this into something Darcy-Weisbach-esque. We don't have uniform forcing, so we take an average.
End of explanation
Froude = np.sum(np.abs(w)) / np.sqrt(g * Atwood * A_tilde * L) / (nx * ny)
Explanation: Rayleigh-Taylor researchers like the Froude number, which doesn't really make sense here:
$$ \text{Fr} = \frac{u}{\sqrt{A g L}} $$
End of explanation
Right = np.sum(np.abs(w)) * viscosity / (g * Atwood * A_tilde * L**2)/ (nx*ny)
Explanation: Instead, we normalize "the right way" using the viscosity:
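A sketch of the dimensionless number computed in the cell above:
$$ \mathrm{Right} = \frac{\langle |w| \rangle\,\nu}{A\,g\,\tilde{A}\,L^{2}} $$
where $\langle |w| \rangle$ is the mean speed, $\nu$ the viscosity, $A$ the Atwood number, $\tilde{A}$ the mean scalar amplitude, and $L$ the box size.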
End of explanation
dff = 64 / 16 * 14.227
print("L={:f}, A={:f}, A_til={:f}, g={:f}, nu={:f}. D-W is {:f}".format(
L, Atwood, A_tilde, g, viscosity, 1./(dff*2)))
print(" Froude: {:10f} | Right: {:10f}".format(Froude, Right))
print(" C1 = {:f} * C0 ".format(1./Right))
Explanation: Print everything out.
End of explanation |
13,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 4
Step1: Experiment parameters (SPM12)
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script.
Step2: Specify Nodes (SPM12)
Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.
Step3: Specify input & output stream (SPM12)
Specify where the input data can be found & where and how to save the output data.
Step4: Specify Workflow (SPM12)
Create a workflow and connect the interface nodes and the I/O stream to each other.
Step5: Visualize the workflow (SPM12)
It always helps to visualize your workflow.
Step6: Run the Workflow (SPM12)
Now that everything is ready, we can run the 1st-level analysis workflow. Change n_procs to the number of jobs/cores you want to use.
Step7: Group Analysis with ANTs
Now to run the same group analysis, but on the ANTs normalized images, we just need to change a few parameters
Step8: Now, we just have to recreate the workflow.
Step9: And we can run it!
Step10: Visualize results
Now we create a lot of outputs, but how do they look like? And also, what was the influence of different smoothing kernels and normalization?
Keep in mind, that the group analysis was only done on N=7 subjects, and that we chose a voxel-wise threshold of p<0.005. Nonetheless, we corrected for multiple comparisons with a cluster-wise FDR threshold of p<0.05.
So let's first look at the contrast average
Step11: The results are more or less what you would expect
Step12: Now, let's see the results using the glass brain plotting method. | Python Code:
from nilearn import plotting
%matplotlib inline
from os.path import join as opj
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.interfaces.spm import (OneSampleTTestDesign, EstimateModel,
EstimateContrast, Threshold)
from nipype.interfaces.utility import IdentityInterface
from nipype import Workflow, Node
from nipype.interfaces.fsl import Info
from nipype.algorithms.misc import Gunzip
Explanation: Example 4: 2nd-level Analysis
Last but not least, the 2nd-level analysis. After we removed left-handed subjects and normalized all subject data into template space, we can now do the group analysis. To show the flexibility of Nipype, we will run the group analysis on data with two different smoothing kernel (fwhm= [4, 8]) and two different normalizations (ANTs and SPM).
This example will also directly include thresholding of the output, as well as some visualization.
Let's start!
Group Analysis with SPM
Let's first run the group analysis with the SPM normalized data.
Imports (SPM12)
First, we need to import all the modules we later want to use.
End of explanation
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'
# Smoothing widths (FWHM) used during preprocessing
fwhm = [4, 8]
# Which contrasts to use for the 2nd-level analysis
contrast_list = ['con_0001', 'con_0002', 'con_0003', 'con_0004', 'con_0005', 'con_0006', 'con_0007']
mask = "/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_brainmask.nii.gz"
Explanation: Experiment parameters (SPM12)
It's always a good idea to specify all parameters that might change between experiments at the beginning of your script.
End of explanation
# Gunzip - unzip the mask image
gunzip = Node(Gunzip(in_file=mask), name="gunzip")
# OneSampleTTestDesign - creates one sample T-Test Design
onesamplettestdes = Node(OneSampleTTestDesign(),
name="onesampttestdes")
# EstimateModel - estimates the model
level2estimate = Node(EstimateModel(estimation_method={'Classical': 1}),
name="level2estimate")
# EstimateContrast - estimates group contrast
level2conestimate = Node(EstimateContrast(group_contrast=True),
name="level2conestimate")
cont1 = ['Group', 'T', ['mean'], [1]]
level2conestimate.inputs.contrasts = [cont1]
# Threshold - thresholds contrasts
level2thresh = Node(Threshold(contrast_index=1,
use_topo_fdr=True,
use_fwe_correction=False,
extent_threshold=0,
height_threshold=0.005,
height_threshold_type='p-value',
extent_fdr_p_threshold=0.05),
name="level2thresh")
Explanation: Specify Nodes (SPM12)
Initiate all the different interfaces (represented as nodes) that you want to use in your workflow.
End of explanation
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['contrast_id', 'fwhm_id']),
name="infosource")
infosource.iterables = [('contrast_id', contrast_list),
('fwhm_id', fwhm)]
# SelectFiles - to grab the data (alternative to DataGrabber)
templates = {'cons': opj(output_dir, 'norm_spm', 'sub-*_fwhm{fwhm_id}',
'w{contrast_id}.nii')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_contrast_id_', '')]
subjFolders = [('%s_fwhm_id_%s' % (con, f), 'spm_%s_fwhm%s' % (con, f))
for f in fwhm
for con in contrast_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
Explanation: Specify input & output stream (SPM12)
Specify where the input data can be found & where and how to save the output data.
End of explanation
# Initiation of the 2nd-level analysis workflow
l2analysis = Workflow(name='spm_l2analysis')
l2analysis.base_dir = opj(experiment_dir, working_dir)
# Connect up the 2nd-level analysis components
l2analysis.connect([(infosource, selectfiles, [('contrast_id', 'contrast_id'),
('fwhm_id', 'fwhm_id')]),
(selectfiles, onesamplettestdes, [('cons', 'in_files')]),
(gunzip, onesamplettestdes, [('out_file',
'explicit_mask_file')]),
(onesamplettestdes, level2estimate, [('spm_mat_file',
'spm_mat_file')]),
(level2estimate, level2conestimate, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')]),
(level2conestimate, level2thresh, [('spm_mat_file',
'spm_mat_file'),
('spmT_images',
'stat_image'),
]),
(level2conestimate, datasink, [('spm_mat_file',
'2ndLevel.@spm_mat'),
('spmT_images',
'2ndLevel.@T'),
('con_images',
'2ndLevel.@con')]),
(level2thresh, datasink, [('thresholded_map',
'2ndLevel.@threshold')]),
])
Explanation: Specify Workflow (SPM12)
Create a workflow and connect the interface nodes and the I/O stream to each other.
End of explanation
# Create 1st-level analysis output graph
l2analysis.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename=opj(l2analysis.base_dir, 'spm_l2analysis', 'graph.png'))
Explanation: Visualize the workflow (SPM12)
It always helps to visualize your workflow.
End of explanation
l2analysis.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: Run the Workflow (SPM12)
Now that everything is ready, we can run the 1st-level analysis workflow. Change n_procs to the number of jobs/cores you want to use.
End of explanation
# Change the SelectFiles template and recreate the node
templates = {'cons': opj(output_dir, 'norm_ants', 'sub-*_fwhm{fwhm_id}',
'{contrast_id}_trans.nii')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
# Change the substituion parameters for the datasink
substitutions = [('_contrast_id_', '')]
subjFolders = [('%s_fwhm_id_%s' % (con, f), 'ants_%s_fwhm%s' % (con, f))
for f in fwhm
for con in contrast_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
Explanation: Group Analysis with ANTs
Now to run the same group analysis, but on the ANTs normalized images, we just need to change a few parameters:
End of explanation
# Initiation of the 2nd-level analysis workflow
l2analysis = Workflow(name='ants_l2analysis')
l2analysis.base_dir = opj(experiment_dir, working_dir)
# Connect up the 2nd-level analysis components
l2analysis.connect([(infosource, selectfiles, [('contrast_id', 'contrast_id'),
('fwhm_id', 'fwhm_id')]),
(selectfiles, onesamplettestdes, [('cons', 'in_files')]),
(gunzip, onesamplettestdes, [('out_file',
'explicit_mask_file')]),
(onesamplettestdes, level2estimate, [('spm_mat_file',
'spm_mat_file')]),
(level2estimate, level2conestimate, [('spm_mat_file',
'spm_mat_file'),
('beta_images',
'beta_images'),
('residual_image',
'residual_image')]),
(level2conestimate, level2thresh, [('spm_mat_file',
'spm_mat_file'),
('spmT_images',
'stat_image'),
]),
(level2conestimate, datasink, [('spm_mat_file',
'2ndLevel.@spm_mat'),
('spmT_images',
'2ndLevel.@T'),
('con_images',
'2ndLevel.@con')]),
(level2thresh, datasink, [('thresholded_map',
'2ndLevel.@threshold')]),
])
Explanation: Now, we just have to recreate the workflow.
End of explanation
l2analysis.run('MultiProc', plugin_args={'n_procs': 4})
Explanation: And we can run it!
End of explanation
from nilearn.plotting import plot_stat_map
%matplotlib inline
anatimg = '/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_T1.nii.gz'
plot_stat_map(
'/output/datasink/2ndLevel/ants_con_0001_fwhm4/spmT_0001_thr.nii', title='ants fwhm=4', dim=1,
bg_img=anatimg, threshold=2, vmax=8, display_mode='y', cut_coords=(-45, -30, -15, 0, 15), cmap='viridis');
plot_stat_map(
'/output/datasink/2ndLevel/spm_con_0001_fwhm4/spmT_0001_thr.nii', title='spm fwhm=4', dim=1,
bg_img=anatimg, threshold=2, vmax=8, display_mode='y', cut_coords=(-45, -30, -15, 0, 15), cmap='viridis');
plot_stat_map(
'/output/datasink/2ndLevel/ants_con_0001_fwhm8/spmT_0001_thr.nii', title='ants fwhm=8', dim=1,
bg_img=anatimg, threshold=2, vmax=8, display_mode='y', cut_coords=(-45, -30, -15, 0, 15), cmap='viridis');
plot_stat_map(
'/output/datasink/2ndLevel/spm_con_0001_fwhm8/spmT_0001_thr.nii', title='spm fwhm=8',
bg_img=anatimg, threshold=2, vmax=8, display_mode='y', cut_coords=(-45, -30, -15, 0, 15), cmap='viridis');
Explanation: Visualize results
Now we create a lot of outputs, but what do they look like? And also, what was the influence of different smoothing kernels and normalizations?
Keep in mind that the group analysis was only done on N=7 subjects, and that we chose a voxel-wise threshold of p<0.005. Nonetheless, we corrected for multiple comparisons with a cluster-wise FDR threshold of p<0.05.
So let's first look at the contrast average:
End of explanation
from nilearn.plotting import plot_stat_map
anatimg = '/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_T1.nii.gz'
plot_stat_map(
'/output/datasink/2ndLevel/ants_con_0005_fwhm4/spmT_0001_thr.nii', title='ants fwhm=4', dim=1,
bg_img=anatimg, threshold=2, vmax=8, cmap='viridis', display_mode='y', cut_coords=(-45, -30, -15, 0, 15));
plot_stat_map(
'/output/datasink/2ndLevel/spm_con_0005_fwhm4/spmT_0001_thr.nii', title='spm fwhm=4', dim=1,
bg_img=anatimg, threshold=2, vmax=8, cmap='viridis', display_mode='y', cut_coords=(-45, -30, -15, 0, 15));
plot_stat_map(
'/output/datasink/2ndLevel/ants_con_0005_fwhm8/spmT_0001_thr.nii', title='ants fwhm=8', dim=1,
bg_img=anatimg, threshold=2, vmax=8, cmap='viridis', display_mode='y', cut_coords=(-45, -30, -15, 0, 15));
plot_stat_map(
'/output/datasink/2ndLevel/spm_con_0005_fwhm8/spmT_0001_thr.nii', title='spm fwhm=8', dim=1,
bg_img=anatimg, threshold=2, vmax=8, cmap='viridis', display_mode='y', cut_coords=(-45, -30, -15, 0, 15));
Explanation: The results are roughly what you would expect: the peaks are at more or less the same places for the two normalization approaches, and a wider smoothing kernel produces bigger clusters while losing sensitivity for smaller clusters.
Now, let's look at another contrast -- Finger > others. Since we removed left-handed subjects, the activation is seen on the left part of the brain.
End of explanation
from nilearn.plotting import plot_glass_brain
plot_glass_brain(
'/output/datasink/2ndLevel/spm_con_0005_fwhm4/spmT_0001_thr.nii', colorbar=True,
threshold=2, display_mode='lyrz', black_bg=True, vmax=10, title='spm_fwhm4');
plot_glass_brain(
'/output/datasink/2ndLevel/ants_con_0005_fwhm4/spmT_0001_thr.nii', colorbar=True,
threshold=2, display_mode='lyrz', black_bg=True, vmax=10, title='ants_fwhm4');
plot_glass_brain(
'/output/datasink/2ndLevel/spm_con_0005_fwhm8/spmT_0001_thr.nii', colorbar=True,
threshold=2, display_mode='lyrz', black_bg=True, vmax=10, title='spm_fwhm8');
plot_glass_brain(
'/output/datasink/2ndLevel/ants_con_0005_fwhm8/spmT_0001_thr.nii', colorbar=True,
threshold=2, display_mode='lyrz', black_bg=True, vmax=10, title='ants_fwhm8');
Explanation: Now, let's see the results using the glass brain plotting method.
End of explanation |
13,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We can show the separating hyperplane (below).
To discuss
Step1: A more in-depth example
Exercises
* Play with the code to understand the shape of each variable.
* Find out what "target" means.
Step2: Spam detection with logistic regression
You need to download the spam corpus here. Unzip it into the spam-corpus directory.
Step3: First we need features. Then we will use TF-IDF to find the words that are most representative of spam and ham SMS messages.
Explore the training data and the test data, both raw and vectorized.
Why do we call fit_transform() on the training data, but only transform() on the test data?
Step4: Finally, we create a logistic regression classifier. Like every classifier in scikit-learn, it provides fit() and predict(). We should always visualize our data and our results, which is what we do here.
Step5: Performance metrics
OK, we have classified the messages, but how accurately?
Step6: Exercise
What is the confusion matrix for our spam classifier?
Cross validation | Python Code:
# Inspired by https://stackoverflow.com/questions/20045994/how-do-i-plot-the-decision-boundary-of-a-regression-using-matplotlib
# and http://stackoverflow.com/questions/28256058/plotting-decision-boundary-of-logistic-regression
X = np.array(rouge + bleu)
y = [1] * len(rouge) + [0] * len(bleu)
logreg = LogisticRegression()
logreg.fit(X, y)
xx, yy = np.mgrid[0:5:.01, -2:5:.01]
grid = np.c_[xx.ravel(), yy.ravel()]
probs = logreg.predict_proba(grid)[:, 1].reshape(xx.shape)
fig, ax = plt.subplots()
ax.contour(xx, yy, probs, levels=[.5], cmap="Greys", vmin=0, vmax=.6)
plt.scatter([x for x,y in rouge], [y for x,y in rouge], color='red')
plt.scatter([x for x,y in bleu], [y for x,y in bleu], color='blue')
plt.show()
Explanation: We can show the separating hyperplane (below).
To discuss: why is there a misclassified point?
Play with the data.
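For instance, one thing to try (a sketch added for clarity, not from the original notebook; it assumes the rouge, bleu, X and fitted logreg objects from the cell above): the boundary can also be recovered in closed form, since LogisticRegression exposes coef_ and intercept_.
# Decision boundary: w0 + w1*x + w2*y = 0  =>  y = -(w0 + w1*x) / w2
w0 = logreg.intercept_[0]
w1, w2 = logreg.coef_[0]
xs = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)
plt.plot(xs, -(w0 + w1 * xs) / w2, 'k--')  # should coincide with the 0.5 contour above
plt.scatter([x for x, y in rouge], [y for x, y in rouge], color='red')
plt.scatter([x for x, y in bleu], [y for x, y in bleu], color='blue')
plt.show()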
End of explanation
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
Y = iris.target
h = .02 # step size in the mesh
logreg = linear_model.LogisticRegression(C=1e5)
# we create an instance of Neighbours Classifier and fit the data.
logreg.fit(X, Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4, 3))
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
Explanation: A more in-depth example
Exercises
* Play with the code to understand the shape of each variable.
* Find out what "target" means.
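A possible starting point for that second exercise (an added sketch; the attributes used are standard scikit-learn ones):
print(iris.target_names)       # the three species names encoded by target
print(np.unique(iris.target))  # the integer codes 0, 1, 2 used as Y above
print(X.shape, Y.shape)        # shapes of the selected features and of the labels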
End of explanation
df = pd.read_csv('spam-corpus/SMSSpamCollection', delimiter='\t', header=None)
print(df.head())
print('\n')
print('Number of spam messages: {n}'.format(n=df[df[0] == 'spam'][0].count()))
print('Number of ham messages: {n}'.format(n=df[df[0] == 'ham'][0].count()))
Explanation: Spam detection with logistic regression
You need to download the spam corpus here. Unzip it into the spam-corpus directory.
End of explanation
from sklearn.cross_validation import train_test_split, cross_val_score
X_train_raw, X_test_raw, y_train, y_test = train_test_split(df[1], df[0])
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(X_train_raw)
X_test = vectorizer.transform(X_test_raw)
Explanation: First we need features. Then we will use TF-IDF to find the words that are most representative of spam and ham SMS messages.
Explore the training data and the test data, both raw and vectorized.
Why do we call fit_transform() on the training data, but only transform() on the test data?
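A small check that illustrates the answer (an added sketch, assuming the vectorizer fitted above): fit_transform() learns the vocabulary and IDF weights from the training messages only, and transform() reuses exactly that vocabulary for the test messages, so both matrices share the same columns.
print(len(vectorizer.vocabulary_))  # vocabulary learned from X_train_raw only
print(X_train.shape, X_test.shape)  # same number of columns for train and test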
End of explanation
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
predictions = classifier.predict(X_test)
num_to_show = 5
for msg, prediction in zip(X_test_raw[:num_to_show], predictions[:num_to_show]):
print('Prediction: {pred}.\nMessage: {msg}\n'.format(
pred=prediction, msg=msg))
Explanation: Finally, we create a logistic regression classifier. Like every classifier in scikit-learn, it provides fit() and predict(). We should always visualize our data and our results, which is what we do here.
End of explanation
from sklearn.metrics import confusion_matrix
yy_test = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
yy_pred = [0, 1, 0, 0, 0, 0, 0, 1, 1, 1]
confusion = confusion_matrix(yy_test, yy_pred)
print(confusion)
plt.matshow(confusion)
plt.title('Confusion matrix')
plt.gray()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# Ou si on voudrait que le noir nous montre les plus communs :
invert_colors = np.ones(confusion.shape) * confusion.max()
plt.matshow(invert_colors - confusion)
plt.title('Confusion matrix')
plt.gray()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Explanation: Performance metrics
OK, we have classified the messages, but how accurately?
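One way to quantify it, shown as a hedged sketch (treating 'spam' as the positive class is an assumption made here):
from sklearn.metrics import accuracy_score, precision_score, recall_score
print('Accuracy: ', accuracy_score(y_test, predictions))
print('Precision:', precision_score(y_test, predictions, pos_label='spam'))
print('Recall:   ', recall_score(y_test, predictions, pos_label='spam'))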
End of explanation
good_scores = cross_val_score(classifier, X_train, y_train, cv=5)
random_X_train = np.random.rand(X_train.shape[0], X_train.shape[1])
bad_scores = cross_val_score(classifier, random_X_train, y_train, cv=5)
print(good_scores)
print(bad_scores)
Explanation: Exercise
What is the confusion matrix for our spam classifier?
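A possible answer, as a sketch (passing labels explicitly is an assumption made here so that the row/column order is unambiguous):
from sklearn.metrics import confusion_matrix
spam_confusion = confusion_matrix(y_test, predictions, labels=['ham', 'spam'])
print(spam_confusion)  # rows: true ham/spam, columns: predicted ham/spam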
Cross validation
End of explanation |
13,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Try what you've learned so far
Now we have some time for you to try out the Python basics that you've just learned.
1. Float point precision
One thing to be aware of with floating point arithmetic is that its precision is limited, which can cause equality tests to be unstable. For example
Step1: Why is this the case? It turns out that it is not a behavior unique to Python, but is due to the fixed-precision format of the binary floating-point storage.
All programming languages using floating-point numbers store them in a fixed number of bits, and this leads some numbers to be represented only approximately.
We can see this by printing the three values to high precision
Step2: Python internally truncates these representations at 52 bits beyond the first nonzero bit on most systems.
This rounding error for floating-point values is a necessary evil of working with floating-point numbers.
The best way to deal with it is to always keep in mind that floating-point arithmetic is approximate, and never rely on exact equality tests with floating-point values.
2. Explore Booleans
Booleans can also be constructed using the bool() object constructor
Step3: The Boolean conversion of None is always False
Step4: For strings, bool(s) is False for empty strings and True otherwise
Step5: For sequences, which we'll see in the next section, the Boolean representation is False for empty sequences and True for any other sequences
Step6: 3. Mutability of lists and tuples
Step7: 4. Dictionary attributes | Python Code:
0.1 + 0.2 == 0.3
0.2 + 0.2 == 0.4
Explanation: Try what you've learned so far
Now we have some time for you to try out the Python basics that you've just learned.
1. Float point precision
One thing to be aware of with floating point arithmetic is that its precision is limited, which can cause equality tests to be unstable. For example:
End of explanation
print("0.1 = {0:.17f}".format(0.1))
print("0.2 = {0:.17f}".format(0.2))
print("0.3 = {0:.17f}".format(0.3))
Explanation: Why is this the case? It turns out that it is not a behavior unique to Python, but is due to the fixed-precision format of the binary floating-point storage.
All programming languages using floating-point numbers store them in a fixed number of bits, and this leads some numbers to be represented only approximately.
We can see this by printing the three values to high precision:
End of explanation
bool(2016)
bool(0)
bool(3.1415)
Explanation: Python internally truncates these representations at 52 bits beyond the first nonzero bit on most systems.
This rounding error for floating-point values is a necessary evil of working with floating-point numbers.
The best way to deal with it is to always keep in mind that floating-point arithmetic is approximate, and never rely on exact equality tests with floating-point values.
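In practice that usually means comparing with a tolerance instead of ==; a minimal sketch (math.isclose requires Python 3.5+):
import math
print(math.isclose(0.1 + 0.2, 0.3))   # True: equal within a relative tolerance
print(abs((0.1 + 0.2) - 0.3) < 1e-9)  # True: explicit absolute tolerance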
2. Explore Booleans
Booleans can also be constructed using the bool() object constructor: values of any other type can be converted to Boolean via predictable rules.
For example, any numeric type is False if equal to zero, and True otherwise:
End of explanation
bool(None)
Explanation: The Boolean conversion of None is always False:
End of explanation
bool("")
bool("abc")
Explanation: For strings, bool(s) is False for empty strings and True otherwise:
End of explanation
bool([1, 2, 3])
bool([])
Explanation: For sequences, which we'll see in the next section, the Boolean representation is False for empty sequences and True for any other sequences
End of explanation
s = (1, 2, 3) # tuple
s[1] = 4 # Can you remember that tuples cannot be changed!
s.append(4) # this is never going to work as it is a tuple!
s
t = [1, 2, 3] # But a list might just do the job:
t[1] = 4
t.append(4)
t
Explanation: 3. Mutability of lists and tuples
End of explanation
foo = dict(a='123', say='hellozles', other_key=['wow', 'a', 'list', '!'], and_another_key=3.14)
# foo.update(dict(a=42))
## or
# foo.update(a=42)
# foo.pop('say')
# foo.items()
# foo.keys()
# foo.values()
Explanation: 4. Dictionary attributes
End of explanation |
13,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Credentials
Make sure to go through Setup first!
Let's check that the environment variables have been set... We'll just try one
Step1: Google Cloud Storage
Let's see if we can create a bucket with boto (using credentials, project ID, etc. specified in boto config file)...
Step2: Listing existing buckets...
Step3: Upload a file to the new bucket
Step4: See contents of the bucket on the web interface (URL will be outputted below)
Step5: Google Prediction
Initialize API wrapper
Step6: Making predictions against a hosted model
Let's use the sample.sentiment hosted model (made publicly available by Google) | Python Code:
GPRED_PROJECT_ID = %env GPRED_PROJECT_ID
Explanation: Credentials
Make sure to go through Setup first!
Let's check that the environment variables have been set... We'll just try one:
End of explanation
import datetime
now = datetime.datetime.now()
BUCKET_NAME = 'test_' + GPRED_PROJECT_ID + now.strftime("%Y-%m-%d") # lower case letters required, no upper case allowed
import boto
import gcs_oauth2_boto_plugin
project_id = %env GPRED_PROJECT_ID
header_values = {"x-goog-project-id": project_id}
boto.storage_uri(BUCKET_NAME, 'gs').create_bucket(headers=header_values)
Explanation: Google Cloud Storage
Let's see if we can create a bucket with boto (using credentials, project ID, etc. specified in boto config file)...
End of explanation
uri = boto.storage_uri('', 'gs')
# If the default project is defined, call get_all_buckets() without arguments.
for bucket in uri.get_all_buckets(headers=header_values):
print bucket.name
Explanation: Listing existing buckets...
End of explanation
import os
os.system("echo 'hello!' > newfile")
filename = 'newfile'
boto.storage_uri(BUCKET_NAME + '/' + filename, 'gs').new_key().set_contents_from_file(open(filename))
Explanation: Upload a file to the new bucket
End of explanation
print "https://console.developers.google.com/project/" + project_id + "/storage/browser/" + BUCKET_NAME + "/?authuser=0"
Explanation: See contents of the bucket on the web interface (URL will be outputted below)
End of explanation
import googleapiclient.gpred as gpred
oauth_file = %env GPRED_OAUTH_FILE
api = gpred.api(oauth_file)
Explanation: Google Prediction
Initialize API wrapper
End of explanation
# projectname has to be 414649711441
prediction_request = api.hostedmodels().predict(project='414649711441',
hostedModelName='sample.sentiment',
body={'input': {'csvInstance': ['I hate that stuff is so stupid']}})
result = prediction_request.execute()
# We can print the raw result
print result
Explanation: Making predictions against a hosted model
Let's use the sample.sentiment hosted model (made publicly available by Google)
End of explanation |
13,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train Model with XLA_CPU (and CPU*)
Some operations do not have XLA_CPU equivalents, so we still need to use CPU.
Step1: Reset TensorFlow Graph
Useful in Jupyter Notebooks
Step2: Create TensorFlow Session
Step3: Generate Model Version (current timestamp)
Step4: Load Model Training and Test/Validation Data
Step5: Randomly Initialize Variables (Weights and Bias)
The goal is to learn more accurate Weights and Bias during training.
Step6: View Accuracy of Pre-Training, Initial Random Variables
We want this to be close to 0, but it's relatively far away. This is why we train!
Step7: Setup Loss Summary Operations for Tensorboard
Step8: Train Model
Step9: View Loss Summaries in Tensorboard
Navigate to the Scalars and Graphs tab at this URL
Step10: Show Graph
Step11: View XLA JIT Visualizations
Run the next cell and click on the hlo_graph_*.png files in the left-navigation. | Python Code:
import tensorflow as tf
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
Explanation: Train Model with XLA_CPU (and CPU*)
Some operations do not have XLA_CPU equivalents, so we still need to use CPU.
End of explanation
tf.reset_default_graph()
Explanation: Reset TensorFlow Graph
Useful in Jupyter Notebooks
End of explanation
config = tf.ConfigProto(
log_device_placement=True,
)
config.graph_options.optimizer_options.global_jit_level \
= tf.OptimizerOptions.ON_1
print(config)
sess = tf.Session(config=config)
print(sess)
Explanation: Create TensorFlow Session
End of explanation
from datetime import datetime
version = int(datetime.now().strftime("%s"))
Explanation: Generate Model Version (current timestamp)
End of explanation
num_samples = 100000
import numpy as np
import pylab
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
with tf.device("/device:XLA_CPU:0"):
x_observed = tf.placeholder(shape=[None],
dtype=tf.float32,
name='x_observed')
print(x_observed)
y_pred = W * x_observed + b
print(y_pred)
learning_rate = 0.025
with tf.device("/device:XLA_CPU:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Op: ", optimizer_op)
print("Train Op: ", train_op)
Explanation: Load Model Training and Test/Validation Data
End of explanation
with tf.device("/device:XLA_CPU:0"):
init_op = tf.global_variables_initializer()
print(init_op)
sess.run(init_op)
print("Initial random W: %f" % sess.run(W))
print("Initial random b: %f" % sess.run(b))
Explanation: Randomly Initialize Variables (Weights and Bias)
The goal is to learn more accurate Weights and Bias during training.
End of explanation
def test(x, y):
return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
test(x_train, y_train)
Explanation: View Accuracy of Pre-Training, Initial Random Variables
We want this to be close to 0, but it's relatively far away. This is why we train!
End of explanation
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/xla_cpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/xla_cpu/%s/test' % version,
graph=tf.get_default_graph())
Explanation: Setup Loss Summary Operations for Tensorboard
End of explanation
%%time
from tensorflow.python.client import timeline
with tf.device("/device:XLA_CPU:0"):
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline-xla-cpu.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 10 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y_pred,
feed_dict={x_observed: x_train,
y_observed: y_train}),
".",
label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
Explanation: Train Model
End of explanation
import os
optimize_me_parent_path = '/root/models/optimize_me/linear/xla_cpu'
saver = tf.train.Saver()
os.system('rm -rf %s' % optimize_me_parent_path)
os.makedirs(optimize_me_parent_path)
unoptimized_model_graph_path = '%s/unoptimized_xla_cpu.pb' % optimize_me_parent_path
tf.train.write_graph(sess.graph_def,
'.',
unoptimized_model_graph_path,
as_text=False)
print(unoptimized_model_graph_path)
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
saver.save(sess,
save_path=model_checkpoint_path)
print(model_checkpoint_path)
print(optimize_me_parent_path)
os.listdir(optimize_me_parent_path)
sess.close()
Explanation: View Loss Summaries in Tensorboard
Navigate to the Scalars and Graphs tab at this URL:
http://[ip-address]:6006
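If TensorBoard is installed next to TensorFlow, it can typically be started from a terminal against the log directory used above (the exact path and port are assumptions about your setup):
tensorboard --logdir=/root/tensorboard/linear/xla_cpu --port=6006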
Save Graph For Optimization
We will use this later.
End of explanation
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/xla_cpu/unoptimized_xla_cpu.pb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
graph = graph_pb2.GraphDef()
with open(input_graph, "rb") as fh:
if is_input_graph_binary:
graph.ParseFromString(fh.read())
else:
text_format.Merge(fh.read(), graph)
with open(output_dot, "wt") as fh:
print("digraph graphname {", file=fh)
for node in graph.node:
output_name = node.name
print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
for input_full_name in node.input:
parts = input_full_name.split(":")
input_name = re.sub(r"^\^", "", parts[0])
print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
print("}", file=fh)
print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
input_graph='/root/models/optimize_me/linear/xla_cpu/unoptimized_xla_cpu.pb'
output_dot='/root/notebooks/unoptimized_xla_cpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
%%bash
dot -T png /root/notebooks/unoptimized_xla_cpu.dot \
-o /root/notebooks/unoptimized_xla_cpu.png > /tmp/a.out
from IPython.display import Image
Image('/root/notebooks/unoptimized_xla_cpu.png', width=1024, height=768)
Explanation: Show Graph
End of explanation
%%bash
dot -T png /tmp/hlo_graph_1.*.dot -o /root/notebooks/hlo_graph_1.png &>/dev/null
dot -T png /tmp/hlo_graph_10.*.dot -o /root/notebooks/hlo_graph_10.png &>/dev/null
dot -T png /tmp/hlo_graph_50.*.dot -o /root/notebooks/hlo_graph_50.png &>/dev/null
dot -T png /tmp/hlo_graph_75.*.dot -o /root/notebooks/hlo_graph_75.png &>/dev/null
Explanation: View XLA JIT Visualizations
Run the next cell and click on the hlo_graph_*.png files in the left-navigation.
End of explanation |
13,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kernel Density Estimation
Kernel density estimation is the process of estimating an unknown probability density function using a kernel function $K(u)$. While a histogram counts the number of data points in somewhat arbitrary regions, a kernel density estimate is a function defined as the sum of a kernel function on every data point. The kernel function typically exhibits the following properties
Step1: A univariate example
Step2: We create a bimodal distribution
Step3: The simplest non-parametric technique for density estimation is the histogram.
Step4: Fitting with the default arguments
The histogram above is discontinuous. To compute a continuous probability density function,
we can use kernel density estimation.
We initialize a univariate kernel density estimator using KDEUnivariate.
Step5: We present a figure of the fit, as well as the true distribution.
Step6: In the code above, default arguments were used. We can also vary the bandwidth of the kernel, as we will now see.
Varying the bandwidth using the bw argument
The bandwidth of the kernel can be adjusted using the bw argument.
In the following example, a bandwidth of bw=0.2 seems to fit the data well.
Step7: Comparing kernel functions
In the example above, a Gaussian kernel was used. Several other kernels are also available.
Step8: The available kernel functions
Step9: The available kernel functions on three data points
We now examine how the kernel density estimate will fit to three equally spaced data points.
Step10: A more difficult case
The fit is not always perfect. See the example below for a harder case.
Step11: The KDE is a distribution
Since the KDE is a distribution, we can access attributes and methods such as
Step12: Cumulative distribution, it's inverse, and the survival function
Step13: The Cumulative Hazard Function | Python Code:
%matplotlib inline
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.distributions.mixture_rvs import mixture_rvs
Explanation: Kernel Density Estimation
Kernel density estimation is the process of estimating an unknown probability density function using a kernel function $K(u)$. While a histogram counts the number of data points in somewhat arbitrary regions, a kernel density estimate is a function defined as the sum of a kernel function on every data point. The kernel function typically exhibits the following properties:
Symmetry such that $K(u) = K(-u)$.
Normalization such that $\int_{-\infty}^{\infty} K(u) \ du = 1$ .
Monotonically decreasing such that $K'(u) < 0$ when $u > 0$.
Expected value equal to zero such that $\mathrm{E}[K] = 0$.
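Putting this together (formula added for clarity, using the standard notation rather than anything taken from this document), the kernel density estimate built from samples $x_1, \dots, x_n$ with bandwidth $h$ is
$$\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\left(\frac{x - x_i}{h}\right).$$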
For more information about kernel density estimation, see for instance Wikipedia - Kernel density estimation.
A univariate kernel density estimator is implemented in sm.nonparametric.KDEUnivariate.
In this example we will show the following:
Basic usage, how to fit the estimator.
The effect of varying the bandwidth of the kernel using the bw argument.
The various kernel functions available using the kernel argument.
End of explanation
np.random.seed(12345) # Seed the random number generator for reproducible results
Explanation: A univariate example
End of explanation
# Location, scale and weight for the two distributions
dist1_loc, dist1_scale, weight1 = -1, 0.5, 0.25
dist2_loc, dist2_scale, weight2 = 1, 0.5, 0.75
# Sample from a mixture of distributions
obs_dist = mixture_rvs(
prob=[weight1, weight2],
size=250,
dist=[stats.norm, stats.norm],
kwargs=(
dict(loc=dist1_loc, scale=dist1_scale),
dict(loc=dist2_loc, scale=dist2_scale),
),
)
Explanation: We create a bimodal distribution: a mixture of two normal distributions with locations at -1 and 1.
End of explanation
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
# Scatter plot of data samples and histogram
ax.scatter(
obs_dist,
np.abs(np.random.randn(obs_dist.size)),
zorder=15,
color="red",
marker="x",
alpha=0.5,
label="Samples",
)
lines = ax.hist(obs_dist, bins=20, edgecolor="k", label="Histogram")
ax.legend(loc="best")
ax.grid(True, zorder=-5)
Explanation: The simplest non-parametric technique for density estimation is the histogram.
End of explanation
kde = sm.nonparametric.KDEUnivariate(obs_dist)
kde.fit() # Estimate the densities
Explanation: Fitting with the default arguments
The histogram above is discontinuous. To compute a continuous probability density function,
we can use kernel density estimation.
We initialize a univariate kernel density estimator using KDEUnivariate.
End of explanation
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
# Plot the histogram
ax.hist(
obs_dist,
bins=20,
density=True,
label="Histogram from samples",
zorder=5,
edgecolor="k",
alpha=0.5,
)
# Plot the KDE as fitted using the default arguments
ax.plot(kde.support, kde.density, lw=3, label="KDE from samples", zorder=10)
# Plot the true distribution
true_values = (
stats.norm.pdf(loc=dist1_loc, scale=dist1_scale, x=kde.support) * weight1
+ stats.norm.pdf(loc=dist2_loc, scale=dist2_scale, x=kde.support) * weight2
)
ax.plot(kde.support, true_values, lw=3, label="True distribution", zorder=15)
# Plot the samples
ax.scatter(
obs_dist,
np.abs(np.random.randn(obs_dist.size)) / 40,
marker="x",
color="red",
zorder=20,
label="Samples",
alpha=0.5,
)
ax.legend(loc="best")
ax.grid(True, zorder=-5)
Explanation: We present a figure of the fit, as well as the true distribution.
End of explanation
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
# Plot the histogram
ax.hist(
obs_dist,
bins=25,
label="Histogram from samples",
zorder=5,
edgecolor="k",
density=True,
alpha=0.5,
)
# Plot the KDE for various bandwidths
for bandwidth in [0.1, 0.2, 0.4]:
kde.fit(bw=bandwidth) # Estimate the densities
ax.plot(
kde.support,
kde.density,
"--",
lw=2,
color="k",
zorder=10,
label="KDE from samples, bw = {}".format(round(bandwidth, 2)),
)
# Plot the true distribution
ax.plot(kde.support, true_values, lw=3, label="True distribution", zorder=15)
# Plot the samples
ax.scatter(
obs_dist,
np.abs(np.random.randn(obs_dist.size)) / 50,
marker="x",
color="red",
zorder=20,
label="Data samples",
alpha=0.5,
)
ax.legend(loc="best")
ax.set_xlim([-3, 3])
ax.grid(True, zorder=-5)
Explanation: In the code above, default arguments were used. We can also vary the bandwidth of the kernel, as we will now see.
Varying the bandwidth using the bw argument
The bandwidth of the kernel can be adjusted using the bw argument.
In the following example, a bandwidth of bw=0.2 seems to fit the data well.
End of explanation
from statsmodels.nonparametric.kde import kernel_switch
list(kernel_switch.keys())
Explanation: Comparing kernel functions
In the example above, a Gaussian kernel was used. Several other kernels are also available.
End of explanation
# Create a figure
fig = plt.figure(figsize=(12, 5))
# Enumerate every option for the kernel
for i, (ker_name, ker_class) in enumerate(kernel_switch.items()):
# Initialize the kernel object
kernel = ker_class()
# Sample from the domain
domain = kernel.domain or [-3, 3]
x_vals = np.linspace(*domain, num=2 ** 10)
y_vals = kernel(x_vals)
# Create a subplot, set the title
ax = fig.add_subplot(3, 3, i + 1)
ax.set_title('Kernel function "{}"'.format(ker_name))
ax.plot(x_vals, y_vals, lw=3, label="{}".format(ker_name))
ax.scatter([0], [0], marker="x", color="red")
plt.grid(True, zorder=-5)
ax.set_xlim(domain)
plt.tight_layout()
Explanation: The available kernel functions
End of explanation
# Create three equidistant points
data = np.linspace(-1, 1, 3)
kde = sm.nonparametric.KDEUnivariate(data)
# Create a figure
fig = plt.figure(figsize=(12, 5))
# Enumerate every option for the kernel
for i, kernel in enumerate(kernel_switch.keys()):
# Create a subplot, set the title
ax = fig.add_subplot(3, 3, i + 1)
ax.set_title('Kernel function "{}"'.format(kernel))
# Fit the model (estimate densities)
kde.fit(kernel=kernel, fft=False, gridsize=2 ** 10)
# Create the plot
ax.plot(kde.support, kde.density, lw=3, label="KDE from samples", zorder=10)
ax.scatter(data, np.zeros_like(data), marker="x", color="red")
plt.grid(True, zorder=-5)
ax.set_xlim([-3, 3])
plt.tight_layout()
Explanation: The available kernel functions on three data points
We now examine how the kernel density estimate will fit to three equally spaced data points.
End of explanation
obs_dist = mixture_rvs(
[0.25, 0.75],
size=250,
dist=[stats.norm, stats.beta],
kwargs=(dict(loc=-1, scale=0.5), dict(loc=1, scale=1, args=(1, 0.5))),
)
kde = sm.nonparametric.KDEUnivariate(obs_dist)
kde.fit()
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.hist(obs_dist, bins=20, density=True, edgecolor="k", zorder=4, alpha=0.5)
ax.plot(kde.support, kde.density, lw=3, zorder=7)
# Plot the samples
ax.scatter(
obs_dist,
np.abs(np.random.randn(obs_dist.size)) / 50,
marker="x",
color="red",
zorder=20,
label="Data samples",
alpha=0.5,
)
ax.grid(True, zorder=-5)
Explanation: A more difficult case
The fit is not always perfect. See the example below for a harder case.
End of explanation
obs_dist = mixture_rvs(
[0.25, 0.75],
size=1000,
dist=[stats.norm, stats.norm],
kwargs=(dict(loc=-1, scale=0.5), dict(loc=1, scale=0.5)),
)
kde = sm.nonparametric.KDEUnivariate(obs_dist)
kde.fit(gridsize=2 ** 10)
kde.entropy
kde.evaluate(-1)
Explanation: The KDE is a distribution
Since the KDE is a distribution, we can access attributes and methods such as:
entropy
evaluate
cdf
icdf
sf
cumhazard
End of explanation
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.plot(kde.support, kde.cdf, lw=3, label="CDF")
ax.plot(np.linspace(0, 1, num=kde.icdf.size), kde.icdf, lw=3, label="Inverse CDF")
ax.plot(kde.support, kde.sf, lw=3, label="Survival function")
ax.legend(loc="best")
ax.grid(True, zorder=-5)
Explanation: Cumulative distribution, it's inverse, and the survival function
End of explanation
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.plot(kde.support, kde.cumhazard, lw=3, label="Cumulative Hazard Function")
ax.legend(loc="best")
ax.grid(True, zorder=-5)
Explanation: The Cumulative Hazard Function
End of explanation |
13,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Twitter
Step2: Our query this time is going to extract the both the hashtag and the tweets associated with the hashtag. We are going to created documents full of tweets that are defined by their hashtags so we need to be able to reference the hashtags per tweet.
...oh and we are only taking from Chicago.
Step3: We can now use pandas to count how many times each hashtag was used. We can turn this into a data frame.
Step4: ...and the most popular hashtags for Chicago.
Step5: Twitter is unique from other types of natural language given the constraints on size. This often makes it difficult to find coherent topics from tweets. Therefore, we want to create documents of tweets. Each document is a list of tweets that contain a particular hashtag. So what we want to do is create a list of tweets per hashtag.
Step6: Above, we are grouping by hashtag and then concatenating the tweets per group into a list. So this is going to be a data frame where the first attribute is the hashtag and the second is a list of tweets with that hashtag. Let's take a look...
Step7: We now need to use a helper function to remove some patterns from the tweets that we don't want. First, we don't want '@' signs or '#'s. We also want to remove urls. We will create a regular expression to do that.
Step8: The function above takes in a sting and replace each of the patterns in that string with the replacement. Notice that we use *pats. This is a way to create an unspecified number of arguments. Let's look at an example.
Step9: This took the string s and replaced @ and # with a blank ''.
Below, we are going to create a regular expression that matches urls. We also want these removed
Step10: In natural language processing, you often have to tokenize a task, which is to break it up text up into components. These components are often splitting on words so that each word is a unit called a token. Below we are going to simultaneously remove the patterns we don't want and tokenize each tweet and save it to a list of lists called tokenized_docs.
Step11: We can now look at the first item of tokenized_docs to see what it looks like. Notice that it contains a list/lists.
Step12: We then remove the stop words and return it to a list of lists object.
Step13: After tokenization, there is also stemming. This is the process and getting words to their base version. We are going to do a similar process here where we save it to a list of lists called texts.
NOTE
Step14: And let's look at the first item... | Python Code:
# BE SURE TO RUN THIS CELL BEFORE ANY OF THE OTHER CELLS
import psycopg2
import pandas as pd
import re
# pull in our stopwords
from nltk.corpus import stopwords
stops = stopwords.words('english')
Explanation: Twitter: An Analysis
Part 7
We've explored the basics of natural language processing using Postgres. The steps we took are often a great starting point for the rest of the analysis, but rarely will you ever just stop with those results. You will often have to pull in data after performing some aggregations, joins, etc., and then continue with a general-purpose programming language like Python or R.
In this notebook, we are going to use postgres to pull in our data but then we are going to perform some more complex data carpentry.
End of explanation
# define our query
statement = """
SELECT lower(t.text) as tweet, lower(h.text) as hashtag
FROM twitter.tweet t, twitter.hashtag h
WHERE t.job_id = 273 AND t.text NOT LIKE 'RT%' AND t.iso_language = 'en' AND t.tweet_id_str = h.tweet_id
LIMIT 100000;
"""
try:
    connect_str = "dbname='twitter' user='dsa_ro_user' host='dbase.dsa.missouri.edu' password='readonly'"
# use our connection values to establish a connection
conn = psycopg2.connect(connect_str)
cursor = conn.cursor()
# execute the statement from above
cursor.execute(statement)
column_names = [desc[0] for desc in cursor.description]
# fetch all of the rows associated with the query
rows = cursor.fetchall()
except Exception as e:
print("Uh oh, can't connect. Invalid dbname, user or password?")
print(e)
tweet_dict = {}
for i in list(range(len(column_names))):
tweet_dict['{}'.format(column_names[i])] = [x[i] for x in rows]
tweets = pd.DataFrame(tweet_dict)
tweets.head()
Explanation: Our query this time is going to extract both the hashtags and the tweets associated with them. We are going to create documents full of tweets that are grouped by hashtag, so we need to be able to reference the hashtags per tweet.
...oh and we are only taking tweets from Chicago.
End of explanation
hashtag_groups = tweets.groupby('hashtag').size().sort_values().reset_index()
Explanation: We can now use pandas to count how many times each hashtag was used. We can turn this into a data frame.
End of explanation
hashtag_groups.tail()
Explanation: ...and the most popular hashtags for Chicago.
End of explanation
docs = tweets.groupby('hashtag')['tweet'].apply(list).reset_index()
Explanation: Twitter is unique from other types of natural language given the constraints on size. This often makes it difficult to find coherent topics from tweets. Therefore, we want to create documents of tweets. Each document is a list of tweets that contain a particular hashtag. So what we want to do is create a list of tweets per hashtag.
End of explanation
docs.head()
Explanation: Above, we are grouping by hashtag and then concatenating the tweets per group into a list. So this is going to be a data frame where the first attribute is the hashtag and the second is a list of tweets with that hashtag. Let's take a look...
End of explanation
def removePatterns(string, replacement, *pats):
for pattern in pats:
string = re.sub(pattern,replacement,string)
return string
Explanation: We now need to use a helper function to remove some patterns from the tweets that we don't want. First, we don't want '@' signs or '#'s. We also want to remove urls. We will create a regular expression to do that.
End of explanation
s = "I have @3 friends named #Arnold"
removePatterns(s,'', '#','@')
Explanation: The function above takes in a string and replaces each of the patterns in that string with the replacement. Notice that we use *pats. This is a way to accept an unspecified number of arguments. Let's look at an example.
End of explanation
url = r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
Explanation: This took the string s and replaced @ and # with a blank ''.
Below, we are going to create a regular expression that matches urls. We also want these removed
End of explanation
from nltk.tokenize import RegexpTokenizer
tokenizer = RegexpTokenizer(r'\w+')
tokenized_docs = []
for i in docs['tweet']:
document = []
for text in i:
document.append(tokenizer.tokenize(removePatterns(text,'','@','#',url).lower()))
tokenized_docs.append(document)
Explanation: In natural language processing, you often have to tokenize text, which means breaking it up into components. These components usually come from splitting on words, so that each word is a unit called a token. Below we are going to simultaneously remove the patterns we don't want and tokenize each tweet, saving the result to a list of lists called tokenized_docs.
End of explanation
tokenized_docs[0]
Explanation: We can now look at the first item of tokenized_docs to see what it looks like. Notice that it is a list of lists.
End of explanation
stops_removed = []
for doc in tokenized_docs:
phrases = []
for phrase in doc:
p = [i for i in phrase if i not in stops]
phrases.append(p)
stops_removed.append(phrases)
Explanation: We then remove the stop words and return it to a list of lists object.
End of explanation
from nltk.stem.porter import PorterStemmer
p_stemmer = PorterStemmer()
texts = []
for doc in stops_removed:
stemmed = []
for phrase in doc:
try:
stemmed.append([p_stemmer.stem(i) for i in phrase])
except:
pass
texts.append(stemmed)
Explanation: After tokenization, there is also stemming. This is the process of reducing words to their base form. We are going to do a similar process here, saving the result to a list of lists called texts.
NOTE: This could take a couple of minutes
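As a tiny illustration of what "base form" means here (an added sketch; the outputs in the comments are indicative):
print(p_stemmer.stem('running'))    # 'run'
print(p_stemmer.stem('connected'))  # 'connect'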
End of explanation
texts[0]
Explanation: And let's look at the first item...
End of explanation |
13,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup and basic objects
Get started with EnOSlib on Grid'5000.
Website
Step1: Resources abstractions
In this notebook we won't execute anything remotely, instead we'll just cover some basic abstractions provided by the library.
We start with the abstractions of the resources (machines and networks that are usually given by an infrastructure)
Host
An host is anything we can connect to and act on. Most of the time it corresponds to a machine reachable through SSH.
The datastructure reflects this.
Usually you don't instantiate hosts manually, instead they are brought to you by EnOSlib (because most likely they depend on a scheduler decision like OAR on Grid'5000).
Step2: The local machine can be represented by an instance of the LocalHost object. This is a specialization of an Host, the connection to this host will be made using sub-processes (instead of SSH). We can see it in the extra attribute of the LocalHost object. This extra attribute is actually interpreted when a "remote" action is triggered on our hosts.
Step3: Other types of Hosts are possible. The library has a DockerHost which represents a docker container we want to reach using the docker TCP protocol. One needs to specify where this container is running by passing an host instance.
Step4: The above extra field suggest that the connection to this docker container will be made through an ssh jump to the remote host hosting the container.
This will be done transparently by the library anyway.
Roles
A common pratice when experimenting, especially with distributed applications, is to form logical group of machines.
Indeed, during an experiment your hosts will serve different purposes
Step5: Network and Networks
Network and Networks are the same as Host and Roles but for networks
Step6: Providers (and their configurations)
EnOSlib uses Providers to ... provide resources.
Providers let the user get ownership of some resources (for the time of the experiment) in good shape (e.g access granted, network configured ...).
They transform an abstract Configuration to Roles, Networks | Python Code:
import enoslib as en
Explanation: Setup and basic objects
Get started with EnOSlib on Grid'5000.
Website: https://discovery.gitlabpages.inria.fr/enoslib/index.html
Instant chat: https://framateam.org/enoslib
Source code: https://gitlab.inria.fr/discovery/enoslib
This is the first notebook of a series that will let you discover the main features of EnOSlib on Grid'5000.
If you want to actually execute them you'll need to setup your environment properly.
We sum up here the different steps to achieve this process.
Get a Grid'5000 account
Register using this page.
Pay attention to the fact that uploading a SSH key (public part) is mandatory to perform any EnOSlib action from your local machine.
Make sure the SSH connection is ready. You can follow this tutorial.
Make sure EnOSlib is available in your notebook environment
Follow the steps here.
Using a virtualenv is the way to go, make sure to use one.
Also adding the optional jupyter will improve your experience. (pip install enoslib[jupyter])
Testing the import
End of explanation
bare_host = en.Host("192.168.0.1")
host_with_alias = en.Host("192.168.0.2", alias="one_alias")
host_with_alias_and_username = en.Host("192.168.0.3", alias="one_alias", user="foo")
bare_host
host_with_alias
host_with_alias_and_username
Explanation: Resources abstractions
In this notebook we won't execute anything remotely, instead we'll just cover some basic abstractions provided by the library.
We start with the abstractions of the resources (machines and networks that are usually given by an infrastructure)
Host
A host is anything we can connect to and act on. Most of the time it corresponds to a machine reachable through SSH.
The datastructure reflects this.
Usually you don't instantiate hosts manually, instead they are brought to you by EnOSlib (because most likely they depend on a scheduler decision like OAR on Grid'5000).
End of explanation
localhost = en.LocalHost()
localhost
Explanation: The local machine can be represented by an instance of the LocalHost object. This is a specialization of an Host, the connection to this host will be made using sub-processes (instead of SSH). We can see it in the extra attribute of the LocalHost object. This extra attribute is actually interpreted when a "remote" action is triggered on our hosts.
End of explanation
docker_host = en.DockerHost("alias", "container_name", host_with_alias_and_username)
docker_host
Explanation: Other types of Hosts are possible. The library has a DockerHost which represents a docker container we want to reach using the docker TCP protocol. One needs to specify where this container is running by passing an host instance.
End of explanation
h1 = en.Host("10.0.0.1")
h2 = en.Host("10.0.0.2")
h3 = en.Host("10.0.0.3")
roles = en.Roles()
roles["tag1"] = [h1, h2]
roles["tag2"] = [h3]
roles["tag3"] = [h2, h3]
roles
Explanation: The above extra field suggests that the connection to this docker container will be made through an SSH jump to the remote host hosting the container.
This will be done transparently by the library anyway.
Roles
A common practice when experimenting, especially with distributed applications, is to form logical groups of machines.
Indeed, during an experiment your hosts will serve different purposes: some will host the system you are studying while others will install third-party tools to inject some load, observe ...
A natural way of configuring several sets of hosts differently is to tag them and group them according to their tags.
The Roles datastructure serves this purpose: it lets you group your hosts based on tags. It follows a dict-like interface.
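For instance, with the roles built above (a small added example; the printed repr may differ between EnOSlib versions):
print(roles["tag2"])       # dict-style access by tag, e.g. the list containing h3
print(len(roles["tag3"]))  # a host can belong to several tags at once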
End of explanation
from enoslib.objects import DefaultNetwork
one_network = DefaultNetwork("192.168.1.0/24")
one_network_with_a_pool_of_ips = DefaultNetwork("192.168.1.0/24", ip_start="192.168.1.10", ip_end="192.168.1.100")
one_network
one_network_with_a_pool_of_ips
# get one free ip
ip_gen = one_network_with_a_pool_of_ips.free_ips
next(ip_gen)
# get another one
next(ip_gen)
Explanation: Network and Networks
Network and Networks are the same as Host and Roles but for networks:
Network represents a single network
Networks represents a "Roles" of Network: networks indexed by their tags.
Networks are usually given by an infrastructure and thus you won't really instantiate Network nor Networks by yourself.
More precisely there exists a specific subclass of Network per infrastructure which will be returned automatically by EnOSlib when needed.
Moreover Network datastructure isn't exposed in EnOSlib at the top level, let's see however how a DefaultNetwork can look like. A DefaultNetwork is a very common abstraction of a network that allows to represent a basic network with optionnally a pool of free ips/macs address. For instance a subnet or a vlan on Grid5000 are represented by a specific DefaultNetwork.
End of explanation
import enoslib as en
# An empty configuration isn't really useful but let you see
# some of the default parameters
# Note that by default the job_type is set to deploy == the env_name will be deployed
conf = en.G5kConf()
conf
# changing the top level options is done by calling the classmethod `from_settings`
en.G5kConf.from_settings(walltime="10:00:00", job_name="my awesome job")
# the canonical way of getting some machines
prod_network = en.G5kNetworkConf(roles=["mynetwork"], site="rennes", type="prod")
conf = (
en.G5kConf()
.add_machine(cluster="paravance", nodes=3, roles=["role1", "role2"], primary_network=prod_network)
.add_machine(cluster="parasilo", nodes=3, roles=["role2", "role3"], primary_network=prod_network)
.add_network_conf(prod_network)
# optional, but do some sanity checks on the configuration
.finalize()
)
conf
# changing to a non-deploy job
# == no deployment will occur, the production environment will be used
prod_network = en.G5kNetworkConf(roles=["mynetwork"], site="rennes", type="prod")
conf = (
en.G5kConf.from_settings(job_type=["allow_classic_ssh"])
.add_machine(cluster="paravance", nodes=3, roles=["role1", "role2"], primary_network=prod_network)
.add_machine(cluster="parasilo", nodes=3, roles=["role2", "role3"], primary_network=prod_network)
.add_network_conf(prod_network)
# optional, but do some sanity checks on the configuration
.finalize()
)
conf
# Using a secondary networks
prod_network = en.G5kNetworkConf(roles=["mynetwork"], site="rennes", type="prod")
kavlan_network = en.G5kNetworkConf(roles=["myprivate"], site="rennes", type="kavlan")
conf = (
en.G5kConf()
.add_machine(cluster="paravance", nodes=3, roles=["role1", "role2"], primary_network=prod_network, secondary_networks=[kavlan_network])
.add_machine(cluster="parasilo", nodes=3, roles=["role2", "role3"], primary_network=prod_network, secondary_networks=[kavlan_network])
.add_network_conf(prod_network)
.add_network_conf(kavlan_network)
# optional, but do some sanity checks on the configuration
.finalize()
)
conf
Explanation: Providers (and their configurations)
EnOSlib uses Providers to ... provide resources.
Providers let the user get ownership of some resources (for the time of the experiment) in good shape (e.g access granted, network configured ...).
They transform an abstract Configuration to Roles, Networks :
$$Configuration \xrightarrow{provider} Roles, Networks$$
There are different providers in EnOSlib:
Vbox/KVM to work with locally hosted virtual machines
Openstack/Chameleon to work with bare-metal resources hosted in the Chameleon platform
FiT/IOT lab to work with sensors or low profile machines
Grid'5000 to get bare-metal resources from G5k.<br/>
There are also some composite providers that sit on top of the Grid'5000 provider
VmonG5k to work with virtual machines on Grid'5000**
Distem to work with lxc containers on Grid'5000**
Configurations
A Provider must be fed with a Configuration. Configuration objects are specific to each provider.
You can build them from a dictionnary (e.g from a yaml/json file) or programmatically. For instance the schema for Grid'5000 is here.
In this section, we'll only build some configurations (No resource will be reserved on Grid'5000)
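To make the Configuration to Roles/Networks arrow concrete, here is a hedged sketch of how such a finalized configuration would typically be handed to the Grid'5000 provider (not executed in this notebook, since no resources are reserved here):
provider = en.G5k(conf)            # wrap the finalized configuration in the G5k provider
roles, networks = provider.init()  # reserve the resources and get Roles and Networks back
# ... run the experiment ...
provider.destroy()                 # release the reservation at the end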
End of explanation |
13,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
tf.reset_default_graph()
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding="same", activation=tf.nn.relu, name='conv1')
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), 2, padding="same", name='maxpool1')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding="same", activation=tf.nn.relu, name='conv2')
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), 2, padding="same", name='maxpool2')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding="same", activation=tf.nn.relu, name='conv3')
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), 2, padding="same", name='encoded')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, [7,7], name='upsample1')
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding="same", activation=tf.nn.relu, name='conv4')
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, [14,14], name='upsample2')
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding="same", activation=tf.nn.relu, name='conv5')
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, [28,28], name='upsample3')
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding="same", activation=tf.nn.relu, name='conv6')
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None, name='conv-logits')
#Now 28x28x1
print('MODEL:')
print(conv1)
print(maxpool1)
print(conv2)
print(maxpool2)
print(conv3)
print(encoded)
print(upsample1)
print(conv4)
print(upsample2)
print(conv5)
print(upsample3)
print(conv6)
print(logits)
print()
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
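For comparison, here is a sketch (not used in the rest of this notebook) of how one upsampling step could instead be written with a transpose convolution; keeping the kernel size equal to the stride, as discussed above, avoids the overlapping-kernel checkerboard artifacts. The layer name is illustrative only.
# Sketch only: the 14x14x8 -> 28x28x16 step done above by upsample3 + conv6 could also be
# written as a single transpose convolution whose stride of 2 doubles the height and width.
deconv_sketch = tf.layers.conv2d_transpose(conv5, 16, (2,2), strides=(2,2), padding='same',
                                           activation=tf.nn.relu, name='deconv_sketch')
# Now 28x28x16, the same shape conv6 produces above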
sess = tf.Session()
epochs = 20
batch_size = 200
show_after = 50
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
if ii % show_after == 0:
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
tf.reset_default_graph()
learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding="same", activation=tf.nn.relu, name='conv1')
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), 2, padding="same", name='maxpool1')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding="same", activation=tf.nn.relu, name='conv2')
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), 2, padding="same", name='maxpool2')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding="same", activation=tf.nn.relu, name='conv3')
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), 2, padding="same", name='encoded')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, [7,7], name='upsample1')
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding="same", activation=tf.nn.relu, name='conv4')
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, [14,14], name='upsample2')
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding="same", activation=tf.nn.relu, name='conv5')
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, [28,28], name='upsample3')
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding="same", activation=tf.nn.relu, name='conv6')
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None, name='conv-logits')
#Now 28x28x1
print('MODEL:')
print(conv1)
print(maxpool1)
print(conv2)
print(maxpool2)
print(conv3)
print(encoded)
print(upsample1)
print(conv4)
print(upsample2)
print(conv5)
print(upsample3)
print(conv6)
print(logits)
print()
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
show_after = 150
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
if ii % show_after == 0:
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
13,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises Electric Machinery Fundamentals
Chapter 2
Problem 2-3
Step1: Description
Consider a simple power system consisting of an ideal voltage source, an ideal step-up transformer, a transmission line, an ideal step-down transformer, and a load.
The voltage of the source is $\vec{V}_S = 480 \angle 0° V$. The impedance of the transmission line is $Z_\text{line} = 3 + j4 \,\Omega$, and the impedance of the load is $Z_\text{load} = 30 + j40\,\Omega$.
Step2: (a)
Assume that the transformers are not present in the circuit.
What is the load voltage and efficiency of the system?
(b)
Assume that transformer 1 is a 1:5 step-up transformer, and transformer 2 is a 5:1 step-down transformer.
Step3: What is the load voltage and efficiency of the system?
(c)
What transformer turns ratio would be required to reduce the transmission line losses to 1% of the total power produced by the generator?
SOLUTION
(a)
The equivalent circuit of this power system is shown below
Step4: The load voltage is
Step5: The power consumed by the load is
Step6: The power consumed by the transmission line is
Step7: The efficiency of the power system is
Step8: (b)
The equivalent circuit of this power system is shown below
Step9: The load impedance referred to primary side of $T_1$ is the same as the actual impedance, since the turns ratios of the step-up and step-down transformers undo each other’s changes.
Step10: The resulting equivalent circuit referred to the primary side of $T_1$ is
Step11: The load voltage is
Step12: The power consumed by the load is
Step13: The power consumed by the transmission line is
Step14: The efficiency of the power system is
Step15: (c)
Since the power in a resistor is given by $P = I^2R$, the total power consumed in the line resistance will be directly proportional to the ratio of the line resistance to the total resistance in the circuit. The load resistance is $30\,\Omega$, and that must be $99\,\%$ of the total resistance in order for the line losses to be $1\,\%$ of the total power.
Therefore, the referred line resistance must be
Step16: Since the referred line resistance is
Step17: and the actual line resistance is
Step18: the turns ratio must be | Python Code:
%pylab notebook
%precision 4
Explanation: Exercises Electric Machinery Fundamentals
Chapter 2
Problem 2-3
End of explanation
VS = 480.0 * exp(0j) # [Ohm] using polar syntax
Zline = 3.0 + 4.0j # [Ohm] using cartesian syntax
Zload = 30.0 + 40.0j # [Ohm] using cartesian syntax
Explanation: Description
Consider a simple power system consisting of an ideal voltage source, an ideal step-up transformer, a transmission line, an ideal step-down transformer, and a load.
The voltage of the source is $\vec{V}_S = 480 \angle 0° V$. The impedance of the transmission line is $Z_\text{line} = 3 + j4 \,\Omega$, and the impedance of the load is $Z_\text{load} = 30 + j40\,\Omega$.
End of explanation
a_b = 1.0/5.0
a_b
Explanation: (a)
Assume that the transformers are not present in the circuit.
What is the load voltage and efficiency of the system?
(b)
Assume that transformer 1 is a 1:5 step-up transformer, and transformer 2 is a 5:1 step-down transformer.
End of explanation
Iload_a = VS / (Zline+Zload)
Iload_a_angle = arctan(Iload_a.imag / Iload_a.real) # angle of Iload_a [rad]
print('Iload_a = {:.3f} A ∠{:.2f}°'.format(
abs(Iload_a), degrees(Iload_a_angle)))
Explanation: What is the load voltage and efficiency of the system?
(c)
What transformer turns ratio would be required to reduce the transmission line losses to 1% of the total power produced by the generator?
SOLUTION
(a)
The equivalent circuit of this power system is shown below:
<img src="figs/Problem_2-03a.jpg" width="70%">
The load current in this system is:
$$\vec{I}_\text{load} = \frac{\vec{V}_S}{Z_\text{line} + Z_\text{load}}$$
End of explanation
Vload_a = Iload_a * Zload
print('Vload_a = {:.1f} V ∠{:.0f}°'.format(
abs(Vload_a), angle(Vload_a, deg=True)))
Explanation: The load voltage is:
$$\vec{V}_\text{load} = \vec{I}_\text{load}Z_\text{load}$$
End of explanation
Rload = Zload.real
Pload_a = abs(Iload_a)**2 * Rload
print('Pload_a = {:.1f} W'.format(Pload_a))
Explanation: The power consumed by the load is:
$$P_\text{load} = I_\text{load}^2R_\text{load}$$
End of explanation
Rline = Zline.real
Pline_a = abs(Iload_a)**2 * Rline
print('Pline_a = {:.1f} W'.format(Pline_a))
Explanation: The power consumed by the transmission line is:
$$P_\text{line} = I_\text{load}^2R_\text{line}$$
End of explanation
eta_a = Pload_a / (Pload_a + Pline_a) * 100 # [%]
print('η_a = {:.1f} %'.format(eta_a))
Explanation: The efficiency of the power system is:
$$\eta = \frac{P_\text{OUT}}{P_\text{IN}} \cdot 100\% = \frac{P_\text{load}}{P_\text{load}+P_\text{line}} \cdot 100\%$$
End of explanation
Z_line_b = a_b**2 * Zline
print('Z_line_b = {:.2f} Ω'.format(Z_line_b))
Explanation: (b)
The equivalent circuit of this power system is shown below:
<img src="figs/Problem_2-03b.jpg" width="70%">
The line impedance referred to primary side of $T_1$ is:
$$Z'_\text{line} = a^2Z_\text{line}$$
End of explanation
Z_load_b = Zload
Explanation: The load impedance referred to primary side of $T_1$ is the same as the actual impedance, since the turns ratios of the step-up and step-down transformers undo each other’s changes.
End of explanation
Iload_b = VS / (Z_line_b+Z_load_b)
Iload_b_angle = arctan(Iload_b.imag/Iload_b.real) # angle of Iload_b [rad]
print('Iload_b = {:.3f} A ∠{:.2f}°'.format(
abs(Iload_b), degrees(Iload_b_angle)))
Explanation: The resulting equivalent circuit referred to the primary side of $T_1$ is:
<img src="figs/Problem_2-03b2.jpg" width="70%">
The load current in this system is:
$$\vec{I}_\text{load} = \frac{\vec{V}_S}{Z'_\text{line}+Z'_\text{load}}$$
End of explanation
Vload_b = Iload_b * Z_load_b
Vload_b_angle = arctan(Vload_b.imag/Vload_b.real) # angle of Vload_b [rad]
print('Vload_b = {:.0f} V ∠{:.1f}°'.format(
abs(Vload_b), degrees(Vload_b_angle)))
Explanation: The load voltage is:
End of explanation
R_load_b = Z_load_b.real
Pload_b = abs(Iload_b)**2 * R_load_b
print('Pload_b = {:.1f} W'.format(Pload_b))
Explanation: The power consumed by the load is:
End of explanation
R_line_b = Z_line_b.real
Pline_b = abs(Iload_b)**2 * R_line_b
print('Pline_b = {:.1f} W'.format(Pline_b))
Explanation: The power consumed by the transmission line is:
End of explanation
eta_b = Pload_b / (Pload_b+Pline_b) * 100 # [%]
print('η_b = {:.1f} %'.format(eta_b))
Explanation: The efficiency of the power system is:
End of explanation
Rload = Zload.real
R_line_c = 0.01/0.99 * Rload
print('R_line_c = {:.3f} Ω'.format(R_line_c))
Explanation: (c)
Since the power in a resistor is given by $P = I^2R$, the total power consumed in the line resistance will be directly proportional to the ratio of the line resistance to the total resistance in the circuit. The load resistance is $30\,\Omega$, and that must be $99\,\%$ of the total resistance in order for the line losses to be $1\,\%$ of the total power.
Therefore, the referred line resistance must be:
$$\frac{R'_\text{line}}{R_\text{load}} = \frac{0.01}{0.99}$$
End of explanation
R_line_c
Explanation: Since the referred line resistance is
End of explanation
Rload
Explanation: and the actual line resistance is
End of explanation
Rline = Zline.real
a_c = sqrt(R_line_c/ Rline)
print('a = {:.3f}'.format(a_c))
Explanation: the turns ratio must be:
$$a^2 = \frac{R'_\text{line}}{R_\text{line}}$$
End of explanation |
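As a quick sanity check on part (c), purely illustrative and reusing the variables defined above, the line losses with this turns ratio should come out at about $1\,\%$ of the generated power:
# Sketch: recompute the referred circuit with the turns ratio a_c and check the loss fraction.
Z_line_c = a_c**2 * Zline                  # line impedance referred through the transformers
Iload_c = VS / (Z_line_c + Zload)          # load current with the new turns ratio
Pload_c = abs(Iload_c)**2 * Zload.real
Pline_c = abs(Iload_c)**2 * Z_line_c.real
print('line losses = {:.2f} % of generated power'.format(100 * Pline_c / (Pline_c + Pload_c)))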
13,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Additional forces
REBOUND is a gravitational N-body integrator. But you can also use it to integrate systems with additional, non-gravitational forces.
This tutorial gives you a very quick overview of how that works. Implementing additional forces in python as below will typically be a factor of a few slower than a C implementation. For a library that has C implementations for several commonly used additional effects (with everything callable from Python), see REBOUNDx.
Stark problem
We'll start by adding two particles, the Sun and an Earth-like planet, to REBOUND.
Step1: We could integrate this system and the planet would go around the star at a fixed orbit with $a=1$ forever. Let's add an additional constant force that acts on the planet and points in one direction, $F_x = m\cdot c$, where $m$ is the planet's mass and $c$ a constant. This is called the Stark problem. In Python we can describe this with the following function
Step2: Next, we need to tell REBOUND about this function.
Step3: Now we can just integrate as usual. Let's keep track of the eccentricity as we integrate as it will change due to the additional force.
Step4: And let's plot the result.
Step5: You can see that the eccentricity is oscillating between 0 and almost 1.
Note that the function starkForce(reb_sim) above receives the argument reb_sim when it is called. This is a pointer to the simulation structure. Instead of using the global ps variable to access particle data, one could also use reb_sim.contents.particles. This could be useful when one is running multiple simulations in parallel or when the particles get added and removed (in those cases particles might change). The contents attribute has the same meaning as -> in C, i.e. follow the memory address. To find out more about pointers, check out the ctypes documentation.
Non-conservative forces
The previous example assumed a conservative force, i.e. we could describe it as a potential as it is velocity independent. Now, let's assume we have a velocity dependent force. This could be a migration force in a protoplanetary disk or PR drag. We'll start from scratch and add the same two particles as before.
Step6: But we change the additional force to be
Step7: We need to let REBOUND know that our force is velocity dependent. Otherwise, REBOUND will not update the velocities of the particles.
Step8: Now, we integrate as before. But this time we keep track of the semi-major axis instead of the eccentricity. | Python Code:
import rebound
sim = rebound.Simulation()
sim.integrator = "whfast"
sim.add(m=1.)
sim.add(m=1e-6,a=1.)
sim.move_to_com() # Moves to the center of momentum frame
Explanation: Additional forces
REBOUND is a gravitational N-body integrator. But you can also use it to integrate systems with additional, non-gravitational forces.
This tutorial gives you a very quick overview of how that works. Implementing additional forces in python as below will typically be a factor of a few slower than a C implementation. For a library that has C implementations for several commonly used additional effects (with everything callable from Python), see REBOUNDx.
Stark problem
We'll start by adding two particles, the Sun and an Earth-like planet, to REBOUND.
End of explanation
ps = sim.particles
c = 0.01
def starkForce(reb_sim):
ps[1].ax += c
Explanation: We could integrate this system and the planet would go around the star at a fixed orbit with $a=1$ forever. Let's add an additional constant force that acts on the planet and points in one direction, $F_x = m\cdot c$, where $m$ is the planet's mass and $c$ a constant. This is called the Stark problem. In Python we can describe this with the following function
End of explanation
sim.additional_forces = starkForce
Explanation: Next, we need to tell REBOUND about this function.
End of explanation
import numpy as np
Nout = 1000
es = np.zeros(Nout)
times = np.linspace(0.,100.*2.*np.pi,Nout)
for i, time in enumerate(times):
sim.integrate(time, exact_finish_time=0) # integrate to the nearest timestep so WHFast's timestep stays constant
es[i] = sim.particles[1].e
Explanation: Now we can just integrate as usual. Let's keep track of the eccentricity as we integrate as it will change due to the additional force.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
plt.plot(times, es);
Explanation: And let's plot the result.
End of explanation
sim = rebound.Simulation()
sim.integrator = "ias15"
sim.add(m=1.)
sim.add(m=1e-6,a=1.)
sim.move_to_com() # Moves to the center of momentum frame
Explanation: You can see that the eccentricity is oscillating between 0 and almost 1.
Note that the function starkForce(reb_sim) above receives the argument reb_sim when it is called. This is a pointer to the simulation structure. Instead of using the global ps variable to access particle data, one could also use reb_sim.contents.particles. This could be useful when one is running multiple simulations in parallel or when the particles get added and removed (in those cases particles might change). The contents attribute has the same meaning as -> in C, i.e. follow the memory address. To find out more about pointers, check out the ctypes documentation.
Non-conservative forces
The previous example assumed a conservative force, i.e. we could describe it as a potential as it is velocity independent. Now, let's assume we have a velocity dependent force. This could be a migration force in a protoplanetary disk or PR drag. We'll start from scratch and add the same two particles as before.
End of explanation
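To make the pointer remark above concrete, here is a sketch (not used below) of the same Stark force written without the global ps, reading the particles through the pointer that REBOUND passes to the callback:
# Sketch: equivalent additional force that follows reb_sim.contents instead of the global ps.
def starkForcePointer(reb_sim):
    particles = reb_sim.contents.particles
    particles[1].ax += c   # reuses the constant c defined earlier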
ps = sim.particles
tau = 1000.
def migrationForce(reb_sim):
ps[1].ax -= ps[1].vx/tau
ps[1].ay -= ps[1].vy/tau
ps[1].az -= ps[1].vz/tau
Explanation: But we change the additional force to be
End of explanation
sim.additional_forces = migrationForce
sim.force_is_velocity_dependent = 1
Explanation: We need to let REBOUND know that our force is velocity dependent. Otherwise, REBOUND will not update the velocities of the particles.
End of explanation
Nout = 1000
a_s = np.zeros(Nout)
times = np.linspace(0.,100.*2.*np.pi,Nout)
for i, time in enumerate(times):
sim.integrate(time)
a_s[i] = sim.particles[1].a
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.set_xlabel("time")
ax.set_ylabel("semi-major axis")
plt.plot(times, a_s);
Explanation: Now, we integrate as before. But this time we keep track of the semi-major axis instead of the eccentricity.
End of explanation |
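A hedged way to read this plot quantitatively, as a small post-processing sketch, is to fit the exponential decay of the semi-major axis and compare the fitted e-folding time with the force timescale tau:
# Sketch: estimate the e-folding time of the semi-major axis decay from the arrays above.
slope, intercept = np.polyfit(times, np.log(a_s), 1)
print("fitted decay timescale: {:.1f} (force timescale tau = {:.1f})".format(-1.0/slope, tau))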
13,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Set the fn variable to the filename of either the training or test dataset
Step1: After running the cell below, you can move the slider to visualize the various instances of the dataset, and change the factor slider to increase the sharpness of the image | Python Code:
#training data
#fn = 'data/ocr/optdigits.tra'
#testing data
fn = 'data/ocr/optdigits.tes'
header="x11,x12,x13,x14,x15,x16,x17,x18,x21,x22,x23,x24,x25,x26,x27,x28,x31,x32,x33,x34,x35,x36,x37,x38,x41,x42,x43,x44,x45,x46,x47,x48,x51,x52,x53,x54,x55,x56,x57,x58,x61,x62,x63,x64,x65,x66,x67,x68,x71,x72,x73,x74,x75,x76,x77,x78,x81,x82,x83,x84,x85,x86,x87,x88,digit".split(",")
df = pd.read_csv(fn, header=None)
df.columns = header
df.head()
y = df.digit.copy().values
X = df.drop("digit", axis=1).values
X.shape, y.shape
X = X.reshape((-1, 8,8))
X.shape
Explanation: Set the fn variable to the filename of either the training or test dataset
End of explanation
@interact(X=fixed(X), y=fixed(y), idx=(0,X.shape[0]), factor=(1,50))
def show_item(X, y, idx=0, factor=5):
x = X[idx]
print("Instance %s:\t[%s]" % (
idx+1, ", ".join("'%s'" % str(k) for k in
list(x.flatten()) + [y[idx]])))
x = (((x-16)/16.0)*255).astype("int")
x = blowUp(x, factor)
fig, ax = plt.subplots(figsize=(5,5))
ax.imshow(x, cmap="Greys")
ax.set_title("Instance=%s, Digit=%s" % (idx+1, y[idx]))
plt.axis('off')
Explanation: After running the cell below, you can move the slider to visualize the various instances of the dataset, and change the factor slider to increase the sharpness of the image
End of explanation |
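Note that this snippet relies on earlier cells that are not shown here: the pandas/matplotlib/ipywidgets imports and a blowUp helper. A minimal assumed version of those prerequisites could look like the following sketch, where blowUp simply repeats each pixel factor times in both directions:
# Assumed prerequisites for the cells above (a sketch, not part of the original notebook).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from ipywidgets import interact, fixed

def blowUp(img, factor):
    # Repeat every pixel `factor` times along both axes to get a larger, blockier image.
    return np.kron(img, np.ones((factor, factor), dtype=img.dtype))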
13,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
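As an illustration only, a completed ENUM cell such as the one above might set values drawn from the listed valid choices; whether one DOC.set_value call per selected choice is the right convention for a 1.N property should be checked against the notebook help page. The selection below is hypothetical and left commented out.
# Hypothetical example of completing this ENUM property; uncomment and adjust as needed.
# DOC.set_value("troposhere")
# DOC.set_value("stratosphere")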
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
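As a purely hypothetical illustration (the actual selections are model-specific and must be entered by the document author), an ENUM property with cardinality 1.N such as the one above might be completed by calling DOC.set_value once per selected choice, for example:
# Hypothetical illustration only - replace with the processes actually included in your model.
# For ENUM properties with cardinality 1.N, one DOC.set_value call is assumed per selected choice.
DOC.set_value("Dry deposition")
DOC.set_value("Sedimentation")
DOC.set_value("Coagulation")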
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
13,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimating Sentiment Orientation with SKLearn
Jason Brietstone [email protected] & Amar Patel [email protected]
Natural language processing is a booming field in the finance industry because of the massive amounts of user generated data that has recently become available for analysis. Websites such as Twitter, StockTwits and Facebook, if analyzed correctly, can yield very relevant insights for predictions. Natural language processing can be a very daunting task because of how many layers of analysis you can add and the immense size of the English language. NLTK is a Python module and data repository, which serves as a tool for easy language processing. This paper proposes a simple method of deriving either a positive or negative sentiment orientation and highlights a few of the useful tools NLTK offers.
Firstly we need to download the NLTK repository, which is achieved by running nltk.download(). You can choose to download all text examples or download just the movie reviews which we use in this example into the corpus folder.
Step1: WordNet is an NLTK subpackage that can be used to link together words with their respective synonyms, antonyms, parts of speech and definitions. This is a very powerful tool because creating a similar wordnet from scratch would require a significant amount of databasing and organization.
Step2: Here we are going to verify that we are using the sense of the word we think we are using by pulling its definition
Step3: We can also test the word by using the .examples() method, which will yield example sentences for the word in question
Step4: For each word we can create a comprehensive list of all synonyms and antonyms by creating a for loop.
Step5: Wu and Palmer System
In the English language, there are multiple ways of expressing the same idea. Very often, people think that by using a synonym, the meaning of the sentence is unchanged. Under many circumstances this is true, however to the computer a slight word change can make a big difference in the returned list of lemmas and antonyms. One method we can use to determine the similarity between two words, to make sure any word substitutions we make don't alter the meaning, is the Wu and Palmer system of determining semantic similarity, called via the wup_similarity method.
Step6: TurnItIn as a use case
Many students try to buy essays online. The services that sell those papers often use a form of natural language processing to replace words with synonyms. The above method could determine if that has been occurring by gauging the similarity to other papers.
Sentiment Analysis through Tokenization
Now we are going to begin to develop our module for determining sentiment. We are going to achieve this by tokenizing the movie reviews in our set. Tokenizing is the process of splitting blocks of text into lists of individual words. By doing this we can determine if the occurrence of a particular word can be used as an indicator of positive or negative sentiment. When doing this, we hope to see results that do NOT rely on any non-substantial words such as conjunctions (i.e. 'the', 'an', 'or', 'and').
This method does not have to be used only for determining sentiment. Other potential use cases could include determining the subject of a block of text, or determining the origin of the author from indicators such as local slang. There are 2 main benefits to using this method
Step7: Above we created a list all_words of all the used words, then turned that into a frequency distribution and printed the most common words, then we printed how many times the word 'stupid' occurred in all_words. Notice how the most common entries are all conjunctions, common prepositions, and words like "the" and "that", as well as a hyphen.
Below, we are going to create our training set and testing set. First we pull out the keys from our all_words frequency distribution. Since all_words maps each word to its frequency, the keys give us every word. Our feature set is the raw data modified to show our distinguishing features, i.e. which of those words appear in each review we use for training and testing.
Step8: First, we are going to test using a simple NaiveBayesClassifier provided by NLTK. We will have it return the prediction accuracy and the most informative features. We hope to see two things that will demonstrate the efficiency of this
Step9: As you can see above, the Naive Bayes classifier is not a very accurate algorithm. To increase accuracy, we are going to use as many different classifier methods as possible to test the data on. From there we are going to define a class that creates a voting system where each classifier votes. The outcome is the majority of the votes; for example, if 5/8 classifiers say positive, we will vote positive. We are also going to print the classification and the confidence of the ensemble.
Step10: Above we printed the accuracy for all the classifiers and their respective performance. Despite their individual accuracies, it is important to note that they were generally in agreement. Below, we are printing the confidence, which is calculated from the fraction of classifiers in agreement. This smooths over cases where individual classifiers may have been lacking in their ability to predict a certain scenario. However, simple majority voting can also override an individual classifier that may have been more accurate than the majority.
Accuracy optimization depends on the data set in question. To optimize, you must try running all individual classifiers, and then selectively remove those that fail to meet sufficient accuracy. For further investigation, one could use a data set that is structured so that positive and negative reviews are grouped, in order to test the accuracy for just positive or just negative reviews. You may find that some classifiers are biased towards one side and can be removed for better accuracy. Regardless of any investigation you make, you will also likely be able to increase the accuracy and applicability of the algorithm by using a larger training set. Remember that, in practice, using a larger training set will increase the time and processing power necessary for completion, which is relevant if the speed of execution is important.
Step11: Now, we will find out how confident each of these tests were
Step12: Now, we can figure out what is the distribution of confidence, what the average and standard deviation are, to get an idea of its true accuracy
Step13: This data is interesting.
First, the x labels show four possible options
That is because the voting process decides on a simple majority, which in this case, is best out of seven
57.14% represents 4/7, 71.43% is 5/7, 85.71% is 6/7 and 100.0% is 7/7
Step14: This shows us that all 7 classifiers agree only a small fraction of the time, while the largest share of the time only 5/7 agree. This points to the limitations of the classifiers we use in this example
Step15: Factor models are a way of explaining the results via a linear combination of its inherent alpha as well as exposure to other indicators. The general form of a factor model is
$$Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n$$ | Python Code:
import nltk
#nltk.download()
Explanation: Estimating Sentiment Orientation with SKLearn
Jason Brietstone [email protected] & Amar Patel [email protected]
Natural language processing is a booming field in the finance industry because of the massive amounts of user generated data that has recently become available for analysis. Websites such as Twitter, StockTwits and Facebook, if analyzed correctly, can yield very relevant insights for predictions. Natural language processing can be a very daunting task because of how many layers of analysis you can add and the immense size of the English language. NLTK is a Python module and data repository, which serves as a tool for easy language processing. This paper proposes a simple method of deriving either a positive or negative sentiment orientation and highlights a few of the useful tools NLTK offers.
Firstly we need to download the NLTK repository, which is achieved by running nltk.download(). You can choose to download all text examples or download just the movie reviews which we use in this example into the corpus folder.
End of explanation
from nltk.corpus import wordnet
# example word: 'good'; you may change this to any word that has synonyms and antonyms
test_word = 'good'
syns = wordnet.synsets(test_word)
# Lemma is another word for something similar to a synonym
print(syns[1].lemmas()[1].name())
Explanation: WordNet is an NLTK subpackage that can be used to link together words with their respective synonyms, antonyms, parts of speech and definitions. This is a very powerful tool because creating a similar wordnet from scratch would require a significant amount of databasing and organization.
End of explanation
print(syns[0].definition())
Explanation: Here we are going to verify that we are using the sense of the word we think we are using by pulling its definition
End of explanation
#examples
print(syns[0].examples())
Explanation: We can also test the word by using the .examples() method, which will yield example sentences for the word in question
End of explanation
synonyms =[]
antonyms=[]
for syn in wordnet.synsets(test_word):
for l in syn.lemmas():
#gather all lemmas of each synonym in the synonym set
synonyms.append(l.name())
if l.antonyms():
#gather all antonyms of each lemma of each synonym in the synonym set
antonyms.append(l.antonyms()[0].name())
print(set(synonyms))
print(set(antonyms))
Explanation: For each word we can create a comprehensive list of all synonyms and antonyms by creating a for loop.
End of explanation
# yields % similarity of the words
w1 = wordnet.synset('ship.n.01') # 'ship.n.01' references the 1st noun sense of 'ship'
w2 = wordnet.synset('boat.n.01')# boat
print('word similarity is',w1.wup_similarity(w2)*100,'%')
w1 = wordnet.synset('ship.n.01') # ship , the noun ship, 1st entry of similar word
w2 = wordnet.synset('car.n.01')# car
print('word similarity is',w1.wup_similarity(w2)*100,'%')
w1 = wordnet.synset('ship.n.01') # ship , the noun ship, 1st entry of similar word
w2 = wordnet.synset('cat.n.01')# cat
print('word similarity is',w1.wup_similarity(w2)*100,'%')
Explanation: Wu and Palmer System
In the English language, there are multiple ways of expressing the same idea. Very often, people think that by using a synonym, the meaning of the sentence is unchanged. Under many circumstances this is true, however to the computer a slight word change can make a big difference in the returned list of lemmas and antonyms. One method we can use to determine the similarity between two words, to make sure any word substitutions we make don't alter the meaning, is the Wu and Palmer system of determining semantic similarity, called via the wup_similarity method.
End of explanation
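As an optional illustration, WordNet also provides other measures such as path_similarity, which can be compared with Wu-Palmer on the same synsets (the exact numbers below are not part of the original discussion):
# Optional comparison of Wu-Palmer with the simpler path-based similarity.
ship = wordnet.synset('ship.n.01')
boat = wordnet.synset('boat.n.01')
cat = wordnet.synset('cat.n.01')
print('wup ship/boat:', ship.wup_similarity(boat), 'path ship/boat:', ship.path_similarity(boat))
print('wup ship/cat:', ship.wup_similarity(cat), 'path ship/cat:', ship.path_similarity(cat))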
import random
from nltk.corpus import movie_reviews # 2000 movie reviews labeled positive or negative (1000 of each)
documents = [(list(movie_reviews.words(fileid)), category) # list of tuples for features
for category in movie_reviews.categories()
for fileid in movie_reviews.fileids(category)]
#documents[1] # prints the first movie review in tokenized format
random.shuffle(documents)# removes bias by not training and testing on the same set
all_words = []
for w in movie_reviews.words():
all_words.append(w.lower())
all_words = nltk.FreqDist(all_words)# makes a frequency distribution
print(all_words.most_common(10)) # prints(10 most common words , frequency)
print("stupid occured ",all_words['stupid'],'times') # prints frequency of stupid
Explanation: TurnItIn as a use case
Many students try to buy essays online. The services that sell those papers often use a form of natural language processing to replace words with synonyms. The above method could determine if that has been occurring by gauging the similarity to other papers.
Sentiment Analysis through Tokenization
Now we are going to begin to develop our module for determining sentiment. We are going to achieve this by tokenizing the movie reviews in our set. Tokenizing is the process of splitting blocks of text into lists of individual words. By doing this we can determine if the occurrence of a particular word can be used as an indicator of positive or negative sentiment. When doing this, we hope to see results that do NOT rely on any non-substantial words such as conjunctions (i.e. 'the', 'an', 'or', 'and').
This method does not have to be used only for determining sentiment. Other potential use cases could include determining the subject of a block of text, or determining the origin of the author from indicators such as local slang. There are 2 main benefits to using this method:
1. If we were to create our own list of positive or negative indicators to test against, we may risk missing out on words that could be impactful
2. We remove a significant amount of statistical bias by not assuming a word's impact but by judging based on what has already occurred
End of explanation
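For a quick, self-contained illustration of tokenization itself (the movie reviews above already come pre-tokenized by movie_reviews.words()), NLTK's word_tokenize can be applied to an arbitrary sentence, for example:
# Small illustration of tokenizing raw text; assumes the 'punkt' tokenizer data has been downloaded.
from nltk.tokenize import word_tokenize
sample = "The plot was dull, but the acting was surprisingly good!"
print(word_tokenize(sample.lower()))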
word_features = list(all_words.keys())[:3000]
def find_features(document):
words = set(document) # converting the list to a set keeps each unique word, so we only check for presence
features ={}
for w in word_features:
features[w] = (w in words)
return features
featuresets = [(find_features(rev),category) for (rev, category) in documents]
'''
Here we create a training and testing set by arbitrarily splitting up
the feature sets built from the 3000 words in word_features
'''
training_set =featuresets[1900:]
testing_set = featuresets[:1900]
featuresets[0][1]
# each entry is a (feature dictionary, 'pos'/'neg') tuple, so index [1] gives the label
Explanation: Above we created a list all_words of all the used words, then turned that into a frequency distribution and printed the most common words, then we printed how many times the word 'stupid' occurred in all_words. Notice how the most common entries are all conjunctions, common prepositions, and words like "the" and "that", as well as a hyphen.
Below, we are going to create our training set and testing set. First we pull out the keys from our all_words frequency distribution. Since all_words maps each word to its frequency, the keys give us every word. Our feature set is the raw data modified to show our distinguishing features, i.e. which of those words appear in each review we use for training and testing.
End of explanation
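To see what a single feature set looks like before training, we can run find_features on a short, made-up word list (illustrative only; the real inputs are full tokenized reviews):
# Illustration: the feature dict maps each of the 3000 candidate words to True/False.
demo_features = find_features(['this', 'movie', 'was', 'stupid'])
print(sum(demo_features.values()), 'of', len(demo_features), 'feature words present')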
classifier =nltk.NaiveBayesClassifier.train(training_set)
print("original naieve bayes algo accuracy:", nltk.classify.accuracy(classifier,testing_set)*100)
classifier.show_most_informative_features(15)
Explanation: First, we are going to test using a simple NaiveBayesClassifier provided by NLTK. We will have it return the prediction accuracy and the most informative features. We hope to see two things that will demonstrate the efficiency of this:
1. No conjunctions (i.e. and, the, etc.) occur in the most informative feature set
2. High algo accuracy
End of explanation
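As an optional follow-up (not required for the rest of the notebook), the trained classifier can be saved with pickle so it does not have to be retrained every run; the file name below is arbitrary:
# Optional: persist the trained Naive Bayes classifier to disk and reload it later.
import pickle
with open("naivebayes.pickle", "wb") as save_classifier:
    pickle.dump(classifier, save_classifier)
with open("naivebayes.pickle", "rb") as classifier_f:
    classifier = pickle.load(classifier_f)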
from sklearn.naive_bayes import MultinomialNB, GaussianNB , BernoulliNB
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.linear_model import LogisticRegression,SGDClassifier
from sklearn.svm import SVC, LinearSVC, NuSVC
from nltk.classify import ClassifierI
from statistics import mode
class VoteClassifier(ClassifierI):
def __init__(self,*classifiers):
self._classifiers = classifiers
def classify(self, features):
votes = []
for c in self._classifiers:
v = c.classify(features)
votes.append(v)
return mode(votes)
def confidence(self, features):
votes = []
for c in self._classifiers:
v = c.classify(features)
votes.append(v)
choice_votes = votes.count(mode(votes))
conf = choice_votes / len(votes)
return conf
MNB_classifier = SklearnClassifier(MultinomialNB())
MNB_classifier.train(training_set)
print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100)
BernoulliNB_classifier = SklearnClassifier(BernoulliNB())
BernoulliNB_classifier.train(training_set)
print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100)
LogisticRegression_classifier = SklearnClassifier(LogisticRegression())
LogisticRegression_classifier.train(training_set)
print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100)
SGDClassifier_classifier = SklearnClassifier(SGDClassifier())
SGDClassifier_classifier.train(training_set)
print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100)
#Another possible test, but is known to be largely inaccurate
#SVC_classifier = SklearnClassifier(SVC())
#SVC_classifier.train(training_set)
#print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100)
LinearSVC_classifier = SklearnClassifier(LinearSVC())
LinearSVC_classifier.train(training_set)
print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100)
NuSVC_classifier = SklearnClassifier(NuSVC())
NuSVC_classifier.train(training_set)
print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100)
Explanation: As you can see above, the Naive Bayes classifier is not a very accurate algorithm. To increase accuracy, we are going to use as many different classifier methods as possible to test the data on. From there we are going to define a class that creates a voting system where each classifier votes. The outcome is the majority of the votes; for example, if 5/8 classifiers say positive, we will vote positive. We are also going to print the classification and the confidence of the ensemble.
End of explanation
voted_classifier = VoteClassifier(classifier,
NuSVC_classifier,
LinearSVC_classifier,
SGDClassifier_classifier,
MNB_classifier,
BernoulliNB_classifier,
LogisticRegression_classifier)
print("voted_classifier accuracy percent:", (nltk.classify.accuracy(voted_classifier, testing_set))*100)
#Example Classifications of the first two tests
print("Classification 0:", voted_classifier.classify(testing_set[0][0]), "Confidence %:",voted_classifier.confidence(testing_set[0][0])*100)
print("Classification 1:", voted_classifier.classify(testing_set[1][0]), "Confidence %:",voted_classifier.confidence(testing_set[1][0])*100)
Explanation: Above we printed the accuracy for all the classifiers and their respective performance. Despite their individual accuracies, it is important to note that they were generally in agreement. Below, we are printing the confidence, which is calculated from the fraction of classifiers in agreement. This smooths over cases where individual classifiers may have been lacking in their ability to predict a certain scenario. However, simple majority voting can also override an individual classifier that may have been more accurate than the majority.
Accuracy optimization depends on the data set in question. To optimize, you must try running all individual classifiers, and then selectively remove those that fail to meet sufficient accuracy. For further investigation, one could use a data set that is structured so that positive and negative reviews are grouped, in order to test the accuracy for just positive or just negative reviews. You may find that some classifiers are biased towards one side and can be removed for better accuracy. Regardless of any investigation you make, you will also likely be able to increase the accuracy and applicability of the algorithm by using a larger training set. Remember that, in practice, using a larger training set will increase the time and processing power necessary for completion, which is relevant if the speed of execution is important.
End of explanation
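As a usage sketch (any example sentence will do), the same find_features helper lets the voting ensemble score brand-new text:
# Sketch: classify an arbitrary piece of text with the voting ensemble.
example_review = "The plot was predictable and the acting was stupid."
example_feats = find_features(example_review.lower().split())
print(voted_classifier.classify(example_feats),
      voted_classifier.confidence(example_feats))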
def make_confidence(number_of_tests):
confidence = []
for x in range (0,number_of_tests):
confidence.append(voted_classifier.confidence(testing_set[x][0])*100)
return confidence
import matplotlib.pyplot as plt
import pandas as pd
y = make_confidence(1000) # use the first 1000 test examples
x = range(0, len(y))
#create a dictionary to sort data
count_data = dict((i, y.count(i)) for i in y)
Explanation: Now, we will find out how confident each of these tests were:
End of explanation
%matplotlib inline
from decimal import Decimal
import collections
#Sort the Dictionary
od = collections.OrderedDict(sorted(count_data.items()))
plt.style.use('fivethirtyeight')
plt.bar(range(len(od)), od.values(), align='center')
plt.xticks(range(len(od)), [float(Decimal("%.2f" % key)) for key in od.keys()])
plt.show()
Explanation: Now, we can figure out what is the distribution of confidence, what the average and standard deviation are, to get an idea of its true accuracy
End of explanation
labels = ['4/7','5/7','6/7','7/7']
plt.pie(list(count_data.values()), labels = labels, autopct='%1.1f%%')
plt.show()
Explanation: This data is interesting.
First, the x labels show four possible options
That is because the voting process decides on a simple majority, which in this case, is best out of seven
57.14% represents 4/7, 71.43% is 5/7, 85.71% is 6/7 and 100.0% is 7/7
End of explanation
import numpy as np
mean = np.mean(y, dtype=np.float64)
print("The mean confidence level is: ", mean)
stdev = np.std(y, dtype = np.float64)
print("The standard deviation of the confidence is: ", stdev)
#Linear Regression
import statsmodels.api as sm
from statsmodels import regression
Explanation: This shows us that all 7 classifiers agree only a small fraction of the time, while the largest share of the time only 5/7 agree. This points to the limitations of the classifiers we use in this example
End of explanation
# Let's define everything in familiar regression terms
X = x
Y = y
def linreg(x,y):
# We add a constant so that we can also fit an intercept (alpha) to the model
x = sm.add_constant(x)
model = regression.linear_model.OLS(y,x).fit()
# Remove the constant now that we're done
x = x[:, 1]
#print(model.params)
return model.params[0], model.params[1]
#alpha and beta
alpha, beta = linreg(X,Y)
print ('alpha: ' + str(alpha))
print ('beta: ' + str(beta))
X2 = np.linspace(X.start, X.stop)
Y_hat = X2 * beta + alpha
plt.scatter(X, Y, alpha=0.25) # Plot the raw data
plt.xlabel("Number of Tests", fontsize = "16")
plt.ylabel("Confidence", fontsize = "16")
plt.axis('tight')
# Add the regression line, colored in red
plt.plot(X2, Y_hat, 'r', alpha=0.9);
Explanation: Factor models are a way of explaining the results via a linear combination of its inherent alpha as well as exposure to other indicators. The general form of a factor model is
$$Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n$$
End of explanation |
13,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning with TensorFlow
Credits
Step2: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
Step3: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step4: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step5: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step6: Problem 3
Convince yourself that the data is still good after shuffling!
Problem 4
Another check
Step7: Finally, let's save the data for later reuse | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
import matplotlib.pyplot as plt
import numpy as np
import os
import tarfile
import urllib
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
import cPickle as pickle
Explanation: Deep Learning with TensorFlow
Credits: Forked from TensorFlow by Google
Setup
Refer to the setup instructions.
Exercise 1
The objective of this exercise is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes):
Download a file if not present, and make sure it's the right size.
if not os.path.exists(filename):
filename, _ = urllib.urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print 'Found and verified', filename
else:
raise Exception(
'Failed to verify' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
def extract(filename):
tar = tarfile.open(filename)
tar.extractall()
tar.close()
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
data_folders = [os.path.join(root, d) for d in sorted(os.listdir(root))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
        num_classes, len(data_folders)))
print data_folders
return data_folders
train_folders = extract(train_filename)
test_folders = extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load(data_folders, min_num_images, max_num_images):
dataset = np.ndarray(
shape=(max_num_images, image_size, image_size), dtype=np.float32)
labels = np.ndarray(shape=(max_num_images), dtype=np.int32)
label_index = 0
image_index = 0
for folder in data_folders:
print folder
for image in os.listdir(folder):
if image_index >= max_num_images:
raise Exception('More images than expected: %d >= %d' % (
          image_index, max_num_images))
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
labels[image_index] = label_index
image_index += 1
except IOError as e:
print 'Could not read:', image_file, ':', e, '- it\'s ok, skipping.'
label_index += 1
num_images = image_index
dataset = dataset[0:num_images, :, :]
labels = labels[0:num_images]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' % (
num_images, min_num_images))
print 'Full dataset tensor:', dataset.shape
print 'Mean:', np.mean(dataset)
print 'Standard deviation:', np.std(dataset)
print 'Labels:', labels.shape
return dataset, labels
train_dataset, train_labels = load(train_folders, 450000, 550000)
test_dataset, test_labels = load(test_folders, 18000, 20000)
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
Now let's load the data in a more manageable format.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. The labels will be stored into a separate array of integers 0 through 9.
A few images might not be readable, we'll just skip them.
End of explanation
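One possible (unofficial) way to answer Problem 1 is to peek at a couple of the extracted image files directly; this sketch assumes the folders created by the extraction step above still exist on disk:
# Sketch for Problem 1: show a couple of raw images via IPython.display.
import random as rnd
for folder in train_folders[:2]:
    sample_image = rnd.choice(os.listdir(folder))
    display(Image(filename=os.path.join(folder, sample_image)))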
np.random.seed(133)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
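A minimal sketch for Problem 2 (one of several reasonable approaches) is to plot a few rows of the ndarray together with their labels using matplotlib:
# Sketch for Problem 2: visualise a few samples and their labels after loading and shuffling.
fig, axes = plt.subplots(1, 5)
for ax, idx in zip(axes, np.random.randint(0, train_labels.shape[0], 5)):
    ax.imshow(train_dataset[idx], cmap='gray')
    ax.set_title(chr(ord('A') + int(train_labels[idx])))
    ax.axis('off')
plt.show()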
train_size = 200000
valid_size = 10000
valid_dataset = train_dataset[:valid_size,:,:]
valid_labels = train_labels[:valid_size]
train_dataset = train_dataset[valid_size:valid_size+train_size,:,:]
train_labels = train_labels[valid_size:valid_size+train_size]
print 'Training', train_dataset.shape, train_labels.shape
print 'Validation', valid_dataset.shape, valid_labels.shape
Explanation: Problem 3
Convince yourself that the data is still good after shuffling!
Problem 4
Another check: we expect the data to be balanced across classes. Verify that.
Prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed.
Also create a validation dataset for hyperparameter tuning.
End of explanation
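For Problem 4, one quick check of class balance (again, just a sketch) is to count the labels in each split with numpy's bincount; the ten counts should be roughly equal:
# Sketch for Problem 4: label counts per class in each split.
print 'Training label counts:  ', np.bincount(train_labels)
print 'Validation label counts:', np.bincount(valid_labels)
print 'Test label counts:      ', np.bincount(test_labels)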
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print 'Unable to save data to', pickle_file, ':', e
raise
statinfo = os.stat(pickle_file)
print 'Compressed pickle size:', statinfo.st_size
Explanation: Finally, let's save the data for later reuse:
End of explanation |
13,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial #16
Reinforcement Learning (Q-Learning)
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial is about so-called Reinforcement Learning in which an agent is learning how to navigate some environment, in this case Atari games from the 1970-80's. The agent does not know anything about the game and must learn how to play it from trial and error. The only information that is available to the agent is the screen output of the game, and whether the previous action resulted in a reward or penalty.
This is a very difficult problem in Machine Learning / Artificial Intelligence, because the agent must both learn to distinguish features in the game-images, and then connect the occurence of certain features in the game-images with its own actions and a reward or penalty that may be deferred many steps into the future.
This problem was first solved by the researchers from Google DeepMind. This tutorial is based on the main ideas from their early research papers (especially this and this), although we make several changes because the original DeepMind algorithm was awkward and over-complicated in some ways. But it turns out that you still need several tricks in order to stabilize the training of the agent, so the implementation in this tutorial is unfortunately also somewhat complicated.
The basic idea is to have the agent estimate so-called Q-values whenever it sees an image from the game-environment. The Q-values tell the agent which action is most likely to lead to the highest cumulative reward in the future. The problem is then reduced to finding these Q-values and storing them for later retrieval using a function approximator.
This builds on some of the previous tutorials. You should be familiar with TensorFlow and Convolutional Neural Networks from Tutorial #01 and #02. It will also be helpful if you are familiar with one of the builder APIs in Tutorials #03 or #03-B.
The Problem
This tutorial uses the Atari game Breakout, where the player or agent is supposed to hit a ball with a paddle, thus avoiding death while scoring points when the ball smashes pieces of a wall.
When a human learns to play a game like this, the first thing to figure out is what part of the game environment you are controlling - in this case the paddle at the bottom. If you move right on the joystick then the paddle moves right and vice versa. The next thing is to figure out what the goal of the game is - in this case to smash as many bricks in the wall as possible so as to maximize the score. Finally you need to learn what to avoid - in this case you must avoid dying by letting the ball pass beside the paddle.
Below are shown 3 images from the game that demonstrate what we need our agent to learn. In the image to the left, the ball is going downwards and the agent must learn to move the paddle so as to hit the ball and avoid death. The image in the middle shows the paddle hitting the ball, which eventually leads to the image on the right where the ball smashes some bricks and scores points. The ball then continues downwards and the process repeats.
The problem is that there are 10 states between the ball going downwards and the paddle hitting the ball, and there are an additional 18 states before the reward is obtained when the ball hits the wall and smashes some bricks. How can we teach an agent to connect these three situations and generalize to similar situations? The answer is to use so-called Reinforcement Learning with a Neural Network, as shown in this tutorial.
Q-Learning
One of the simplest ways of doing Reinforcement Learning is called Q-learning. Here we want to estimate so-called Q-values which are also called action-values, because they map a state of the game-environment to a numerical value for each possible action that the agent may take. The Q-values indicate which action is expected to result in the highest future reward, thus telling the agent which action to take.
Unfortunately we do not know what the Q-values are supposed to be, so we have to estimate them somehow. The Q-values are all initialized to zero and then updated repeatedly as new information is collected from the agent playing the game. When the agent scores a point then the Q-value must be updated with the new information.
There are different formulas for updating Q-values, but the simplest is to set the new Q-value to the reward that was observed, plus the maximum Q-value for the following state of the game. This gives the total reward that the agent can expect from the current game-state and onwards. Typically we also multiply the max Q-value for the following state by a so-called discount-factor slightly below 1. This causes more distant rewards to contribute less to the Q-value, thus making the agent favour rewards that are closer in time.
The formula for updating the Q-value is
Step1: The main source-code for Reinforcement Learning is located in the following module
Step2: This was developed using Python 3.6.0 (Anaconda) with package versions
Step3: Game Environment
This is the name of the game-environment that we want to use in OpenAI Gym.
Step4: This is the base-directory for the TensorFlow checkpoints as well as various log-files.
Step5: Once the base-dir has been set, you need to call this function to set all the paths that will be used. This will also create the checkpoint-dir if it does not already exist.
Step6: Download Pre-Trained Model
You can download a TensorFlow checkpoint which holds all the pre-trained variables for the Neural Network. Two checkpoints are provided, one for Breakout and one for Space Invaders. They were both trained for about 150 hours on a laptop with 2.6 GHz CPU and a GTX 1070 GPU.
COMPATIBILITY ISSUES
These TensorFlow checkpoints were developed with OpenAI gym v. 0.8.1 and atari-py v. 0.0.19 which had unused / redundant actions as noted above. There appears to have been a change in the gym API since then, as the unused actions are no longer present. This means the vectors with actions and Q-values now only contain 4 elements instead of the 6 shown here. This also means that the TensorFlow checkpoints cannot be used with newer versions of gym and atari-py, so in order to use these pre-trained checkpoints you need to install the older versions of gym and atari-py - or you can just train a new model yourself so you get a new TensorFlow checkpoint.
WARNING!
These checkpoints are 280-360 MB each. They are currently hosted on the webserver I use for www.hvass-labs.org because it is awkward to automatically download large files on Google Drive. To lower the traffic on my webserver, this line has been commented out, so you have to activate it manually. You are welcome to download it, I just don't want it to download automatically for everyone who only wants to run this Notebook briefly.
Step7: I believe the webserver is located in Denmark. If you are having problems downloading the files using the automatic function above, then you can try and download the files manually in a webbrowser or using wget or curl. Or you can download from Google Drive, where you will get an anti-virus warning that is awkward to bypass automatically
Step8: The Neural Network is automatically instantiated by the Agent-class. We will create a direct reference for convenience.
Step9: Similarly, the Agent-class also allocates the replay-memory when training==True. The replay-memory will require more than 3 GB of RAM, so it should only be allocated when needed. We will need the replay-memory in this Notebook to record the states and Q-values we observe, so they can be plotted further below.
Step10: Training
The agent's run() function is used to play the game. This uses the Neural Network to estimate Q-values and hence determine the agent's actions. If training==True then it will also gather states and Q-values in the replay-memory and train the Neural Network when the replay-memory is sufficiently full. You can set num_episodes=None if you want an infinite loop that you would stop manually with ctrl-c. In this case we just set num_episodes=1 because we are not actually interested in training the Neural Network any further, we merely want to collect some states and Q-values in the replay-memory so we can plot them below.
Step11: In training-mode, this function will output a line for each episode. The first counter is for the number of episodes that have been processed. The second counter is for the number of states that have been processed. These two counters are stored in the TensorFlow checkpoint along with the weights of the Neural Network, so you can restart the training e.g. if you only have one computer and need to train during the night.
Note that the number of episodes is almost 90k. It is impractical to print that many lines in this Notebook, so the training is better done in a terminal window by running the following commands
Step12: We can now read the logs from file
Step13: Training Progress
Step14: Training Progress
Step15: Testing
When the agent and Neural Network are being trained, the so-called epsilon-probability is typically decreased from 1.0 to 0.1 over a large number of steps, after which the probability is held fixed at 0.1. This means the probability is 0.1 or 10% that the agent will select a random action in each step, otherwise it will select the action that has the highest Q-value. This is known as the epsilon-greedy policy. The choice of 0.1 for the epsilon-probability is a compromise between taking the actions that are already known to be good, versus exploring new actions that might lead to even higher rewards or might lead to death of the agent.
During testing it is common to lower the epsilon-probability even further. We have set it to 0.01 as shown here
Step16: We will now instruct the agent that it should no longer perform training by setting this boolean
Step17: We also reset the previous episode rewards.
Step18: We can render the game-environment to screen so we can see the agent playing the game, by setting this boolean
Step19: We can now run a single episode by calling the run() function again. This should open a new window that shows the game being played by the agent. At the time of this writing, it was not possible to resize this tiny window, and the developers at OpenAI did not seem to care about this feature which should obviously be there.
Step20: Mean Reward
The game-play is slightly random, both with regard to selecting actions using the epsilon-greedy policy, but also because the OpenAI Gym environment will repeat any action between 2-4 times, with the number chosen at random. So the reward of one episode is not an accurate estimate of the reward that can be expected in general from this agent.
We need to run 30 or even 50 episodes to get a more accurate estimate of the reward that can be expected.
We will first reset the previous episode rewards.
Step21: We disable the screen-rendering so the game-environment runs much faster.
Step22: We can now run 30 episodes. This records the rewards for each episode. It might have been a good idea to disable the output so it does not print all these lines - you can do this as an exercise.
Step23: We can now print some statistics for the episode rewards, which vary greatly from one episode to the next.
Step24: We can also plot a histogram with the episode rewards.
Step26: Example States
We can plot examples of states from the game-environment and the Q-values that are estimated by the Neural Network.
This helper-function prints the Q-values for a given index in the replay-memory.
Step28: This helper-function plots a state from the replay-memory and optionally prints the Q-values.
Step29: The replay-memory has room for 200k states but it is only partially full from the above call to agent.run(num_episodes=1). This is how many states are actually used.
Step30: Get the Q-values from the replay-memory that are actually used.
Step31: For each state, calculate the min / max Q-values and their difference. This will be used to lookup interesting states in the following sections.
Step32: Example States
Step33: This state is where the ball hits the wall so the agent scores a point.
We can show the surrounding states leading up to and following this state. Note how the Q-values are very close for the different actions, because at this point it really does not matter what the agent does as the reward is already guaranteed. But note how the Q-values decrease significantly after the ball has hit the wall and a point has been scored.
Also note that the agent uses the Epsilon-greedy policy for taking actions, so there is a small probability that a random action is taken instead of the action with the highest Q-value.
Step34: Example
Step35: Example
Step36: Example
Step37: Example
Step39: Output of Convolutional Layers
The outputs of the convolutional layers can be plotted so we can see how the images from the game-environment are being processed by the Neural Network.
This is the helper-function for plotting the output of the convolutional layer with the given name, when inputting the given state from the replay-memory.
Step40: Game State
This is the state that is being input to the Neural Network. The image on the left is the last image from the game-environment. The image on the right is the processed motion-trace that shows the trajectories of objects in the game-environment.
Step41: Output of Convolutional Layer 1
This shows the images that are output by the 1st convolutional layer, when inputting the above state to the Neural Network. There are 16 output channels of this convolutional layer.
Note that you can invert the colors by setting inverse_cmap=True in the parameters to this function.
Step42: Output of Convolutional Layer 2
These are the images output by the 2nd convolutional layer, when inputting the above state to the Neural Network. There are 32 output channels of this convolutional layer.
Step43: Output of Convolutional Layer 3
These are the images output by the 3rd convolutional layer, when inputting the above state to the Neural Network. There are 64 output channels of this convolutional layer.
All these images are flattened to a one-dimensional array (or tensor) which is then used as the input to a fully-connected layer in the Neural Network.
During the training-process, the Neural Network has learnt what convolutional filters to apply to the images from the game-environment so as to produce these images, because they have proven to be useful when estimating Q-values.
Can you see what it is that the Neural Network has learned to detect in these images?
Step45: Weights for Convolutional Layers
We can also plot the weights of the convolutional layers in the Neural Network. These are the weights that are being optimized so as to improve the ability of the Neural Network to estimate Q-values. Tutorial #02 explains in greater detail what convolutional weights are.
There are also weights for the fully-connected layers but they are not shown here.
This is the helper-function for plotting the weights of a convolutional layer.
Step46: Weights for Convolutional Layer 1
These are the weights of the first convolutional layer of the Neural Network, with respect to the first input channel of the state. That is, these are the weights that are used on the image from the game-environment. Some basic statistics are also shown.
Note how the weights are more negative (blue) than positive (red). It is unclear why this happens as these weights are found through optimization. It is apparently beneficial for the following layers to have this processing with more negative weights in the first convolutional layer.
Step47: We can also plot the convolutional weights for the second input channel, that is, the motion-trace of the game-environment. Once again we see that the negative weights (blue) have a much greater magnitude than the positive weights (red).
Step48: Weights for Convolutional Layer 2
These are the weights of the 2nd convolutional layer in the Neural Network. There are 16 input channels and 32 output channels of this layer. You can change the number for the input-channel to see the associated weights.
Note how the weights are more balanced between positive (red) and negative (blue) compared to the weights for the 1st convolutional layer above.
Step49: Weights for Convolutional Layer 3
These are the weights of the 3rd convolutional layer in the Neural Network. There are 32 input channels and 64 output channels of this layer. You can change the number for the input-channel to see the associated weights.
Note again how the weights are more balanced between positive (red) and negative (blue) compared to the weights for the 1st convolutional layer above. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import gym
import numpy as np
import math
Explanation: TensorFlow Tutorial #16
Reinforcement Learning (Q-Learning)
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial is about so-called Reinforcement Learning in which an agent is learning how to navigate some environment, in this case Atari games from the 1970-80's. The agent does not know anything about the game and must learn how to play it from trial and error. The only information that is available to the agent is the screen output of the game, and whether the previous action resulted in a reward or penalty.
This is a very difficult problem in Machine Learning / Artificial Intelligence, because the agent must both learn to distinguish features in the game-images, and then connect the occurence of certain features in the game-images with its own actions and a reward or penalty that may be deferred many steps into the future.
This problem was first solved by the researchers from Google DeepMind. This tutorial is based on the main ideas from their early research papers (especially this and this), although we make several changes because the original DeepMind algorithm was awkward and over-complicated in some ways. But it turns out that you still need several tricks in order to stabilize the training of the agent, so the implementation in this tutorial is unfortunately also somewhat complicated.
The basic idea is to have the agent estimate so-called Q-values whenever it sees an image from the game-environment. The Q-values tell the agent which action is most likely to lead to the highest cumulative reward in the future. The problem is then reduced to finding these Q-values and storing them for later retrieval using a function approximator.
This builds on some of the previous tutorials. You should be familiar with TensorFlow and Convolutional Neural Networks from Tutorial #01 and #02. It will also be helpful if you are familiar with one of the builder APIs in Tutorials #03 or #03-B.
The Problem
This tutorial uses the Atari game Breakout, where the player or agent is supposed to hit a ball with a paddle, thus avoiding death while scoring points when the ball smashes pieces of a wall.
When a human learns to play a game like this, the first thing to figure out is what part of the game environment you are controlling - in this case the paddle at the bottom. If you move right on the joystick then the paddle moves right and vice versa. The next thing is to figure out what the goal of the game is - in this case to smash as many bricks in the wall as possible so as to maximize the score. Finally you need to learn what to avoid - in this case you must avoid dying by letting the ball pass beside the paddle.
Below are shown 3 images from the game that demonstrate what we need our agent to learn. In the image to the left, the ball is going downwards and the agent must learn to move the paddle so as to hit the ball and avoid death. The image in the middle shows the paddle hitting the ball, which eventually leads to the image on the right where the ball smashes some bricks and scores points. The ball then continues downwards and the process repeats.
The problem is that there are 10 states between the ball going downwards and the paddle hitting the ball, and there are an additional 18 states before the reward is obtained when the ball hits the wall and smashes some bricks. How can we teach an agent to connect these three situations and generalize to similar situations? The answer is to use so-called Reinforcement Learning with a Neural Network, as shown in this tutorial.
Q-Learning
One of the simplest ways of doing Reinforcement Learning is called Q-learning. Here we want to estimate so-called Q-values which are also called action-values, because they map a state of the game-environment to a numerical value for each possible action that the agent may take. The Q-values indicate which action is expected to result in the highest future reward, thus telling the agent which action to take.
Unfortunately we do not know what the Q-values are supposed to be, so we have to estimate them somehow. The Q-values are all initialized to zero and then updated repeatedly as new information is collected from the agent playing the game. When the agent scores a point then the Q-value must be updated with the new information.
There are different formulas for updating Q-values, but the simplest is to set the new Q-value to the reward that was observed, plus the maximum Q-value for the following state of the game. This gives the total reward that the agent can expect from the current game-state and onwards. Typically we also multiply the max Q-value for the following state by a so-called discount-factor slightly below 1. This causes more distant rewards to contribute less to the Q-value, thus making the agent favour rewards that are closer in time.
The formula for updating the Q-value is:
Q-value for state and action = reward + discount * max Q-value for next state
In academic papers, this is typically written with mathematical symbols like this:
$$
Q(s_{t},a_{t}) \leftarrow \underbrace{r_{t}}_{\rm reward} + \underbrace{\gamma}_{\rm discount} \cdot \underbrace{\max_{a}Q(s_{t+1}, a)}_{\rm estimate~of~future~rewards}
$$
Furthermore, when the agent loses a life, then we know that the future reward is zero because the agent is dead, so we set the Q-value for that state to zero.
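To make the update-rule concrete, here is a minimal sketch of how a single observed transition could be turned into a new Q-value, assuming a discount-factor of 0.97. The function-name and arguments are hypothetical and are not taken from the tutorial's source-code; they merely restate the formula and the loss-of-life rule above:

import numpy as np

def q_value_target(reward, q_values_next, end_life, discount=0.97):
    # If the agent lost a life then there are no future rewards.
    if end_life:
        return 0.0
    # Otherwise: observed reward plus discounted estimate of future rewards.
    return reward + discount * np.max(q_values_next)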
Simple Example
The images below demonstrate how Q-values are updated in a backwards sweep through the game-states that have previously been visited. In this simple example we assume all Q-values have been initialized to zero. The agent gets a reward of 1 point in the right-most image. This reward is then propagated backwards to the previous game-states, so when we see similar game-states in the future, we know that the given actions resulted in that reward.
The discounting is an exponentially decreasing function. This example uses a discount-factor of 0.97 so the Q-value for the 3rd image is about $0.885 \simeq 0.97^4$ because it is 4 states prior to the state that actually received the reward. Similarly for the other states. This example only shows one Q-value per state, but in reality there is one Q-value for each possible action in the state, and the Q-values are updated in a backwards-sweep using the formula above. This is shown in the next section.
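A rough sketch of such a backwards-sweep for a single episode is shown below. This is only an illustration with one Q-value per state and no Neural Network, so it is much simpler than the actual replay-memory code:

rewards = [0.0, 0.0, 0.0, 0.0, 1.0]   # reward observed in each state
discount = 0.97
q = [0.0] * len(rewards)

# Sweep backwards so each state sees the discounted value of its successor.
for i in reversed(range(len(rewards))):
    future = q[i + 1] if i + 1 < len(q) else 0.0
    q[i] = rewards[i] + discount * future

print(q)   # approximately [0.885, 0.913, 0.941, 0.97, 1.0]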
Detailed Example
This is a more detailed example showing the Q-values for two successive states of the game-environment and how to update them.
The Q-values for the possible actions have been estimated by a Neural Network. For the action NOOP in state t the Q-value is estimated to be 2.900, which is the highest Q-value for that state so the agent takes that action, i.e. the agent does not do anything between state t and t+1 because NOOP means "No Operation".
In state t+1 the agent scores 4 points, but this is limited to 1 point in this implementation so as to stabilize the training. The maximum Q-value for state t+1 is 1.830 for the action RIGHTFIRE. So if we select that action and continue to select the actions proposed by the Q-values estimated by the Neural Network, then the discounted sum of all the future rewards is expected to be 1.830.
Now that we know the reward of taking the NOOP action from state t to t+1, we can update the Q-value to incorporate this new information. This uses the formula above:
$$
Q(state_{t},NOOP) \leftarrow \underbrace{r_{t}}_{\rm reward} + \underbrace{\gamma}_{\rm discount} \cdot \underbrace{\max_{a}Q(state_{t+1}, a)}_{\rm estimate~of~future~rewards} = 1.0 + 0.97 \cdot 1.830 \simeq 2.775
$$
The new Q-value is 2.775, which is slightly lower than the previous estimate of 2.900. This Neural Network has already been trained for 150 hours, so it is quite good at estimating Q-values; earlier during the training, the estimated Q-values would have differed much more from these updated targets.
The idea is to have the agent play many, many games and repeatedly update the estimates of the Q-values as more information about rewards and penalties becomes available. This will eventually lead to good estimates of the Q-values, provided the training is numerically stable, as discussed further below. By doing this, we create a connection between rewards and prior actions.
Motion Trace
If we only use a single image from the game-environment then we cannot tell which direction the ball is moving. The typical solution is to use multiple consecutive images to represent the state of the game-environment.
This implementation uses another approach by processing the images from the game-environment in a motion-tracer that outputs two images as shown below. The left image is from the game-environment and the right image is the processed image, which shows traces of recent movements in the game-environment. In this case we can see that the ball is going downwards and has bounced off the right wall, and that the paddle has moved from the left to the right side of the screen.
Note that the motion-tracer has only been tested for Breakout and partially tested for Space Invaders, so it may not work for games with more complicated graphics such as Doom.
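One simple way to produce such a motion-trace - shown here only as an illustrative sketch and not necessarily how the tutorial's motion-tracer is implemented - is to fade the previous trace a little at each step and keep the brightest value per pixel when blending in the newest frame:

import numpy as np

def update_motion_trace(trace, frame, decay=0.75):
    # Older movements become dimmer, the newest frame stays at full intensity.
    return np.maximum(decay * trace, frame)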
Training Stability
We need a function approximator that can take a state of the game-environment as input and produce as output an estimate of the Q-values for that state. We will use a Convolutional Neural Network for this. Although they have achieved great fame in recent years, they are actually a quite old technology with many problems - one of which is training stability. A significant part of the research for this tutorial was spent on tuning and stabilizing the training of the Neural Network.
To understand why training stability is a problem, consider the 3 images below which show the game-environment in 3 consecutive states. At state $t$ the agent is about to score a point, which happens in the following state $t+1$. Assuming all Q-values were zero prior to this, we should now set the Q-value for state $t+1$ to be 1.0 and it should be 0.97 for state $t$ if the discount-value is 0.97, according to the formula above for updating Q-values.
If we were to train a Neural Network to estimate the Q-values for the two states $t$ and $t+1$ with Q-values 0.97 and 1.0, respectively, then the Neural Network will most likely be unable to distinguish properly between the images of these two states. As a result the Neural Network will also estimate a Q-value near 1.0 for state $t+2$ because the images are so similar. But this is clearly wrong because the Q-values for state $t+2$ should be zero as we do not know anything about future rewards at this point, and that is what the Q-values are supposed to estimate.
If this is continued and the Neural Network is trained after every new game-state is observed, then it will quickly cause the estimated Q-values to explode. This is an artifact of training Neural Networks which must have sufficiently large and diverse training-sets. For this reason we will use a so-called Replay Memory so we can gather a large number of game-states and shuffle them during training of the Neural Network.
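The essential idea of the Replay Memory can be sketched in a few lines. The class below is hypothetical and much simpler than the replay-memory used in this tutorial; it only shows that we record many states and then train on randomly chosen mini-batches instead of on consecutive game-states:

import numpy as np

class TinyReplayMemory:
    def __init__(self, size, state_shape, num_actions=4):
        # num_actions=4 is just an assumption for a Breakout-like game.
        self.states = np.zeros((size,) + state_shape, dtype=np.float32)
        self.q_values = np.zeros((size, num_actions), dtype=np.float32)
        self.num_used = 0

    def add(self, state, q_values):
        self.states[self.num_used] = state
        self.q_values[self.num_used] = q_values
        self.num_used += 1

    def random_batch(self, batch_size=128):
        n = min(batch_size, self.num_used)
        idx = np.random.choice(self.num_used, size=n, replace=False)
        return self.states[idx], self.q_values[idx]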
Flowchart
This flowchart shows roughly how Reinforcement Learning is implemented in this tutorial. There are two main loops which are run sequentially until the Neural Network is sufficiently accurate at estimating Q-values.
The first loop is for playing the game and recording data. This uses the Neural Network to estimate Q-values from a game-state. It then stores the game-state along with the corresponding Q-values and reward/penalty in the Replay Memory for later use.
The other loop is activated when the Replay Memory is sufficiently full. First it makes a full backwards sweep through the Replay Memory to update the Q-values with the new rewards and penalties that have been observed. Then it performs an optimization run so as to train the Neural Network to better estimate these updated Q-values.
There are many more details in the implementation, such as decreasing the learning-rate and increasing the fraction of the Replay Memory being used during training, but this flowchart shows the main ideas.
Neural Network Architecture
The Neural Network used in this implementation has 3 convolutional layers, all of which have filter-size 3x3. The layers have 16, 32, and 64 output channels, respectively. The stride is 2 in the first two convolutional layers and 1 in the last layer.
Following the 3 convolutional layers there are 4 fully-connected layers each with 1024 units and ReLU-activation. Then there is a single fully-connected layer with linear activation used as the output of the Neural Network.
This architecture is different from those typically used in research papers from DeepMind and others. They often have large convolutional filter-sizes of 8x8 and 4x4 with high stride-values. This causes more aggressive down-sampling of the game-state images. They also typically have only a single fully-connected layer with 256 or 512 ReLU units.
During the research for this tutorial, it was found that smaller filter-sizes and strides in the convolutional layers, combined with several fully-connected layers having more units, were necessary in order to have sufficiently accurate Q-values. The Neural Network architectures originally used by DeepMind appear to distort the Q-values quite significantly. A reason that their approach still worked, is possibly due to their use of a very large Replay Memory with 1 million states, and that the Neural Network did one mini-batch of training for each step of the game-environment, and some other tricks.
The architecture used here is probably excessive but it takes several days of training to test each architecture, so it is left as an exercise for the reader to try and find a smaller Neural Network architecture that still performs well.
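For reference, a network of roughly the shape described above can be sketched with tf.keras as shown below. The tutorial itself builds the network with a lower-level TensorFlow API, and the input-shape and number of actions used here are only placeholders for the two-channel motion-trace state, so treat this purely as a restatement of the architecture, not as the tutorial's actual implementation:

from tensorflow.keras import layers, models

def build_q_network(input_shape=(105, 80, 2), num_actions=4):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, kernel_size=3, strides=2, activation='relu'),
        layers.Conv2D(32, kernel_size=3, strides=2, activation='relu'),
        layers.Conv2D(64, kernel_size=3, strides=1, activation='relu'),
        layers.Flatten(),
        layers.Dense(1024, activation='relu'),
        layers.Dense(1024, activation='relu'),
        layers.Dense(1024, activation='relu'),
        layers.Dense(1024, activation='relu'),
        layers.Dense(num_actions, activation='linear'),  # one Q-value per action
    ])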
Installation
The documentation for OpenAI Gym currently suggests that you need to build it in order to install it. But if you just want to install the Atari games, then you only need to install a single pip-package by typing the following commands in a terminal.
conda create --name tf-gym --clone tf
source activate tf-gym
pip install gym[atari]
This assumes you already have an Anaconda environment named tf which has TensorFlow installed, it will then be cloned to another environment named tf-gym where OpenAI Gym is also installed. This allows you to easily switch between your normal TensorFlow environment and another one which also contains OpenAI Gym.
You can also have two environments named tf-gpu and tf-gpu-gym for the GPU versions of TensorFlow.
Imports
End of explanation
import reinforcement_learning as rl
Explanation: The main source-code for Reinforcement Learning is located in the following module:
End of explanation
# TensorFlow
tf.__version__
# OpenAI Gym
gym.__version__
Explanation: This was developed using Python 3.6.0 (Anaconda) with package versions:
End of explanation
env_name = 'Breakout-v0'
# env_name = 'SpaceInvaders-v0'
Explanation: Game Environment
This is the name of the game-environment that we want to use in OpenAI Gym.
End of explanation
rl.checkpoint_base_dir = 'checkpoints_tutorial16/'
Explanation: This is the base-directory for the TensorFlow checkpoints as well as various log-files.
End of explanation
rl.update_paths(env_name=env_name)
Explanation: Once the base-dir has been set, you need to call this function to set all the paths that will be used. This will also create the checkpoint-dir if it does not already exist.
End of explanation
# rl.maybe_download_checkpoint(env_name=env_name)
Explanation: Download Pre-Trained Model
You can download a TensorFlow checkpoint which holds all the pre-trained variables for the Neural Network. Two checkpoints are provided, one for Breakout and one for Space Invaders. They were both trained for about 150 hours on a laptop with 2.6 GHz CPU and a GTX 1070 GPU.
COMPATIBILITY ISSUES
These TensorFlow checkpoints were developed with OpenAI gym v. 0.8.1 and atari-py v. 0.0.19 which had unused / redundant actions as noted above. There appears to have been a change in the gym API since then, as the unused actions are no longer present. This means the vectors with actions and Q-values now only contain 4 elements instead of the 6 shown here. This also means that the TensorFlow checkpoints cannot be used with newer versions of gym and atari-py, so in order to use these pre-trained checkpoints you need to install the older versions of gym and atari-py - or you can just train a new model yourself so you get a new TensorFlow checkpoint.
WARNING!
These checkpoints are 280-360 MB each. They are currently hosted on the webserver I use for www.hvass-labs.org because it is awkward to automatically download large files on Google Drive. To lower the traffic on my webserver, this line has been commented out, so you have to activate it manually. You are welcome to download it, I just don't want it to download automatically for everyone who only wants to run this Notebook briefly.
End of explanation
agent = rl.Agent(env_name=env_name,
training=True,
render=True,
use_logging=False)
Explanation: I believe the webserver is located in Denmark. If you are having problems downloading the files using the automatic function above, then you can try and download the files manually in a webbrowser or using wget or curl. Or you can download from Google Drive, where you will get an anti-virus warning that is awkward to bypass automatically:
Download Breakout Checkpoint from Google Drive
Download Space Invaders Checkpoint from Google Drive
You can use the checksum to ensure the downloaded files are complete:
SHA256 Checksum
Create Agent
The Agent-class implements the main loop for playing the game, recording data and optimizing the Neural Network. We create an object-instance and need to set training=True because we want to use the replay-memory to record states and Q-values for plotting further below. We disable logging so this does not corrupt the logs from the actual training that was done previously. We can also set render=True but it will have no effect as long as training==True.
End of explanation
model = agent.model
Explanation: The Neural Network is automatically instantiated by the Agent-class. We will create a direct reference for convenience.
End of explanation
replay_memory = agent.replay_memory
Explanation: Similarly, the Agent-class also allocates the replay-memory when training==True. The replay-memory will require more than 3 GB of RAM, so it should only be allocated when needed. We will need the replay-memory in this Notebook to record the states and Q-values we observe, so they can be plotted further below.
End of explanation
agent.run(num_episodes=1)
Explanation: Training
The agent's run() function is used to play the game. This uses the Neural Network to estimate Q-values and hence determine the agent's actions. If training==True then it will also gather states and Q-values in the replay-memory and train the Neural Network when the replay-memory is sufficiently full. You can set num_episodes=None if you want an infinite loop that you would stop manually with ctrl-c. In this case we just set num_episodes=1 because we are not actually interested in training the Neural Network any further, we merely want to collect some states and Q-values in the replay-memory so we can plot them below.
End of explanation
log_q_values = rl.LogQValues()
log_reward = rl.LogReward()
Explanation: In training-mode, this function will output a line for each episode. The first counter is for the number of episodes that have been processed. The second counter is for the number of states that have been processed. These two counters are stored in the TensorFlow checkpoint along with the weights of the Neural Network, so you can restart the training e.g. if you only have one computer and need to train during the night.
Note that the number of episodes is almost 90k. It is impractical to print that many lines in this Notebook, so the training is better done in a terminal window by running the following commands:
source activate tf-gpu-gym # Activate your Python environment with TF and Gym.
python reinforcement-learning.py --env Breakout-v0 --training
Training Progress
Data is being logged during training so we can plot the progress afterwards. The reward for each episode and a running mean of the last 30 episodes are logged to file. Basic statistics for the Q-values in the replay-memory are also logged to file before each optimization run.
This could be logged using TensorFlow and TensorBoard, but they were designed for logging variables of the TensorFlow graph and data that flows through the graph. In this case the data we want logged does not reside in the graph, so it becomes a bit awkward to use TensorFlow to log this data.
We have therefore implemented a few small classes that can write and read these logs.
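In spirit, such a logger only needs to append one line per entry to a text-file and read it back later. The sketch below is hypothetical and much simpler than the actual logging classes, but it shows the idea:

class TinyLog:
    def __init__(self, path):
        self.path = path

    def write(self, count_states, *values):
        # Append one tab-separated line per logged entry.
        with open(self.path, 'a') as f:
            f.write('\t'.join(str(v) for v in (count_states,) + values) + '\n')

    def read(self):
        with open(self.path) as f:
            return [tuple(float(x) for x in line.split()) for line in f]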
End of explanation
log_q_values.read()
log_reward.read()
Explanation: We can now read the logs from file:
End of explanation
plt.plot(log_reward.count_states, log_reward.episode, label='Episode Reward')
plt.plot(log_reward.count_states, log_reward.mean, label='Mean of 30 episodes')
plt.xlabel('State-Count for Game Environment')
plt.legend()
plt.show()
Explanation: Training Progress: Reward
This plot shows the reward for each episode during training, as well as the running mean of the last 30 episodes. Note how the reward varies greatly from one episode to the next, so it is difficult to say from this plot alone whether the agent is really improving during the training, although the running mean does appear to trend upwards slightly.
End of explanation
plt.plot(log_q_values.count_states, log_q_values.mean, label='Q-Value Mean')
plt.xlabel('State-Count for Game Environment')
plt.legend()
plt.show()
Explanation: Training Progress: Q-Values
The following plot shows the mean Q-values from the replay-memory prior to each run of the optimizer for the Neural Network. Note how the mean Q-values increase rapidly in the beginning and then they increase fairly steadily for 40 million states, after which they still trend upwards but somewhat more irregularly.
The fast improvement in the beginning is probably due to (1) the use of a smaller replay-memory early in training so the Neural Network is optimized more often and the new information is used faster, (2) the backwards-sweeping of the replay-memory so the rewards are used to update the Q-values for many of the states, instead of just updating the Q-values for a single state, and (3) the replay-memory is balanced so at least half of each mini-batch contains states whose Q-values have high estimation-errors for the Neural Network.
The original paper from DeepMind showed much slower progress in the first phase of training, see Figure 2 in that paper but note that the Q-values are not directly comparable, possibly because they used a higher discount factor of 0.99 while we only used 0.97 here.
End of explanation
agent.epsilon_greedy.epsilon_testing
Explanation: Testing
When the agent and Neural Network are being trained, the so-called epsilon-probability is typically decreased from 1.0 to 0.1 over a large number of steps, after which the probability is held fixed at 0.1. This means the probability is 0.1 or 10% that the agent will select a random action in each step, otherwise it will select the action that has the highest Q-value. This is known as the epsilon-greedy policy. The choice of 0.1 for the epsilon-probability is a compromise between taking the actions that are already known to be good, versus exploring new actions that might lead to even higher rewards or might lead to death of the agent.
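The epsilon-greedy choice itself is tiny; a sketch of the idea (not the actual class used by the agent) looks like this:

import numpy as np

def epsilon_greedy_action(q_values, epsilon=0.1):
    # With probability epsilon take a random action, otherwise take the best one.
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))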
During testing it is common to lower the epsilon-probability even further. We have set it to 0.01 as shown here:
End of explanation
agent.training = False
Explanation: We will now instruct the agent that it should no longer perform training by setting this boolean:
End of explanation
agent.reset_episode_rewards()
Explanation: We also reset the previous episode rewards.
End of explanation
agent.render = True
Explanation: We can render the game-environment to screen so we can see the agent playing the game, by setting this boolean:
End of explanation
agent.run(num_episodes=1)
Explanation: We can now run a single episode by calling the run() function again. This should open a new window that shows the game being played by the agent. At the time of this writing, it was not possible to resize this tiny window, and the developers at OpenAI did not seem to care about this feature which should obviously be there.
End of explanation
agent.reset_episode_rewards()
Explanation: Mean Reward
The game-play is slightly random, both with regard to selecting actions using the epsilon-greedy policy, but also because the OpenAI Gym environment will repeat any action between 2-4 times, with the number chosen at random. So the reward of one episode is not an accurate estimate of the reward that can be expected in general from this agent.
We need to run 30 or even 50 episodes to get a more accurate estimate of the reward that can be expected.
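If you also want a rough error-bar on that estimate, the standard error of the mean can be computed from the recorded rewards; this is a small optional addition, not part of the original notebook:

import numpy as np

def mean_with_error(rewards):
    rewards = np.asarray(rewards, dtype=float)
    sem = rewards.std(ddof=1) / np.sqrt(len(rewards))   # standard error of the mean
    return rewards.mean(), sem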
We will first reset the previous episode rewards.
End of explanation
agent.render = False
Explanation: We disable the screen-rendering so the game-environment runs much faster.
End of explanation
agent.run(num_episodes=30)
Explanation: We can now run 30 episodes. This records the rewards for each episode. It might have been a good idea to disable the output so it does not print all these lines - you can do this as an exercise.
End of explanation
rewards = agent.episode_rewards
print("Rewards for {0} episodes:".format(len(rewards)))
print("- Min: ", np.min(rewards))
print("- Mean: ", np.mean(rewards))
print("- Max: ", np.max(rewards))
print("- Stdev: ", np.std(rewards))
Explanation: We can now print some statistics for the episode rewards, which vary greatly from one episode to the next.
End of explanation
_ = plt.hist(rewards, bins=30)
Explanation: We can also plot a histogram with the episode rewards.
End of explanation
def print_q_values(idx):
    """Print Q-values and actions from the replay-memory at the given index."""
# Get the Q-values and action from the replay-memory.
q_values = replay_memory.q_values[idx]
action = replay_memory.actions[idx]
print("Action: Q-Value:")
print("====================")
# Print all the actions and their Q-values.
for i, q_value in enumerate(q_values):
# Used to display which action was taken.
if i == action:
action_taken = "(Action Taken)"
else:
action_taken = ""
# Text-name of the action.
action_name = agent.get_action_name(i)
print("{0:12}{1:.3f} {2}".format(action_name, q_value,
action_taken))
# Newline.
print()
Explanation: Example States
We can plot examples of states from the game-environment and the Q-values that are estimated by the Neural Network.
This helper-function prints the Q-values for a given index in the replay-memory.
End of explanation
def plot_state(idx, print_q=True):
    """Plot the state in the replay-memory with the given index."""
# Get the state from the replay-memory.
state = replay_memory.states[idx]
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(1, 2)
# Plot the image from the game-environment.
ax = axes.flat[0]
ax.imshow(state[:, :, 0], vmin=0, vmax=255,
interpolation='lanczos', cmap='gray')
# Plot the motion-trace.
ax = axes.flat[1]
ax.imshow(state[:, :, 1], vmin=0, vmax=255,
interpolation='lanczos', cmap='gray')
# This is necessary if we show more than one plot in a single Notebook cell.
plt.show()
# Print the Q-values.
if print_q:
print_q_values(idx=idx)
Explanation: This helper-function plots a state from the replay-memory and optionally prints the Q-values.
End of explanation
num_used = replay_memory.num_used
num_used
Explanation: The replay-memory has room for 200k states but it is only partially full from the above call to agent.run(num_episodes=1). This is how many states are actually used.
End of explanation
q_values = replay_memory.q_values[0:num_used, :]
Explanation: Get the Q-values from the replay-memory that are actually used.
End of explanation
q_values_min = q_values.min(axis=1)
q_values_max = q_values.max(axis=1)
q_values_dif = q_values_max - q_values_min
Explanation: For each state, calculate the min / max Q-values and their difference. This will be used to lookup interesting states in the following sections.
End of explanation
idx = np.argmax(replay_memory.rewards)
idx
Explanation: Example States: Highest Reward
This example shows the states surrounding the state with the highest reward.
During the training we limit the rewards to the range [-1, 1] so this basically just gets the first state that has a reward of 1.
End of explanation
for i in range(-5, 3):
plot_state(idx=idx+i)
Explanation: This state is where the ball hits the wall so the agent scores a point.
We can show the surrounding states leading up to and following this state. Note how the Q-values are very close for the different actions, because at this point it really does not matter what the agent does as the reward is already guaranteed. But note how the Q-values decrease significantly after the ball has hit the wall and a point has been scored.
Also note that the agent uses the Epsilon-greedy policy for taking actions, so there is a small probability that a random action is taken instead of the action with the highest Q-value.
End of explanation
idx = np.argmax(q_values_max)
idx
for i in range(0, 5):
plot_state(idx=idx+i)
Explanation: Example: Highest Q-Value
This example shows the states surrounding the one with the highest Q-values. This means that the agent has high expectation that several points will be scored in the following steps. Note that the Q-values decrease significantly after the points have been scored.
End of explanation
idx = np.argmax(replay_memory.end_life)
idx
for i in range(-10, 0):
plot_state(idx=idx+i)
Explanation: Example: Loss of Life
This example shows the states leading up to a loss of life for the agent.
End of explanation
idx = np.argmax(q_values_dif)
idx
for i in range(0, 5):
plot_state(idx=idx+i)
Explanation: Example: Greatest Difference in Q-Values
This example shows the state where there is the greatest difference in Q-values, which means that the agent believes one action will be much more beneficial than another. But because the agent uses the Epsilon-greedy policy, it sometimes selects a random action instead.
End of explanation
idx = np.argmin(q_values_dif)
idx
for i in range(0, 5):
plot_state(idx=idx+i)
Explanation: Example: Smallest Difference in Q-Values
This example shows the state where there is the smallest difference in Q-values, which means that the agent believes it does not really matter which action it selects, as they all have roughly the same expectations for future rewards.
The Neural Network estimates these Q-values and they are not precise. The differences in Q-values may be so small that they fall within the error-range of the estimates.
End of explanation
def plot_layer_output(model, layer_name, state_index, inverse_cmap=False):
    """
    Plot the output of a convolutional layer.

    :param model: An instance of the NeuralNetwork-class.
    :param layer_name: Name of the convolutional layer.
    :param state_index: Index into the replay-memory for a state that
                        will be input to the Neural Network.
    :param inverse_cmap: Boolean whether to inverse the color-map.
    """
# Get the given state-array from the replay-memory.
state = replay_memory.states[state_index]
# Get the output tensor for the given layer inside the TensorFlow graph.
# This is not the value-contents but merely a reference to the tensor.
layer_tensor = model.get_layer_tensor(layer_name=layer_name)
# Get the actual value of the tensor by feeding the state-data
# to the TensorFlow graph and calculating the value of the tensor.
values = model.get_tensor_value(tensor=layer_tensor, state=state)
# Number of image channels output by the convolutional layer.
num_images = values.shape[3]
# Number of grid-cells to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_images))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids, figsize=(10, 10))
print("Dim. of each image:", values.shape)
if inverse_cmap:
cmap = 'gray_r'
else:
cmap = 'gray'
# Plot the outputs of all the channels in the conv-layer.
for i, ax in enumerate(axes.flat):
# Only plot the valid image-channels.
if i < num_images:
# Get the image for the i'th output channel.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap=cmap)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Output of Convolutional Layers
The outputs of the convolutional layers can be plotted so we can see how the images from the game-environment are being processed by the Neural Network.
This is the helper-function for plotting the output of the convolutional layer with the given name, when inputting the given state from the replay-memory.
End of explanation
idx = np.argmax(q_values_max)
plot_state(idx=idx, print_q=False)
Explanation: Game State
This is the state that is being input to the Neural Network. The image on the left is the last image from the game-environment. The image on the right is the processed motion-trace that shows the trajectories of objects in the game-environment.
End of explanation
plot_layer_output(model=model, layer_name='layer_conv1', state_index=idx, inverse_cmap=False)
Explanation: Output of Convolutional Layer 1
This shows the images that are output by the 1st convolutional layer, when inputting the above state to the Neural Network. There are 16 output channels of this convolutional layer.
Note that you can invert the colors by setting inverse_cmap=True in the parameters to this function.
End of explanation
plot_layer_output(model=model, layer_name='layer_conv2', state_index=idx, inverse_cmap=False)
Explanation: Output of Convolutional Layer 2
These are the images output by the 2nd convolutional layer, when inputting the above state to the Neural Network. There are 32 output channels of this convolutional layer.
End of explanation
plot_layer_output(model=model, layer_name='layer_conv3', state_index=idx, inverse_cmap=False)
Explanation: Output of Convolutional Layer 3
These are the images output by the 3rd convolutional layer, when inputting the above state to the Neural Network. There are 64 output channels of this convolutional layer.
All these images are flattened to a one-dimensional array (or tensor) which is then used as the input to a fully-connected layer in the Neural Network.
During the training-process, the Neural Network has learnt what convolutional filters to apply to the images from the game-environment so as to produce these images, because they have proven to be useful when estimating Q-values.
Can you see what it is that the Neural Network has learned to detect in these images?
End of explanation
def plot_conv_weights(model, layer_name, input_channel=0):
    """
    Plot the weights for a convolutional layer.

    :param model: An instance of the NeuralNetwork-class.
    :param layer_name: Name of the convolutional layer.
    :param input_channel: Plot the weights for this input-channel.
    """
# Get the variable for the weights of the given layer.
# This is a reference to the variable inside TensorFlow,
# not its actual value.
weights_variable = model.get_weights_variable(layer_name=layer_name)
# Retrieve the values of the weight-variable from TensorFlow.
# The format of this 4-dim tensor is determined by the
# TensorFlow API. See Tutorial #02 for more details.
w = model.get_variable_value(variable=weights_variable)
# Get the weights for the given input-channel.
w_channel = w[:, :, input_channel, :]
# Number of output-channels for the conv. layer.
num_output_channels = w_channel.shape[2]
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w_channel)
w_max = np.max(w_channel)
# This is used to center the colour intensity at zero.
abs_max = max(abs(w_min), abs(w_max))
# Print statistics for the weights.
print("Min: {0:.5f}, Max: {1:.5f}".format(w_min, w_max))
print("Mean: {0:.5f}, Stdev: {1:.5f}".format(w_channel.mean(),
w_channel.std()))
# Number of grids to plot.
# Rounded-up, square-root of the number of output-channels.
num_grids = math.ceil(math.sqrt(num_output_channels))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i < num_output_channels:
# Get the weights for the i'th filter of this input-channel.
img = w_channel[:, :, i]
# Plot image.
ax.imshow(img, vmin=-abs_max, vmax=abs_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Weights for Convolutional Layers
We can also plot the weights of the convolutional layers in the Neural Network. These are the weights that are being optimized so as to improve the ability of the Neural Network to estimate Q-values. Tutorial #02 explains in greater detail what convolutional weights are.
There are also weights for the fully-connected layers but they are not shown here.
This is the helper-function for plotting the weights of a convolutional layer.
End of explanation
plot_conv_weights(model=model, layer_name='layer_conv1', input_channel=0)
Explanation: Weights for Convolutional Layer 1
These are the weights of the first convolutional layer of the Neural Network, with respect to the first input channel of the state. That is, these are the weights that are used on the image from the game-environment. Some basic statistics are also shown.
Note how the weights are more negative (blue) than positive (red). It is unclear why this happens as these weights are found through optimization. It is apparently beneficial for the following layers to have this processing with more negative weights in the first convolutional layer.
End of explanation
plot_conv_weights(model=model, layer_name='layer_conv1', input_channel=1)
Explanation: We can also plot the convolutional weights for the second input channel, that is, the motion-trace of the game-environment. Once again we see that the negative weights (blue) have a much greater magnitude than the positive weights (red).
End of explanation
plot_conv_weights(model=model, layer_name='layer_conv2', input_channel=0)
Explanation: Weights for Convolutional Layer 2
These are the weights of the 2nd convolutional layer in the Neural Network. There are 16 input channels and 32 output channels of this layer. You can change the number for the input-channel to see the associated weights.
Note how the weights are more balanced between positive (red) and negative (blue) compared to the weights for the 1st convolutional layer above.
End of explanation
plot_conv_weights(model=model, layer_name='layer_conv3', input_channel=0)
Explanation: Weights for Convolutional Layer 3
These are the weights of the 3rd convolutional layer in the Neural Network. There are 32 input channels and 64 output channels of this layer. You can change the number for the input-channel to see the associated weights.
Note again how the weights are more balanced between positive (red) and negative (blue) compared to the weights for the 1st convolutional layer above.
End of explanation |
13,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XDAWN Decoding From EEG data
ERP decoding with Xdawn ([1], [2]). For each event type, a set of
spatial Xdawn filters are trained and applied on the signal. Channels are
concatenated and rescaled to create feature vectors that will be fed into
a logistic regression.
Step1: Set parameters and read data
Step2: The patterns_ attribute of a fitted Xdawn instance (here from the last
cross-validation fold) can be used for visualization. | Python Code:
# Authors: Alexandre Barachant <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import MinMaxScaler
from mne import io, pick_types, read_events, Epochs, EvokedArray
from mne.datasets import sample
from mne.preprocessing import Xdawn
from mne.decoding import Vectorizer
print(__doc__)
data_path = sample.data_path()
Explanation: XDAWN Decoding From EEG data
ERP decoding with Xdawn ([1], [2]). For each event type, a set of
spatial Xdawn filters are trained and applied on the signal. Channels are
concatenated and rescaled to create feature vectors that will be fed into
a logistic regression.
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.1, 0.3
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2,
'Visual/Left': 3, 'Visual/Right': 4}
n_filter = 3
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 20, fir_design='firwin')
events = read_events(event_fname)
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=False,
picks=picks, baseline=None, preload=True,
verbose=False)
# Create classification pipeline
clf = make_pipeline(Xdawn(n_components=n_filter),
Vectorizer(),
MinMaxScaler(),
LogisticRegression(penalty='l1', solver='liblinear',
multi_class='auto'))
# Get the labels
labels = epochs.events[:, -1]
# Cross validator
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
# Do cross-validation
preds = np.empty(len(labels))
for train, test in cv.split(epochs, labels):
clf.fit(epochs[train], labels[train])
preds[test] = clf.predict(epochs[test])
# Classification report
target_names = ['aud_l', 'aud_r', 'vis_l', 'vis_r']
report = classification_report(labels, preds, target_names=target_names)
print(report)
# Normalized confusion matrix
cm = confusion_matrix(labels, preds)
cm_normalized = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]
# Plot confusion matrix
fig, ax = plt.subplots(1)
im = ax.imshow(cm_normalized, interpolation='nearest', cmap=plt.cm.Blues)
ax.set(title='Normalized Confusion matrix')
fig.colorbar(im)
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
fig.tight_layout()
ax.set(ylabel='True label', xlabel='Predicted label')
Explanation: Set parameters and read data
End of explanation
fig, axes = plt.subplots(nrows=len(event_id), ncols=n_filter,
figsize=(n_filter, len(event_id) * 2))
fitted_xdawn = clf.steps[0][1]
tmp_info = epochs.info.copy()
tmp_info['sfreq'] = 1.
for ii, cur_class in enumerate(sorted(event_id)):
cur_patterns = fitted_xdawn.patterns_[cur_class]
pattern_evoked = EvokedArray(cur_patterns[:n_filter].T, tmp_info, tmin=0)
pattern_evoked.plot_topomap(
times=np.arange(n_filter),
time_format='Component %d' if ii == 0 else '', colorbar=False,
show_names=False, axes=axes[ii], show=False)
axes[ii, 0].set(ylabel=cur_class)
fig.tight_layout(h_pad=1.0, w_pad=1.0, pad=0.1)
Explanation: The patterns_ attribute of a fitted Xdawn instance (here from the last
cross-validation fold) can be used for visualization.
End of explanation |
13,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tubular surfaces
A tubular surface (or tube surface) is generated by a 3D curve, called spine, and a moving circle of radius r, with center on the spine and included in planes orthogonal to curve.
Tubular surfaces are associated to spines that are biregular, that is, they have a $C^2$ parameterization, $c
Step1: Integrate the system, with an initial point consisting in the initial Frenet frame (of three orthonormal vectors)
and the initial position of the curve, $c(0)$
Step2: Now we define a tubular surface that has as spine the above curve.
A tubular surface having as spine a curve, $c(s)$, parameterized by the arclength, is defined as follows
Step3: Define a function that sets the plot layout
Step4: The colorscale for the tubular surface
Step5: Define a function that evaluates the tube parameterization, $r(s,u)=(x, y, z)$, at the meshgrid np.meshgrid(s_div, u)
Step6: The keywords zmin, zmax are set when we connect at least two tubular surfaces. They define the color bounds for
the tubular structure.
Step7: Tubular surface with a spine curve of given parameterization
If a general biregular parameterization, $c(t)$, of the spine is given,
then we have to do some analytical computations by hand, in order to get the
directions $\dot{c}(t)$, $\ddot{c}(t)$, $\dot{c}(t)\times \ddot{c}(t)$, of the velocity (tangent), acceleration, and binormals along the curve.
Then we define Python functions, tangent, acceleration, curve_normals, that compute the unit vectors of these directions.
Finally the unit vector of the principal normal is computed as $n(t)=b(t)\times tg(t)$, where $b(t), tg(t)$ are the unit vectors of binormals and tangents.
The tube parameterization, $$r(t,u)=c(t)+\varepsilon(n(t)\cos(u)+b(t)\sin(u)), t\in[tm, tM], u\in[0,2\pi],$$
is evaluated at a meshgrid.
We illustrate a tubular structure, called Hopf link, defined by two tubes, having the spines parameterized by
Step8: If we take all combinations of signs for the parameters, a, b, we get an interesting configuration of tubes
communicating with each other
Step9: Canal (Channels) surfaces
Tubular surfaces are particular surfaces in the class of canal surfaces. A canal surface
is again defined by a biregular spine, $c(t)$, but the circles
orthogonal to the spine have variable radii, given by a $C^1$-function, $r(t)$, with $|r'(t)|<||\dot{c}(t)||$.
The parameterization of a canal surface is
Step10: Finally, we stress that in order to get a tubular looking surface, we have to set the aspect ratio
of the plot that respects the real ratios between axes lengths. Otherwise the tube is deformed. | Python Code:
import numpy as np
from scipy import integrate
def curv(s):#curvature
return 3*np.sin(s/10.)*np.sin(s/10.)
def tors(s):#torsion is constant
return 0.35
def Frenet_eqns(x, s):# right side vector field of the system of ODE
return [ curv(s)*x[3],
curv(s)*x[4],
curv(s)*x[5],
-curv(s)*x[0]+tors(s)*x[6],
-curv(s)*x[1]+tors(s)*x[7],
-curv(s)*x[2]+tors(s)*x[8],
-tors(s)*x[3],
-tors(s)*x[4],
-tors(s)*x[5],
x[0], x[1], x[2]]
Explanation: Tubular surfaces
A tubular surface (or tube surface) is generated by a 3D curve, called spine, and a moving circle of radius r, with center on the spine and included in planes orthogonal to curve.
Tubular surfaces are associated to spines that are biregular, that is, they have a $C^2$ parameterization, $c:[a,b]\to \mathbb{R}^3$, with
velocity, $\dot{c}(t)$, and acceleration, $\ddot{c}(t)$, that are non-null and non-colinear vectors:
$\dot{c}(t)\times \ddot{c}(t)\neq 0$.
Tubular surface defined by a spine curve parameterized by arc length
A tube of prescribed curvature and torsion is defined by a spine parameterized by the arc length, i.e. by
$c(s)$, with constant speed, $||\dot{c}(s)||=1$, and non-null acceleration, $\ddot{c}(s)\neq 0$, for all $s$.
The given curvature and torsion, $\kappa(s)$, $\tau(s)$, define the Frenet-Serret equations:
$$\begin{array}{lll}
\dot{e}_1(s)&=&\kappa(s)e_2(s)\\
\dot{e}_2(s)&=&-\kappa(s)e_1(s)+\tau(s)e_3(s)\\
\dot{e}_3(s)&=&-\tau(s)e_2(s),
\end{array} $$
where $e_1(s), e_2(s), e_3(s)$ are respectively the unit vectors of tangent, principal normal and binormal along the curve.
The Frenet-Serret equations, completed with the equation $ \dot{c}(s)=e_1(s)$, define a system of ordinary differential equations, with 12 equations and 12 unknown functions. The last three
coordinates of a solution represent the discretized curve, $c(s)$, starting from an initial point, with a prescribed Frenet frame at that point.
We define below a tubular surface with highly oscillating curvature and constant torsion of the spine.
End of explanation
x_init=np.array([1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0])
s_final=150# [0, s_final] is the interval of integration
N=1000
s_div=np.linspace(0, s_final, N)
X=integrate.odeint(Frenet_eqns, x_init, s_div)
normal=X[:, 3:6].T
binormal=X[:, 6:9].T
curve=X[:, 9:].T
xc, yc, zc=curve# lists of coordinates of the spine points
Explanation: Integrate the system, with an initial point consisting in the initial Frenet frame (of three orthonormal vectors)
and the initial position of the curve, $c(0)$:
End of explanation
import plotly.plotly as py
from plotly.graph_objs import *
Explanation: Now we define a tubular surface that has as spine the above curve.
A tubular surface having as spine a curve, $c(s)$, parameterized by the arclength, is defined as follows:
$r(s,u)=c(s)+\varepsilon(e_2(s)cos(u)+e_3(s)sin(u))$, $0<\varepsilon <<1$, $u\in[0, 2\pi]$.
$\varepsilon$ is the radius of circles orthogonal to the spine.
End of explanation
axis = dict(
showbackground=True,
backgroundcolor="rgb(230, 230,230)",
gridcolor="rgb(255, 255, 255)",
zerolinecolor="rgb(255, 255, 255)",
)
noaxis=dict(showbackground=False,
showgrid=False,
showline=False,
showticklabels=False,
ticks='',
title='',
zeroline=False
)
def set_layout(title='', width=800, height=800, axis_type=axis, aspect=(1, 1, 1)):
return Layout(
title=title,
autosize=False,
width=width,
height=height,
showlegend=False,
scene=Scene(xaxis=XAxis(axis_type),
yaxis=YAxis(axis_type),
zaxis=ZAxis(axis_type),
aspectratio=dict(x=aspect[0],
y=aspect[1],
z=aspect[2]
)
)
)
Explanation: Define a function that sets the plot layout:
End of explanation
my_colorscale=[[0.0, 'rgb(46, 107, 142)'],
[0.1, 'rgb(41, 121, 142)'],
[0.2, 'rgb(36, 134, 141)'],
[0.3, 'rgb(31, 147, 139)'],
[0.4, 'rgb(30, 160, 135)'],
[0.5, 'rgb(40, 174, 127)'],
[0.6, 'rgb(59, 186, 117)'],
[0.7, 'rgb(85, 198, 102)'],
[0.8, 'rgb(116, 208, 84)'],
[0.9, 'rgb(151, 216, 62)'],
[1.0, 'rgb(189, 222, 38)']]
Explanation: The colorscale for the tubular surface:
End of explanation
def create_tube(spine_points, normal, binormal,
epsilon=0.2, colorscale=my_colorscale, zmin=None, zmax=None):
#returns an instance of the Plotly Surface, representing a tube
u=np.linspace(0, 2*np.pi, 100)
x,y,z=[np.outer(spine_points[k,:], np.ones(u.shape))+
epsilon*(np.outer(normal[k, :], np.cos(u))+np.outer(binormal[k,:], np.sin(u)))
for k in range(3)]
if zmin is not None and zmax is not None:
return Surface(x=x, y=y, z=z, zmin=zmin, zmax=zmax,
colorscale=colorscale,
colorbar=dict(thickness=25, lenmode='fraction', len=0.75))
else:
return Surface(x=x, y=y, z=z,
colorscale=colorscale,
colorbar=dict(thickness=25, lenmode='fraction', len=0.75))
Explanation: Define a function that evaluates the tube parameterization, $r(s,u)=(x, y, z)$, at the meshgrid np.meshgrid(s_div, u):
End of explanation
tube=create_tube(curve, normal, binormal, epsilon=0.1)
data1=Data([tube])
layout1=set_layout(title='Tubular surface', aspect=(1,1,1.05))
fig1 = Figure(data=data1, layout=layout1)
py.sign_in('empet', '')
py.iplot(fig1, filename='tubular-cst-torsion')
Explanation: The keywords zmin, zmax are set when we connect at least two tubular surfaces. They define the color bounds for
the tubular structure.
End of explanation
from numpy import sin, cos, pi
def spine_param( a, b, tm, tM, nr):
#spine parameterization c:[tm, tM]-->R^3
# a, b are parameters on which the spine parameterization depends
# nr is the number of points to be evaluated on spine
t=np.linspace(tm, tM, nr )# nr is the number of points to ve evaluated on spine
return t, a+cos(t), sin(t), b*sin(t)
def tangent( a, b, t):
# returns the unit tangent vectors along the spine curve
v=np.vstack((-sin(t), cos(t), b*cos(t)))
return v/np.vstack((np.linalg.norm(v, axis=0),)*3)
def acceleration( a, b, t):
# returns the unit acceleration vectors along the spine
v=np.array([ -cos(t), -sin(t), -b*sin(t)])
return v/np.vstack((np.linalg.norm(v, axis=0),)*3)
def curve_normals(a, b):
# computes and returns the point coordinates on spine, and the unit normal vectors
t, xc, yc, zc=spine_param(a,b, 0.0, 2*pi, 100)
tang=tangent(a,b, t)
binormal=np.cross(tang, acceleration(a, b, t), axis=0)
binormal=binormal/np.vstack((np.linalg.norm(binormal, axis=0),)*3)
normal=np.cross(binormal, tang, axis=0)
return np.vstack((xc, yc, zc)), normal, binormal
epsilon=0.025 # the radius of each tube
zm=[]# list of min z-values on both tubes
zM=[]# list of max z-values on both tubes
spine1, normal1, binormal1=curve_normals(0.5, 0.2)
zm.append(min(spine1[2,:]))
zM.append(max(spine1[2,:]))
spine2, normal2, binormal2=curve_normals(-0.5, -0.2)
zm.append(min(spine2[2,:]))
zM.append(max(spine2[2,:]))
zmin=min(zm)
zmax=max(zM)
tube1=create_tube(spine1, normal1, binormal1, epsilon=epsilon, zmin=zmin, zmax=zmax)
tube2=create_tube(spine2, normal2, binormal2, epsilon=epsilon, zmin=zmin, zmax=zmax)
layout2=set_layout(title='Hopf link', aspect=(1, 0.75, 0.35))
data2=Data([tube1,tube2])
fig2 = Figure(data=data2, layout=layout2)
py.sign_in('empet', '')
py.iplot(fig2, filename='Hopf-link')
Explanation: Tubular surface with a spine curve of given parameterization
If a general biregular parameterization, $c(t)$, of the spine is given,
then we have to do some analytical computations by hand, in order to get the
directions $\dot{c}(t)$, $\ddot{c}(t)$, $\dot{c}(t)\times \ddot{c}(t)$, of the velocity (tangent), acceleration, and binormals along the curve.
Then we define Python functions, tangent, acceleration, curve_normals, that compute the unit vectors of these directions.
Finally the unit vector of the principal normal is computed as $n(t)=b(t)\times tg(t)$, where $b(t), tg(t)$ are the unit vectors of binormals and tangents.
The tube parameterization, $$r(t,u)=c(t)+\varepsilon(n(t)\cos(u)+b(t)\sin(u)), t\in[tm, tM], u\in[0,2\pi],$$
is evaluated at a meshgrid.
We illustrate a tubular structure, called Hopf link, defined by two tubes, having the spines parameterized by:
$$c(t)=(\pm a+\cos(t), \sin(t), \pm b\sin(t)), t\in[0, 2\pi]$$
The first spine corresponds to $a=0.5, b=0.2$, and the second one, to $a=-0.5, b=-0.2$.
End of explanation
from IPython.display import HTML
HTML('<iframe src=https://plot.ly/~empet/13930/comunicating-rings/ width=900 height=700></iframe>')
Explanation: If we take all combinations of signs for the parameters, a, b, we get an interesting configuration of tubes
communicating with each other:
End of explanation
def radius_deriv(t):
return 2+cos(2*t), -2*sin(2*t)
def create_canal(spine, normal, binormal, term,
colorscale=my_colorscale, zmin=None, zmax=None):
#returns an instance of the Plotly Surface, representing a canal surface
#term is the second term in the parameterization
u=np.linspace(0, 2*np.pi, 100)
x,y,z=[np.outer(spine[k,:]-term[k, :], np.ones(u.shape))+\
np.outer(normal[k, :], np.cos(u))+np.outer(binorm[k,:], np.sin(u)) for k in range(3)]
if zmin is not None and zmax is not None:
return Surface(x=x, y=y, z=z, zmin=zmin, zmax=zmax,
colorscale=colorscale,
colorbar=dict(thickness=25, lenmode='fraction', len=0.75))
else:
return Surface(x=x, y=y, z=z,
colorscale=colorscale,
colorbar=dict(thickness=25, lenmode='fraction', len=0.75))
t=np.linspace(0, 3*pi/2, 50)
xc, yc, zc= 10*cos(t), 10*sin(t), np.zeros(t.shape)
spine=np.vstack((xc,yc, zc))
rt,rdt=radius_deriv(t)# rt is the variable radius r(t), and rdt its derivative
tang=np.vstack((-10*sin(t), 10*cos(t), np.zeros(t.shape))) #c'(t)
cdot_norm=np.vstack((np.linalg.norm(tang, axis=0),)*3)# ||c'(t)||
factor=rt*rdt/cdot_norm**2
term=factor*tang#term.shape=(3, t.shape[0])# second term in canal surface parameterization
R=rt*np.sqrt(cdot_norm**2-rdt**2)/cdot_norm # R.shape (3, t.shape[0]) is the scalar factor in the third term
tangu= (tang/cdot_norm) #unit tangent vector
acceler=np.vstack((-10*cos(t), -10*sin(t), np.zeros(t.shape)))
acceler= acceler/np.vstack((np.linalg.norm(acceler, axis=0),)*3)#unit acceleration vector
binorm=np.cross(tangu, acceler, axis=0)
binorm=binorm/np.vstack((np.linalg.norm(binorm, axis=0),)*3)#unit binormal vector
normal=np.cross(binorm, tangu, axis=0)# unit normal vector
binorm=R*binorm
normal=R*normal
canal=create_canal(spine, normal, binorm, term, colorscale=my_colorscale)
layout3=set_layout(title='Canal surface', axis_type=axis, aspect=(1, 1, 0.25))
data3=Data([canal])
fig3 = Figure(data=data3, layout=layout3)
py.sign_in('empet', '')
py.iplot(fig3, filename='Canal-surf')
Explanation: Canal (Channels) surfaces
Tubular surfaces are particular surfaces in the class of canal surfaces. A canal surface
is again defined by a biregular spine, $c(t)$, but the circles
orthogonal to the spine have variable radii, given by a $C^1$-function, $r(t)$, with $|r'(t)|<||\dot{c}(t)||$.
The parameterization of a canal surface is:
$$r(t,u)=c(t)-\displaystyle\frac{r(t)r'(t)}{||\dot{c}(t)||^2}\dot{c}(t)+
\displaystyle\frac{r(t)\sqrt{||\dot{c}(t) ||^2-r'(t)^2}}{||\dot{c}(t) ||}(n(t)\cos(u)+b(t)\sin(u))$$
We plot the canal surface of spine, $c(t)=(10\cos(t), 10\sin(t), 0)$, and radius function
$r(t)=2+\cos(2t)$, $t\in[0,2\pi]$.
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Finally, we stress that in order to get a tubular looking surface, we have to set the aspect ratio
of the plot that respects the real ratios between axes lengths. Otherwise the tube is deformed.
End of explanation |
13,266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 1
Imports
Step1: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0
Step2: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
Step3: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
Explanation: Numpy Exercise 1
Imports
End of explanation
def checkerboard(size):
a = np.zeros((size,size), dtype = np.float)
b = 2
if size % 2 != 0:
for element in np.nditer(a, op_flags=['readwrite']):
if size % 2 != 0:
if b % 2 == 0:
element[...] = element + 1.0
b += 1
else:
b += 1
return a
else:
c = [1,0]
d = [0,1]
e = []
f = size / 2
g = list(range(1, size + 1))
for item in g:
if item % 2 != 0:
e.append(c * f)
else:
e.append(d * f)
h = np.array(e, dtype = np.float)
return h
print checkerboard(4)
Explanation: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:
Your function should work for both odd and even size.
The 0,0 element should be 1.0.
The dtype should be float.
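For comparison, the same checkerboard can be produced much more compactly with NumPy index-arithmetic. This is only an alternative sketch, not the notebook's own solution:

import numpy as np

def checkerboard_compact(size):
    idx = np.arange(size)
    # (i + j) even -> 1.0, so element (0, 0) is 1.0 as required.
    return ((idx[:, None] + idx[None, :]) % 2 == 0).astype(float)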
End of explanation
va.set_block_size(10)
va.vizarray(checkerboard(20))
assert True
Explanation: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
End of explanation
va.set_block_size(5)
va.vizarray(checkerboard(27))
assert True
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
Explanation: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
End of explanation |
13,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #2
This notebook is due on Friday, October 7th, 2016 at 11
Step1: Question
Step2: Question 1
Step3: Question 2
Step4: Question 3
Step5: Section 3
Step6: Part 2
Step8: Section 4 | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
'''
count_times = the time since the start of data-taking when the data was
taken (in seconds)
count_rates = the number of counts since the last time data was taken, at
the time in count_times
'''
count_times = np.loadtxt("count_rates.txt", dtype=int)[0]
count_rates = np.loadtxt("count_rates.txt", dtype=int)[1]
# Put your code here - add additional cells if necessary
# number of bins to smooth over
Nsmooth = 100
'''
create arrays for smoothed counts. Given how we're going to
subsample, we want to make the arrays shorter by a factor of
2*Nsmooth. Use numpy's slicing to get times that start after
t=0 and end before the end of the array. Then just make
smooth_counts the same size, and zero it out. They should be
the size of count_rates.size-2*Nsmooth
'''
smooth_times = count_times[Nsmooth:-Nsmooth]
smooth_counts = np.zeros_like(smooth_times,dtype='float64')
'''
loop over the count_rates arrays, but starting Nsmooth into
count_rates and ending Nsmooth prior to the end. Then, go from
i-Nsmooth to i+Nsmooth and sum those up. After the loop, we're
going to divide by 2*Nsmooth+1 in order to normalize it (because
each value of the smoothed array has 2*Nsmooth+1 cells in it).
'''
for i in range(Nsmooth,count_rates.size-Nsmooth):
for j in range(i-Nsmooth,i+Nsmooth+1): # the +1 is because it'll then end at i+Nsmooth
smooth_counts[i-Nsmooth] += count_rates[j]
smooth_counts /= (2.0*Nsmooth+1.0)
# plot noisy counts, smoothed counts, each with their own line types
plt.plot(count_times,count_rates,'b.',smooth_times,smooth_counts,'r-',linewidth=5)
# some guesses for the various parameters in the model
# (which are basically lifted directly from the data)
N0 = 2000.0 # counts per 5-second bin
half_life = 1712.0 # half life (in seconds)
Nbackground = 292.0 # background counts per 5-second bin
# calculate estimated count rate using the parameters listed above
count_rate_estimate = N0 * 2.0**(-count_times/half_life) + Nbackground
plt.plot(count_times,count_rate_estimate,'c--',linewidth=5)
plt.xlabel('time (seconds)')
plt.ylabel('counts per bin')
plt.title("Counts per 5-second bin")
Explanation: Homework #2
This notebook is due on Friday, October 7th, 2016 at 11:59 p.m.. Please make sure to get started early, and come by the instructors' office hours if you have any questions. Office hours and locations can be found in the course syllabus. IMPORTANT: While it's fine if you talk to other people in class about this homework - and in fact we encourage it! - you are responsible for creating the solutions for this homework on your own, and each student must submit their own homework assignment.
Some links that you may find helpful:
Markdown tutorial
The Pandas website
The Pandas tutorial
10-minute Panda Tutorial
All CMSE 201 YouTube videos
Your name
Put your name here!
Section 1: Radioactivity wrapup
In this part of the homework, we're going to finish what we started regarding modeling the count rate of radioactive data that you worked wtih in class, to try to estimate the strength of the radioactive background that was seen in the radioactive count rates.
In class, we discussed that for radioactive material with an initial amount $N_0$ and a half life $t_{1/2}$, the amount left after time t is $N(t) = N_0 2^{-t/t_{1/2}}$. The expected radioactive decay rate is then:
$\mathrm{CR}(t) = - \frac{dN}{dt} = \frac{N_0 \ln 2}{t_{1/2}}2^{-t/t_{1/2}}$
However, the data doesn't agree well with this - there's something contaminating our count rate data that's causing a radioactive "background" that is approximately constant with time. A better estimate of the count rates is more like:
$\mathrm{CR}(t) = \mathrm{CR}{\mathrm{S}}(t) + \mathrm{CR}{\mathrm{BG}}$
where $\mathrm{CR}{\mathrm{S}}(t)$ is the count rate from the sample, which has the shape expected above, and $\mathrm{CR}{\mathrm{BG}}$ is the count rate from the radioactive background.
We're now going to try to figure out the values that go into the expressions for $\mathrm{CR}{\mathrm{S}}(t)$ and $\mathrm{CR}{\mathrm{BG}}$ by using the data. What you're going to do is:
"Smooth" the decay rate data over N adjacent samples in time to get rid of some of the noise. Try writing a piece of code to loop over the array of data and average the sample you're interested in along with the N samples on either side (i.e., from element i-N to i+N, for an arbitrary number of cells). Store this smoothed data in a new array (perhaps using np.zeros_like() to create the new array?).
Plot your smoothed data on top of the noisy data to ensure that it agrees.
Create a new array with the analytic equation from above that describes for the decay rate as a function of time, taking into account what you're seeing in point (2), and try to find the values of the various constants in the equation. Plot the new array on top of the raw data and smoothed values.
Note that code to load the file count_rates.txt has been added below, and puts the data into two numpy arrays as it did in the in-class assignment.
End of explanation
import pandas
erie = pandas.read_csv('erie1918Ann.csv', skiprows=2)
miHuron = pandas.read_csv('miHuron1918Ann.csv', skiprows=2)
ontario = pandas.read_csv('ontario1918Ann.csv', skiprows=2)
superior = pandas.read_csv('superior1918Ann.csv', skiprows=2)
Explanation: Question: What are the constants that you came up with for the count rate equation? Do these values make sense given the experimental data? Why or why not?
The values that the students should get are approximately:
N0 = 2000 counts per bin (each bin is 5 seconds long)
half life = 1712 seconds
background count rate = 292 counts per bin (each bin 5 seconds)
Note that I do not expect students to get an answer that's that close - as long as the curve that is produced (dashed line above) gets reasonably close to the smoothed value, it's fine. Also, note that students may show the entire plot in counts/second instead of counts/bin - both are fine.
The plots make sense given the experimental data - the half life should be somewhere around 2000 seconds from looking at the curve, the count rates per bin for the noise should be somewhere between 200-400, and the counts at t=0 are somewhere around 2000-2200 counts/bin, after you subtract the noise from the bin.
Section 2: Great Lakes water levels
The water level in the Great Lakes fluctuates over the course of a year, and also fluctuates in many-year cycles. About two and a half years ago (in Feb. 2014), there was an article in Scientific American describing the historically low levels of the Great Lakes - in particular, that of Lake Michigan and Lake Huron, which together make up the largest body of fresh water in the world. In this part of the homework assignment, we're going to look at water height data from the Great Lakes Environmental Research Laboratory - in particular, data from 1918 to the present day. In the cell below this, we're using Pandas to load four CSV ("Comma-Separated Value") files with data from Lake Eric, Lakes Michigan and Huron combined, Lake Ontario, and Lake Superior into data frames. Each dataset contains the annual average water level for every year from 1918 to the present. Use these datasets to answer the questions posed below.
End of explanation
# Put your code here!
# expect to see students taking the mean value and printing it out!
erie_mean = erie['AnnAvg'].mean()
miHuron_mean = miHuron['AnnAvg'].mean()
ontario_mean = ontario['AnnAvg'].mean()
superior_mean = superior['AnnAvg'].mean()
print('Erie (meters): ', erie_mean)
print('Michigan/Huron (meters): ', miHuron_mean)
print('Ontario (meters): ', ontario_mean)
print('Superior (meters): ', superior_mean)
Explanation: Question 1: Calculate the mean water levels of all of the Great Lakes over the past century (treating Lakes Michigan and Huron as a single body of water). Are all of the values similar? Why does your answer make sense? (Hint: where is Niagara Falls, and what direction does the water flow?)
Answer: Three of the values (Erie, Michigan/Huron, Superior) are all pretty similar (to within 9 or 10 meters), but Lake Ontario is about 100 meters lower. The fact that Erie/Michigan/Superior are all of similar mean height makes sense because they're connected by waterways, and the water should level out. It makes sense that Ontario is lower, because Niagara Falls flows from Lake Erie into Lake Ontario, and Niagara Falls are really high. So, it makes sense that Lake Ontario is much lower than Lake Erie.
End of explanation
# Put your code here
# make a plot of the lakes' heights minus the mean values.
lake_erie, = plt.plot(erie['year'],erie['AnnAvg']-erie['AnnAvg'].mean(),'r-')
lake_mi, = plt.plot(miHuron['year'],miHuron['AnnAvg']-miHuron['AnnAvg'].mean(),'g-')
lake_ont, = plt.plot(ontario['year'],ontario['AnnAvg']-ontario['AnnAvg'].mean(),'b-')
lake_sup, = plt.plot(superior['year'],superior['AnnAvg']-superior['AnnAvg'].mean(),'k-')
plt.xlabel('year')
plt.ylabel('value minus historic mean')
plt.title('variation around historic mean for all lakes')
plt.legend( (lake_erie,lake_mi,lake_ont,lake_sup), ('Erie','MI/Huron','Ontario','Superior'),loc='upper left')
Explanation: Question 2: Make a plot where you show the fluctuations of each lake around the mean value from the last century (i.e., subtracting the mean value of the lake's water level from the data of water level over time). In general, do you see similar patterns of fluctuations in all of the lakes? What might this suggest to you about the source of the fluctuations?
Hint: you may want to use pyplot instead of the built-in Pandas plotting functions!
Answer: We do see similar patterns overall, though some of the lakes (Superior, for example) are more stable. This suggests that there's some sort of regional thing (weather, for example) that's causing fluctuations in all of the lakes.
End of explanation
# Put your code here
# basically the plot from above, but with different x limits.
lake_erie, = plt.plot(erie['year'],erie['AnnAvg']-erie['AnnAvg'].mean(),'r-')
lake_mi, = plt.plot(miHuron['year'],miHuron['AnnAvg']-miHuron['AnnAvg'].mean(),'g-')
lake_ont, = plt.plot(ontario['year'],ontario['AnnAvg']-ontario['AnnAvg'].mean(),'b-')
lake_sup, = plt.plot(superior['year'],superior['AnnAvg']-superior['AnnAvg'].mean(),'k-')
plt.xlabel('year')
plt.ylabel('value minus historic mean')
plt.title('variation around historic mean for all lakes')
plt.legend( (lake_erie,lake_mi,lake_ont,lake_sup), ('Erie','MI/Huron','Ontario','Superior'),loc='upper left')
plt.xlim(1996,2017)
Explanation: Question 3: Finally, let's look at the original issue - the water level of the Lake Michigan+Lake Huron system and how it changes over time. When you examine just the Lake Michigan data, zooming in on only the last 20 years of data, does the decrease in water level continue, does it reverse itself, or does it stay the same? In other words, was the low level reported in 2014 something we should continue to be worried about, or was it a fluke?
Answer: The lake Michigan/Huron system data has reversed itself in the last couple of years, and has returned to historically reasonable values. It's just a fluke.
End of explanation
# put your code and plots here!
# concentration (Q) is in units of micrograms/milliliter
# 2 tables * (325 mg/tablet) / 3000 mL * 1000 micrograms/mg
Q_start = 2.0 * 325./3000.0*1000.
t_half = 3.2
K = 0.693/t_half
time=0
t_end = 12.0
dt = 0.01
Q = []
t = []
Q_old = Q_start
while time <= t_end:
Q_new = Q_old - K*Q_old*dt
Q.append(Q_new)
t.append(time)
Q_old = Q_new
time += dt
plt.plot(t,Q,'r-')
plt.plot([0,12],[150,150],'b-')
plt.plot([0,12],[300,300],'b-')
plt.ylim(0,350)
plt.xlabel('time [hours]')
plt.ylabel('concentration [micrograms/mL]')
plt.title('concentration of aspirin over time')
Explanation: Section 3: Modeling drug doses in the human body
Modeling the behavior of drugs in the human body is very important in medicine. One frequently-used model is called the "Single-Compartment Drug Model", which takes the complex human body and treats it as one homogeneous unit, where drug distribution is instantaneous, the concentration of the drug in the blood (i.e., the amount of drug per volume of blood) is proportional to the drug dosage, and the rate of elimination of the drug is proportional to the amount of drug in the system. Using this model allows the prediction of the range of therapeutic doses where the drug will be effective.
We'll first model the concentration in the body of aspirin, which is commonly used to treat headaches and reduce fever. For adults, it is typical to take 1 or 2 325 mg tablets every four hours, up to a maximum of 12 tablets/day. This dose is assumed to be dissolved immediately into the blood plasma. (An average adult human has about 3 liters of blood plasma.) The concentration of drugs in the blood (represented with the symbol Q) is typically measured in $\mu$g/ml, where 1000 $\mu$g (micrograms) = 1 mg (milligram). For aspirin, the does that is effective for relieving headaches is typically between 150-300 $\mu$g/ml, and the half-life for removal of the drug from the system is about 3.2 hours (more on that later).
The rate of removal of aspirin from the body (elimination) is proportional to the amount present in the system:
$\frac{dQ}{dt} = -K Q$
Where Q is the concentration, and K is a constant of proportionality that is related to the half-life of removal of drug from the system: $K = 0.693 / t_{1/2}$.
Part 1: We're now going to make a simple model of the amount of aspirin in the human body. Assuming that an adult human has a headache and takes 2 325 mg aspirin tablets. If the drug immediately enters their system, for how long will their headache be relieved? Show a plot, with an appropriate title, x-axis label, and y-axis label, that shows the concentration of aspirin in the patient's blood over a 12-hour time span. In your model, make sure to resolve the time evolution well - make sure that your individual time steps are only a few minutes!
Put your answer immediately below, and the code you wrote (and plots you generated) to solve this immediately below that.
Answer: Their headache will be relieved for about an hour and a half, or perhaps an hour and 45 minutes (precise answer is 1.68 hours). You can tell because that's where the aspirin concentration dips below 150 micrograms/mL.
End of explanation
# put your code and plots here!
# concentration (Q) is in units of micrograms/milliliter
# 1 tablet * (100 mg/tablet) / 3000 mL * 1000 micrograms/mg
Q_dosage = 1.0 * 100./3000.0*1000.
t_half = 22.0
absorption_fraction = 0.12
K = 0.693/t_half
time=0
t_end = 10.0*24.0 # 10 days
dt = 0.01
Q = []
t = []
Q_old = absorption_fraction*Q_dosage
t_dosage = 0.0
dt_dosage = 8.0
while time <= t_end:
if time - t_dosage >= dt_dosage:
Q_old += absorption_fraction*Q_dosage
t_dosage = time
Q_new = Q_old - K*Q_old*dt
Q.append(Q_new)
t.append(time)
Q_old = Q_new
time += dt
plt.plot(t,Q,'r-')
plt.plot([0,250],[10,10],'b-')
plt.plot([0,250],[20,20],'b-')
plt.ylim(0,25)
#plt.xlim(0,50)
plt.xlabel('time [hours]')
plt.ylabel('concentration [micrograms/mL]')
plt.title('concentration of Dilantin over time')
Explanation: Part 2: We're now going to model the concentration of a drug that needs to be repeatedly administered - the drug Dilantin, which is used to treat epilepsy. The effective concentration of the drug in humans is 10-20 $\mu$g/ml, the half-life of Dilantin is approximately 22 hours, and the drug comes in 100 mg tablets which are effectively instantaneously released into your bloodstream. For this particular drug, only about 12% of the drug in each dose is actually available for absorption in the bloodstream, meaning that the effective amount added to the blood is 12 mg per dose.
Assuming that the drug is administered every 8 hours to a patient that starts out having none of the drug in their body, make a plot of the drug concentration over a ten day period and use it to answer the following two questions:
How long does it take to reach an effective concentration of the drug in the patient's blood?
By roughly when does the drug concentration reach a steady state in the patient's blood? (In other words, after how long is the concentration neither rising nor falling on average?)
Answer:
Assuming 100 mg per pill and 12% absorption (the corrected version of the homework): (1) we first reach the therapeutic concentration at about 24 hours, finally are completely above (i.e., never dip below) the therapeutic concentration around 40 hours. (2) We reach steady-state somewhere around 100-120 hours (~somewhat between 4-5 days).
Assuming 100 mg per pill and 100% absorption (the original version of the homework): (1) we're always above the effective concentration. (2) We reach steady-state somewhere around 120 hours (~somewhere around 5 days).
End of explanation
from IPython.display import HTML
HTML(
<iframe
src="https://goo.gl/forms/Px7wk9DcldfyCqMt2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
)
Explanation: Section 4: Feedback (required!)
End of explanation |
13,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Partition
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right
Step2: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Partition in multiple ways to split the PCollection into multiple PCollections.
Partition accepts a function that receives the number of partitions,
and returns the index of the desired partition for the element.
The number of partitions passed must be a positive integer,
and it must return an integer in the range 0 to num_partitions-1.
Example 1
Step3: <table align="left" style="margin-right
Step4: <table align="left" style="margin-right | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/documentation/transforms/python/elementwise/partition-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/partition"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
!pip install --quiet -U apache-beam
Explanation: Partition
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.Partition"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a>
</td>
</table>
<br/><br/><br/>
Separates elements in a collection into multiple output
collections. The partitioning function contains the logic that determines how
to separate the elements of the input collection into each resulting
partition output collection.
The number of partitions must be determined at graph construction time.
You cannot determine the number of partitions in mid-pipeline
See more information in the Beam Programming Guide.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
import apache_beam as beam
durations = ['annual', 'biennial', 'perennial']
def by_duration(plant, num_partitions):
return durations.index(plant['duration'])
with beam.Pipeline() as pipeline:
annuals, biennials, perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Partition' >> beam.Partition(by_duration, len(durations))
)
annuals | 'Annuals' >> beam.Map(lambda x: print('annual: {}'.format(x)))
biennials | 'Biennials' >> beam.Map(
lambda x: print('biennial: {}'.format(x)))
perennials | 'Perennials' >> beam.Map(
lambda x: print('perennial: {}'.format(x)))
Explanation: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Partition in multiple ways to split the PCollection into multiple PCollections.
Partition accepts a function that receives the number of partitions,
and returns the index of the desired partition for the element.
The number of partitions passed must be a positive integer,
and it must return an integer in the range 0 to num_partitions-1.
Example 1: Partition with a function
In the following example, we have a known list of durations.
We partition the PCollection into one PCollection for every duration type.
End of explanation
import apache_beam as beam
durations = ['annual', 'biennial', 'perennial']
with beam.Pipeline() as pipeline:
annuals, biennials, perennials = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Partition' >> beam.Partition(
lambda plant, num_partitions: durations.index(plant['duration']),
len(durations),
)
)
annuals | 'Annuals' >> beam.Map(lambda x: print('annual: {}'.format(x)))
biennials | 'Biennials' >> beam.Map(
lambda x: print('biennial: {}'.format(x)))
perennials | 'Perennials' >> beam.Map(
lambda x: print('perennial: {}'.format(x)))
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/partition.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 2: Partition with a lambda function
We can also use lambda functions to simplify Example 1.
End of explanation
import apache_beam as beam
import json
def split_dataset(plant, num_partitions, ratio):
assert num_partitions == len(ratio)
bucket = sum(map(ord, json.dumps(plant))) % sum(ratio)
total = 0
for i, part in enumerate(ratio):
total += part
if bucket < total:
return i
return len(ratio) - 1
with beam.Pipeline() as pipeline:
train_dataset, test_dataset = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 'perennial'},
{'icon': '🥕', 'name': 'Carrot', 'duration': 'biennial'},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 'perennial'},
{'icon': '🍅', 'name': 'Tomato', 'duration': 'annual'},
{'icon': '🥔', 'name': 'Potato', 'duration': 'perennial'},
])
| 'Partition' >> beam.Partition(split_dataset, 2, ratio=[8, 2])
)
train_dataset | 'Train' >> beam.Map(lambda x: print('train: {}'.format(x)))
test_dataset | 'Test' >> beam.Map(lambda x: print('test: {}'.format(x)))
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/partition.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 3: Partition with multiple arguments
You can pass functions with multiple arguments to Partition.
They are passed as additional positional arguments or keyword arguments to the function.
In machine learning, it is a common task to split data into
training and a testing datasets.
Typically, 80% of the data is used for training a model and 20% is used for testing.
In this example, we split a PCollection dataset into training and testing datasets.
We define split_dataset, which takes the plant element, num_partitions,
and an additional argument ratio.
The ratio is a list of numbers which represents the ratio of how many items will go into each partition.
num_partitions is used by Partitions as a positional argument,
while plant and ratio are passed to split_dataset.
If we want an 80%/20% split, we can specify a ratio of [8, 2], which means that for every 10 elements,
8 go into the first partition and 2 go into the second.
In order to determine which partition to send each element, we have different buckets.
For our case [8, 2] has 10 buckets,
where the first 8 buckets represent the first partition and the last 2 buckets represent the second partition.
First, we check that the ratio list's length corresponds to the num_partitions we pass.
We then get a bucket index for each element, in the range from 0 to 9 (num_buckets-1).
We could do hash(element) % len(ratio), but instead we sum all the ASCII characters of the
JSON representation to make it deterministic.
Finally, we loop through all the elements in the ratio and have a running total to
identify the partition index to which that bucket corresponds.
This split_dataset function is generic enough to support any number of partitions by any ratio.
You might want to adapt the bucket assignment to use a more appropriate or randomized hash for your dataset.
End of explanation |
13,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure 1. Sketch of a cell (top left) with the horizontal (red) and vertical (green) velocity nodes and the cell-centered node (blue). Definition of the normal vector to "surface" (segment) $S_{i+\frac{1}{2},j}$ and $S_{i,j+\frac{1}{2}}$ (top right). Sketch of uniform grid (bottom).
<h1>Derivation of 1D Transport Equation</h1>
<h2>1D Transport Without Diffusion</h2>
Consider a small control surface (cell) of dimensions $\Delta x\times\Delta y$ within which, we know the velocities on the surfaces $u_{i\pm\frac{1}{2},j}$ and $v_{i,j\pm\frac{1}{2}}$ and a quantity $\phi_{i,j}$ at the center of the cell. This quantity may be temperature, or the concentration of chemical specie. The variation in time of $\phi$ within the cell is equal to the amount of $\phi$ that is flowing in and out of the cell through the boundaries of cell. The velocity vector is defined as
$$
\vec{u}=u\vec{e}_x+v\vec{e}_y
$$
The fluxes of $\phi$ across the right-hand-side and left-hand-side vertical boundaries are, respectively
Step1: The first two lines deal with the ability to show your graphs (generated via matplotlib) within this notebook, the remaining two lines import matplotlib's sublibrary pyplot as <FONT FACE="courier" style="color
Step2: <h3 style="color
Step3: A slower but easier to understand version of this function is shown below. The tag slow is explained shortly after.
Step4: <h3>Step 3
Step5: The choice for the interpolation is obvious
Step6: <h3>Step 4
Step7: Although the plot suggests that the interpolation works, a visual proof can be deceptive. It is best to calculate the error between the exact and interpolated solution. Here we use an $l^2$-norm
Step8: For reasons that will become clearer later, we want to consider other interpolation schemes
Step9: <h3 style="color
Step10: <h3>Step 5
Step11: The discretization of the time derivative is crude. A better discretization is the 2<sup>nd</sup>-order Runge-Kutta | Python Code:
%matplotlib inline
# plots graphs within the notebook
%config InlineBackend.figure_format='svg' # not sure what this does, may be default images to svg format
import matplotlib.pyplot as plt #calls the plotting library hereafter referred as to plt
import numpy as np
Explanation: Figure 1. Sketch of a cell (top left) with the horizontal (red) and vertical (green) velocity nodes and the cell-centered node (blue). Definition of the normal vector to "surface" (segment) $S_{i+\frac{1}{2},j}$ and $S_{i,j+\frac{1}{2}}$ (top right). Sketch of uniform grid (bottom).
<h1>Derivation of 1D Transport Equation</h1>
<h2>1D Transport Without Diffusion</h2>
Consider a small control surface (cell) of dimensions $\Delta x\times\Delta y$ within which, we know the velocities on the surfaces $u_{i\pm\frac{1}{2},j}$ and $v_{i,j\pm\frac{1}{2}}$ and a quantity $\phi_{i,j}$ at the center of the cell. This quantity may be temperature, or the concentration of chemical specie. The variation in time of $\phi$ within the cell is equal to the amount of $\phi$ that is flowing in and out of the cell through the boundaries of cell. The velocity vector is defined as
$$
\vec{u}=u\vec{e}_x+v\vec{e}_y
$$
The fluxes of $\phi$ across the right-hand-side and left-hand-side vertical boundaries are, respectively:
$$
\int_{S_{i+1/2,j}}\phi(\vec{u}{i+\frac{1}{2},j}\cdot\vec{n}{i+\frac{1}{2},j})dy\text{ and }\int_{S_{i-1/2,j}}\phi(\vec{u}{i-\frac{1}{2},j}\cdot\vec{n}{i+\frac{1}{2},j})dy
$$
In the configuration depicted in Figure 1, the mass or heat variation is equal to the flux of $\phi$ entering the cell minus the flux exiting the cell, or:
$$
-\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j}\Delta y + \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}\Delta y \text{, when $\Delta y\rightarrow 0$}
$$
Assuming that there is no vertical velocity ($v=0$), this sum is equal to the variation of $\phi$ within the cell,
$$
\frac{\partial}{\partial t}\iint_{V_{i,j}}\phi dxdy\approx\frac{\partial \phi_{i,j}}{\partial t}\Delta x\Delta y \text{, when $\Delta x\rightarrow 0$ and $\Delta y\rightarrow 0$}
$$
yielding
$$
\frac{\partial \phi_{i,j}}{\partial t}\Delta x\Delta y=-\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j}\Delta y + \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}\Delta y\;,
$$
reducing to
$$
\frac{\partial \phi_{i,j}}{\partial t}=-\frac{\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j} - \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}}{\Delta x}\;.
$$
In the limit of $\Delta x\rightarrow 0$, we obtain the conservative form of the pure advection equation:
<p class='alert alert-danger'>
$$
\frac{\partial \phi}{\partial t}+\frac{\partial u\phi}{\partial x}=0
$$
</p>
<h2>1.2 Coding the Pure Advection Equation</h2>
The following takes you through the steps to solve numerically the pure advection equation with python. The boundary conditions are (all variables are non-dimensional):
<ol>
<li> Length of the domain: $0\leq x\leq L$ and $L=8\pi$ </li>
<li> Constant velocity $u_0=1$
<li> Inlet $x=0$ and outlet $x=L$: zero-flux variation (in space)</li>
<li> Initial condition:
$$\phi(x,t=0)=\begin{cases}
1+\cos\left(x-\frac{L}{2}\right)&,\text{ for }\left\vert x-\frac{L}{2}\right\vert\leq\pi\\
0&,\text{ for }\left\vert x-\frac{L}{2}\right\vert>\pi
\end{cases}
$$
</li>
</ol>
Here you will <b>discretize</b> your domain in $N$ small control volumes, such that the size of each control volume is
<p class='alert alert-danger'>
$$
\Delta x = \frac{L}{N}
$$
</p>
You will simulate the system defined so far of a time $T$, to be decided, discretized by small time-steps
<p class='alert alert-danger'>
$$
\Delta t = \frac{T}{N_t}
$$
</p>
We adopt the following index convention:
<ul>
<li> Each cell is labeled by a unique integer $i$ with $i\in[0,N-1]$. This is a python convention that vector and matrices start with index 0, instead of 1 for matlab.</li>
<li> A variable defined at the center of cell $i$ is noted with the subscript $i$: $\phi_i$.</li>
<li> A variable defined at the surface of cell $i$ is noted with the subscript $i\pm1/2$: $\phi_{i\pm 1/2}$</li>
<li> The solution $\phi(x_i,t_n)$, where
$$
x_i = i\Delta x\text{ with $x\in[0,N-1]$, and }t_n=n\Delta t\text{ with $n\in[0,N_t]$,}
$$</li>
is noted $\phi_i^n$.
</ul>
At first we will try to solve the advection equation with the following discretization:
$$
\frac{\phi_i^{n+1}-\phi_i^n}{\Delta t}=-\frac{\phi_{i+\frac{1}{2}}u_{i+\frac{1}{2}} - \phi_{i-\frac{1}{2}}u_{i-\frac{1}{2}}}{\Delta x}
$$
or
<p class='alert alert-info'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(\phi^n_{i+\frac{1}{2}}u_{i+\frac{1}{2}} - \phi^n_{i-\frac{1}{2}}u_{i-\frac{1}{2}}\right)
$$
</p>
The velocity $u$ is constant, therefore defined anywhere in the system (cell center or cell surfaces), however $\phi$ is defined only at the cell center, requiring an interpolation at the cell surface $i\pm 1/2$. For now you will consider a mid-point interpolation:
<p class='alert alert-info'>
$$
\phi^n_{i+\frac{1}{2}} = \frac{\phi^n_{i+1}+\phi^n_i}{2}
$$
</p>
Lastly, our governing equation can be recast with the flux of $\phi$ across the surface $u$:
<p class='alert alert-info'>
$$
F^n_{i\pm\frac{1}{2}}=\phi^n_{i\pm\frac{1}{2}}u_{i\pm\frac{1}{2}}=\frac{\phi^n_{i\pm 1}+\phi^n_i}{2}u_{i\pm\frac{1}{2}}
$$
</p>
yielding the equation you will attempt to solve:
<p class='alert alert-danger'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}\right)
$$
</p>
<h3> Step 1: Import libraries</h3>
Python has a huge collection of libraries contained functions to plot, build matrices, performed mathematical operations, etc. To avoid overloading the CPU and to allow you to choose the best library for your code, you need to first import the libraries you will need, here:
<ul>
<li> <FONT FACE="courier" style="color:blue">matplotlib </FONT>: <a href="http://matplotlib.org">http://matplotlib.org</a> for examples of plots you can make in python.</li>
<li><FONT FACE="courier" style="color:blue">numpy </FONT>: <a href="http://docs.scipy.org/doc/numpy/user/index.html">http://docs.scipy.org/doc/numpy/user/index.html</a> Library for operations on matrices and vectors.</li>
</ul>
Loading a libray in python is done by the command <FONT FACE="courier" style="color:blue">import</FONT>. The best practice is to take the habit to use
<FONT FACE="courier" style="color:blue">import [library] as [library_nickname]</FONT>
For example, the library <FONT FACE="courier" style="color:blue">numpy</FONT> contains vector and matrices operations such <FONT FACE="courier" style="color:blue">zeros</FONT>, which allocate memory for a vector or a matrix of specified dimensions and set all components of the vector and matrix to zero. If you import numpy as np,
<FONT FACE="courier" style="color:blue">import numpy as np</FONT>
the allocation of memory for matrix A of dimensions n and m becomes
<FONT FACE="courier" style="color:blue">A = np.zeros((n,m))</FONT>
The following is a standard initialization for the python codes you will write in this course:
End of explanation
L = 8*np.pi
N = 200
dx = L/N
u_0 = 1.
phi = np.zeros(N)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
Explanation: The first two lines deal with the ability to show your graphs (generated via matplotlib) within this notebook, the remaining two lines import matplotlib's sublibrary pyplot as <FONT FACE="courier" style="color:blue">plt</FONT> and numpy as <FONT FACE="courier" style="color:blue">np</FONT>.
<h3>Step 2: Initialization of variables and allocations of memory</h3>
The first real coding task is to define your variables, with the exception of the time-related variables (you will understand why). Note that in our equation, we can store $\phi^n$ into one variable providing that we create a flux variable $F$.
<h3 style="color:red"> Q1: Explain why.</h3>
End of explanation
def init_simulation(x_phi,N):
phi = np.zeros(N)
phi = 1.+np.cos(x_phi-L/2.)
xmask = np.where(np.abs(x_phi-L/2.) > np.pi)
phi[xmask] = 0.
return phi
phi = init_simulation(x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: <h3 style="color:red"> Q2: Search numpy function linspace and describe what <FONT FACE="courier">x_phi</FONT> and <FONT FACE="courier">x_u</FONT> define. Why are the dimensions different?</h3>
<h3>Step 3: Initialization</h3>
Now we define a function to initialize our variables. In python, <b>indentation matters!</b> A function is defined by the command <FONT FACE="courier">def</FONT> followed by the name of the function and the argument given to the function. The variables passed as argument in the function are local, meaning they may or may not have the same names as the variables in the core code. Any other variable used within the function needs to be defined in the function or before.
Note that python accepts implicit loops. Here <FONT FACE="courier">phi</FONT> and <FONT FACE="courier">x_phi</FONT> are two vectors of dimension $N$.
End of explanation
def init_simulation_slow(u,phi,x_phi,N):
for i in range(N):
if (np.abs(x_phi[i]-L/2.) > np.pi):
phi[i] = 0.
else:
phi[i] = 1.+np.cos(x_phi[i]-L/2.)
return phi
phi = init_simulation_slow(u,phi,x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: A slower but easier to understand version of this function is shown below. The tag slow is explained shortly after.
End of explanation
%%timeit
flux0 = np.zeros(N+1)
for i in range(1,N):
flux0[i] = 0.5*(phi[i-1]+phi[i])*u[i]
%%timeit
flux1 = np.zeros(N+1)
flux1[1:N] = 0.5*(phi[0:N-1]+phi[1:N])*u[1:N]
Explanation: <h3>Step 3: Code your interpolation/derivativation subroutine</h3>
Before we can simulate our system, we need to write and test our spatial interpolation and derivative procedure. Below we test the speed of two approaches, The first uses a for loop, whereas the second using the rules of indexing in python.
End of explanation
def compute_flux(a,v,N):
f=np.zeros(N+1)
f[1:N] = 0.5*(a[0:N-1]+a[1:N])*v[1:N]
f[0] = f[1]
f[N] = f[N-1]
return f
Explanation: The choice for the interpolation is obvious:
End of explanation
F_exact = np.zeros(N+1)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
plt.plot(x_u,F_exact,lw=2,label="exact")
plt.plot(x_u,F,'r--',lw=2,label="interpolated")
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.show()
Explanation: <h3>Step 4: Verification</h3>
The interpolation and derivation operations are critical components of the simulation that must be verified. Since the velocity is unity, $F_{i\pm1/2}=\phi_{i\pm1/2}$.
End of explanation
N = 200
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error = np.sqrt(np.sum(np.power(F-F_exact,2)))
errorx = np.power(F-F_exact,2)
plt.plot(x_u,errorx)
plt.show()
print('error norm L 2= %1.4e' %error)
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros(Nerror)
order = np.zeros(Nerror)
for ierror in range(Nerror):
N = Narray[ierror]
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error[ierror] = np.linalg.norm(F-F_exact)
#error[ierror] = np.sqrt(np.sum(np.power(F-F_exact,2)))
print('error norm L 2= %1.4e' %error[ierror])
order = 0.1*delta**(2)
plt.loglog(delta,error,lw=2,label='interpolate')
plt.loglog(delta,order,lw=2,label='$\propto\Delta x^2$')
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.show
Explanation: Although the plot suggests that the interpolation works, a visual proof can be deceptive. It is best to calculate the error between the exact and interpolated solution. Here we use an $l^2$-norm:
$$
\Vert F\Vert_2=\sqrt{\sum_{i=0}^{N}\left(F_i-F_i^e\right)^2}
$$
where $F_e$ is the exact solution for the flux.
End of explanation
Nscheme = 4
Scheme = np.array(['CS','US1','US2','US3'])
g_1 = np.array([1./2.,0.,0.,3./8.])
g_2 = np.array([0.,0.,1./2.,1./8.])
def compute_flux_advanced(a,v,N,num_scheme):
imask = np.where(Scheme == num_scheme)
g1 = g_1[imask]
g2 = g_2[imask]
f=np.zeros(N+1)
f[2:N] = ((1.-g1+g2)*a[1:N-1]+g1*a[2:N]-g2*a[0:N-2])*v[2:N]
if (num_scheme == 'US2') or (num_scheme == 'US3'):
f[1] = ((1.-g1)*a[0]+g1*a[1])*v[1]
f[0] = f[1]
f[N] = f[N-1]
return f
table = ListTable()
table.append(['Scheme', '$g_1$', '$g_2$'])
for i in range(4):
table.append([Scheme[i],g_1[i], g_2[i]])
table
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros((Nerror,Nscheme))
order = np.zeros((Nerror,Nscheme))
for ischeme in range(Nscheme):
num_scheme = Scheme[ischeme]
for ierror in range(Nerror):
N = Narray[ierror]
dx = L/N
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux_advanced(phi,u,N,num_scheme)
error[ierror,ischeme] = np.linalg.norm(F-F_exact)
#print('error norm L 2= %1.4e' %error[ierror,ischeme])
for ischeme in range(Nscheme):
plt.loglog(delta,error[:,ischeme],lw=2,label=Scheme[ischeme])
order = 2.0*(delta/delta[0])
plt.loglog(delta,order,'k:',lw=2,label='$\propto\Delta x$')
order = 0.1*(delta/delta[0])**(2)
plt.loglog(delta,order,'k-',lw=2,label='$\propto\Delta x^2$')
order = 0.1*(delta/delta[0])**(3)
plt.loglog(delta,order,'k--',lw=2,label='$\propto\Delta x^3$')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.xlim(L/300,L/9.)
plt.ylim(1e-5,1e2)
plt.show
Explanation: For reasons that will become clearer later, we want to consider other interpolation schemes:
$$
\phi_{i+\frac{1}{2}}=g_1\phi_{i+1}-g_2\phi_{i-1}+(1-g_1+g_2)\phi_i
$$
The scheme CS is the interpolation scheme we have used so far. Let us test them all, however we have to modify the interpolation function.
End of explanation
def flux_divergence(f,N,dx):
df = np.zeros(N)
df[0:N] = (f[1:N+1]-f[0:N])/dx
return df
Explanation: <h3 style="color:red">Q3: What do you observe? </h3>
<h3 style="color:red">Q4: Write a code to verify the divergence subroutine. </h3>
End of explanation
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'US2'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi -= dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: <h3>Step 5: Writing the simulation code</h3>
The first code solves:
<p class='alert alert-info'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}\right)
$$
</p>
for whatever scheme you choose. Play with the different schemes. Consider that the analytical solution is:
$$
\phi(x,t)=\begin{cases}
1+\cos\left[x-\left(\frac{L}{2}+u_0t\right)\right]&,\text{ for }\left\vert x-\left(\frac{L}{2}+u_0t\right)\right\vert\leq\pi\
0&,\text{ for }\left\vert x-\left(\frac{L}{2}+u_0t\right)\right\vert>\pi
\end{cases}
$$
End of explanation
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'CS'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phiold = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
rk_coef = np.array([0.5,1.])
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
phiold = phi
for irk in range(2):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi = phiold-rk_coef[irk]*dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: The discretization of the time derivative is crude. A better discretization is the 2<sup>nd</sup>-order Runge-Kutta:
<p class='alert alert-info'>
\begin{eqnarray}
\phi_i^{n+1/2}&=&\phi_i^n-\frac{\Delta t}{2}\frac{F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}}{\Delta x}\\
\phi_i^{n+1}&=&\phi_i^n-\Delta t\frac{F^{n+1/2}_{i+\frac{1}{2}} - F^{n+1/2}_{i-\frac{1}{2}}}{\Delta x}
\end{eqnarray}
</p>
End of explanation |
13,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification example
In this example we will be exploring an exercise of binary classification using logistic regression to estimate whether a room is occupied or not, based on physical parameters measured from it using sensors.
The implementation of the logistic regression using gradient descend algorithm shares many similarities with that of linear regression explained in last unit. In this unit we will rely on the implementation offered by sklearn.
1) Reading and inspecting the data
For this example we will use the Occupancy Detection Dataset
obtained here
Step1: We can visualize its contents
Step2: A priori we can see that there is a big difference between Light and CO2 in occupied vs non occupied status. We will see whether these parameters play an important role in the classification.
To continue, we split the data into the input and output parameters
Step3: As we saw in last unit, in order to improve convergence speed and accuracy we usually normalize the input parameters to zero mean and unit variance.
Step4: 2) Applying Logistic regression on the whole data (don't do it at home...)
We are now ready to instantiate the logistic regression from sklearn and to learn parameters $\Theta$ to optimally map input parameters to output class.
Step5: We can see how this system performs on the whole data by implementing ourselves the comparison or by using the internal function to score the results. We will see both give the same value (+- numerical resolution differences).
Step6: Is this a good score? we check what the percentage of 1/0 are in the output data
Step7: This means that by always returning "yes" we would get a 79% accuracy. Not bad to obtain approx 20% absolute above chance.
Now, which features are most important in the classification? we can see this by looking at the estimated values of the $\Theta$ parameters
Step8: As expected, Light and CO2 are the most relevant variables, and Temperature follows. Note that we can compare these values only because we normalized the input features, else the individual $\theta$ variables would not be comparable.
3) Train-test sets
Applying any machine learning to datasets as a whole is always a bad idea as we are looking into predicted results over data that has been used for the training. This has a big danger of overfitting and giving us the wrong information.
To solve this, let's do a proper train/test set split on our data. We will train on one set and test on the other. If we ever need to set metaparameters after training the model we will usually define a third set (usually called cross validation or development) which is independent from training and test.
In this case we will split 70% to 30%. You will leran more about train/test sets in future units.
Step9: We now need to predict class labels for the test set. We will also generate the class probabilities, just to take a look.
Step10: The model is assigning a true whenever the value in the second column (probability of "true") is > 0.5
Let us now see some evaluation metrics
Step11: 4) Cross-validation datasets
Not to cunfuse these with the subset we can use to set some metaparameters, we can use the cross-validation technique (also called jackknifing technique) when we do not have much data over all and the idea of loosing some for testing is not a good idea. We normally split the data into 10 parts and perform train/test on each 9/1 groups. | Python Code:
%matplotlib inline
import pandas as pd #used for reading/writing data
import numpy as np #numeric library library
from matplotlib import pyplot as plt #used for plotting
import sklearn #machine learning library
occupancyData = pd.read_csv('data/occupancy_data/datatraining.txt')
Explanation: Classification example
In this example we will be exploring an exercise of binary classification using logistic regression to estimate whether a room is occupied or not, based on physical parameters measured from it using sensors.
The implementation of the logistic regression using gradient descend algorithm shares many similarities with that of linear regression explained in last unit. In this unit we will rely on the implementation offered by sklearn.
1) Reading and inspecting the data
For this example we will use the Occupancy Detection Dataset
obtained here: https://archive.ics.uci.edu/ml/datasets/Occupancy+Detection+
The dataset is described here:
Accurate occupancy detection of an office room from light, temperature, humidity and CO2 measurements using statistical learning models. Luis M. Candanedo, Véronique Feldheim. Energy and Buildings. Volume 112, 15 January 2016, Pages 28-39
End of explanation
occupancyData.head(10)
occupancyData.describe()
occupancyData.groupby('Occupancy').mean()
occupancyData.groupby('Occupancy').std()
Explanation: We can visualize its contents:
we first look at the first 10 records. Then, we can compute some general statistics for all records and finally we can also look at mean and std for the 2 classes we will want to classify into (occupied and not occupied).
End of explanation
occupancyDataInput = occupancyData.drop(['Occupancy', 'date'], axis=1)
occupancyDataOutput = occupancyData['Occupancy']
Explanation: A priori we can see that there is a big difference between Light and CO2 in occupied vs non occupied status. We will see whether these parameters play an important role in the classification.
To continue, we split the data into the input and output parameters
End of explanation
occupancyDataInput = (occupancyDataInput - occupancyDataInput.mean())/ occupancyDataInput.std()
occupancyDataInput.describe()
Explanation: As we saw in last unit, in order to improve convergence speed and accuracy we usually normalize the input parameters to zero mean and unit variance.
End of explanation
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(occupancyDataInput, occupancyDataOutput)
Explanation: 2) Applying Logistic regression on the whole data (don't do it at home...)
We are now ready to instantiate the logistic regression from sklearn and to learn parameters $\Theta$ to optimally map input parameters to output class.
End of explanation
predictedOccupancy = lr.predict(occupancyDataInput)
comparison = np.logical_xor(occupancyDataOutput, predictedOccupancy)
(occupancyDataOutput.shape[0] - np.sum(comparison))/occupancyDataOutput.shape[0]
lr.score(occupancyDataInput, occupancyDataOutput)
Explanation: We can see how this system performs on the whole data by implementing ourselves the comparison or by using the internal function to score the results. We will see both give the same value (+- numerical resolution differences).
End of explanation
occupancyDataOutput.mean()
Explanation: Is this a good score? we check what the percentage of 1/0 are in the output data:
End of explanation
pd.DataFrame(list(zip(occupancyDataInput.columns, np.transpose(lr.coef_))))
Explanation: This means that by always returning "yes" we would get a 79% accuracy. Not bad to obtain approx 20% absolute above chance.
Now, which features are most important in the classification? we can see this by looking at the estimated values of the $\Theta$ parameters
End of explanation
from sklearn.model_selection import train_test_split
occupancyDataInput_train, occupancyDataInput_test, occupancyDataOutput_train, occupancyDataOutput_test = train_test_split(occupancyDataInput, occupancyDataOutput, test_size=0.3, random_state=0)
lr2 = LogisticRegression()
lr2.fit(occupancyDataInput_train, occupancyDataOutput_train)
Explanation: As expected, Light and CO2 are the most relevant variables, and Temperature follows. Note that we can compare these values only because we normalized the input features, else the individual $\theta$ variables would not be comparable.
3) Train-test sets
Applying any machine learning to datasets as a whole is always a bad idea as we are looking into predicted results over data that has been used for the training. This has a big danger of overfitting and giving us the wrong information.
To solve this, let's do a proper train/test set split on our data. We will train on one set and test on the other. If we ever need to set metaparameters after training the model we will usually define a third set (usually called cross validation or development) which is independent from training and test.
In this case we will split 70% to 30%. You will leran more about train/test sets in future units.
End of explanation
predicted = lr2.predict(occupancyDataInput_test)
print(predicted)
probs = lr2.predict_proba(occupancyDataInput_test)
print(probs)
Explanation: We now need to predict class labels for the test set. We will also generate the class probabilities, just to take a look.
End of explanation
# generate evaluation metrics
from sklearn import metrics
print("Accuracy: %f", metrics.accuracy_score(occupancyDataOutput_test, predicted))
print("AUC: %f", metrics.roc_auc_score(occupancyDataOutput_test, probs[:, 1]))
print("Classification confusion matrix:")
print(metrics.confusion_matrix(occupancyDataOutput_test, predicted))
print("Classification report:")
print(metrics.classification_report(occupancyDataOutput_test, predicted))
Explanation: The model is assigning a true whenever the value in the second column (probability of "true") is > 0.5
Let us now see some evaluation metrics:
End of explanation
# evaluate the model using 10-fold cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(LogisticRegression(), occupancyDataInput, occupancyDataOutput, scoring='accuracy', cv=10)
print(scores)
print(scores.mean())
Explanation: 4) Cross-validation datasets
Not to cunfuse these with the subset we can use to set some metaparameters, we can use the cross-validation technique (also called jackknifing technique) when we do not have much data over all and the idea of loosing some for testing is not a good idea. We normally split the data into 10 parts and perform train/test on each 9/1 groups.
End of explanation |
13,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Python tutorial
Launch this notebook with
Step2: types
Step3: variables
Step4: control statements
Step5: Excercise
Step6: Functions
Step7: Exercise
Step8: return
Step9: exercise
create the tousd() function
Create a function that takes an amount and "what" and returns the converted value.
Step10: lists
Step11: exercise
Step12: dictionaries
Step13: Exercise | Python Code:
1 + 1
12 * 44
Hello Data Skills!
'Hello Data Skills!'
print 'Hello Data Skills!'
print "Hello Data Skills!"
print 'Hello Data Skills!'
print Hello Data Skills!
Explanation: Python tutorial
Launch this notebook with:
1. Open terminal
2. type cd training/python
3. type ipython notebook
basics
End of explanation
type(1)
type(1 + 1)
type("hello")
1 == 2
1 == 1
type(1 == 1)
True
False
type(True)
12 + 12 + "hello"
str(1)
type(str(1))
type(3.14)
3 / 2
3 / 2.0
Explanation: types
End of explanation
x = 1
print x
y = x + 2
print y
z = "hello"
z
w = "python"
print z + w
print z + " " + w
x
x = x + 1
print x
type(x)
Explanation: variables
End of explanation
i = 10
print i % 2
i = 9
print i % 2
if i % 2 == 0:
print "even"
if i % 2 == 0:
print "even"
else:
print "odd"
i = 10
# execute "if" the code above with the new value
Explanation: control statements
End of explanation
v = 1
if v < 10:
print "it's small"
else:
print "it's big"
Explanation: Excercise: Write a statement which say "it's small" for v < 10 and it says "it's big" for v >= 10
End of explanation
def printeven(num):
if num % 2 == 0:
print "even"
else:
print "false"
printeven(10)
printeven(9)
printeven(i)
def printhuf(usd):
print "$" + str(usd) + " is " + str(272.9 * usd) + " Ft"
printhuf(10)
Explanation: Functions
End of explanation
def printusd(huf):
print str(huf) + " Ft" + " is $" + str(huf / 272.9)
printusd(1000)
def printconvert(amount, currency):
if currency == "usd":
printhuf(amount)
else:
printusd(amount)
printconvert(10, "usd")
Explanation: Exercise: Write a function that converts hufs to dollars and prints them
End of explanation
def tohuf(usd):
return 272.9 * usd
tohuf(10)
x = printhuf(10)
print x
x is None
isnone = x is None
print isnone
y = tohuf(10)
print y
Explanation: return
End of explanation
# solution
def tousd(huf):
return huf / 272.9
def convert(amount, currency):
if currency == "usd":
return tohuf(amount)
else:
return tousd(amount)
print convert(10,"usd")
Explanation: exercise
create the tousd() function
Create a function that takes an amount and "what" and returns the converted value.
End of explanation
l = [2,4,6]
type(l)
print l[0]
print l[2]
print l[3]
l.append(8)
print l
l[3]
for elem in l:
print elem
range(1,10)
range(1,11)
tenelements = range(1,11)
for e in tenelements:
print e
for e in tenelements:
print tohuf(e)
Explanation: lists
End of explanation
# solution
for e in range(1,11):
if e % 2 == 0:
print e
def even(num):
return num % 2 == 0
for e in range(1,11):
if even(e):
print e
Explanation: exercise:
create a loop which goes from 1 to 10 but only prints the even numbers
transform this loop so it uses an "even" function which returns whether a number is even or not (bool)
End of explanation
u = {
"username": "lili",
"password": "1982"
}
u["username"]
u["password"]
print u
import json
with open("passwords.json") as f:
d = json.load(f)
d
type(d)
d[2]
for u in d:
print u
for u in d:
print u["username"]
Explanation: dictionaries
End of explanation
# Solution
def printifinfile(username):
with open("passwords.json") as f:
j = json.load(f)
for record in j:
if record["username"] == username:
print "user found!!"
printifinfile("lili")
printifinfile("newuser")
Explanation: Exercise:
create a function that gets a username as an argument and prints "user found!!" that user is in the json file or not
End of explanation |
13,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bike Availability Preprocessing
Data Dictionary
The raw data contains the following data per station per reading
Step4: Parse Raw Data
Define the Parsing Functions
Step5: Quick Data View
Load Single Day Data
Step6: All Station View
Step7: Single Station View
Step8: Observations
There are some duplicate rows <- remove duplicates
RemovalDate may contain a lot of nulls <- remove if not helpful
Locked and Installed might be constant <- remove if not helpful
Build Dataset
Work with Chunks
Due to memory constraints we'll parse the data in chunks. In each chunk we'll remove the redundant candidate keys and also duplicate rows.
Step9: Tables
We will have two different tables, one for the stations and one for the availability readings
Step10: Build the Dataset
Step11: Read the Parsed Data
Step12: Technically Correct Data
The data is set to be technically correct if it
Step13: Derive Data
Step14: Add Station Priority Column
Priorities downloaded from https
Step15: Consistent Data
Stations Analysis
Overview
Step17: Observations
Step18: Given these records have the same location and Id but different Name or TerminalName, we'll assume the station changed name and remove the first entries.
Step19: Check Locations
Let's have a closer look at the station locations. All of them should be in Greater London.
Step20: This station looks like a test station, so we'll remove it.
Step21: We will investigate the fact that there are stations with duplicate latitude or longitude values.
Step22: We can observe that the stations are different and that having the same Longitude is just a coincidence.
Let's plot all the stations in a map to see how it looks
Step23: Readings Analysis
Overview
Step24: Observations
Step25: Readings Consistency Through Days
Lets get some insight about which stations do not have readings during an entire day
Step26: Stations with no readings in at least one day
Step27: Stations with no readings in at least one day during the weekend
Step28: Stations with no readings in at least one day during weekdays
Step29: Observations
Step30: Stations | Python Code:
%matplotlib inline
import logging
import itertools
import json
import os
import pickle
import folium
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from datetime import datetime
from os import listdir
from os.path import isfile, join
from IPython.display import Image
from datetime import date
from src.data.parse_dataset import parse_dir, parse_json_files, get_file_list
from src.data.string_format import format_name, to_short_name
from src.data.visualization import lon_min_longitude, lon_min_latitude, lon_max_longitude, lon_max_latitude, lon_center_latitude, lon_center_longitude, create_london_map
logger = logging.getLogger()
logger.setLevel(logging.INFO)
Explanation: Bike Availability Preprocessing
Data Dictionary
The raw data contains the following data per station per reading:
Id - String - API Resource Id
Name - String - The common name of the station
PlaceType - String ?
TerminalName - String - ?
NbBikes - Integer - The number of available bikes
NbDocks - Integer - The total number of docking spaces
NbEmptyDocks - Integer - The number of available empty docking spaces
Timestamp - DateTime - The moment this reading was captured
InstallDate - DateTime - Date when the station was installed
RemovalDate - DateTime - Date when the station was removed
Installed - Boolean - If the station is installed or not
Locked - Boolean - ?
Temporary - Boolean - If the station is temporary or not (TfL adds temporary stations to cope with demand.)
Latitude - Float - Latitude Coordinate
Longitude - Float - Longitude Coordinate
The following variables will be derived from the raw data.
NbUnusableDocks - Integer - The number of non-working docking spaces. Computed with NbUnusableDocks = NbDocks - (NbBikes + NbEmptyDocks)
Set up
Imports
End of explanation
def parse_cycles(json_obj):
    """Parses TfL's BikePoint JSON response"""
return [parse_station(element) for element in json_obj]
def parse_station(element):
    """Parses a JSON bicycle station object to a dictionary"""
obj = {
'Id': element['id'],
'Name': element['commonName'],
'Latitude': element['lat'],
'Longitude': element['lon'],
'PlaceType': element['placeType'],
}
for p in element['additionalProperties']:
obj[p['key']] = p['value']
if 'timestamp' not in obj:
obj['Timestamp'] = p['modified']
elif obj['Timestamp'] != p['modified']:
raise ValueError('The properties\' timestamps for station %s do not match: %s != %s' % (
obj['id'], obj['Timestamp'], p['modified']))
return obj
def bike_file_date_fn(file_name):
    """Gets the file's date"""
return datetime.strptime(os.path.basename(file_name), 'BIKE-%Y-%m-%d:%H:%M:%S.json')
def create_between_dates_filter(file_date_fn, date_start, date_end):
def filter_fn(file_name):
file_date = file_date_fn(file_name)
return file_date >= date_start and file_date <= date_end
return filter_fn
Explanation: Parse Raw Data
Define the Parsing Functions
End of explanation
filter_fn = create_between_dates_filter(bike_file_date_fn,
datetime(2016, 5, 16, 7, 0, 0),
datetime(2016, 5, 16, 23, 59, 59))
records = parse_dir('/home/jfconavarrete/Documents/Work/Dissertation/spts-uoe/data/raw/cycles',
parse_cycles, sort_fn=bike_file_date_fn, filter_fn=filter_fn)
# records is a list of lists of dicts
df = pd.DataFrame(list(itertools.chain.from_iterable(records)))
Explanation: Quick Data View
Load Single Day Data
End of explanation
df.head()
Explanation: All Station View
End of explanation
df[df['Id'] == 'BikePoints_1'].head()
Explanation: Single Station View
End of explanation
def chunker(seq, size):
return (seq[pos:pos + size] for pos in xrange(0, len(seq), size))
Explanation: Observations
There are some duplicate rows <- remove duplicates
RemovalDate may contain a lot of nulls <- remove if not helpful
Locked and Installed might be constant <- remove if not helpful
Build Dataset
Work with Chunks
Due to memory constraints we'll parse the data in chunks. In each chunk we'll remove the redundant candidate keys and also duplicate rows.
End of explanation
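As a quick illustration (not part of the processing pipeline), the chunker generator simply yields consecutive slices of a sequence:
for chunk in chunker(range(7), 3):
    print chunk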
def split_data(parsed_data):
master_df = pd.DataFrame(list(itertools.chain.from_iterable(parsed_data)))
readings_df = pd.DataFrame(master_df, columns=['Id', 'Timestamp', 'NbBikes', 'NbDocks', 'NbEmptyDocks'])
stations_df = pd.DataFrame(master_df, columns=['Id', 'Name', 'TerminalName' , 'PlaceType', 'Latitude',
'Longitude', 'Installed', 'Temporary', 'Locked',
'RemovalDate', 'InstallDate'])
return (readings_df, stations_df)
Explanation: Tables
We will have two different tables, one for the stations and one for the availability readings
End of explanation
# get the files to parse
five_weekdays_filter = create_between_dates_filter(bike_file_date_fn,
datetime(2016, 6, 19, 0, 0, 0),
datetime(2016, 6, 27, 23, 59, 59))
files = get_file_list('data/raw/cycles', filter_fn=None, sort_fn=bike_file_date_fn)
# process the files in chunks
files_batches = chunker(files, 500)
# start with an empty dataset
readings_dataset = pd.DataFrame()
stations_dataset = pd.DataFrame()
# append each chunk to the datasets while removing duplicates
for batch in files_batches:
parsed_data = parse_json_files(batch, parse_cycles)
# split the data into two station data and readings data
readings_df, stations_df = split_data(parsed_data)
# append the datasets
readings_dataset = pd.concat([readings_dataset, readings_df])
stations_dataset = pd.concat([stations_dataset, stations_df])
# remove duplicated rows
readings_dataset.drop_duplicates(inplace=True)
stations_dataset.drop_duplicates(inplace=True)
# put the parsed data in pickle files
pickle.dump(readings_dataset, open("data/parsed/readings_dataset_raw.p", "wb"))
pickle.dump(stations_dataset, open("data/parsed/stations_dataset_raw.p", "wb"))
Explanation: Build the Dataset
End of explanation
stations_dataset = pickle.load(open('data/parsed/stations_dataset_raw.p', 'rb'))
readings_dataset = pickle.load(open('data/parsed/readings_dataset_raw.p', 'rb'))
Explanation: Read the Parsed Data
End of explanation
# convert columns to their appropriate datatypes
stations_dataset['InstallDate'] = pd.to_numeric(stations_dataset['InstallDate'], errors='raise')
stations_dataset['RemovalDate'] = pd.to_numeric(stations_dataset['RemovalDate'], errors='raise')
stations_dataset['Installed'].replace({'true': True, 'false': False}, inplace=True)
stations_dataset['Temporary'].replace({'true': True, 'false': False}, inplace=True)
stations_dataset['Locked'].replace({'true': True, 'false': False}, inplace=True)
readings_dataset['NbBikes'] = readings_dataset['NbBikes'].astype('uint16')
readings_dataset['NbDocks'] = readings_dataset['NbDocks'].astype('uint16')
readings_dataset['NbEmptyDocks'] = readings_dataset['NbEmptyDocks'].astype('uint16')
# format station name
stations_dataset['Name'] = stations_dataset['Name'].apply(format_name)
# convert string timestamp to datetime
stations_dataset['InstallDate'] = pd.to_datetime(stations_dataset['InstallDate'], unit='ms', errors='raise')
stations_dataset['RemovalDate'] = pd.to_datetime(stations_dataset['RemovalDate'], unit='ms', errors='raise')
readings_dataset['Timestamp'] = pd.to_datetime(readings_dataset['Timestamp'], format='%Y-%m-%dT%H:%M:%S.%f', errors='raise').dt.tz_localize('UTC')
# sort the datasets
stations_dataset.sort_values(by=['Id'], ascending=True, inplace=True)
readings_dataset.sort_values(by=['Timestamp'], ascending=True, inplace=True)
Explanation: Technically Correct Data
The data is set to be technically correct if it:
can be directly recognized as belonging to a certain variable
is stored in a data type that represents the value domain of the real-world variable.
End of explanation
stations_dataset['ShortName'] = stations_dataset['Name'].apply(to_short_name)
readings_dataset['NbUnusableDocks'] = readings_dataset['NbDocks'] - (readings_dataset['NbBikes'] + readings_dataset['NbEmptyDocks'])
Explanation: Derive Data
End of explanation
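As a small sanity check on the derived column (a sketch, not in the original notebook), the number of unusable docks should never be negative:
print (readings_dataset['NbUnusableDocks'] < 0).sum()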
stations_priorities = pd.read_csv('data/raw/priorities/station_priorities.csv', encoding='latin-1')
stations_priorities['Site'] = stations_priorities['Site'].apply(format_name)
stations_dataset = pd.merge(stations_dataset, stations_priorities, how='left', left_on='ShortName', right_on='Site')
stations_dataset['Priority'].replace({'One': '1', 'Two': '2', 'Long Term Suspended': np.NaN, 'Long term suspension': np.NaN}, inplace=True)
stations_dataset.drop(['Site'], axis=1, inplace=True)
stations_dataset.drop(['Borough'], axis=1, inplace=True)
stations_dataset
Explanation: Add Station Priority Column
Priorities downloaded from https://www.whatdotheyknow.com/request/tfl_boris_bike_statistics?unfold=1
End of explanation
stations_dataset.shape
stations_dataset.info(memory_usage='deep')
stations_dataset.head()
stations_dataset.describe()
stations_dataset.apply(lambda x:x.nunique())
stations_dataset.isnull().sum()
Explanation: Consistent Data
Stations Analysis
Overview
End of explanation
def find_duplicate_ids(df):
    """Find Ids that have more than one value in the given columns"""
df = df.drop_duplicates()
value_counts_grouped_by_id = df.groupby('Id').count()
is_duplicate_id = value_counts_grouped_by_id.applymap(lambda x: x > 1).any(axis=1)
duplicate_ids = value_counts_grouped_by_id[is_duplicate_id == True].index.values
return df[df['Id'].isin(duplicate_ids)]
duplicate_ids = find_duplicate_ids(stations_dataset)
duplicate_ids
Explanation: Observations:
Id, Name and Terminal name seem to be candidate keys
The minimum latitude and the maximum longitude are 0
Some stations have the same latitude or longitude
Id, TerminalName and Name have different unique values
Placetype, Installed, Temporary and Locked appear to be constant
Some stations do not have an install date
Some Stations have a removal date (very sparse)
Remove Duplicate Stations
End of explanation
# remove the one not in merchant street
stations_dataset.drop(417, inplace=True)
# remove the one with the shortest name
stations_dataset.drop(726, inplace=True)
# remove the one that is not in kings cross (as the name of the station implies)
stations_dataset.drop(745, inplace=True)
# remove the duplicated entries
stations_dataset.drop([747, 743, 151, 754, 765, 768], inplace=True)
# make sure there are no repeated ids
assert len(find_duplicate_ids(stations_dataset)) == 0
Explanation: Given these records have the same location and Id but different Name or TerminalName, we'll assume the station changed name and remove the first entries.
End of explanation
def find_locations_outside_box(locations, min_longitude, min_latitude, max_longitude, max_latitude):
    latitude_check = ~((locations['Latitude'] >= min_latitude) & (locations['Latitude'] <= max_latitude))
    longitude_check = ~((locations['Longitude'] >= min_longitude) & (locations['Longitude'] <= max_longitude))
return locations[(latitude_check | longitude_check)]
outlier_locations_df = find_locations_outside_box(stations_dataset, lon_min_longitude, lon_min_latitude,
lon_max_longitude, lon_max_latitude)
outlier_locations_df
Explanation: Check Locations
Let's have a closer look at the station locations. All of them should be in Greater London.
End of explanation
outlier_locations_idx = outlier_locations_df.index.values
stations_dataset.drop(outlier_locations_idx, inplace=True)
# make sure there are no stations outside London
assert len(find_locations_outside_box(stations_dataset, lon_min_longitude, lon_min_latitude,
lon_max_longitude, lon_max_latitude)) == 0
Explanation: This station looks like a test station, so we'll remove it.
End of explanation
# find stations with duplicate longitude
id_counts_groupedby_longitude = stations_dataset.groupby('Longitude')['Id'].count()
nonunique_longitudes = id_counts_groupedby_longitude[id_counts_groupedby_longitude != 1].index.values
nonunique_longitude_stations = stations_dataset[stations_dataset['Longitude'].isin(nonunique_longitudes)].sort_values(by=['Longitude'])
id_counts_groupedby_latitude = stations_dataset.groupby('Latitude')['Id'].count()
nonunique_latitudes = id_counts_groupedby_latitude[id_counts_groupedby_latitude != 1].index.values
nonunique_latitudes_stations = stations_dataset[stations_dataset['Latitude'].isin(nonunique_latitudes)].sort_values(by=['Latitude'])
nonunique_coordinates_stations = pd.concat([nonunique_longitude_stations, nonunique_latitudes_stations])
nonunique_coordinates_stations
def draw_stations_map(stations_df):
stations_map = create_london_map()
for index, station in stations_df.iterrows():
folium.Marker([station['Latitude'],station['Longitude']], popup=station['Name']).add_to(stations_map)
return stations_map
draw_stations_map(nonunique_coordinates_stations)
Explanation: We will investigate the fact that there are stations with duplicate latitude or longitude values.
End of explanation
london_longitude = -0.127722
london_latitude = 51.507981
MAX_RECORDS = 100
stations_map = create_london_map()
for index, station in stations_dataset[0:MAX_RECORDS].iterrows():
folium.Marker([station['Latitude'],station['Longitude']], popup=station['Name']).add_to(stations_map)
stations_map
#folium.Map.save(stations_map, 'reports/maps/stations_map.html')
Explanation: We can observe that the stations are different and that having the same Longitude is just a coincidence.
Let's plot all the stations in a map to see how it looks
End of explanation
readings_dataset.shape
readings_dataset.info(memory_usage='deep')
readings_dataset.head()
readings_dataset.describe()
readings_dataset.apply(lambda x:x.nunique())
readings_dataset.isnull().sum()
timestamps = readings_dataset['Timestamp']
ax = timestamps.groupby([timestamps.dt.year, timestamps.dt.month, timestamps.dt.day]).count().plot(kind="bar")
ax.set_xlabel('Date')
ax.set_title('Readings per Day')
Explanation: Readings Analysis
Overview
End of explanation
start_date = date(2016, 5, 15)
end_date = date(2016, 6, 27)
days = set(pd.date_range(start=start_date, end=end_date, closed='left'))
readings_dataset = readings_dataset[(timestamps > start_date) & (timestamps < end_date)]
Explanation: Observations:
The number of readings in each day varies widely
Discard Out of Range Data
End of explanation
# get a subview of the readings dataset
id_timestamp_view = readings_dataset.loc[:,['Id','Timestamp']]
# remove the time component of the timestamp
id_timestamp_view['Timestamp'] = id_timestamp_view['Timestamp'].apply(lambda x: x.replace(hour=0, minute=0, second=0, microsecond=0))
# compute the days of readings per stations
days_readings = id_timestamp_view.groupby('Id').aggregate(lambda x: set(x))
days_readings['MissingDays'] = days_readings['Timestamp'].apply(lambda x: list(days - x))
days_readings['MissingDaysCount'] = days_readings['MissingDays'].apply(lambda x: len(x))
pickle.dump(days_readings.query('MissingDaysCount > 0'), open("data/parsed/missing_days.p", "wb"))
def expand_datetime(df, datetime_col):
df['Weekday'] = df[datetime_col].apply(lambda x: x.weekday())
return df
# get the stations with missing readings only
missing_days_readings = days_readings[days_readings['MissingDaysCount'] != 0]
missing_days_readings = missing_days_readings['MissingDays'].apply(lambda x: pd.Series(x)).unstack().dropna()
missing_days_readings.index = missing_days_readings.index.droplevel()
# sort and format in their own DF
missing_days_readings = pd.DataFrame(missing_days_readings, columns=['MissingDay'], index=None).reset_index().sort_values(by=['Id', 'MissingDay'])
# expand the missing day date
expand_datetime(missing_days_readings, 'MissingDay')
missing_days_readings
missing_days_readings['Id'].nunique()
# plot the missing readings days
days = missing_days_readings['MissingDay']
missing_days_counts = days.groupby([days.dt.year, days.dt.month, days.dt.day]).count()
ax = missing_days_counts.plot(kind="bar")
ax.set_xlabel('Date')
ax.set_ylabel('Number of Stations')
Explanation: Readings Consistency Through Days
Lets get some insight about which stations do not have readings during an entire day
End of explanation
missing_days_readings_stations = stations_dataset[stations_dataset['Id'].isin(missing_days_readings['Id'].unique())]
draw_stations_map(missing_days_readings_stations)
Explanation: Stations with no readings in at least one day
End of explanation
weekend_readings = missing_days_readings[missing_days_readings['Weekday'] > 4]
missing_dayreadings_stn = stations_dataset[stations_dataset['Id'].isin(weekend_readings['Id'].unique())]
draw_stations_map(missing_dayreadings_stn)
Explanation: Stations with no readings in at least one day during the weekend
End of explanation
weekday_readings = missing_days_readings[missing_days_readings['Weekday'] < 5]
missing_dayreadings_stn = stations_dataset[stations_dataset['Id'].isin(weekday_readings['Id'].unique())]
draw_stations_map(missing_dayreadings_stn)
Explanation: Stations with no readings in at least one day during weekdays
End of explanation
stations_to_remove = set(readings_dataset.Id) - set(stations_dataset.Id)
readings_dataset = readings_dataset[~readings_dataset.Id.isin(stations_to_remove)]
readings_dataset.reset_index(inplace=True, drop=True)
readings_dataset.head()
readings_dataset.describe()
readings_dataset.info(memory_usage='deep')
pickle.dump(readings_dataset, open("data/parsed/readings_dataset_utc.p", "wb"))
Explanation: Observations:
* There are 29 stations that do not have readings in at least one day
* There were more stations without readings during May than in June
* Other than that, there is no visible pattern
Build Datasets
Readings
End of explanation
stations_dataset.reset_index(inplace=True, drop=True)
stations_dataset.head()
stations_dataset.describe()
stations_dataset.info(memory_usage='deep')
pickle.dump(stations_dataset, open("data/parsed/stations_dataset_final.p", "wb"))
Explanation: Stations
End of explanation |
13,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 9 - Dataset preprocessing
Before we utilize machine learning algorithms we must first prepare our dataset. This can often take a significant amount of time and can have a large impact on the performance of our models.
We will be looking at four different types of data
Step1: Tabular data
Missing data
Normalization
Categorical data
Missing data
There are a number of ways to handle missing data
Step2: Normalization
Many machine learning algorithms expect features to have similar distributions and scales.
A classic example is gradient descent, if features are on different scales some weights will update faster than others because the feature values scale the weight updates.
There are two common approaches to normalization
Step3: Categorical data
Categorical data can take one of a number of possible values. The different categories may be related to each other or be largely independent and unordered.
Continuous variables can be converted to categorical variables by applying a threshold.
Step5: Exercises
Substitute missing values in x with the column mean and add an additional column to indicate when missing values have been substituted. The isnull method on the pandas dataframe may be useful.
Convert x to the z-scaled values. The StandardScaler method in the preprocessing module can be used or the z-scaled values calculated directly.
Convert x['C'] into a categorical variable using a threshold of 0.125
Image data
Depending on the type of task being performed there are a variety of steps we may want to take in working with images
Step6: Text
When working with text the simplest approach is known as bag of words. In this approach we simply count the number of instances of each word, and then adjust the values based on how commonly the word is used.
The first task is to break a piece of text up into individual tokens. The number of occurrences of each word is then recorded. More rarely used words are likely to be more interesting and so word counts are scaled by the inverse document frequency.
We can extend this to look at not just individual words but also bigrams and trigrams. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
Explanation: Week 9 - Dataset preprocessing
Before we utilize machine learning algorithms we must first prepare our dataset. This can often take a significant amount of time and can have a large impact on the performance of our models.
We will be looking at four different types of data:
Tabular data
Image data
Text
Tabular data
We will look at three different steps we may need to take when handling tabular data:
Missing data
Normalization
Categorical data
Image data
Image data can present a number of issues that we must address to maximize performance:
Histogram normalization
Windows
Pyramids (for detection at different scales)
Centering
Text
Text can present a number of issues, mainly due to the number of words that can be found in our features. There are a number of ways we can convert from text to usable features:
Bag of words
Parsing
End of explanation
from sklearn import linear_model
x = np.array([[0, 0], [1, 1], [2, 2]])
y = np.array([0, 1, 2])
print(x,y)
clf = linear_model.LinearRegression()
clf.fit(x, y)
print(clf.coef_)
x_missing = np.array([[0, 0], [1, np.nan], [2, 2]])
print(x_missing, y)
clf = linear_model.LinearRegression()
clf.fit(x_missing, y)
print(clf.coef_)
import pandas as pd
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
[4,1,7,9,0,2,np.nan]], ).T
x.columns = ['A', 'B', 'C', 'D', 'E']
y = pd.Series([29.0,
31.2,
63.25,
57.27,
66.3,
26.21,
48.24])
print(x, y)
x.dropna()
x.fillna(value={'A':1000,'B':2000,'C':3000,'D':4000,'E':5000})
x.fillna(value=x.mean())
Explanation: Tabular data
Missing data
Normalization
Categorical data
Missing data
There are a number of ways to handle missing data:
Drop all records with a value missing
Substitute all missing values with an average value
Substitute all missing values with some placeholder value, i.e. 0, 1e9, -1e9, etc
Predict missing values based on other attributes
Add additional feature indicating when a value is missing
If the machine learning model will be used with new data it is important to consider the possibility of receiving records with values missing that we have not observed previously in the training dataset.
The simplest approach is to remove any records that have missing data. Unfortunately missing values are often not randomly distributed through a dataset and removing them can introduce bias.
An alternative approach is to substitute the missing values. This can be with the mean of the feature across all the records or the value can be predicted based on the values of the other features in the dataset. Placeholder values can also be used with decision trees but do not work as well for most other algorithms.
Finally, missing values can themselves be useful features. Adding an additional feature indicating when a value is missing is often used to include this information.
End of explanation
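As a minimal sketch of the last strategy listed above (illustrative only, using the x defined earlier), indicator columns can flag exactly where values were missing:
missing_flags = x.isnull().astype(int).add_suffix('_was_missing')
print(missing_flags)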
x_filled = x.fillna(value=x.mean())
print(x_filled)
x_norm = (x_filled - x_filled.min()) / (x_filled.max() - x_filled.min())
print(x_norm)
from sklearn import preprocessing
scaling = preprocessing.MinMaxScaler().fit(x_filled)
scaling.transform(x_filled)
Explanation: Normalization
Many machine learning algorithms expect features to have similar distributions and scales.
A classic example is gradient descent, if features are on different scales some weights will update faster than others because the feature values scale the weight updates.
There are two common approaches to normalization:
Z-score standardization
Min-max scaling
Z-score standardization
Z-score standardization rescales values so that they have a mean of zero and a standard deviation of 1. Specifically we perform the following transformation:
$$z = \frac{x - \mu}{\sigma}$$
Min-max scaling
An alternative is min-max scaling that transforms data into the range of 0 to 1. Specifically:
$$x_{norm} = \frac{x - x_{min}}{x_{max} - x_{min}}$$
Min-max scaling is less commonly used but can be useful for image data and in some neural networks.
End of explanation
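For completeness, here is a minimal sketch of z-score standardization computed directly from the formula above (note that pandas' std() uses the sample convention, so the result can differ slightly from scikit-learn's StandardScaler):
x_zscore = (x_filled - x_filled.mean()) / x_filled.std()
print(x_zscore)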
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
['Green','Red','Blue','Blue','Green','Red','Green']], ).T
x.columns = ['A', 'B', 'C', 'D', 'E']
print(x)
x_cat = x.copy()
for val in x['E'].unique():
x_cat['E_{0}'.format(val)] = x_cat['E'] == val
x_cat
Explanation: Categorical data
Categorical data can take one of a number of possible values. The different categories may be related to each other or be largely independent and unordered.
Continuous variables can be converted to categorical variables by applying a threshold.
End of explanation
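The same one-hot encoding can also be obtained with pandas' built-in helper; this is just an illustrative sketch of an alternative to the loop above:
print(pd.get_dummies(x, columns=['E']))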
# http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#example-color-exposure-plot-equalize-py
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from skimage import data, img_as_float
from skimage import exposure
matplotlib.rcParams['font.size'] = 8
def plot_img_and_hist(img, axes, bins=256):
    """Plot an image along with its histogram and cumulative histogram."""
img = img_as_float(img)
ax_img, ax_hist = axes
ax_cdf = ax_hist.twinx()
# Display image
ax_img.imshow(img, cmap=plt.cm.gray)
ax_img.set_axis_off()
ax_img.set_adjustable('box-forced')
# Display histogram
ax_hist.hist(img.ravel(), bins=bins, histtype='step', color='black')
ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0))
ax_hist.set_xlabel('Pixel intensity')
ax_hist.set_xlim(0, 1)
ax_hist.set_yticks([])
# Display cumulative distribution
img_cdf, bins = exposure.cumulative_distribution(img, bins)
ax_cdf.plot(bins, img_cdf, 'r')
ax_cdf.set_yticks([])
return ax_img, ax_hist, ax_cdf
# Load an example image
img = data.moon()
# Contrast stretching
p2, p98 = np.percentile(img, (2, 98))
img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
# Equalization
img_eq = exposure.equalize_hist(img)
# Adaptive Equalization
img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
# Display results
fig = plt.figure(figsize=(8, 5))
axes = np.zeros((2,4), dtype=np.object)
axes[0,0] = fig.add_subplot(2, 4, 1)
for i in range(1,4):
axes[0,i] = fig.add_subplot(2, 4, 1+i, sharex=axes[0,0], sharey=axes[0,0])
for i in range(0,4):
axes[1,i] = fig.add_subplot(2, 4, 5+i)
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0])
ax_img.set_title('Low contrast image')
y_min, y_max = ax_hist.get_ylim()
ax_hist.set_ylabel('Number of pixels')
ax_hist.set_yticks(np.linspace(0, y_max, 5))
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1])
ax_img.set_title('Contrast stretching')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2])
ax_img.set_title('Histogram equalization')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_adapteq, axes[:, 3])
ax_img.set_title('Adaptive equalization')
ax_cdf.set_ylabel('Fraction of total intensity')
ax_cdf.set_yticks(np.linspace(0, 1, 5))
# prevent overlap of y-axis labels
fig.tight_layout()
plt.show()
from sklearn.feature_extraction import image
img = data.page()
fig, ax = plt.subplots(1,1)
ax.imshow(img, cmap=plt.cm.gray)
ax.set_axis_off()
plt.show()
print(img.shape)
patches = image.extract_patches_2d(img, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
from sklearn import datasets
digits = datasets.load_digits()
#print(digits.DESCR)
fig, ax = plt.subplots(1,1, figsize=(1,1))
ax.imshow(digits.data[0].reshape((8,8)), cmap=plt.cm.gray, interpolation='nearest')
Explanation: Exercises
Substitute missing values in x with the column mean and add an additional column to indicate when missing values have been substituted. The isnull method on the pandas dataframe may be useful.
Convert x to the z-scaled values. The StandardScaler method in the preprocessing module can be used or the z-scaled values calculated directly.
Convert x['C'] into a categorical variable using a threshold of 0.125
Image data
Depending on the type of task being performed there are a variety of steps we may want to take in working with images:
Histogram normalization
Windows and pyramids (for detection at different scales)
Centering
Occasionally the camera used to generate an image will use 10- to 14-bits while a 16-bit file format will be used. In this situation all the pixel intensities will be in the lower values. Rescaling to the full range (or to 0-1) can be useful.
Further processing can be done to alter the histogram of the image.
When looking for particular features in an image a sliding window can be used to check different locations. This can be combined with an image pyramid to detect features at different scales. This is often needed when objects can be at different distances from the camera.
If objects are sparsely distributed in an image a faster approach than using sliding windows is to identify objects with a simple threshold and then test only the bounding boxes containing objects. Before running these through a model centering based on intensity can be a useful approach. Small offsets, rotations and skewing can be used to generate additional training data.
End of explanation
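As an illustrative sketch of the sliding-window idea described above (assumptions: square windows, a fixed step, and the skimage page image already loaded via data), a simple generator can enumerate candidate windows:
def sliding_windows(img, size, step):
    # Yield every size x size patch of the image, moving by `step` pixels each time.
    for r in range(0, img.shape[0] - size + 1, step):
        for c in range(0, img.shape[1] - size + 1, step):
            yield img[r:r + size, c:c + size]
print(sum(1 for _ in sliding_windows(data.page(), 64, 32)))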
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train',
categories=['comp.graphics', 'sci.med'], shuffle=True, random_state=0)
print(twenty_train.target_names)
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(X_train_counts.shape)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
print(X_train_tfidf.shape, X_train_tfidf[:5,:15].toarray())
print(twenty_train.data[0])
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data[0:1])
print(X_train_counts[0].toarray())
print(count_vect.vocabulary_.keys())
Explanation: Text
When working with text the simplest approach is known as bag of words. In this approach we simply count the number of instances of each word, and then adjust the values based on how commonly the word is used.
The first task is to break a piece of text up into individual tokens. The number of occurrences of each word is then recorded. More rarely used words are likely to be more interesting and so word counts are scaled by the inverse document frequency.
We can extend this to look at not just individual words but also bigrams and trigrams.
End of explanation |
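As a brief sketch of the bigram extension mentioned above (illustrative only), CountVectorizer can count word pairs as well as single words:
bigram_vect = CountVectorizer(ngram_range=(1, 2))
X_bigram_counts = bigram_vect.fit_transform(twenty_train.data[0:1])
print(X_bigram_counts.shape)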
13,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading Data From An HTTP Server Tutorial
MLDB gives users full control over where and how data is persisted. MLDB handles multiple protocols for URLs (see Files and URLs). In this tutorial, we provide examples to load files via <code> http
Step1: Loading data with http
Step2: We can now take a look
Step3: Accessing a specific file inside an archive
If the targeted file is inside an archive (.tar or .zip), we can specify the specific file we want to extract, as seen in the example below. Here, we load the 3980.circles file within the facebook folder
Step4: Let's query our dataset to see what the data looks like | Python Code:
from pymldb import Connection
mldb = Connection()
Explanation: Loading Data From An HTTP Server Tutorial
MLDB gives users full control over where and how data is persisted. MLDB handles multiple protocols for URLs (see Files and URLs). In this tutorial, we provide examples to load files via <code> http:// </code> or <code> https://</code> for files accessible on an HTTP server on the public internet or a private intranet.
For an example using the <code> file:// </code> protocol for a file inside an MLDB container, see the Loading Data Tutorial. MLDB also supports loading files from Amazon S3 and SFTP servers transparently. See the documentation for Files and URLs for more details.
The notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details.
End of explanation
dataUrl = "http://snap.stanford.edu/data/facebook_combined.txt.gz"
print mldb.put("/v1/procedures/import_data", {
"type": "import.text",
"params": {
"dataFileUrl": dataUrl,
"headers": ["node", "edge"],
"delimiter": " ",
"quoteChar": "",
"outputDataset": "import_URL1",
"runOnCreation": True
}
})
Explanation: Loading data with http:// or https://
MLDB makes it very easy to load data from a public web server, since a file location can be specified using a remote URI. To illustrate this, we have chosen to load a file from the Facebook Social Circles dataset, hosted by the Stanford Network Analysis Project (SNAP), who provide many public datasets.
We will simply import the file http://snap.stanford.edu/data/facebook_combined.txt.gz using the import.text procedure. Notice that not only is the file hosted on a remote server, but it is also compressed. MLDB will decompress it seamlessly as it's being downloaded.
End of explanation
mldb.query("SELECT * FROM import_URL1 LIMIT 5")
Explanation: We can now take a look:
End of explanation
dataUrl = "http://snap.stanford.edu/data/facebook.tar.gz"
print mldb.put("/v1/procedures/import_data", {
"type": "import.text",
"params": {
"dataFileUrl": "archive+" + dataUrl + "#facebook/3980.circles",
"headers": ["circles"],
"delimiter": " ",
"quoteChar": "",
"outputDataset": "import_URL2",
"runOnCreation": True
}
})
Explanation: Accessing a specific file inside an archive
If the targeted file is inside an archive (.tar or .zip), we can specify the specific file we want to extract, as seen in the example below. Here, we load the 3980.circles file within the facebook folder:
End of explanation
mldb.query("SELECT * from import_URL2 LIMIT 5")
Explanation: Let's query our dataset to see what the data looks like:
End of explanation |
13,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Example
Step2: MCMC inference
Step3: With PyMC3 version >=3.9 the return_inferencedata=True kwarg makes the sample function return an arviz.InferenceData object instead of a MultiTrace.
Step4: Variational inference
We use automatic differentiation VI.
Details can be found at https | Python Code:
# import pymc3 # colab uses 3.7 by default (as of April 2021)
# arviz needs 3.8+
#!pip install pymc3>=3.8 # fails to update
#!pip install pymc3==3.11 # latest number is hardcoded
!pip install -U pymc3>=3.8
import pymc3 as pm
print(pm.__version__)
#!pip install arviz
import arviz as az
print(az.__version__)
import sklearn
import scipy.stats as stats
import scipy.optimize
import matplotlib.pyplot as plt
import seaborn as sns
import time
import numpy as np
import os
import pandas as pd
Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/pymc3_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Brief introduction to PyMC3
PyMC3.
End of explanation
mu = 8.5
tau = 1.0
sigma = 0.75
y = 9.5
m = (sigma**2 * mu + tau**2 * y) / (sigma**2 + tau**2)
s2 = (sigma**2 * tau**2) / (sigma**2 + tau**2)
s = np.sqrt(s2)
print(m)
print(s)
# Specify the model
with pm.Model() as model:
theta = pm.Normal("theta", mu=mu, sd=tau)
obs = pm.Normal("obs", mu=theta, sd=sigma, observed=y)
Explanation: Example: 1d Gaussian with unknown mean.
We use the simple example from the Pyro intro. The goal is to infer the weight $\theta$ of an object, given noisy measurements $y$. We assume the following model:
$$
\begin{align}
\theta &\sim N(\mu=8.5, \tau^2=1.0)\
y \sim &N(\theta, \sigma^2=0.75^2)
\end{align}
$$
Where $\mu=8.5$ is the initial guess.
By Bayes rule for Gaussians, we know that the exact posterior,
given a single observation $y=9.5$, is given by
$$
\begin{align}
\theta|y &\sim N(m, s^s) \
m &=\frac{\sigma^2 \mu + \tau^2 y}{\sigma^2 + \tau^2}
= \frac{0.75^2 \times 8.5 + 1 \times 9.5}{0.75^2 + 1^2}
= 9.14 \
s^2 &= \frac{\sigma^2 \tau^2}{\sigma^2 + \tau^2}
= \frac{0.75^2 \times 1^2}{0.75^2 + 1^2}= 0.6^2
\end{align}
$$
End of explanation
# run MCMC (defaults to using the NUTS algorithm with 2 chains)
with model:
trace = pm.sample(1000, random_seed=123)
az.summary(trace)
trace
samples = trace["theta"]
print(samples.shape)
post_mean = np.mean(samples)
post_std = np.std(samples)
print([post_mean, post_std])
Explanation: MCMC inference
End of explanation
with model:
idata = pm.sample(1000, random_seed=123, return_inferencedata=True)
idata
az.plot_trace(idata);
Explanation: With PyMC3 version >=3.9 the return_inferencedata=True kwarg makes the sample function return an arviz.InferenceData object instead of a MultiTrace.
End of explanation
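As a quick sketch, the same ArviZ summary call also works directly on the InferenceData object:
az.summary(idata)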
niter = 10000
with model:
post = pm.fit(niter, method="advi"); # mean field approximation
# Plot negative ELBO vs iteration to assess convergence
plt.plot(post.hist);
# convert analytic posterior to a bag of iid samples
trace = post.sample(10000)
samples = trace["theta"]
print(samples.shape)
post_mean = np.mean(samples)
post_std = np.std(samples)
print([post_mean, post_std])
az.summary(trace)
Explanation: Variational inference
We use automatic differentiation VI.
Details can be found at https://docs.pymc.io/notebooks/variational_api_quickstart.html
End of explanation |
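As an illustrative sketch (not part of the original notebook), the ADVI samples can be overlaid on the exact posterior N(m, s) computed analytically at the start:
# Compare the variational approximation with the closed-form posterior.
grid = np.linspace(6, 12, 200)
plt.hist(samples, bins=50, density=True, alpha=0.5, label="ADVI samples")
plt.plot(grid, stats.norm(m, s).pdf(grid), label="exact posterior")
plt.legend();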
13,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Brainstorm Elekta phantom dataset tutorial
Here we compute the evoked from raw for the Brainstorm Elekta phantom
tutorial dataset. For comparison, see
Step1: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data
are read to construct instances of
Step2: Data channel array consisted of 204 MEG planar gradiometers,
102 axial magnetometers, and 3 stimulus channels. Let's get the events
for the phantom, where each dipole (1-32) gets its own event
Step3: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
noise (five peaks around 300 Hz). Here we plot only out to 60 seconds
to save memory
Step4: Our phantom produces sinusoidal bursts at 20 Hz
Step5: Now we epoch our data, average it, and look at the first dipole response.
The first peak appears around 3 ms. Because we low-passed at 40 Hz,
we can also decimate our data to save memory.
Step6: Let's use a sphere head geometry model <eeg_sphere_model>
and let's see the coordinate alignment and the sphere location. The phantom
is properly modeled by a single-shell sphere with origin (0., 0., 0.).
Even though this is a VectorView/TRIUX phantom, we can use the Otaniemi
phantom subject as a surrogate because the "head" surface (hemisphere outer
shell) has the same geometry for both phantoms, even though the internal
dipole locations differ. The phantom_otaniemi scan was aligned to the
phantom's head coordinate frame, so an identity trans is appropriate
here.
Step7: Let's do some dipole fits. We first compute the noise covariance,
then do the fits for each event_id taking the time instant that maximizes
the global field power.
Step8: Do a quick visualization of how much variance we explained, putting the
data and residuals on the same scale (here the "time points" are the
32 dipole peak values that we fit)
Step9: Now we can compare to the actual locations, taking the difference in mm
Step10: Let's plot the positions and the orientations of the actual and the estimated
dipoles | Python Code:
# Authors: Eric Larson <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import find_events, fit_dipole
from mne.datasets import fetch_phantom
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
print(__doc__)
Explanation: Brainstorm Elekta phantom dataset tutorial
Here we compute the evoked from raw for the Brainstorm Elekta phantom
tutorial dataset. For comparison, see :footcite:TadelEtAl2011 and
the original Brainstorm tutorial_.
End of explanation
data_path = bst_phantom_elekta.data_path(verbose=True)
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw = read_raw_fif(raw_fname)
Explanation: The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data
are read to construct instances of :class:mne.io.Raw.
End of explanation
events = find_events(raw, 'STI201')
raw.plot(events=events)
raw.info['bads'] = ['MEG1933', 'MEG2421']
Explanation: The data channel array consisted of 204 MEG planar gradiometers,
102 axial magnetometers, and 3 stimulus channels. Let's get the events
for the phantom, where each dipole (1-32) gets its own event:
End of explanation
raw.plot_psd(tmax=30., average=False)
Explanation: The data have strong line frequency (60 Hz and harmonics) and cHPI coil
noise (five peaks around 300 Hz). Here we plot only out to 30 seconds
to save memory:
End of explanation
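If one wanted to suppress the line noise before further processing, a notch filter could be applied; the sketch below is illustrative only, works on a short cropped copy, and is not used in the rest of the tutorial.
# Illustrative sketch: notch-filter a cropped copy at 60 Hz and its harmonics, then inspect the PSD.
raw_notched = raw.copy().crop(tmax=30.).load_data().notch_filter(freqs=np.arange(60, 241, 60))
raw_notched.plot_psd(average=False)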
raw.plot(events=events)
Explanation: Our phantom produces sinusoidal bursts at 20 Hz:
End of explanation
tmin, tmax = -0.1, 0.1
bmax = -0.05 # Avoid capture filter ringing into baseline
event_id = list(range(1, 33))
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, bmax),
preload=False)
epochs['1'].average().plot(time_unit='s')
Explanation: Now we epoch our data, average it, and look at the first dipole response.
The first peak appears around 3 ms. Because we low-passed at 40 Hz,
we can also decimate our data to save memory.
End of explanation
subjects_dir = data_path
fetch_phantom('otaniemi', subjects_dir=subjects_dir)
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08)
subject = 'phantom_otaniemi'
trans = mne.transforms.Transform('head', 'mri', np.eye(4))
mne.viz.plot_alignment(
epochs.info, subject=subject, show_axes=True, bem=sphere, dig=True,
surfaces=('head-dense', 'inner_skull'), trans=trans, mri_fiducials=True,
subjects_dir=subjects_dir)
Explanation: Let's use a sphere head geometry model <eeg_sphere_model>
and let's see the coordinate alignment and the sphere location. The phantom
is properly modeled by a single-shell sphere with origin (0., 0., 0.).
Even though this is a VectorView/TRIUX phantom, we can use the Otaniemi
phantom subject as a surrogate because the "head" surface (hemisphere outer
shell) has the same geometry for both phantoms, even though the internal
dipole locations differ. The phantom_otaniemi scan was aligned to the
phantom's head coordinate frame, so an identity trans is appropriate
here.
End of explanation
# here we can get away with using method='oas' for speed (faster than "shrunk")
# but in general "shrunk" is usually better
cov = mne.compute_covariance(epochs, tmax=bmax)
mne.viz.plot_evoked_white(epochs['1'].average(), cov)
data = []
t_peak = 0.036 # true for Elekta phantom
for ii in event_id:
# Avoid the first and last trials -- can contain dipole-switching artifacts
evoked = epochs[str(ii)][1:-1].average().crop(t_peak, t_peak)
data.append(evoked.data[:, 0])
evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
del epochs
dip, residual = fit_dipole(evoked, cov, sphere, n_jobs=None)
Explanation: Let's do some dipole fits. We first compute the noise covariance,
then do the fits for each event_id taking the time instant that maximizes
the global field power.
End of explanation
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
for text in list(ax.texts):
text.remove()
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
Explanation: Do a quick visualization of how much variance we explained, putting the
data and residuals on the same scale (here the "time points" are the
32 dipole peak values that we fit):
End of explanation
actual_pos, actual_ori = mne.dipole.get_phantom_dipoles()
actual_amp = 100. # nAm
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(6, 7))
diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
print('mean(position error) = %0.1f mm' % (np.mean(diffs),))
ax1.bar(event_id, diffs)
ax1.set_xlabel('Dipole index')
ax1.set_ylabel('Loc. error (mm)')
angles = np.rad2deg(np.arccos(np.abs(np.sum(dip.ori * actual_ori, axis=1))))
print(u'mean(angle error) = %0.1f°' % (np.mean(angles),))
ax2.bar(event_id, angles)
ax2.set_xlabel('Dipole index')
ax2.set_ylabel(u'Angle error (°)')
amps = actual_amp - dip.amplitude / 1e-9
print('mean(abs amplitude error) = %0.1f nAm' % (np.mean(np.abs(amps)),))
ax3.bar(event_id, amps)
ax3.set_xlabel('Dipole index')
ax3.set_ylabel('Amplitude error (nAm)')
fig.tight_layout()
plt.show()
Explanation: Now we can compare to the actual locations, taking the difference in mm:
End of explanation
actual_amp = np.ones(len(dip)) # misc amp to create Dipole instance
actual_gof = np.ones(len(dip)) # misc GOF to create Dipole instance
dip_true = \
mne.Dipole(dip.times, actual_pos, actual_amp, actual_ori, actual_gof)
fig = mne.viz.plot_alignment(
evoked.info, trans, subject, bem=sphere, surfaces={'head-dense': 0.2},
coord_frame='head', meg='helmet', show_axes=True,
subjects_dir=subjects_dir)
# Plot the position and the orientation of the actual dipole
fig = mne.viz.plot_dipole_locations(dipoles=dip_true, mode='arrow',
subject=subject, color=(0., 0., 0.),
fig=fig)
# Plot the position and the orientation of the estimated dipole
fig = mne.viz.plot_dipole_locations(dipoles=dip, mode='arrow', subject=subject,
color=(0.2, 1., 0.5), fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=70, elevation=80, distance=0.5)
Explanation: Let's plot the positions and the orientations of the actual and the estimated
dipoles
End of explanation |
13,277 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
draw a Histogram of array L
| Python Code::
import matplotlib.pyplot as plt
plt.hist(L)
|
13,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to generate histograms using the Apache Spark DataFrame API
This provides and example of how to generate frequency histograms using the Spark DataFrame API.
Disambiguation
Step1: Generate a DataFrame with toy data for demo purposes
Step2: Compute the histogram
Step3: Histogram plotting
The first plot is a histogram with the event counts (number of events per bin).
The second plot is a histogram of the events frequencies (number of events per bin normalized by the sum of the events).
Step5: Note added
Use this to define the computeHistogram function if you cannot pip install sparkhistogram | Python Code:
# Start the Spark Session
# This uses local mode for simplicity
# the use of findspark is optional
# install pyspark if needed
# ! pip install pyspark
# import findspark
# findspark.init("/home/luca/Spark/spark-3.3.0-bin-hadoop3")
from pyspark.sql import SparkSession
spark = (SparkSession.builder
.appName("PySpark histograms")
.master("local[*]")
.getOrCreate()
)
Explanation: How to generate histograms using the Apache Spark DataFrame API
This provides and example of how to generate frequency histograms using the Spark DataFrame API.
Disambiguation: we refer here to computing histograms of the DataFrame data, rather than histograms of the columns statistics used by the cost based optimizer.
End of explanation
num_events = 100
scale = 100
seed = 4242
df = spark.sql(f"select random({seed}) * {scale} as random_value from range({num_events})")
df.show(5)
Explanation: Generate a DataFrame with toy data for demo purposes
End of explanation
# import the computeHistogram function
# see implementation details at:
# https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_Histograms/python/sparkhistogram/histogram.py
# requires the package sparkhistogram
! pip install sparkhistogram
from sparkhistogram import computeHistogram
# Compute the histogram using the computeHistogram function
histogram = computeHistogram(df, "random_value", -20, 90, 11)
# Alternative syntax: compute the histogram using transform on the DataFrame
# requires Spark 3.3.0 or higher
# histogram = df.transform(computeHistogram, "random_value", -20, 90, 11)
# this triggers the computation as show() is an action
histogram.show()
# Fetch the histogram data into a Pandas DataFrame for visualization
# At this stage data is reduced to a small number of rows (one row per bin)
# so it can be easily handled by the local machine/driver
# toPandas() is an action and triggers the computation
hist_pandasDF = histogram.toPandas()
hist_pandasDF
# Optionally normalize the event count into a frequency
# dividing by the total number of events
hist_pandasDF["frequency"] = hist_pandasDF["count"] / sum(hist_pandasDF["count"])
hist_pandasDF
Explanation: Compute the histogram
End of explanation
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["value"]
y = hist_pandasDF["count"]
# bar plot
ax.bar(x, y, width = 3.0, color='red')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event count")
ax.set_title("Distribution of event counts")
# Label for the resonances spectrum peaks
txt_opts = {'horizontalalignment': 'center',
'verticalalignment': 'center',
'transform': ax.transAxes}
plt.show()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["value"]
y = hist_pandasDF["frequency"]
# bar plot
ax.bar(x, y, width = 3.0, color='blue')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event frequency")
ax.set_title("Distribution of event frequencies")
# Label for the resonances spectrum peaks
txt_opts = {'horizontalalignment': 'center',
'verticalalignment': 'center',
'transform': ax.transAxes}
plt.show()
spark.stop()
Explanation: Histogram plotting
The first plot is a histogram with the event counts (number of events per bin).
The second plot is a histogram of the events frequencies (number of events per bin normalized by the sum of the events).
End of explanation
def computeHistogram(df: "DataFrame", value_col: str, min: float, max: float, bins: int) -> "DataFrame":
    """
    This is a dataframe function to compute the count/frequency histogram of a column

    Parameters
    ----------
    df: the dataframe with the data to compute
    value_col: column name on which to compute the histogram
    min: minimum value in the histogram
    max: maximum value in the histogram
    bins: number of histogram buckets to compute

    Output DataFrame
    ----------------
    bucket: the bucket number, range from 1 to bins (included)
    value: midpoint value of the given bucket
    count: number of values in the bucket
    """
step = (max - min) / bins
# this will be used to fill in for missing buckets, i.e. buckets with no corresponding values
df_buckets = spark.sql(f"select id+1 as bucket from range({bins})")
histdf = (df
.selectExpr(f"width_bucket({value_col}, {min}, {max}, {bins}) as bucket")
.groupBy("bucket")
.count()
.join(df_buckets, "bucket", "right_outer") # add missing buckets and remove buckets out of range
.selectExpr("bucket", f"{min} + (bucket - 1/2) * {step} as value", # use center value of the buckets
"nvl(count, 0) as count") # buckets with no values will have a count of 0
.orderBy("bucket")
)
return histdf
Explanation: Note added
Use this to define the computeHistogram function if you cannot pip install sparkhistogram
End of explanation |
13,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probability is
How likely something is to happen.
Let's start with the obligatory example of a coin toss<br>
Here is our virtual coin so that everyone can see it
Step1: It seems it would take endless time for everyone to click and record outcomes in order to show that, as we increase n, the probability of Heads approaches
Step2: It is still taking a lot of time to converge to a probability of 0.5. Let's try it with Python
Simulating coin-toss experiment with Python
Step3: For n=100000
Step4: Exercise 1 | Python Code:
from IPython.display import HTML
HTML('<iframe src="https://nipunsadvilkar.github.io/coin-flip/" width="100%" height="700px" scrolling="no" style="margin-top: -70px;" frameborder="0"></iframe>')
Explanation: Probability is
How likely something is to happen.
Let's start with the obligatory example of a coin toss<br>
Here is our virtual coin so that everyone can see it:<br>
https://nipunsadvilkar.github.io/coin-flip/
End of explanation
from IPython.display import HTML
HTML('<iframe src="http://localhost/Seeing-Theory/basic-probability/index.html#first" width="100%" height="700px" scrolling="no" style="margin-top: -70px;" frameborder="0"></iframe>')
Explanation: It seems it would take endless time for everyone to click and record results to show that, as we increase n, the probability of Heads approaches: $$\frac{1}{2}$$
<br>and the same for Tails.<br> Let's try a different way: another virtual coin-flip simulation, comparing it with the theoretical probability of Heads and Tails, $$\frac{1}{2}$$
End of explanation
%matplotlib inline
from utils import comp_prob_inference
import matplotlib.pyplot as plt
comp_prob_inference.flip_fair_coin()
flips = comp_prob_inference.flip_fair_coins(100)
comp_prob_inference.plot_discrete_histogram(flips)
comp_prob_inference.plot_discrete_histogram(flips, frequency=True)
Explanation: It is still taking a long time to converge to a probability of 0.5. Let's try it with Python
Simulating coin-toss experiment with Python
End of explanation
# TODO: make this plot more beautiful(with grids and color) and should be able to show value on hover
flips = comp_prob_inference.flip_fair_coins(100000)
comp_prob_inference.plot_discrete_histogram(flips, frequency=True)
n = 100000
heads_so_far = 0
fraction_of_heads = []
for i in range(n):
if comp_prob_inference.flip_fair_coin() == 'heads':
heads_so_far += 1
fraction_of_heads.append(heads_so_far / (i+1))
plt.figure(figsize=(8, 4))
plt.plot(range(1, n+1), fraction_of_heads)
plt.xlabel('Number of flips')
plt.ylabel('Fraction of heads')
Explanation: For n=100000
End of explanation
from IPython.core.pylabtools import figsize
import numpy as np
from matplotlib import pyplot as plt
figsize(11, 9)
import scipy.stats as stats
dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)
# For the already prepared, I'm using Binomial's conj. prior.
for k, N in enumerate(n_trials):
    sx = plt.subplot(len(n_trials)//2, 2, k+1)  # integer division so matplotlib receives an int grid size
plt.xlabel("$p$, probability of heads") \
if k in [0, len(n_trials)-1] else None
plt.setp(sx.get_yticklabels(), visible=False)
heads = data[:N].sum()
y = dist.pdf(x, 1 + heads, 1 + N - heads)
plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)
leg = plt.legend()
leg.get_frame().set_alpha(0.4)
plt.autoscale(tight=True)
plt.suptitle("Bayesian updating of posterior probabilities",
y=1.02,
fontsize=14)
plt.tight_layout()
Explanation: Exercise 1:
Do the same simulation for a die-throw experiment. You should get a stable line at 1/6 when the number of throws is large (~n=10K).
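A minimal sketch of one possible solution, using numpy's random generator rather than the comp_prob_inference helpers (whether that module offers a die-rolling helper is not shown here, so this is an assumption):
import numpy as np
import matplotlib.pyplot as plt

n = 10000
rolls = np.random.randint(1, 7, size=n)            # n fair die throws, faces 1..6
ones_so_far = np.cumsum(rolls == 1)                # running count of face 1
fraction_of_ones = ones_so_far / np.arange(1, n + 1)

plt.figure(figsize=(8, 4))
plt.plot(range(1, n + 1), fraction_of_ones)
plt.axhline(1/6, color='k', linestyle='--', lw=1)  # theoretical probability
plt.xlabel('Number of throws')
plt.ylabel('Fraction of throws showing face 1')
plt.show()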
End of explanation |
13,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collections
Let's start with lists
Step1: You can mix all kinds of types inside a list
Even other lists, of course
Step2: What about tuples?
Step3: What about both together?
Step4: Let's go back to lists
Step5: str and unicode are also sequences
Python ordered sequence types (arrays in other languages, not linked lists)
Step6: In slicing Python is able to cleverly set the indexes
No IndexError when slicing index is out of range
First (0) and last (-1) index is automatically filled
Step is 1 by default and does not need to be multiple of sequence length
Step7: Let's see some slice modifications
Step8: SOURCES
http
Step9: Still more ways to declare dictionaries
Step10: Python mappings
dict
Step11: Let's play a bit with inplace modifications of dicts content | Python Code:
spam = ["eggs", 7.12345] # This is a list, a comma-separated sequence of values between square brackets
print spam
print type(spam)
eggs = [spam,
1.2345,
"fooo"] # No problem with multi-line declaration
print eggs
Explanation: Collections
Let's start with lists
End of explanation
spam = [] # And this is an empty list
print spam
Explanation: You can mix all kinds of types inside a list
Even other lists, of course
End of explanation
spam = ("eggs", 7.12345) # This is a tuple, a comma-separated sequence of values between parentheses
print spam
print type(spam)
eggs = (spam,
1.2345,
"fooo") # Again, no problem with multiline declaration
print eggs
spam = ("eggs", ) # Single item tuple requires the comma
print spam
# what will be the output of this
spamOne = ("eggs")
print spamOne
spam = "eggs", # Actually, it is enough with the comma
print spam
spam = "eggs", 7.12345 # This is called tuple packing
print spam
val1, val2 = spam # And this is the opposite, tuple unpacking
print val1
print val2
Explanation: What about tuples?
End of explanation
spam = "spam"
eggs = "eggs"
eggs, spam = spam, eggs
print spam
print eggs
Explanation: What about both together?
End of explanation
spam = ["eggs", 7.12345]
val1, val2 = spam # Unpacking also works with lists (but packing always generates tuples)
print val1
print val2
# And what about strings? Remember they are sequences too...
spam = "spam"
s, p, a, m = spam # Unpacking even works with strings
print s
print p
print a
print m
Explanation: Let's go back to lists
End of explanation
spam = ["1st", "2nd", "3rd", "4th", "5th"]
eggs = (spam, 1.2345, "fooo")
print "eggs" in spam
print "fooo" not in eggs
print "am" in "spam" # Check items membership
print "spam".find("am") # NOT recommended for membership
print spam.count("1st") # Count repetitions (slow)
print spam + spam
print eggs + eggs
print "spam" + "eggs" # Concatenation (shallow copy), must be of the same type
print spam * 5
print eggs * 3
print "spam" * 3 # Also "multiply" creating shallow copies concatenated
print len(spam)
print len(eggs)
print len("spam") # Obtain its length
# Let's obtain min and max values (slow)
print min([5, 6, 2])
print max("xyzw abcd XYZW ABCD")
# Let's see how indexing works
spam = ["1st", "2nd", "3rd", "4th", "5th"]
eggs = (spam, 1.2345, "fooo")
print spam[0]
print eggs[1]
print "spam"[2] # Access by index, starting from 0 to length - 1, may raise an exception
print spam[-1]
print eggs[-2]
print "spam"[-3] # Access by index, even negative
print eggs[0]
print eggs[0][0]
print eggs[0][0][-1] # Concatenate index accesses
# Let's see how slicing works
spam = ("1st", "2nd", "3rd", "4th", "5th")
print spam[1:3] # Use colon and a second index for slicing
print type(spam[1:4]) # It generates a brand new object (shallow copy)
spam = ["1st", "2nd", "3rd", "4th", "5th"]
print spam[:3]
print spam[1:7]
print spam[-2:7] # Negative indexes are also valid
print spam[3:-2]
print spam[:] # Without indexes it performs a shallow copy
print spam[1:7:2] # Use another colon and a third int to specify the step
print spam[::2]
print spam[::-2] # A negative step traverse the sequence in the other way
print spam[::-1] # Useful to reverse a sequence
Explanation: str and unicode are also sequences
Python ordered sequence types (arrays in other languages, not linked lists):
- They are arrays, not linked lists, so they have constant O(1) time for index access
- list:
- Comma-separated with square brackets
- Mutable
- Kind of dynamic array implementation (reserves space in advance)
- Resizing is O(n)
- Arbitrary insertion is O(n)
- Appending is amortized O(1)
- tuple:
- Comma-separated with parentheses
- Parentheses only required in empty tuple
- Immutable
- Slightly better traversal performance than lists
- str and unicode:
- One or three single or double quotes
- They have special methods
- Immutable
Standard library also provides other built-in collection formats:
set and frozenset: unordered, without repeated values (content must be hashable)
Highly performant for operations like intersection, union, difference, and membership checks (see the sketch below)
bytearray, buffer, xrange: special sequences for concrete use cases
collections module, with deque, namedtuple, Counter, OrderedDict and defaultdict
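The set types and the collections module listed above are not exercised in the cells below; a small illustrative sketch (Python 2 syntax, to match the rest of this section):
spam = set(["eggs", "bacon", "spam"])
eggs = frozenset(["spam", "beans"])
print spam & eggs          # intersection
print spam | eggs          # union
print spam - eggs          # difference
print "bacon" in spam      # membership check

from collections import Counter, defaultdict
print Counter("spam spam eggs spam".split())   # count repetitions
groups = defaultdict(list)
groups["breakfast"].append("eggs")             # missing keys are created on first access
print groups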
Let's play a bit with sequence operations
End of explanation
# Let's try something different
spam = ["1st", "2nd", "3rd", "4th", "5th"]
spam[3] = 1
print spam # Index direct modification, may raise an exception
Explanation: In slicing, Python is able to cleverly set the indexes
No IndexError is raised when a slicing index is out of range
The first (0) and last (-1) indexes are filled in automatically
The step is 1 by default and does not need to be a multiple of the sequence length
End of explanation
spam = [1, 2, 3, 4, 5]
eggs = ['a', 'b', 'c']
spam[1:3] = eggs
print spam # We can use slicing here too!
spam = [1, 2, 3, 4, 5, 6, 7, 8]
eggs = ['a', 'b', 'c']
spam[1:7:2] = eggs
print spam # We can use even slicing with step!!
spam = [1, 2, 3, 4, 5]
spam.append("a")
print spam # We can append an element at the end (amortized O(1))
spam = [1, 2, 3, 4, 5]
eggs = ['a', 'b', 'c']
spam.extend(eggs)
print spam # We can append another sequence elements at the end (amortized O(1))
spam = [1, 2, 3, 4, 5]
eggs = ['a', 'b', 'c']
spam.append(eggs)
print spam # Take care to not mix both commands!!
spam = [1, 2, 3, 4, 5]
spam.insert(3, "a")
print spam # The same like spam[3:3] = ["a"]
spam = [1, 2, 3, 4, 5]
print spam.pop()
print spam # Pop (remove and return) last item
print spam.pop(2)
print spam # Pop (remove and return) given item
spam = [1, 2, 3, 4, 5]
del spam[3]
print spam # Delete an item
spam = tuple([1, 2, 3, 4, 5, 6, 7, 8])
eggs = list(('a', 'b', 'c')) # Shallow copy constructors
print spam
print eggs
Explanation: Let's see some slice modifications
End of explanation
spam = {"one": 1, "two": 2, "three": 3} # This is a dictionary
print spam
print type(spam)
eggs = {1: "one",
2: "two",
3: "three"} # Again, no problem with multiline declaration
print eggs
Explanation: SOURCES
http://docs.python.org/2/library/stdtypes.html#numeric-types-int-float-long-complex
http://docs.python.org/2/tutorial/introduction.html#lists
http://docs.python.org/2/tutorial/datastructures.html#tuples-and-sequences
http://wiki.python.org/moin/TimeComplexity
http://docs.python.org/2/tutorial/datastructures.html#sets
http://docs.python.org/2/library/stdtypes.html#set-types-set-frozenset
DICTIONARIES
End of explanation
spam = dict(one=1, two=2, three=3) # Use keyword arguments (we will talk about them in short)
print spam
eggs = dict([(1, "one"), (2, "two"), (3, "three")]) # Sequence of two elements sequences (key and object)
print eggs # Note that these tuples require the parentheses just to group
spam = dict(eggs) # Shallow copy constructor
print spam
Explanation: Still more ways to declare dictionaries
End of explanation
spam = {"one": 1, "two": 2, "three": 3}
print spam["two"] # Access by key, may raise an exception
spam = {"one": 1, "two": 2, "three": 3}
print "two" in spam # Check keys membership
print 2 not in spam # Check keys membership
spam = {"one": 1, "two": 2, "three": 3}
print spam.get("two")
print spam.get("four")
print spam.get("four", 4) # Safer access by key, never raises an exception, optional default value
spam = {"one": 1, "two": 2, "three": 3}
print spam.keys() # Retrieve keys list (copy) in arbitrary order
print spam.values() # Retrieve values list (copy) in arbitrary order
print spam.items() # Retrieve key, values pairs list (copy) in arbitrary order
Explanation: Python mappings
dict:
Comma-separated list of hashable key, colon and arbitrary object between curly brackets
Mutable
Unordered
Access by key
Heavily optimized:
Creation with n items is O(n)
Arbitrary access is O(1)
Adding a new key is amortized O(1)
dictview:
Dynamic view of a dictionary's data which is kept updated (see the sketch below)
Improved in Py3k (especially the items, keys and values methods)
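Since dictionary views are only mentioned above, here is a small sketch (Python 2.7 syntax, where they are exposed as viewkeys/viewvalues/viewitems):
spam = {"one": 1, "two": 2, "three": 3}
keys_view = spam.viewkeys()    # a dynamic view, kept in sync with the dict
print keys_view
spam["four"] = 4
print keys_view                # reflects the new key without being rebuilt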
Let's play a bit with dictionaries
End of explanation
spam = {"one": 1, "two": 2, "three": 3}
spam["two"] = 22 # Set or replace a key value
spam["four"] = 44 # Set or replace a key value
print spam
spam = {"one": 1, "two": 2, "three": 3}
print spam.popitem()
print spam
spam = {"one": 1, "two": 2, "three": 3}
print spam.pop("two") # Pop (remove and return) given item, may raise an exception
print spam.pop("four", 4) # Pop (remove and return) given item with optional default value
print spam
spam = {"one": 1, "two": 2, "three": 3}
eggs = {"three": 33, "four": 44}
spam.update(eggs) # Update dictionary with other dict content
print spam
spam = {"one": 1, "two": 2, "three": 3}
eggs = {1: "one", 2: "two", 3: "three"}
spam.update(two=22, four=44) # Like dict constructor, it accepts keyword arguments
eggs.update([(0, "ZERO"), (1, "ONE")]) # Like dict constructor, it accepts a sequence of pairs
print spam
print eggs
Explanation: Let's play a bit with inplace modifications of dicts content
End of explanation |
13,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read the parquet file into a pandas dataframe. Using fastparquet here because pyarrow couldn't read in a file of this size for some reason
Step1: Get the list of ids from the processed xml files so we can select a subset of the mimic notes
Step2: Select the subset of notes that we have xml output from ctakes for. Reset the index and drop some unnecessary columns
Step3: process the xml files and store in parquet locally
TODO
Step4: Creating templates
The plan
Step5: Pull in the dataframes for elements we need for processing
Step6: Prep sentences DF
Add raw text from notes to sentences
Step7: Add position of sentence in document to sentences df
Step8: remove sentences without entities
Step9: Prep UMLS DF
Remove umls concepts which don't have a preferred text field
Step10: Pref Mentions DF
Transform mentions
Drop some unused fields
Only keep the first umls code from ontology array ( no longer doing this as it limits the cui codes we can choose
from in the umls concepts table)
Sort by begin and end offsets. Remove mentions that end on the same offset. Only want to keep the full span and not split entities up. This should give us better semantic meaning
Add raw text to mentions
Add in umls concept information (CUI) to mentions
a. There are many possible cuis for a the text span of an entity. Here, we're going to use
the edit distance from the original span and the umls preferred text. For now, just choose the first
umls concept with the best score (lowest)
Step11: Add original text to mentions
Step12: Add sentence position to mentions
Step13: Prep Predicates DF
Transform predicates
Simple transformation. Just modify the frameset string to remove everything after the '.'
Step14: Remove predicates not in sentences with mentions
Step15: Add original text to predicates
Step16: Linking CUI codes to entities (mentions)
Assign cui codes to mentions (entities)
cTAKES over-generates cui and tui codes for text spans in a clinical note. There can be multiple coding schemes that have a code for a term and a cui could apply to the original text span specifically or be a generalization or abstraction over the meaning of the span. For generating text we want the cui that most closely matches the original text span. Future work could look at these generalizations to get a better sense of semantic meaning. However, this will require a deep understanding of the UMLS ontology an how to work with it to extract this kind of information.
For each mention
Step17: Set the template tokens we're going to use
For mentions this is either
Step18: Append the two template tokens dataframes
Step19: Get the semantic templates
Group the rows of the above template tokens dataframe by sentence id and join them together into a single string. Must sort by begin offset.
Step20: Gather corpus statistics
Average sentences per doc
Step21: Average sentences w/ entities per doc
Step22: Count of unique cuis (When removing overlapping text spans)
Step23: Average # of cuis per doc
Step24: Average # of cuis per sentence
Step25: Average # of words per doc (excluding newline tokens and symbols)
Step26: Average # of words per sentence
Step27: Get frequency of mentions
Step28: Frequency of mentions by sentence position
Step29: Frequency of CUIs
Step30: Frequency with preferred text
Step31: Frequency of CUIs by sentence position
Step32: Number of unique templates
Step33: Frequency of templates (identified by sentence number)
Step34: Frequency of templates by sentence position
Step35: Write dataframes to parquet
We want to write these to a parquet file so that they can be used by a separate notebook to do clustering and note generation. This is just prep-work for those processes. | Python Code:
notes_file = 'synthnotes/data/note-events.parquet'
pq_root_path = 'synthnotes/data/xml_extracted'
pf = ParquetFile(notes_file)
df = pf.to_pandas()
Explanation: Read the parquet file into a pandas dataframe. Using fastparquet here because pyarrow couldn't read in a file of this size for some reason
End of explanation
xml_dir = 'synthnotes/data/xml_files'
xml_files = os.listdir(xml_dir)
ids = [int(f.split('.txt.xmi')[0]) for f in xml_files]
Explanation: Get the list of ids from the processed xml files so we can select a subset of the mimic notes
End of explanation
notes = df[df.ROW_ID.isin(ids)]
notes = notes.reset_index(drop=True)
notes = notes.drop(['CHARTDATE','CHARTTIME','STORETIME','CGID','ISERROR'],axis=1)
def get_notes_sample(df, n=100, category='Nursing'):
    notes = df[df['CATEGORY'] == category]
notes = notes[notes['ISERROR'].isnull()]
notes = notes[notes['DESCRIPTION'] == 'Generic Note']
notes = notes.sample(n=n)
notes = notes.reset_index(drop=True)
return notes
Explanation: Select the subset of notes that we have xml output from ctakes for. Reset the index and drop some unnecessary columns
End of explanation
# parser = CtakesXmlParser()
# schemas = list()
# for file in xml_files:
# xml_out = parser.parse(f'{xml_dir}/{file}')
# for k, v in xml_out.items():
# feature_df = pd.DataFrame(list(v))
# if feature_df.shape[0] > 0:
# table = pa.Table.from_pandas(feature_df)
# pq.write_to_dataset(table, f'{pq_root_path}/{k}')
# else:
# print(f"{k} was empty for {file}")
Explanation: process the xml files and store in parquet locally
TODO: switch this to use columnar format: need to change how we extract different types of elements
End of explanation
def get_df_from_pq(root, name):
return pq.read_table(f'{root}/{name}').to_pandas()
def transform_preds(df):
df['frameset'] = df['frameset'].apply(lambda x: x.split('.')[0])
return df
def transform_mentions(mentions):
# Don't want this to fail if these have already been removed
try:
mentions = mentions.drop(
['conditional', 'history_of', 'generic', 'polarity', 'discovery_technique', 'subject'],
axis=1)
except:
pass
sorted_df = mentions.groupby(['sent_id', 'begin']) \
.apply(lambda x: x.sort_values(['begin', 'end']))
# Drop the mentions that are parts of a larger span. Only keep the containing span that holds multiple
# mentions
deduped = sorted_df.drop_duplicates(['sent_id', 'begin'], keep='last')
deduped = deduped.drop_duplicates(['sent_id', 'end'], keep='first')
return deduped.reset_index(drop=True)
def set_template_token(df, column):
df['template_token'] = df[column]
return df
def get_template_tokens(row):
return pd.Series({
'doc_id': row['doc_id'],
'sent_id': row['sent_id'],
'token': row['template_token'],
'begin': row['begin'],
'end': row['end']
})
# def merge_mentions_umls(mentions, umls):
# umls['umls_xmi_id'] = umls['xmi_id']
# mentions = mentions.merge(umls[['umls_xmi_id', 'cui']], on='umls_xmi_id')
# return mentions
# def umls_dedup(umls):
# return umls.drop_duplicates(subset=['cui'])
# def set_umls_join_key(umls):
# umls['umls_xmi_id'] = umls['xmi_id']
# return umls
def set_sentence_pos(df):
df = df.groupby(["doc_id"]).apply(lambda x: x.sort_values(["begin"])).reset_index(drop=True)
df['sentence_number'] = df.groupby("doc_id").cumcount()
return df
def get_root_verb(row):
pass
def extract_sent(row):
begin = row['begin']
end = row['end']
row['TEXT'] = row['TEXT'][begin:end]
return row
def write_notes(row):
fn = f'raw_notes/{row["ROW_ID"]}'
with open(fn, 'w') as f:
f.write(row['TEXT'])
def get_text_from_sentence(row, notes):
doc = notes[notes['ROW_ID'] == row['doc_id']]
b = row['begin']
e = row['end']
return doc['TEXT'].iloc[0][b:e]
def edit_dist(row, term2):
term1 = row.loc['preferred_text']
return lev_norm(term1, term2)
def get_cui( mention, umls_df):
ont_arr = list(map(int, mention['ontology_arr'].split())) or None
ment_text = mention['text']
concepts = umls_df[umls_df['xmi_id'].isin(ont_arr)].loc[:, ['cui', 'preferred_text', 'xmi_id']]
concepts['dist'] = concepts.apply(edit_dist, args=(ment_text,), axis=1)
sorted_df = concepts.sort_values(by='dist', ascending=True).reset_index(drop=True)
cui = sorted_df['cui'].iloc[0]
xmi_id = sorted_df['xmi_id'].iloc[0]
pref_text = sorted_df['preferred_text'].iloc[0]
return cui, xmi_id, pref_text
Explanation: Creating templates
The plan:<br>
For each sentence in all documents:
1. Get the predicates for the sentence
2. Get the entities for the sentence
3. For each entity:
- append the cui code from umls concept to the end
4. Combine predicates and entities and sort based on their begin position
5. Save to a dataframe
Some helper functions:
End of explanation
preds = get_df_from_pq(pq_root_path, 'predicates')
mentions = get_df_from_pq(pq_root_path, 'mentions')
umls = get_df_from_pq(pq_root_path, 'umls_concepts')
sents = get_df_from_pq(pq_root_path, 'sentences')
tokens = get_df_from_pq(pq_root_path, 'tokens')
sents = sents.rename({'id': 'sent_id'}, axis=1)
sents.head()
Explanation: Pull in the dataframes for elements we need for processing
End of explanation
sents = sents.rename({'id': 'sent_id'}, axis=1)
sents = sents.merge(notes[['ROW_ID', 'TEXT']],
left_on='doc_id', right_on='ROW_ID').drop('ROW_ID', axis=1)
sents = sents.apply(extract_sent, axis=1)
sents = sents.rename({'TEXT': 'text'}, axis=1)
Explanation: Prep sentences DF
Add raw text from notes to sentences
End of explanation
sents = set_sentence_pos(sents)
Explanation: Add position of sentence in document to sentences df
End of explanation
sents_with_mentions = sents[
sents['sent_id'].isin(
mentions.drop_duplicates(subset='sent_id')['sent_id']
)
]
Explanation: remove sentences without entities
End of explanation
umls = umls[~umls['preferred_text'].isna()]
Explanation: Prep UMLS DF
Remove umls concepts which don't have a preferred text field
End of explanation
mentions = get_df_from_pq(pq_root_path, 'mentions')
mentions = transform_mentions(mentions)
mentions.head()
Explanation: Prep Mentions DF
Transform mentions
Drop some unused fields
Only keep the first umls code from ontology array ( no longer doing this as it limits the cui codes we can choose
from in the umls concepts table)
Sort by begin and end offsets. Remove mentions that end on the same offset. Only want to keep the full span and not split entities up. This should give us better semantic meaning
Add raw text to mentions
Add in umls concept information (CUI) to mentions
a. There are many possible cuis for the text span of an entity. Here, we're going to use
the edit distance between the original span and the umls preferred text. For now, just choose the first
umls concept with the best score (lowest)
End of explanation
mentions['text'] = mentions.apply(get_text_from_sentence, args=(notes,), axis=1)
mentions.head()
Explanation: Add original text to mentions
End of explanation
mentions = mentions.merge(sents_with_mentions[['sent_id', 'sentence_number']],
on='sent_id')
mentions.head()
Explanation: Add sentence position to mentions
End of explanation
preds = transform_preds(preds)
Explanation: Prep Predicates DF
Transform predicates
Simple transformation. Just modify the frameset string to remove everything after the '.'
End of explanation
print(preds.shape)
preds = preds[
preds['sent_id'].isin( sents_with_mentions['sent_id'] )
]
print(preds.shape)
Explanation: Remove predicates not in sentences with mentions
End of explanation
preds['text'] = preds.apply(get_text_from_sentence, args=(notes,), axis=1)
Explanation: Add original text to predicates
End of explanation
mentions[['cui', 'umls_xmi_id', 'preferred_text']] = mentions. \
apply(get_cui, args=(umls,), axis=1, result_type='expand')
mentions.head()
Explanation: Linking CUI codes to entities (mentions)
Assign cui codes to mentions (entities)
cTAKES over-generates cui and tui codes for text spans in a clinical note. There can be multiple coding schemes that have a code for a term and a cui could apply to the original text span specifically or be a generalization or abstraction over the meaning of the span. For generating text we want the cui that most closely matches the original text span. Future work could look at these generalizations to get a better sense of semantic meaning. However, this will require a deep understanding of the UMLS ontology and how to work with it to extract this kind of information.
For each mention:
1. Collect all the umls concept rows (based on xmi_id) that are in the mention's ontology array
2. Compute edit distance between the above umls rows' preferred text column and the mention's original text
3. Sort edit distances in ascending order
4. Choose the first umls concept row (a lower edit distance means the two texts are more similar)
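The lev_norm function used in edit_dist above is not defined in the cells shown here; one plausible stand-in is a length-normalized Levenshtein distance (a sketch, not necessarily the original implementation):
def lev_norm(s1, s2):
    # Classic dynamic-programming Levenshtein distance, normalized by the longer string length
    m, n = len(s1), len(s2)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost)
        prev = curr
    return float(prev[n]) / max(m, n, 1)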
End of explanation
mentions['template_token'] = mentions['mention_type']
preds['template_token'] = preds['frameset']
preds_toks = preds.apply(get_template_tokens, axis=1)
mentions_toks = mentions.apply(get_template_tokens, axis=1)
mentions_toks.groupby(['sent_id', 'end']).head()
preds_toks.groupby(['sent_id', 'end']).head()
Explanation: Set the template tokens we're going to use
For mentions this is either: the type of mention, the CUI code, or the two concatenated together
For predicates it is the frameset trimmed of everything after the '.'
End of explanation
template_tokens = preds_toks.append(mentions_toks)
temp_tokens = template_tokens.groupby(['sent_id']).apply(lambda x: x.sort_values(['begin']))
temp_tokens.head()
Explanation: Append the two template tokens dataframes
End of explanation
sem_templates = template_tokens.sort_values('begin').groupby('sent_id')['token'].apply(' '.join)
sem_templates.head()
temp_tokens.token.unique().shape
sem_df = pd.DataFrame(sem_templates)  # turn the grouped Series of joined tokens into a DataFrame
sem_df.head()
sem_df.reset_index(level=0, inplace=True)
sem_df = sem_df.rename(columns={'token': 'sem_template'})
sem_df = sem_df.merge(sents[['sent_id', 'sentence_number', 'doc_id', 'begin', 'end']],
left_on='sent_id', right_on='sent_id' )#.drop('id', axis=1)
sem_df.head()
Explanation: Get the semantic templates
Group the rows of the above template tokens dataframe by sentence id and join them together into a single string. Must sort by begin offset.
End of explanation
avg_sents_per_doc = sents.groupby('doc_id').size().mean()
print(avg_sents_per_doc)
Explanation: Gather corpus statistics
Average sentences per doc
End of explanation
avg_sents_with_ents_per_doc = sents_with_mentions.groupby('doc_id').size().mean()
print(avg_sents_with_ents_per_doc)
Explanation: Average sentences w/ entities per doc
End of explanation
print(mentions['cui'].nunique())
Explanation: Count of unique cuis (When removing overlapping text spans)
End of explanation
mentions.groupby('doc_id').size().mean()
Explanation: Average # of cuis per doc
End of explanation
mentions.groupby('sent_id').size().mean()
Explanation: Average # of cuis per sentence
End of explanation
tokens = tokens[(~tokens['sent_id'].isnull()) & (tokens['token_type'] != 'NewlineToken')]
wc_by_doc = tokens.groupby('doc_id').count()['id'].reset_index(name='count')
wc_by_doc['count'].mean()
Explanation: Average # of words per doc (excluding newline tokens and symbols)
End of explanation
wc_by_sentence = tokens.groupby('sent_id')['id'].count().reset_index(name='count')
wc_by_sentence['count'].mean()
Explanation: Average # of words per sentence
End of explanation
mention_counts = mentions.groupby('mention_type').size().reset_index(name='count')
mention_counts
mention_counts['frequency'] = mention_counts['count'] / mention_counts['count'].sum()
mention_counts
Explanation: Get frequency of mentions
End of explanation
mentions_by_pos = pd.crosstab(
mentions['mention_type'],
mentions['sentence_number']).apply(lambda x: x / x.sum(), axis=0)
mentions_by_pos
Explanation: Frequency of mentions by sentence position
End of explanation
cui_counts = mentions.groupby('cui').size().reset_index(name='count')
cui_counts = cui_counts.sort_values('count', ascending=False).reset_index(drop=True)
cui_counts.head(10)
cui_counts['frequency'] = cui_counts['count'] / cui_counts['count'].sum()
cui_counts.head(10)
Explanation: Frequency of CUIs
End of explanation
cui_counts_with_text = cui_counts.merge(mentions[['cui', 'preferred_text']], on='cui') \
.drop_duplicates('cui') \
.reset_index(drop=True)
cui_counts_with_text.head(10)
Explanation: Frequency with preferred text
End of explanation
cui_by_pos = pd.crosstab(mentions['cui'], mentions['sentence_number']).apply(lambda x: x / x.sum(), axis=0)
cui_by_pos.head()
cui_by_pos.loc[:, 0].sort_values(ascending=False)[:10]
Explanation: Frequency of CUIs by sentence position
End of explanation
sem_df.head()
sem_df['sem_template'].nunique()
Explanation: Number of unique templates
End of explanation
count_temps = sem_df.groupby('sem_template').size().reset_index(name='count')
count_temps = count_temps.sort_values('count', ascending=False).reset_index(drop=True)
count_temps.head(10)
count_temps['frequency'] = count_temps['count'] / count_temps['count'].sum()
count_temps.head(10)
Explanation: Frequency of templates (identified by sentence number)
End of explanation
sem_df.head()
sem_df['sentence_number'].shape
temp_by_pos = pd.crosstab(sem_df['sem_template'], sem_df['sentence_number']).apply(lambda x: x / x.sum(), axis=0)
temp_by_pos.head()
Explanation: Frequency of templates by sentence position
End of explanation
df_dir = 'data/processed_dfs'
# Write sentences, mentions, predicates, and umls concepts to parquet, sem_df
sents_with_mentions.to_parquet(f'{df_dir}/sentences.parquet')
mentions.to_parquet(f'{df_dir}/mentions.parquet')
preds.to_parquet(f'{df_dir}/predicates.parquet')
umls.to_parquet(f'{df_dir}/umls.parquet')
sem_df.to_parquet(f'{df_dir}/templates.parquet')
temp_by_pos.to_parquet(f'{df_dir}/templates_by_pos.parquet')
Explanation: Write dataframes to parquet
We want to write these to a parquet file so that they can be used by a separate notebook to do clustering and note generation. This is just prep-work for those processes.
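A sketch of the symmetric read in that downstream notebook (assuming the same df_dir layout used above):
sentences = pq.read_table(f'{df_dir}/sentences.parquet').to_pandas()
templates = pq.read_table(f'{df_dir}/templates.parquet').to_pandas()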
End of explanation |
13,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Copyright 2017 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step2: 2-Bar Drums Model
Below are 4 pre-trained models to experiment with. The first 3 map the 61 MIDI drum "pitches" to a reduced set of 9 classes (bass, snare, closed hi-hat, open hi-hat, low tom, mid tom, high tom, crash cymbal, ride cymbal) for a simplified but less expressive output space. The last model uses a NADE to represent all possible MIDI drum "pitches".
drums_2bar_oh_lokl
Step3: Generate Samples
Step4: Generate Interpolations
Step5: 2-Bar Melody Model
The pre-trained model consists of a single-layer bidirectional LSTM encoder with 2048 nodes in each direction, a 3-layer LSTM decoder with 2048 nodes in each layer, and Z with 512 dimensions. The model was given 0 free bits, and had its beta valued annealed at an exponential rate of 0.99999 from 0 to 0.43 over 200k steps. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. The final accuracy is 0.95 and KL divergence is 58 bits.
Step6: Generate Samples
Step7: Generate Interpolations
Step8: 16-bar Melody Models
The pre-trained hierarchical model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 16-step 2-layer LSTM "conductor" decoder with 1024 nodes in each layer, a 2-layer LSTM core decoder with 1024 nodes in each layer, and a Z with 512 dimensions. It was given 256 free bits, and had a fixed beta value of 0.2. After 25k steps, the final accuracy is 0.90 and KL divergence is 277 bits.
Step9: Generate Samples
Step10: Generate Means
Step11: 16-bar "Trio" Models (lead, bass, drums)
We present two pre-trained models for 16-bar trios
Step12: Generate Samples
Step13: Generate Means | Python Code:
#@title Setup Environment
#@test {"output": "ignore"}
import glob
BASE_DIR = "gs://download.magenta.tensorflow.org/models/music_vae/colab2"
print('Installing dependencies...')
!apt-get update -qq && apt-get install -qq libfluidsynth1 fluid-soundfont-gm build-essential libasound2-dev libjack-dev
!pip install -q pyfluidsynth
!pip install -qU magenta
# Hack to allow python to pick up the newly-installed fluidsynth lib.
# This is only needed for the hosted Colab environment.
import ctypes.util
orig_ctypes_util_find_library = ctypes.util.find_library
def proxy_find_library(lib):
if lib == 'fluidsynth':
return 'libfluidsynth.so.1'
else:
return orig_ctypes_util_find_library(lib)
ctypes.util.find_library = proxy_find_library
print('Importing libraries and defining some helper functions...')
from google.colab import files
import magenta.music as mm
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel
import numpy as np
import os
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
# Necessary until pyfluidsynth is updated (>1.2.5).
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
def play(note_sequence):
mm.play_sequence(note_sequence, synth=mm.fluidsynth)
def interpolate(model, start_seq, end_seq, num_steps, max_length=32,
assert_same_length=True, temperature=0.5,
individual_duration=4.0):
  """Interpolates between a start and end sequence."""
note_sequences = model.interpolate(
start_seq, end_seq,num_steps=num_steps, length=max_length,
temperature=temperature,
assert_same_length=assert_same_length)
print('Start Seq Reconstruction')
play(note_sequences[0])
print('End Seq Reconstruction')
play(note_sequences[-1])
print('Mean Sequence')
play(note_sequences[num_steps // 2])
print('Start -> End Interpolation')
interp_seq = mm.sequences_lib.concatenate_sequences(
note_sequences, [individual_duration] * len(note_sequences))
play(interp_seq)
mm.plot_sequence(interp_seq)
return interp_seq if num_steps > 3 else note_sequences[num_steps // 2]
def download(note_sequence, filename):
mm.sequence_proto_to_midi_file(note_sequence, filename)
files.download(filename)
print('Done')
Explanation: Copyright 2017 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music.
Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, and Douglas Eck
MusicVAE learns a latent space of musical scores, providing different modes
of interactive musical creation, including:
Random sampling from the prior distribution.
Interpolation between existing sequences.
Manipulation of existing sequences via attribute vectors.
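The third mode is not exercised in the cells below; a rough sketch, assuming the encode/decode methods of the MusicVAE TrainedModel API (encode returning a (z, mu, sigma) tuple, decode taking a batch of latents and a length), with a loaded model such as the mel_2bar model defined further down. The attribute vector itself is hypothetical, for example the difference between mean latent codes of two groups of sequences:
# seq is any 2-bar NoteSequence; z_with / z_without are hypothetical mean latent codes
z, _, _ = mel_2bar.encode([seq])
attribute_vector = z_with - z_without        # hypothetical attribute direction in latent space
shifted = mel_2bar.decode(z + 0.5 * attribute_vector, length=32)
play(shifted[0])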
Examples of these interactions can be generated below, and selections can be heard in our
YouTube playlist.
For short sequences (e.g., 2-bar "loops"), we use a bidirectional LSTM encoder
and LSTM decoder. For longer sequences, we use a novel hierarchical LSTM
decoder, which helps the model learn longer-term structures.
We also model the interdependencies between instruments by training multiple
decoders on the lowest-level embeddings of the hierarchical decoder.
For additional details, check out our blog post and paper.
This colab notebook is self-contained and should run natively on google cloud. The code and checkpoints can be downloaded separately and run locally, which is required if you want to train your own model.
Basic Instructions
Double click on the hidden cells to make them visible, or select "View > Expand Sections" in the menu at the top.
Hover over the "[ ]" in the top-left corner of each cell and click on the "Play" button to run it, in order.
Listen to the generated samples.
Make it your own: copy the notebook, modify the code, train your own models, upload your own MIDI, etc.!
Environment Setup
Includes package installation for sequence synthesis. Will take a few minutes.
End of explanation
#@title Load Pretrained Models
drums_models = {}
# One-hot encoded.
drums_config = configs.CONFIG_MAP['cat-drums_2bar_small']
drums_models['drums_2bar_oh_lokl'] = TrainedModel(drums_config, batch_size=4, checkpoint_dir_or_path=BASE_DIR + '/checkpoints/drums_2bar_small.lokl.ckpt')
drums_models['drums_2bar_oh_hikl'] = TrainedModel(drums_config, batch_size=4, checkpoint_dir_or_path=BASE_DIR + '/checkpoints/drums_2bar_small.hikl.ckpt')
# Multi-label NADE.
drums_nade_reduced_config = configs.CONFIG_MAP['nade-drums_2bar_reduced']
drums_models['drums_2bar_nade_reduced'] = TrainedModel(drums_nade_reduced_config, batch_size=4, checkpoint_dir_or_path=BASE_DIR + '/checkpoints/drums_2bar_nade.reduced.ckpt')
drums_nade_full_config = configs.CONFIG_MAP['nade-drums_2bar_full']
drums_models['drums_2bar_nade_full'] = TrainedModel(drums_nade_full_config, batch_size=4, checkpoint_dir_or_path=BASE_DIR + '/checkpoints/drums_2bar_nade.full.ckpt')
Explanation: 2-Bar Drums Model
Below are 4 pre-trained models to experiment with. The first 3 map the 61 MIDI drum "pitches" to a reduced set of 9 classes (bass, snare, closed hi-hat, open hi-hat, low tom, mid tom, high tom, crash cymbal, ride cymbal) for a simplified but less expressive output space. The last model uses a NADE to represent all possible MIDI drum "pitches".
drums_2bar_oh_lokl: This low KL model was trained for more realistic sampling. The output is a one-hot encoding of 2^9 combinations of hits. It has a single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM decoder with 256 nodes in each layer, and a Z with 256 dimensions. During training it was given 0 free bits, and had a fixed beta value of 0.8. After 300k steps, the final accuracy is 0.73 and KL divergence is 11 bits.
drums_2bar_oh_hikl: This high KL model was trained for better reconstruction and interpolation. The output is a one-hot encoding of 2^9 combinations of hits. It has a single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM decoder with 256 nodes in each layer, and a Z with 256 dimensions. During training it was given 96 free bits and had a fixed beta value of 0.2. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 300k, steps the final accuracy is 0.97 and KL divergence is 107 bits.
drums_2bar_nade_reduced: This model outputs a multi-label "pianoroll" with 9 classes. It has a single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM-NADE decoder with 512 nodes in each layer and 9-dimensional NADE with 128 hidden units, and a Z with 256 dimensions. During training it was given 96 free bits and has a fixed beta value of 0.2. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 300k steps, the final accuracy is 0.98 and KL divergence is 110 bits.
drums_2bar_nade_full: The output is a multi-label "pianoroll" with 61 classes. A single-layer bidirectional LSTM encoder with 512 nodes in each direction, a 2-layer LSTM-NADE decoder with 512 nodes in each layer and 61-dimensional NADE with 128 hidden units, and a Z with 256 dimensions. During training it was given 0 free bits and has a fixed beta value of 0.2. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 300k steps, the final accuracy is 0.90 and KL divergence is 116 bits.
End of explanation
#@title Generate 4 samples from the prior of one of the models listed above.
drums_sample_model = "drums_2bar_oh_lokl" #@param ["drums_2bar_oh_lokl", "drums_2bar_oh_hikl", "drums_2bar_nade_reduced", "drums_2bar_nade_full"]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
drums_samples = drums_models[drums_sample_model].sample(n=4, length=32, temperature=temperature)
for ns in drums_samples:
play(ns)
#@title Optionally download generated MIDI samples.
for i, ns in enumerate(drums_samples):
download(ns, '%s_sample_%d.mid' % (drums_sample_model, i))
Explanation: Generate Samples
End of explanation
#@title Option 1: Use example MIDI files for interpolation endpoints.
input_drums_midi_data = [
tf.io.gfile.GFile(fn, mode='rb').read()
for fn in sorted(tf.io.gfile.glob(BASE_DIR + '/midi/drums_2bar*.mid'))]
#@title Option 2: upload your own MIDI files to use for interpolation endpoints instead of those provided.
input_drums_midi_data = files.upload().values() or input_drums_midi_data
#@title Extract drums from MIDI files. This will extract all unique 2-bar drum beats using a sliding window with a stride of 1 bar.
drums_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_drums_midi_data]
extracted_beats = []
for ns in drums_input_seqs:
extracted_beats.extend(drums_nade_full_config.data_converter.from_tensors(
drums_nade_full_config.data_converter.to_tensors(ns)[1]))
for i, ns in enumerate(extracted_beats):
print("Beat", i)
play(ns)
#@title Interpolate between 2 beats, selected from those in the previous cell.
drums_interp_model = "drums_2bar_oh_hikl" #@param ["drums_2bar_oh_lokl", "drums_2bar_oh_hikl", "drums_2bar_nade_reduced", "drums_2bar_nade_full"]
start_beat = 0 #@param {type:"integer"}
end_beat = 1 #@param {type:"integer"}
start_beat = extracted_beats[start_beat]
end_beat = extracted_beats[end_beat]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
num_steps = 13 #@param {type:"integer"}
drums_interp = interpolate(drums_models[drums_interp_model], start_beat, end_beat, num_steps=num_steps, temperature=temperature)
#@title Optionally download interpolation MIDI file.
download(drums_interp, '%s_interp.mid' % drums_interp_model)
Explanation: Generate Interpolations
End of explanation
#@title Load the pre-trained model.
mel_2bar_config = configs.CONFIG_MAP['cat-mel_2bar_big']
mel_2bar = TrainedModel(mel_2bar_config, batch_size=4, checkpoint_dir_or_path=BASE_DIR + '/checkpoints/mel_2bar_big.ckpt')
Explanation: 2-Bar Melody Model
The pre-trained model consists of a single-layer bidirectional LSTM encoder with 2048 nodes in each direction, a 3-layer LSTM decoder with 2048 nodes in each layer, and Z with 512 dimensions. The model was given 0 free bits, and had its beta valued annealed at an exponential rate of 0.99999 from 0 to 0.43 over 200k steps. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. The final accuracy is 0.95 and KL divergence is 58 bits.
End of explanation
#@title Generate 4 samples from the prior.
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
mel_2_samples = mel_2bar.sample(n=4, length=32, temperature=temperature)
for ns in mel_2_samples:
play(ns)
#@title Optionally download samples.
for i, ns in enumerate(mel_2_samples):
download(ns, 'mel_2bar_sample_%d.mid' % i)
Explanation: Generate Samples
End of explanation
#@title Option 1: Use example MIDI files for interpolation endpoints.
input_mel_midi_data = [
tf.io.gfile.GFile(fn, 'rb').read()
for fn in sorted(tf.io.gfile.glob(BASE_DIR + '/midi/mel_2bar*.mid'))]
#@title Option 2: Upload your own MIDI files to use for interpolation endpoints instead of those provided.
input_mel_midi_data = files.upload().values() or input_mel_midi_data
#@title Extract melodies from MIDI files. This will extract all unique 2-bar melodies using a sliding window with a stride of 1 bar.
mel_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_mel_midi_data]
extracted_mels = []
for ns in mel_input_seqs:
extracted_mels.extend(
mel_2bar_config.data_converter.from_tensors(
mel_2bar_config.data_converter.to_tensors(ns)[1]))
for i, ns in enumerate(extracted_mels):
print("Melody", i)
play(ns)
#@title Interpolate between 2 melodies, selected from those in the previous cell.
start_melody = 0 #@param {type:"integer"}
end_melody = 1 #@param {type:"integer"}
start_mel = extracted_mels[start_melody]
end_mel = extracted_mels[end_melody]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
num_steps = 13 #@param {type:"integer"}
mel_2bar_interp = interpolate(mel_2bar, start_mel, end_mel, num_steps=num_steps, temperature=temperature)
#@title Optionally download interpolation MIDI file.
download(mel_2bar_interp, 'mel_2bar_interp.mid')
Explanation: Generate Interpolations
End of explanation
#@title Load the pre-trained models.
mel_16bar_models = {}
hierdec_mel_16bar_config = configs.CONFIG_MAP['hierdec-mel_16bar']
mel_16bar_models['hierdec_mel_16bar'] = TrainedModel(hierdec_mel_16bar_config, batch_size=4, checkpoint_dir_or_path=BASE_DIR + '/checkpoints/mel_16bar_hierdec.ckpt')
flat_mel_16bar_config = configs.CONFIG_MAP['flat-mel_16bar']
mel_16bar_models['baseline_flat_mel_16bar'] = TrainedModel(flat_mel_16bar_config, batch_size=4, checkpoint_dir_or_path=BASE_DIR + '/checkpoints/mel_16bar_flat.ckpt')
Explanation: 16-bar Melody Models
The pre-trained hierarchical model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 16-step 2-layer LSTM "conductor" decoder with 1024 nodes in each layer, a 2-layer LSTM core decoder with 1024 nodes in each layer, and a Z with 512 dimensions. It was given 256 free bits, and had a fixed beta value of 0.2. After 25k steps, the final accuracy is 0.90 and KL divergence is 277 bits.
End of explanation
#@title Generate 4 samples from the selected model prior.
mel_sample_model = "hierdec_mel_16bar" #@param ["hierdec_mel_16bar", "baseline_flat_mel_16bar"]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
mel_16_samples = mel_16bar_models[mel_sample_model].sample(n=4, length=256, temperature=temperature)
for ns in mel_16_samples:
play(ns)
#@title Optionally download MIDI samples.
for i, ns in enumerate(mel_16_samples):
download(ns, '%s_sample_%d.mid' % (mel_sample_model, i))
Explanation: Generate Samples
End of explanation
#@title Option 1: Use example MIDI files for interpolation endpoints.
input_mel_16_midi_data = [
tf.io.gfile.GFile(fn, 'rb').read()
for fn in sorted(tf.io.gfile.glob(BASE_DIR + '/midi/mel_16bar*.mid'))]
#@title Option 2: upload your own MIDI files to use for interpolation endpoints instead of those provided.
input_mel_16_midi_data = files.upload().values() or input_mel_16_midi_data
#@title Extract melodies from MIDI files. This will extract all unique 16-bar melodies using a sliding window with a stride of 1 bar.
mel_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_mel_16_midi_data]
extracted_16_mels = []
for ns in mel_input_seqs:
extracted_16_mels.extend(
hierdec_mel_16bar_config.data_converter.from_tensors(
hierdec_mel_16bar_config.data_converter.to_tensors(ns)[1]))
for i, ns in enumerate(extracted_16_mels):
print("Melody", i)
play(ns)
#@title Compute the reconstructions and mean of the two melodies, selected from the previous cell.
mel_interp_model = "hierdec_mel_16bar" #@param ["hierdec_mel_16bar", "baseline_flat_mel_16bar"]
start_melody = 0 #@param {type:"integer"}
end_melody = 1 #@param {type:"integer"}
start_mel = extracted_16_mels[start_melody]
end_mel = extracted_16_mels[end_melody]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
mel_16bar_mean = interpolate(mel_16bar_models[mel_interp_model], start_mel, end_mel, num_steps=3, max_length=256, individual_duration=32, temperature=temperature)
#@title Optionally download mean MIDI file.
download(mel_16bar_mean, '%s_mean.mid' % mel_interp_model)
Explanation: Generate Means
End of explanation
#@title Load the pre-trained models.
trio_models = {}
hierdec_trio_16bar_config = configs.CONFIG_MAP['hierdec-trio_16bar']
trio_models['hierdec_trio_16bar'] = TrainedModel(hierdec_trio_16bar_config, batch_size=4, checkpoint_dir_or_path=BASE_DIR + '/checkpoints/trio_16bar_hierdec.ckpt')
flat_trio_16bar_config = configs.CONFIG_MAP['flat-trio_16bar']
trio_models['baseline_flat_trio_16bar'] = TrainedModel(flat_trio_16bar_config, batch_size=4, checkpoint_dir_or_path=BASE_DIR + '/checkpoints/trio_16bar_flat.ckpt')
Explanation: 16-bar "Trio" Models (lead, bass, drums)
We present two pre-trained models for 16-bar trios: a hierarchical model and a flat (baseline) model.
The pre-trained hierarchical model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 16-step 2-layer LSTM "conductor" decoder with 1024 nodes in each layer, 3 (lead, bass, drums) 2-layer LSTM core decoders with 1024 nodes in each layer, and a Z with 512 dimensions. It was given 1024 free bits, and had a fixed beta value of 0.1. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 50k steps, the final accuracy is 0.82 for lead, 0.87 for bass, and 0.90 for drums, and the KL divergence is 1027 bits.
The pre-trained flat model consists of a 2-layer stacked bidirectional LSTM encoder with 2048 nodes in each direction for each layer, a 3-layer LSTM decoder with 2048 nodes in each layer, and a Z with 512 dimensions. It was given 1024 free bits, and had a fixed beta value of 0.1. It was trained with scheduled sampling with an inverse sigmoid schedule and a rate of 1000. After 50k steps, the final accuracy is 0.67 for lead, 0.66 for bass, and 0.79 for drums, and the KL divergence is 1016 bits.
End of explanation
#@title Generate 4 samples from the selected model prior.
trio_sample_model = "hierdec_trio_16bar" #@param ["hierdec_trio_16bar", "baseline_flat_trio_16bar"]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
trio_16_samples = trio_models[trio_sample_model].sample(n=4, length=256, temperature=temperature)
for ns in trio_16_samples:
play(ns)
#@title Optionally download MIDI samples.
for i, ns in enumerate(trio_16_samples):
download(ns, '%s_sample_%d.mid' % (trio_sample_model, i))
Explanation: Generate Samples
End of explanation
#@title Option 1: Use example MIDI files for interpolation endpoints.
input_trio_midi_data = [
tf.io.gfile.GFile(fn, 'rb').read()
for fn in sorted(tf.io.gfile.glob(BASE_DIR + '/midi/trio_16bar*.mid'))]
#@title Option 2: Upload your own MIDI files to use for interpolation endpoints instead of those provided.
input_trio_midi_data = files.upload().values() or input_trio_midi_data
#@title Extract trios from MIDI files. This will extract all unique 16-bar trios using a sliding window with a stride of 1 bar.
trio_input_seqs = [mm.midi_to_sequence_proto(m) for m in input_trio_midi_data]
extracted_trios = []
for ns in trio_input_seqs:
extracted_trios.extend(
hierdec_trio_16bar_config.data_converter.from_tensors(
hierdec_trio_16bar_config.data_converter.to_tensors(ns)[1]))
for i, ns in enumerate(extracted_trios):
print("Trio", i)
play(ns)
#@title Compute the reconstructions and mean of the two trios, selected from the previous cell.
trio_interp_model = "hierdec_trio_16bar" #@param ["hierdec_trio_16bar", "baseline_flat_trio_16bar"]
start_trio = 0 #@param {type:"integer"}
end_trio = 1 #@param {type:"integer"}
start_trio = extracted_trios[start_trio]
end_trio = extracted_trios[end_trio]
temperature = 0.5 #@param {type:"slider", min:0.1, max:1.5, step:0.1}
trio_16bar_mean = interpolate(trio_models[trio_interp_model], start_trio, end_trio, num_steps=3, max_length=256, individual_duration=32, temperature=temperature)
#@title Optionally download mean MIDI file.
download(trio_16bar_mean, '%s_mean.mid' % trio_interp_model)
Explanation: Generate Means
End of explanation |
13,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to NumPy
Matrix operations
One of the main advantages of the ndarray structure is its ability to perform array (matrix) processing.
So, to multiply every element of an array by a scalar it is enough to write a * 5, for
example. To perform any logical or arithmetic operation between arrays, simply write a <oper> b
Step1: Multiplying an array by a scalar
Step2: Adding arrays
Step3: Transpose of a matrix
Step4: Matrix multiplication
Step5: Linspace and Arange
The numpy functions linspace and arange have the same goal
Step6: In arange, on the other hand, you define the half-open interval [start,end) and the step taken between one element and the next.
So, to generate
a numpy.array between 0 and 1 with 10 elements, we have to compute the step (0.1) and pass that step as a parameter.
Step7: Confirm that the main difference between the two, which can be seen in the examples above, is that
in linspace the upper limit of the range is inclusive (closed interval),
while in arange it is not (half-open interval).
The indices and meshgrid functions
The indices and meshgrid functions are extremely useful for generating synthetic images, and learning them also helps you
understand the advantages of array-oriented programming, avoiding the sequential sweep over the image that is so common in C programming.
The indices operator in small numerical examples
The indices function receives as a parameter a tuple with the dimensions (H,W) of the matrices to be created. In the example below we are
generating matrices with 5 rows and 10 columns. The function returns a tuple of two matrices, which can be obtained by assignment
as in the example below, where we create the matrices r and c, both of shape (5,10), that is, 5 rows and 10 columns
Step8: Note that the matrix r is a matrix where each element is its row coordinate, and the matrix c is a matrix where each element is
its column coordinate. Therefore, any matrix operation done with r and c is really processing the
coordinates of the matrix, which makes it possible to generate many synthetic images from a function of their coordinates.
Since NumPy processes the matrices directly, without the need for an explicit for loop, the program notation stays very simple
and so does the efficiency. The only drawback is the memory needed to build the index matrices r and c. We will
see later that this can be minimized.
For example, take the function that is the sum of its coordinates, $f(r,c) = r + c$
Step9: Or the function of the difference between the row and column coordinates, $f(r,c) = r - c$
Step10: Or the function $f(r,c) = (r + c) \% 2$, where % is the modulo operator. This function returns 1 if the sum of the coordinates is odd and 0 otherwise.
It is an image in the style of a chessboard of 0 and 1 values
Step11: Or the function of a straight line, $f(r,c) = (r = \frac{1}{2}c)$
Step12: Or the parabolic function given by the sum of the squares of the coordinates, $f(r,c) = r^2 + c^2$
Step13: Or the function of the circle of radius 4 centered at (0,0), $f(r,c) = (r^2 + c^2 < 4^2)$
Step14: Clip
The clip function replaces the values of an array that are below a minimum threshold, or above a maximum threshold,
with those minimum and maximum thresholds, respectively. This function is especially useful in image processing to keep
indices from going beyond the image boundaries.
Examples
Step15: Floating-point example
Note that if the clip parameters are floating point, the result will also be floating point
Step16: Formatting arrays for printing
Printing floating-point arrays
When printing arrays with floating-point values, NumPy in general prints the array with many decimal
places and in scientific notation, which makes it hard to read.
Step17: It is possible to reduce the number of decimal places and suppress the exponential notation using
numpy's set_printoptions function
Step18: Printing binary arrays
Boolean arrays are printed with the words True and False, as in the following example
Step19: To make these arrays easier to visualize, the values can be converted to integers using
o método astype(int) | Python Code:
a = np.arange(20).reshape(5,4)
b = 2 * np.ones((5,4))
c = np.arange(12,0,-1).reshape(4,3)
print('a=\n', a )
print('b=\n', b )
print('c=\n', c )
Explanation: Introduction to NumPy
Matrix operations
One of the main advantages of the ndarray structure is its ability to perform array (matrix) processing.
So, to multiply every element of an array by a scalar it is enough to write a * 5, for
example. To perform any logical or arithmetic operation between arrays, simply write a <oper> b:
End of explanation
b5 = 5 * b
print('b5=\n', b5 )
Explanation: Multiplying an array by a scalar: b x 5
End of explanation
amb = a + b
print('amb=\n', amb )
Explanation: Adding arrays: a + b
End of explanation
at = a.T
print('a.shape=',a.shape )
print('a.T.shape=',a.T.shape )
print('a=\n', a )
print('at=\n', at )
Explanation: Transpose of a matrix: a.T
The transpose of a matrix swaps the coordinate axes. The element that
was at position (r,c) will now be at position (c,r). The shape
of the resulting matrix therefore has its values swapped. The transpose
operation is performed through a shallow copy (a view), so it is a very
efficient operation and should be used whenever possible.
See the following example:
End of explanation
ac = a.dot(c)
print('a.shape:',a.shape )
print('c.shape:',c.shape )
print('a=\n',a )
print('c=\n',c )
print('ac=\n', ac )
print('ac.shape:',ac.shape )
Explanation: Matrix multiplication: a x c
Matrix multiplication is performed with the dot operator.
For the multiplication to be possible, the number of
columns of the first ndarray must be equal to the number of rows of the
second. The shape of the result will be the number of rows of the
first ndarray by the number of columns of the second ndarray. Check it:
End of explanation
# generates a numpy.array of 10 elements, linearly spaced between 0 and 1
print(np.linspace(0, 1.0, num=10).round(2) )
Explanation: Linspace and Arange
NumPy's linspace and arange functions have the same goal: to generate numpy.arrays that are linearly
spaced over an interval given as a parameter.
The main difference between these functions is how the division of the specified interval is performed.
In linspace this division is defined by the closed interval [start,end], that is, it contains both the
start and the end, and by the number of
elements the final numpy.array will have. The step is therefore computed as (end - start)/(n - 1).
Thus, if we want to generate a numpy.array between 0 and 1 with 10 elements, we use linspace as follows
End of explanation
# generates a numpy.array linearly spaced between 0 and 1 with step 0.1
print(np.arange(0, 1.0, 0.1) )
Explanation: In the arange function, on the other hand, one defines the half-open interval [start,end) and the step taken from one element to the next.
Thus, to generate
a numpy.array between 0 and 1 with 10 elements, we have to compute the step (0.1) and pass that step as a parameter.
End of explanation
r,c = np.indices( (5, 10) )
print('r=\n', r )
print('c=\n', c )
Explanation: Confirm that the main difference between the two, which can be verified in the examples above, is that
in linspace the upper limit of the range is inclusive (closed interval),
while in arange it is not (half-open interval).
The indices and meshgrid functions
The indices and meshgrid functions are extremely useful for generating synthetic images, and learning them also helps to
understand the advantages of array-oriented programming, avoiding the sequential scan of the image that is so common when programming in the C language.
The indices operator in small numerical examples
The indices function receives as a parameter a tuple with the dimensions (H,W) of the matrices to be created. In the following example, we are
generating matrices with 5 rows and 10 columns. This function returns a tuple of two matrices, which can be obtained by assignment
as in the following example, where we create the matrices r and c, both of shape (5,10), that is, 5 rows and 10 columns:
End of explanation
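As a side illustration (this block is an addition, not part of the original course notebook), meshgrid, which the text above mentions but does not demonstrate, can build the same coordinate matrices when called with indexing='ij':
rr, cc = np.meshgrid(np.arange(5), np.arange(10), indexing='ij')  # illustrative sketch
print('rr equals r:', np.array_equal(rr, r))   # same row-coordinate matrix as np.indices((5,10))
print('cc equals c:', np.array_equal(cc, c))   # same column-coordinate matrix as np.indices((5,10))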
f = r + c
print('f=\n', f )
Explanation: Note that r is a matrix in which each element is its own row coordinate, and c is a matrix in which each element is
its own column coordinate. In this way, any matrix operation performed with r and c is really processing the
coordinates of the matrix. Thus, it is possible to generate several synthetic images from a function of their coordinates.
Since NumPy processes the matrices directly, without the need for an explicit for loop, the program notation becomes very simple,
and so does its efficiency. The only drawback is the memory used to compute the index matrices r and c. We will
see later on that this can be minimized.
For example, consider the function given by the sum of the coordinates $f(r,c) = r + c$:
End of explanation
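A short aside on the memory issue mentioned above (an illustrative sketch added here, not necessarily the technique the course adopts later): open grids avoid materializing the full index matrices and rely on broadcasting instead.
ro, co = np.ogrid[0:5, 0:10]        # ro has shape (5,1), co has shape (1,10)
print(ro.shape, co.shape)
print('f=\n', ro + co)              # broadcasting reproduces the full r + c result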
f = r - c
print('f=\n', f )
Explanation: Or the function given by the difference between the row and column coordinates $f(r,c) = r - c$:
End of explanation
f = (r + c) % 2
print('f=\n', f )
Explanation: Or the function $f(r,c) = (r + c) \% 2$, where % is the modulo operator. This function returns 1 when the sum of the coordinates is odd and 0 otherwise.
It is an image in the style of a checkerboard with values 0 and 1:
End of explanation
f = (r == c//2)
print('f=\n', f )
Explanation: Or the function of a straight line $f(r,c) = (r = \frac{1}{2}c)$:
End of explanation
f = r**2 + c**2
print('f=\n', f )
Explanation: Or the parabolic function given by the sum of the squares of the coordinates $f(r,c) = r^2 + c^2$:
End of explanation
f = ((r**2 + c**2) < 4**2)
print('f=\n', f * 1 )
a = np.array([[0,1],[2,3]])
print('a = \n', a )
print()
print('np.resize(a,(1,7)) = \n', np.resize(a,(1,7)) )
print()
print('np.resize(a,(2,5)) = \n', np.resize(a,(2,5)) )
Explanation: Or the function of a circle of radius 4 centered at (0,0), $f(r,c) = (r^2 + c^2 < 4^2)$:
End of explanation
a = np.array([11,1,2,3,4,5,12,-3,-4,7,4])
print('a = ',a )
print('np.clip(a,0,10) = ', np.clip(a,0,10) )
Explanation: Clip
The clip function replaces the values of an array that are below a minimum threshold or above a maximum threshold
with those minimum and maximum thresholds, respectively. This function is especially useful in image processing to prevent
indices from going beyond the image boundaries.
Examples
End of explanation
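To illustrate the image-processing use mentioned above, the sketch below (added for illustration; the sizes are made up) clamps candidate row indices so they stay inside a 5x10 image.
H, W = 5, 10                                   # hypothetical image dimensions
rows = np.clip(np.arange(-2, 8), 0, H - 1)     # indices forced into the valid range [0, H-1]
print('rows =', rows)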
a = np.arange(10).astype(int)  # use the built-in int: np.int was removed in recent NumPy versions
print('a=',a )
print('np.clip(a,2.5,7.5)=',np.clip(a,2.5,7.5) )
Explanation: Floating-point example
Note that if the clip parameters are floating point, the result will also be floating point:
End of explanation
A = np.exp(np.linspace(0.1,10,32)).reshape(4,8)/3000.
print('A: \n', A )
Explanation: Formatting arrays for printing
Printing floating-point arrays
When printing arrays with floating-point values, NumPy generally prints the array with many decimal
places and in scientific notation, which makes visualization difficult.
End of explanation
np.set_printoptions(suppress=True, precision=3)
print('A: \n', A )
Explanation: It is possible to reduce the number of decimal places and suppress the exponential notation by using
NumPy's set_printoptions function:
End of explanation
A = np.random.rand(5,10) > 0.5
print('A = \n', A )
Explanation: Printing binary arrays
Boolean arrays are printed with the words True and False, as in the following example:
End of explanation
print ('A = \n', A.astype(int))
Explanation: To make these arrays easier to visualize, the values can be converted to integers using
the astype(int) method:
End of explanation |
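As a small added note, multiplying the boolean array by 1, as was done earlier for the circle example (f * 1), produces the same integer result:
print('A*1 equals A.astype(int):', np.array_equal(A * 1, A.astype(int)))  # illustrative check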
13,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-2', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: INPE
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:06
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
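As a purely hypothetical illustration (the name and e-mail below are placeholders, not real document authors), a filled-in call would look like the commented line:
# DOC.set_author("Jane Doe", "jane.doe@example.org")  # hypothetical example only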
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
13,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling HIV infection
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: During the initial phase of HIV infection, the concentration of the virus in the bloodstream typically increases quickly and then decreases.
The most obvious explanation for the decline is an immune response that destroys the virus or controls its replication.
However, at least in some patients, the decline occurs even without any detectable immune response.
In 1996 Andrew Phillips proposed another explanation for the decline ("Reduction of HIV Concentration During Acute Infection
Step2: The behavior of the system is controlled by 9 parameters.
That might seem like a lot, but they are not entirely free parameters; their values are constrained by measurements and background knowledge (although some are more constrained than others).
Here are the values from Table 1.
Note
Step3: Here's a System object with the initial conditions and the duration of the simulation (120 days).
Normally we would store the parameters in the System object, but the code will be less cluttered if we leave them as global variables.
Step4: Exercise
Step5: Test your slope function with the initial conditions.
The results should be approximately
-2.16e-08, 2.16e-09, 1.944e-08, -8e-07
Step6: Exercise
Step7: The next few cells plot the results on the same scale as the figures in the paper.
Exercise
# install Pint if necessary
try:
    import pint
except ImportError:
    !pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
    from urllib.request import urlretrieve
    url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
    local, _ = urlretrieve(url+filename, filename)
    print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Modeling HIV infection
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
init = State(R=200, L=0, E=0, V=4e-7)
Explanation: During the initial phase of HIV infection, the concentration of the virus in the bloodstream typically increases quickly and then decreases.
The most obvious explanation for the decline is an immune response that destroys the virus or controls its replication.
However, at least in some patients, the decline occurs even without any detectable immune response.
In 1996 Andrew Phillips proposed another explanation for the decline ("Reduction of HIV Concentration During Acute Infection: Independence from a Specific Immune Response", available from https://people.math.gatech.edu/~weiss/uploads/5/8/6/1/58618765/phillips1996.pdf).
Phillips presents a system of differential equations that models the concentrations of the HIV virus and the CD4 cells it infects.
The model does not include an immune response; nevertheless, it demonstrates behavior that is qualitatively similar to what is seen in patients during the first few weeks after infection.
His conclusion is that the observed decline in the concentration of HIV might not be caused by an immune response; it could be due to the dynamic interaction between HIV and the cells it infects.
In this notebook, we'll implement Phillips's model and consider whether it does the work it is meant to do.
The Model
The model has four state variables, R, L, E, and V. Read the paper to understand what they represent.
Here are the initial conditions we can glean from the paper.
End of explanation
gamma = 1.36
mu = 1.36e-3
tau = 0.2
beta = 0.00027
p = 0.1
alpha = 3.6e-2
sigma = 2
delta = 0.33
pi = 100
Explanation: The behavior of the system is controlled by 9 parameters.
That might seem like a lot, but they are not entirely free parameters; their values are constrained by measurements and background knowledge (although some are more constrained than others).
Here are the values from Table 1.
Note: the parameter $\rho$ (the Greek letter "rho") in the table appears as $p$ in the equations. Since it represents a proportion, we'll use $p$.
End of explanation
system = System(init=init,
t_end=120,
num=481)
Explanation: Here's a System object with the initial conditions and the duration of the simulation (120 days).
Normally we would store the parameters in the System object, but the code will be less cluttered if we leave them as global variables.
End of explanation
# Solution
def slope_func(t, state, system):
    R, L, E, V = state
    infections = beta * R * V
    conversions = alpha * L
    dRdt = gamma * tau - mu * R - infections
    dLdt = p * infections - mu * L - conversions
    dEdt = (1-p) * infections + conversions - delta * E
    dVdt = pi * E - sigma * V
    return dRdt, dLdt, dEdt, dVdt
Explanation: Exercise: Use the equations in the paper to write a slope function that takes a State object with the current values of R, L, E, and V, and returns their derivatives in the corresponding order.
End of explanation
# Solution
slope_func(0, init, system)
Explanation: Test your slope function with the initial conditions.
The results should be approximately
-2.16e-08, 2.16e-09, 1.944e-08, -8e-07
End of explanation
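A quick numerical check (a sketch, using the approximate values quoted above) avoids comparing the numbers by eye.
import numpy as np
expected = (-2.16e-08, 2.16e-09, 1.944e-08, -8e-07)
print(np.allclose(slope_func(0, init, system), expected, rtol=1e-2))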
# Solution
results, details = run_solve_ivp(system, slope_func)
details.message
# Solution
results.head()
Explanation: Exercise: Now use run_solve_ivp to simulate the system of equations.
End of explanation
results.V.plot(label='V')
decorate(xlabel='Time (days)',
ylabel='Free virions V',
yscale='log',
ylim=[0.1, 1e4])
results.R.plot(label='R', color='C1')
decorate(xlabel='Time (days)',
ylabel='Number of cells',
)
results.L.plot(color='C2', label='L')
results.E.plot(color='C4', label='E')
decorate(xlabel='Time (days)',
ylabel='Number of cells',
yscale='log',
ylim=[0.1, 100])
Explanation: The next few cells plot the results on the same scale as the figures in the paper.
Exercise: Compare your results to the results in the paper.
Are they consistent?
End of explanation |
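For a more quantitative comparison with the paper, one simple option (a sketch based on the results computed above) is to report the timing and height of the viral peak.
peak_day = results.V.idxmax()    # day on which free virions peak
peak_value = results.V.max()     # peak concentration of free virions
print(peak_day, peak_value)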
13,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview Scientific Packages for Python
A package can be seen as a container of variables and functions provided by others to help us accomplish our tasks.
Import Packages into our environment
To use a package, the first step is to import them into our environment. There are several ways of doing so in Python.
Step1: To shorten the namespace prefix, we can specify another name to refer to the original package
Step2: Import some variables or functions into the environment
Then we can use those variables or functions in the environment directly
Step3: Import all the variables and functions into the environment at once | Python Code:
import math
math.factorial(5) # functions in math package
math.e # variables in math package
Explanation: Overview Scientific Packages for Python
A package can be seen as a container of variables and functions provided by others to help us accomplish our tasks.
Import Packages into our environment
To use a package, the first step is to import them into our environment. There are several ways of doing so in Python.
End of explanation
import math as m
m.factorial(5)
m.e
Explanation: To shorten the namespace prefix, we can specify another name to refer to the original package
End of explanation
from math import factorial, e
factorial(5)
e
Explanation: Import some variables or functions into the environment
Then we can use those variables or functions in the environment directly
End of explanation
from math import *
factorial(10)
e
Explanation: Import all the variables and functions into the environment at once
End of explanation |
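One caveat: a wildcard import copies every public name from the module into the current namespace, so it can silently shadow variables you already defined. A small illustration (the variable name is arbitrary):
e = 2.9             # our own variable named e
from math import *  # rebinds e to math's constant
print(e)            # 2.718281828459045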
13,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 4
Regression
Allen Downey
MIT License
Step1: Simple regression
An important thing to remember about regression is that it is not symmetric; that is, the regression of A onto B is not the same as the regression of B onto A.
To demonstrate, I'll load data from the BRFSS.
Step2: A few people report many vegetable servings per day. To simplify the visualization, I'm going to replace values greater than 8 with 8.
Step3: We can use SciPy to compute servings of vegetables as a function of income class.
Step4: Increasing income class by 1 is associated with an increase of 0.07 vegetables per day.
So if we hypothesize that people with higher incomes eat more vegetables, this result would not get us too excited.
We can see what the regression looks like by plotting the line of best fit on top of the scatter plot.
Step5: Now let's do it the other way around, regressing income as a function of vegetable servings.
Step6: Again, we can plot the line of best fit on top of the scatter plot.
Step7: The slope looks more impressive now. Each additional serving corresponds to 0.24 income codes, and each income code is several thousand dollars. So a result that seemed unimpressive in one direction seems more intriguing in the other direction.
But the primary point here is that regression is not symmetric. To see it more clearly, I'll plot both regression lines on top of the scatter plot.
The green line is income as a function of vegetables; the orange line is vegetables as a function of income.
Step8: And here's the same thing the other way around.
Step9: StatsModels
So far we have used scipy.linregress to run simple regression. Sadly, that function doesn't do multiple regression, so we have to switch to a new library, StatsModels.
Here's the same example from the previous section, using StatsModels.
Step10: The result is an OLS object, which we have to fit
Step11: results contains a lot of information about the regression, which we can view using summary.
Step12: One of the parts we're interested in is params, which is a Pandas Series containing the estimated parameters.
Step13: And rsquared contains the coefficient of determination, $R^2$, which is pretty small in this case.
Step14: We can confirm that $R^2 = \rho^2$
Step15: Exercise
Step16: Multiple regression
For experiments with multiple regression, let's load the GSS data again.
Step17: Let's explore the relationship between income and education, starting with simple regression
Step18: It looks like people with more education have higher incomes, about $3586 per additional year of education.
Now that we are using StatsModels, it is easy to add explanatory variables. For example, we can add age to the model like this.
Step19: It looks like the effect of age is small, and adding it to the model has only a small effect on the estimated parameter for education.
But it's possible we are getting fooled by a nonlinear relationship. To see what the age effect looks like, I'll group by age and plot the mean income in each age group.
Step20: Yeah, that looks like a nonlinear effect.
We can model it by adding a quadratic term to the model.
Step21: Now the coefficient associated with age is substantially larger. And the coefficient of the quadratic term is negative, which is consistent with the observation that the relationship has downward curvature.
Exercise
Step22: Exercise
Step23: Making predictions
The parameters of a non-linear model can be hard to interpret, but maybe we don't have to. Sometimes it is easier to judge a model by its predictions rather than its parameters.
The results object provides a predict method that takes a DataFrame and uses the model to generate a prediction for each row. Here's how we can create the DataFrame
Step24: age contains equally-spaced points from 18 to 85, and age2 contains those values squared.
Now we can set educ to 12 years of education and generate predictions
Step25: This plot shows the structure of the model, which is a parabola. We also plot the data as an average in each age group.
Exercise
Step26: Adding categorical variables
In a formula string, we can use C() to indicate that a variable should be treated as categorical. For example, the following model contains sex as a categorical variable.
Step27: The estimated parameter indicates that sex=2, which indicates women, is associated with about \$4150 lower income, after controlling for age and education.
Exercise
Step28: Exercise
Step29: Logistic regression
Let's use logistic regression to see what factors are associated with support for gun control. The variable we'll use is gunlaw, which represents the response to this question
Step30: 1 means yes, 2 means no, 0 means the question wasn't asked; 8 and 9 mean the respondent doesn't know or refused to answer.
First I'll replace 0, 8, and 9 with NaN
Step31: In order to put gunlaw on the left side of a regression, we have to recode it so 0 means no and 1 means yes.
Step32: Here's what it looks like after recoding.
Step33: Now we can run a logistic regression model
Step34: Here are the results.
Step35: Here are the parameters. The coefficient of sex=2 is positive, which indicates that women are more likely to support gun control, at least for this question.
Step36: The other parameters are not easy to interpret, but again we can use the regression results to generate predictions, which makes it possible to visualize the model.
I'll make a DataFrame with a range of ages and a fixed level of education, and generate predictions for men and women.
Step37: Over the range of ages, women are more likely to support gun control than men, by about 15 percentage points.
Exercise
Step38: Exercise | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='white')
from utils import decorate
from thinkstats2 import Pmf, Cdf
import thinkstats2
import thinkplot
Explanation: Homework 4
Regression
Allen Downey
MIT License
End of explanation
%time brfss = pd.read_hdf('brfss.hdf5', 'brfss')
brfss.head()
Explanation: Simple regression
An important thing to remember about regression is that it is not symmetric; that is, the regression of A onto B is not the same as the regression of B onto A.
To demonstrate, I'll load data from the BRFSS.
End of explanation
rows = brfss['_VEGESU1'] > 8
brfss.loc[rows, '_VEGESU1'] = 8
Explanation: A few people report many vegetable servings per day. To simplify the visualization, I'm going to replace values greater than 8 with 8.
End of explanation
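An equivalent one-liner (a sketch using pandas' clip on the same column) would be:
brfss['_VEGESU1'] = brfss['_VEGESU1'].clip(upper=8)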
from scipy.stats import linregress
subset = brfss.dropna(subset=['INCOME2', '_VEGESU1'])
xs = subset['INCOME2']
ys = subset['_VEGESU1']
res = linregress(xs, ys)
res
Explanation: We can use SciPy to compute servings of vegetables as a function of income class.
End of explanation
x_jitter = xs + np.random.normal(0, 0.15, len(xs))
plt.plot(x_jitter, ys, 'o', markersize=1, alpha=0.02)
plt.xlabel('Income code')
plt.ylabel('Vegetable servings per day')
fx1 = np.array([xs.min(), xs.max()])
fy1 = res.intercept + res.slope * fx1
plt.plot(fx1, fy1, '-', color='C1');
Explanation: Increasing income class by 1 is associated with an increase of 0.07 vegetables per day.
So if we hypothesize that people with higher incomes eat more vegetables, this result would not get us too excited.
We can see what the regression looks like by plotting the line of best fit on top of the scatter plot.
End of explanation
xs = subset['_VEGESU1']
ys = subset['INCOME2']
res = linregress(xs, ys)
res
Explanation: Now let's do it the other way around, regressing income as a function of vegetable servings.
End of explanation
y_jitter = ys + np.random.normal(0, 0.3, len(xs))
plt.plot(xs, y_jitter, 'o', markersize=1, alpha=0.02)
plt.ylabel('Income code')
plt.xlabel('Vegetable servings per day')
fx2 = np.array([xs.min(), xs.max()])
fy2 = res.intercept + res.slope * fx2
plt.plot(fx2, fy2, '-', color='C2');
Explanation: Again, we can plot the line of best fit on top of the scatter plot.
End of explanation
y_jitter = ys + np.random.normal(0, 0.3, len(xs))
plt.plot(xs, y_jitter, 'o', markersize=1, alpha=0.02)
plt.ylabel('Income code')
plt.xlabel('Vegetable servings per day')
fx2 = np.array([xs.min(), xs.max()])
fy2 = res.intercept + res.slope * fx2
plt.plot(fx2, fy2, '-', color='C2')
plt.plot(fy1, fx1, '-', color='C1');
Explanation: The slope looks more impressive now. Each additional serving corresponds to 0.24 income codes, and each income code is several thousand dollars. So a result that seemed unimpressive in one direction seems more intriguing in the other direction.
But the primary point here is that regression is not symmetric. To see it more clearly, I'll plot both regression lines on top of the scatter plot.
The green line is income as a function of vegetables; the orange line is vegetables as a function of income.
End of explanation
xs = subset['INCOME2']
ys = subset['_VEGESU1']
res = linregress(xs, ys)
res
x_jitter = xs + np.random.normal(0, 0.15, len(xs))
plt.plot(x_jitter, ys, 'o', markersize=1, alpha=0.02)
plt.xlabel('Income code')
plt.ylabel('Vegetable servings per day')
fx1 = np.array([xs.min(), xs.max()])
fy1 = res.intercept + res.slope * fx1
plt.plot(fx1, fy1, '-', color='C1')
plt.plot(fy2, fx1, '-', color='C2');
Explanation: And here's the same thing the other way around.
End of explanation
import statsmodels.formula.api as smf
model = smf.ols('INCOME2 ~ _VEGESU1', data=brfss)
model
Explanation: StatsModels
So far we have used scipy.linregress to run simple regression. Sadly, that function doesn't do multiple regression, so we have to switch to a new library, StatsModels.
Here's the same example from the previous section, using StatsModels.
End of explanation
results = model.fit()
results
Explanation: The result is an OLS object, which we have to fit:
End of explanation
results.summary()
Explanation: results contains a lot of information about the regression, which we can view using summary.
End of explanation
results.params
Explanation: One of the parts we're interested in is params, which is a Pandas Series containing the estimated parameters.
End of explanation
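Because params is indexed by term name, individual coefficients can be pulled out directly; for example (term names follow the formula used above):
print(results.params['Intercept'])
print(results.params['_VEGESU1'])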
results.rsquared
Explanation: And rsquared contains the coefficient of determination, $R^2$, which is pretty small in this case.
End of explanation
np.sqrt(results.rsquared)
columns = ['INCOME2', '_VEGESU1']
brfss[columns].corr()
Explanation: We can confirm that $R^2 = \rho^2$:
End of explanation
# Solution goes here
# Solution goes here
Explanation: Exercise: Run this regression in the other direction and confirm that you get the same estimated slope we got from linregress. Also confirm that $R^2$ is the same in either direction (which we know because correlation is the same in either direction).
End of explanation
%time gss = pd.read_hdf('gss.hdf5', 'gss')
gss.shape
gss.head()
gss.describe()
Explanation: Multiple regression
For experiments with multiple regression, let's load the GSS data again.
End of explanation
model = smf.ols('realinc ~ educ', data=gss)
model
results = model.fit()
results.params
Explanation: Let's explore the relationship between income and education, starting with simple regression:
End of explanation
model = smf.ols('realinc ~ educ + age', data=gss)
results = model.fit()
results.params
Explanation: It looks like people with more education have higher incomes, about $3586 per additional year of education.
Now that we are using StatsModels, it is easy to add explanatory variables. For example, we can add age to the model like this.
End of explanation
grouped = gss.groupby('age')
grouped
mean_income_by_age = grouped['realinc'].mean()
plt.plot(mean_income_by_age, 'o', alpha=0.5)
plt.xlabel('Age (years)')
plt.ylabel('Income (1986 $)');
Explanation: It looks like the effect of age is small, and adding it to the model has only a small effect on the estimated parameter for education.
But it's possible we are getting fooled by a nonlinear relationship. To see what the age effect looks like, I'll group by age and plot the mean income in each age group.
End of explanation
gss['age2'] = gss['age']**2
model = smf.ols('realinc ~ educ + age + age2', data=gss)
results = model.fit()
results.summary()
Explanation: Yeah, that looks like a nonlinear effect.
We can model it by adding a quadratic term to the model.
End of explanation
# Solution goes here
Explanation: Now the coefficient associated with age is substantially larger. And the coefficient of the quadratic term is negative, which is consistent with the observation that the relationship has downward curvature.
Exercise: To see what the relationship between income and education looks like, group the dataset by educ and plot mean income at each education level.
End of explanation
gss['educ2'] = gss['educ']**2
model = smf.ols('realinc ~ educ + educ2 + age + age2', data=gss)
results = model.fit()
results.summary()
Explanation: Exercise: Maybe the relationship with education is nonlinear, too. Add a quadratic term for educ to the model and summarize the results.
End of explanation
df = pd.DataFrame()
df['age'] = np.linspace(18, 85)
df['age2'] = df['age']**2
Explanation: Making predictions
The parameters of a non-linear model can be hard to interpret, but maybe we don't have to. Sometimes it is easier to judge a model by its predictions rather than its parameters.
The results object provides a predict method that takes a DataFrame and uses the model to generate a prediction for each row. Here's how we can create the DataFrame:
End of explanation
plt.plot(mean_income_by_age, 'o', alpha=0.5)
df['educ'] = 12
df['educ2'] = df['educ']**2
pred12 = results.predict(df)
plt.plot(df['age'], pred12, label='High school')
plt.xlabel('Age (years)')
plt.ylabel('Income (1986 $)')
plt.legend();
Explanation: age contains equally-spaced points from 18 to 85, and age2 contains those values squared.
Now we can set educ to 12 years of education and generate predictions:
End of explanation
# Solution goes here
Explanation: This plot shows the structure of the model, which is a parabola. We also plot the data as an average in each age group.
Exercise: Generate the same plot, but show predictions for three levels of education: 12, 14, and 16 years.
End of explanation
formula = 'realinc ~ educ + educ2 + age + age2 + C(sex)'
results = smf.ols(formula, data=gss).fit()
results.params
Explanation: Adding categorical variables
In a formula string, we can use C() to indicate that a variable should be treated as categorical. For example, the following model contains sex as a categorical variable.
End of explanation
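By default the lowest code (sex=1, men) is the reference level. If women were wanted as the baseline instead, patsy's Treatment coding can be requested in the formula; a sketch, not fitted above:
formula_alt = 'realinc ~ educ + educ2 + age + age2 + C(sex, Treatment(reference=2))'
results_alt = smf.ols(formula_alt, data=gss).fit()
results_alt.params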
# Solution goes here
# Solution goes here
Explanation: The estimated parameter indicates that sex=2, which indicates women, is associated with about \$4150 lower income, after controlling for age and education.
Exercise: Use groupby to group respondents by educ, then plot mean realinc for each education level.
End of explanation
# Solution goes here
# Solution goes here
Explanation: Exercise: Make a DataFrame with a range of values for educ and constant age=30. Compute age2 and educ2 accordingly.
Use this DataFrame to generate predictions for each level of education, holding age constant. Generate and plot separate predictions for men and women.
Also plot the data for comparison.
End of explanation
gss['gunlaw'].value_counts()
Explanation: Logistic regression
Let's use logistic regression to see what factors are associated with support for gun control. The variable we'll use is gunlaw, which represents the response to this question: "Would you favor or oppose a law which would require a person to obtain a police permit before he or she could buy a gun?"
Here are the values.
End of explanation
gss['gunlaw'].replace([0, 8, 9], np.nan, inplace=True)
Explanation: 1 means yes, 2 means no, 0 means the question wasn't asked; 8 and 9 mean the respondent doesn't know or refused to answer.
First I'll replace 0, 8, and 9 with NaN
End of explanation
gss['gunlaw'].replace(2, 0, inplace=True)
Explanation: In order to put gunlaw on the left side of a regression, we have to recode it so 0 means no and 1 means yes.
End of explanation
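An equivalent recoding could use map, which sends unmapped values (including NaN) to NaN; a tiny standalone demonstration, separate from the GSS data:
import numpy as np
import pandas as pd
demo = pd.Series([1.0, 2.0, np.nan])
print(demo.map({1.0: 1, 2.0: 0}))   # -> 1.0, 0.0, NaN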
gss['gunlaw'].value_counts()
Explanation: Here's what it looks like after recoding.
End of explanation
results = smf.logit('gunlaw ~ age + age2 + educ + educ2 + C(sex)', data=gss).fit()
Explanation: Now we can run a logistic regression model
End of explanation
results.summary()
Explanation: Here are the results.
End of explanation
results.params
Explanation: Here are the parameters. The coefficient of sex=2 is positive, which indicates that women are more likely to support gun control, at least for this question.
End of explanation
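Because these are logistic-regression coefficients on the log-odds scale, exponentiating them gives odds ratios, which can be easier to read (a quick sketch using the fitted results above):
odds_ratios = np.exp(results.params)
print(odds_ratios)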
grouped = gss.groupby('age')
favor_by_age = grouped['gunlaw'].mean()
plt.plot(favor_by_age, 'o', alpha=0.5)
df = pd.DataFrame()
df['age'] = np.linspace(18, 89)
df['educ'] = 12
df['age2'] = df['age']**2
df['educ2'] = df['educ']**2
df['sex'] = 1
pred = results.predict(df)
plt.plot(df['age'], pred, label='Male')
df['sex'] = 2
pred = results.predict(df)
plt.plot(df['age'], pred, label='Female')
plt.xlabel('Age')
plt.ylabel('Probability of favoring gun law')
plt.legend();
Explanation: The other parameters are not easy to interpret, but again we can use the regression results to generate predictions, which makes it possible to visualize the model.
I'll make a DataFrame with a range of ages and a fixed level of education, and generate predictions for men and women.
End of explanation
# Solution goes here
Explanation: Over the range of ages, women are more likely to support gun control than men, by about 15 percentage points.
Exercise: Generate a similar plot as a function of education, with constant age=40.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: Use the variable grass to explore support for legalizing marijuana. This variable records the response to this question: "Do you think the use of marijuana should be made legal or not?"
Recode grass for use with logistic regression.
Run a regression model with age, education, and sex as explanatory variables.
Use the model to generate predictions for a range of ages, with education held constant, and plot the predictions for men and women. Also plot the mean level of support in each age group.
Use the model to generate predictions for a range of education levels, with age held constant, and plot the predictions for men and women. Also plot the mean level of support at each education level.
Note: This last graph might not look like a parabola. Why not?
End of explanation |
13,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def lorentz_derivs(yvec, t, sigma, rho, beta):
Compute the the derivatives for the Lorentz system at yvec(t).
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y-x)
dy = x*(rho-z)-y
dz = x*y - beta*z
return np.array([dx,dy,dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
t = np.linspace(0,max_time,int(max_time*250))
soln = odeint(lorentz_derivs,ic,t,args=(sigma,rho,beta))
return soln,t
assert True # leave this to grade solve_lorenz
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
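A quick way to exercise the function (a sketch; with 250 points per time unit, max_time=2.0 should give 500 rows):
soln, t = solve_lorentz([1.0, 1.0, 1.0], max_time=2.0)
print(soln.shape, t.shape)   # expected: (500, 3) (500,)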
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
np.random.seed(1)
ic = np.random.rand(N,3)*30-15
plt.figure(figsize=(10,7))
    colors = plt.cm.hot(np.linspace(0, 1, N))
    for n, c in zip(ic, colors):
        # solve each initial condition once, then plot x(t) against z(t)
        soln, t = solve_lorentz(n, max_time, sigma, rho, beta)
        plt.plot(soln[:, 0], soln[:, 2], color=c)
plt.xlabel('x(t)')
plt.ylabel('z(t)')
plt.title('x(t) vs. z(t)')
plt.tick_params(top=False,right=False)
plt.ylim(-20,65)
plt.xlim(-30,30)
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
interact(plot_lorentz,N=(1,50),max_time=(1,10),sigma=(0.0,50.0),rho=(0.0,50.0),beta=fixed(8.0/3.0));
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation |
13,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[0]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
img
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
tf.reset_default_graph()
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None, 28*28], 'input')
targets_ = tf.placeholder(tf.float32, [None, 28*28], 'target')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu, name='encoded')
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, 28*28, activation=None, name='logits')
# Sigmoid output from logits
decoded = tf.sigmoid(logits, name='decoded')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits, name='loss')
# Mean of the loss
cost = tf.reduce_mean(loss, name='cost')
# Adam optimizer
opt = tf.train.AdamOptimizer(0.0005).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
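If the trained weights were worth keeping, one option (a sketch; the checkpoint path is only an example) is a tf.train.Saver checkpoint taken while the session is still open:
saver = tf.train.Saver()
saver.save(sess, './simple_autoencoder.ckpt')   # example path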
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(10,2))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
13,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm4', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: GFDL-ESM4
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
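For example (illustrative values only, not an actual author):
# DOC.set_author("Jane Doe", "jane.doe@example.org")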
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
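For instance, if the ocean component is a general-circulation ocean model, the entry would use one of the valid choices listed above (example only):
# DOC.set_value("OGCM")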
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
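As a hedged illustration only (the value below is an assumption, not a statement about any real configuration), a BOOLEAN property such as this is filled with one of the two listed choices:
# Hypothetical example: grid resolution does not change during execution
DOC.set_value(False)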
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
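A FLOAT property takes a plain number; the value below is only a hypothetical placeholder:
# Hypothetical example: a 1 m thick surface level
DOC.set_value(1.0)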
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
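For a 1.N enumeration like this one, more than one of the listed choices can be recorded. The choices below are hypothetical placeholders, assuming each DOC.set_value call adds one entry:
# Hypothetical example choices - record the properties your schemes actually conserve
DOC.set_value("Salt")
DOC.set_value("Volume of ocean")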
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
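As a minimal sketch (the choice below is an assumption, not drawn from any documented model), a 1.1 enumeration is filled with exactly one of the valid choices listed in the cell above:
# Hypothetical example: a Z* vertical coordinate
DOC.set_value("Z*-coordinate")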
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
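As an illustrative sketch only (the number is an assumption, not a recommended setting), an INTEGER property such as this time step is set directly with a Python int:
# Hypothetical example: a 3600 s (1 hour) tracer time step
DOC.set_value(3600)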
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
13,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building the dataset of research papers
(Adapted from
Step1: The datasets will be saved as serialized Python objects, compressed with bzip2.
Saving/loading them will therefore require the pickle and bz2 modules.
Step2: EInfo
Step3: In search_fields, we find 'TIAB' ('Free text associated with Abstract/Title') as a possible search field to use in searches.
Step4: ESearch
Step5: Note how the result being produced is not in Python's native string format
Step6: The part of the query's result we are most interested in is accessible through
Step7: PubMed IDs dataset
We will now assemble a dataset comprised of research articles containing the keyword "evolution", in either their titles or abstracts.
Step8: Taking a look at what we just retrieved, here are the last 5 elements of the Ids list
Step9: ESummary
Step10: For now, we'll keep just some basic information for each paper
Step11: Summaries dataset
We are now ready to assemble a dataset containing the summaries of all the paper Ids we previously fetched.
To reduce the memory footprint, and to ensure the saved datasets won't depend on Biopython being installed to be properly loaded, values returned by Entrez.read() will be converted to their corresponding native Python types. We start by defining a function for helping with the conversion of strings
Step12: Let us take a look at the first 3 retrieved summaries
Step13: EFetch
Step14: q is a list, with each member corresponding to a queried id. Because here we only queried for one id, its results are then in q[0].
Step15: At q[0] we find a dictionary containing two keys, the contents of which we print below.
Step16: The key 'MedlineCitation' maps into another dictionary. In that dictionary, most of the information is contained under the key 'Article'. To minimize the clutter, below we show the contents of 'MedlineCitation' excluding its 'Article' member, and below that we then show the contents of 'Article'.
Step17: A paper's abstract can therefore be accessed with
Step18: A paper for which no abstract is available will simply not contain the 'Abstract' key in its 'Article' dictionary
Step19: Some of the ids in our dataset refer to books from the NCBI Bookshelf, a collection of freely available, downloadable, on-line versions of selected biomedical books. For such ids, Entrez.efetch() returns a slightly different structure, where the keys [u'BookDocument', u'PubmedBookData'] take the place of the [u'MedlineCitation', u'PubmedData'] keys we saw above.
Here is an example of the data we obtain for the id corresponding to the book The Social Biology of Microbial Communities
Step20: In a book from the NCBI Bookshelf, its abstract can then be accessed as such
Step21: Abstracts dataset
We can now assemble a dataset mapping paper ids to their abstracts.
Step22: Taking a look at one paper's abstract
Step23: ELink
Step24: Because we restricted our search to papers in an open-access journal, you can then follow their DOIs to freely access their PDFs at the journal's website
Step25: We have in CA_citing[paper_id][0]['LinkSetDb'][0]['Link'] the list of papers citing paper_id. To get it as just a list of ids, we can do
Step26: However, one more step is needed, as what we have now are PubMed Central IDs, and not PubMed IDs. Their conversion can be achieved through an additional call to Entrez.elink()
Step27: And to check these papers
Step28: Citations dataset
We have now seen all the steps required to assemble a dataset of citations to each of the papers in our dataset.
Step29: At least one server query will be issued per paper in Ids. Because NCBI allows for at most 3 queries per second (see here), this dataset will take a long time to assemble. Should you need to interrupt it for some reason, or the connection fail at some point, it is safe to just rerun the cell below until all data is collected.
Step30: To see that we have indeed obtained the data we expected, you can match the ids below, with the ids listed at the end of last section. | Python Code:
from Bio import Entrez
# NCBI requires you to set your email address to make use of NCBI's E-utilities
Entrez.email = "[email protected]"
Explanation: Building the dataset of research papers
(Adapted from: Building the "evolution" research papers dataset - Luís F. Simões. Converted to Python 3 and minor changes by Tobias Kuhn, 2015-10-22.)
The Entrez module, a part of the Biopython library, will be used to interface with PubMed.<br>
You can download Biopython from here.
In this notebook we will be covering several of the steps taken in the Biopython Tutorial, specifically in Chapter 9 Accessing NCBI’s Entrez databases.
End of explanation
import pickle, bz2, os
Explanation: The datasets will be saved as serialized Python objects, compressed with bzip2.
Saving/loading them will therefore require the pickle and bz2 modules.
End of explanation
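As a small sketch of the save/load pattern used throughout this notebook (the file name and object are hypothetical placeholders):
# Hypothetical example of the pickle + bz2 pattern used for all datasets below
example_obj = {'a': 1, 'b': 2}
pickle.dump( example_obj, bz2.BZ2File( 'data/example.pkl.bz2', 'wb' ) )
restored = pickle.load( bz2.BZ2File( 'data/example.pkl.bz2', 'rb' ) )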
# accessing extended information about the PubMed database
pubmed = Entrez.read( Entrez.einfo(db="pubmed"), validate=False )[u'DbInfo']
# list of possible search fields for use with ESearch:
search_fields = { f['Name']:f['Description'] for f in pubmed["FieldList"] }
Explanation: EInfo: Obtaining information about the Entrez databases
End of explanation
search_fields
Explanation: In search_fields, we find 'TIAB' ('Free text associated with Abstract/Title') as a possible search field to use in searches.
End of explanation
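As a quick illustration of how a field tag is appended to a query (this mirrors the search performed later; retmax=0 only returns the hit count, without any ids):
# Count hits for a term restricted to titles/abstracts via the [TIAB] field tag
hit_count = int( Entrez.read( Entrez.esearch( db="pubmed", term='air[TIAB]', retmax=0 ) )['Count'] )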
example_authors = ['Haasdijk E']
example_search = Entrez.read( Entrez.esearch( db="pubmed", term=' AND '.join([a+'[AUTH]' for a in example_authors]) ) )
example_search
Explanation: ESearch: Searching the Entrez databases
To have a look at the kind of data we get when searching the database, we'll perform a search for papers authored by Haasdijk:
End of explanation
type( example_search['IdList'][0] )
Explanation: Note how the result being produced is not in Python's native string format:
End of explanation
example_ids = [ int(id) for id in example_search['IdList'] ]
print(example_ids)
Explanation: The part of the query's result we are most interested in is accessible through
End of explanation
search_term = 'air'
Ids_file = 'data/' + search_term + '__Ids.pkl.bz2'
if os.path.exists( Ids_file ):
Ids = pickle.load( bz2.BZ2File( Ids_file, 'rb' ) )
else:
# determine the number of hits for the search term
search = Entrez.read( Entrez.esearch( db="pubmed", term=search_term+'[TIAB]', retmax=0 ) )
total = int( search['Count'] )
# `Ids` will be incrementally assembled, by performing multiple queries,
# each returning at most `retrieve_per_query` entries.
Ids_str = []
retrieve_per_query = 10000
for start in range( 0, total, retrieve_per_query ):
print('Fetching IDs of results [%d,%d]' % ( start, start+retrieve_per_query ) )
s = Entrez.read( Entrez.esearch( db="pubmed", term=search_term+'[TIAB]', retstart=start, retmax=retrieve_per_query ) )
Ids_str.extend( s[ u'IdList' ] )
# convert Ids to integers (and ensure that the conversion is reversible)
Ids = [ int(id) for id in Ids_str ]
for (id_str, id_int) in zip(Ids_str, Ids):
if str(id_int) != id_str:
raise Exception('Conversion of PubMed ID %s from string to integer it not reversible.' % id_str )
# Save list of Ids
pickle.dump( Ids, bz2.BZ2File( Ids_file, 'wb' ) )
total = len( Ids )
print('%d documents contain the search term "%s".' % ( total, search_term ) )
Explanation: PubMed IDs dataset
We will now assemble a dataset of research articles containing the chosen search term (set to "air" in the code above) in either their titles or abstracts.
End of explanation
Ids[:5]
Explanation: Taking a look at what we just retrieved, here are the first 5 elements of the Ids list:
End of explanation
example_paper = Entrez.read( Entrez.esummary(db="pubmed", id='23144668') )[0]
def print_dict( p ):
for k,v in p.items():
print(k)
print('\t', v)
print_dict(example_paper)
Explanation: ESummary: Retrieving summaries from primary IDs
To have a look at the kind of metadata we get from a call to Entrez.esummary(), we now fetch the summary of one of Haasdijk's papers (using one of the PubMed IDs we obtained earlier):
End of explanation
( example_paper['Title'], example_paper['AuthorList'], int(example_paper['PubDate'][:4]), example_paper['DOI'] )
Explanation: For now, we'll keep just some basic information for each paper: title, list of authors, publication year, and DOI.
In case you are not familiar with the DOI system, know that the paper above can be accessed through the link http://dx.doi.org/10.1007/s12065-012-0071-x (which is http://dx.doi.org/ followed by the paper's DOI).
End of explanation
Summaries_file = 'data/' + search_term + '__Summaries.pkl.bz2'
if os.path.exists( Summaries_file ):
Summaries = pickle.load( bz2.BZ2File( Summaries_file, 'rb' ) )
else:
# `Summaries` will be incrementally assembled, by performing multiple queries,
# each returning at most `retrieve_per_query` entries.
Summaries = []
retrieve_per_query = 500
print('Fetching Summaries of results: ')
for start in range( 0, len(Ids), retrieve_per_query ):
if (start % 10000 == 0):
print('')
print(start, end='')
else:
print('.', end='')
# build comma separated string with the ids at indexes [start, start+retrieve_per_query)
query_ids = ','.join( [ str(id) for id in Ids[ start : start+retrieve_per_query ] ] )
s = Entrez.read( Entrez.esummary( db="pubmed", id=query_ids ) )
# out of the retrieved data, we will keep only a tuple (title, authors, year, DOI), associated with the paper's id.
# (all values converted to native Python formats)
f = [
( int( p['Id'] ), (
str( p['Title'] ),
[ str(a) for a in p['AuthorList'] ],
int( p['PubDate'][:4] ), # keeps just the publication year
str( p.get('DOI', '') ) # papers for which no DOI is available get an empty string in their place
) )
for p in s
]
Summaries.extend( f )
# Save Summaries, as a dictionary indexed by Ids
Summaries = dict( Summaries )
pickle.dump( Summaries, bz2.BZ2File( Summaries_file, 'wb' ) )
Explanation: Summaries dataset
We are now ready to assemble a dataset containing the summaries of all the paper Ids we previously fetched.
To reduce the memory footprint, and to ensure the saved datasets won't depend on Biopython being installed to be properly loaded, values returned by Entrez.read() will be converted to their corresponding native Python types (plain int and str) as the records are retrieved:
End of explanation
{ id : Summaries[id] for id in Ids[:3] }
Explanation: Let us take a look at the first 3 retrieved summaries:
End of explanation
q = Entrez.read( Entrez.efetch(db="pubmed", id='23144668', retmode="xml") )
Explanation: EFetch: Downloading full records from Entrez
Entrez.efetch() is the function that will allow us to obtain paper abstracts. Let us start by taking a look at the kind of data it returns when we query PubMed's database.
End of explanation
type(q), len(q)
Explanation: q is a list, with each member corresponding to a queried id. Because here we only queried for one id, its results are then in q[0].
End of explanation
type(q[0]), q[0].keys()
print_dict( q[0][ 'PubmedData' ] )
Explanation: At q[0] we find a dictionary containing two keys, the contents of which we print below.
End of explanation
print_dict( { k:v for k,v in q[0][ 'MedlineCitation' ].items() if k!='Article' } )
print_dict( q[0][ 'MedlineCitation' ][ 'Article' ] )
Explanation: The key 'MedlineCitation' maps into another dictionary. In that dictionary, most of the information is contained under the key 'Article'. To minimize the clutter, below we show the contents of 'MedlineCitation' excluding its 'Article' member, and below that we then show the contents of 'Article'.
End of explanation
{ int(q[0]['MedlineCitation']['PMID']) : str(q[0]['MedlineCitation']['Article']['Abstract']['AbstractText'][0]) }
Explanation: A paper's abstract can therefore be accessed with:
End of explanation
print_dict( Entrez.read( Entrez.efetch(db="pubmed", id='17782550', retmode="xml") )[0]['MedlineCitation']['Article'] )
Explanation: A paper for which no abstract is available will simply not contain the 'Abstract' key in its 'Article' dictionary:
End of explanation
r = Entrez.read( Entrez.efetch(db="pubmed", id='24027805', retmode="xml") )
print_dict( r[0][ 'PubmedBookData' ] )
print_dict( r[0][ 'BookDocument' ] )
Explanation: Some of the ids in our dataset refer to books from the NCBI Bookshelf, a collection of freely available, downloadable, on-line versions of selected biomedical books. For such ids, Entrez.efetch() returns a slightly different structure, where the keys [u'BookDocument', u'PubmedBookData'] take the place of the [u'MedlineCitation', u'PubmedData'] keys we saw above.
Here is an example of the data we obtain for the id corresponding to the book The Social Biology of Microbial Communities:
End of explanation
{ int(r[0]['BookDocument']['PMID']) : str(r[0]['BookDocument']['Abstract']['AbstractText'][0]) }
Explanation: In a book from the NCBI Bookshelf, its abstract can then be accessed as such:
End of explanation
Abstracts_file = 'data/' + search_term + '__Abstracts.pkl.bz2'
import http.client
from collections import deque
if os.path.exists( Abstracts_file ):
Abstracts = pickle.load( bz2.BZ2File( Abstracts_file, 'rb' ) )
else:
# `Abstracts` will be incrementally assembled, by performing multiple queries,
# each returning at most `retrieve_per_query` entries.
Abstracts = deque()
retrieve_per_query = 500
print('Fetching Abstracts of results: ')
for start in range( 0, len(Ids), retrieve_per_query ):
if (start % 10000 == 0):
print('')
print(start, end='')
else:
print('.', end='')
# build comma separated string with the ids at indexes [start, start+retrieve_per_query)
query_ids = ','.join( [ str(id) for id in Ids[ start : start+retrieve_per_query ] ] )
# issue requests to the server, until we get the full amount of data we expect
while True:
try:
s = Entrez.read( Entrez.efetch(db="pubmed", id=query_ids, retmode="xml" ) )
except http.client.IncompleteRead:
print('r', end='')
continue
break
i = 0
for p in s:
abstr = ''
if 'MedlineCitation' in p:
pmid = p['MedlineCitation']['PMID']
if 'Abstract' in p['MedlineCitation']['Article']:
abstr = p['MedlineCitation']['Article']['Abstract']['AbstractText'][0]
elif 'BookDocument' in p:
pmid = p['BookDocument']['PMID']
if 'Abstract' in p['BookDocument']:
abstr = p['BookDocument']['Abstract']['AbstractText'][0]
else:
raise Exception('Unrecognized record type, for id %d (keys: %s)' % (Ids[start+i], str(p.keys())) )
Abstracts.append( (int(pmid), str(abstr)) )
i += 1
# Save Abstracts, as a dictionary indexed by Ids
Abstracts = dict( Abstracts )
pickle.dump( Abstracts, bz2.BZ2File( Abstracts_file, 'wb' ) )
Explanation: Abstracts dataset
We can now assemble a dataset mapping paper ids to their abstracts.
End of explanation
Abstracts[26488732]
Explanation: Taking a look at one paper's abstract:
End of explanation
CA_search_term = search_term+'[TIAB] AND PLoS computational biology[JOUR]'
CA_ids = Entrez.read( Entrez.esearch( db="pubmed", term=CA_search_term ) )['IdList']
CA_ids
CA_summ = {
p['Id'] : ( p['Title'], p['AuthorList'], p['PubDate'][:4], p['FullJournalName'], p.get('DOI', '') )
for p in Entrez.read( Entrez.esummary(db="pubmed", id=','.join( CA_ids )) )
}
CA_summ
Explanation: ELink: Searching for related items in NCBI Entrez
To understand how to obtain paper citations with Entrez, we will first assemble a small set of PubMed IDs, and then query for their citations.
To that end, we search here for papers published in the PLOS Computational Biology journal (as before, having also the word "evolution" in either the title or abstract):
End of explanation
CA_citing = {
id : Entrez.read( Entrez.elink(
cmd = "neighbor", # ELink command mode: "neighbor", returns
# a set of UIDs in `db` linked to the input UIDs in `dbfrom`.
dbfrom = "pubmed", # Database containing the input UIDs: PubMed
db = "pmc", # Database from which to retrieve UIDs: PubMed Central
LinkName = "pubmed_pmc_refs", # Name of the Entrez link to retrieve: "pubmed_pmc_refs", gets
# "Full-text articles in the PubMed Central Database that cite the current articles"
from_uid = id # input UIDs
) )
for id in CA_ids
}
CA_citing['24853675']
Explanation: Because we restricted our search to papers in an open-access journal, you can then follow their DOIs to freely access their PDFs at the journal's website: 10.1371/journal.pcbi.0040023, 10.1371/journal.pcbi.1000948, 10.1371/journal.pcbi.1002236.
We will now issue calls to Entrez.elink() using these PubMed IDs, to retrieve the IDs of papers that cite them.
The database from which the IDs will be retrieved is PubMed Central, a free digital database of full-text scientific literature in the biomedical and life sciences.
You can, for instance, find archived there, under PubMed Central ID 2951343, the paper "Critical dynamics in the evolution of stochastic strategies for the iterated prisoner's dilemma", which as we saw above, has the PubMed ID 20949101.
A complete list of the kinds of links you can retrieve with Entrez.elink() can be found here.
End of explanation
cits = [ l['Id'] for l in CA_citing['24853675'][0]['LinkSetDb'][0]['Link'] ]
cits
Explanation: We have in CA_citing[paper_id][0]['LinkSetDb'][0]['Link'] the list of papers citing paper_id. To get it as just a list of ids, we can do
End of explanation
cits_pm = Entrez.read( Entrez.elink( dbfrom="pmc", db="pubmed", LinkName="pmc_pubmed", from_uid=",".join(cits)) )
cits_pm
ids_map = { pmc_id : link['Id'] for (pmc_id,link) in zip(cits_pm[0]['IdList'], cits_pm[0]['LinkSetDb'][0]['Link']) }
ids_map
Explanation: However, one more step is needed, as what we have now are PubMed Central IDs, and not PubMed IDs. Their conversion can be achieved through an additional call to Entrez.elink():
End of explanation
{ p['Id'] : ( p['Title'], p['AuthorList'], p['PubDate'][:4], p['FullJournalName'], p.get('DOI', '') )
for p in Entrez.read( Entrez.esummary(db="pubmed", id=','.join( ids_map.values() )) )
}
Explanation: And to check these papers:
End of explanation
Citations_file = 'data/' + search_term + '__Citations.pkl.bz2'
Citations = []
Explanation: Citations dataset
We have now seen all the steps required to assemble a dataset of citations to each of the papers in our dataset.
End of explanation
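Once the cell below has run to completion and Citations has been converted to a dictionary, one simple way to use it (an illustrative sketch, not part of the original notebook) is to rank papers by how many citing articles were found in PubMed Central:

```python
# number of citing papers found per PubMed ID, sorted from most to least cited
citation_counts = sorted(((len(refs), pmid) for pmid, refs in Citations.items()), reverse=True)
print(citation_counts[:5])
```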
import http.client
if Citations == [] and os.path.exists( Citations_file ):
Citations = pickle.load( bz2.BZ2File( Citations_file, 'rb' ) )
if len(Citations) < len(Ids):
i = len(Citations)
    checkpoint = len(Ids) // 10 + 1 # save to hard drive at every 10% of Ids fetched (integer division so the checkpoint test below works)
for pm_id in Ids[i:]: # either starts from index 0, or resumes from where we previously left off
while True:
try:
# query for papers archived in PubMed Central that cite the paper with PubMed ID `pm_id`
c = Entrez.read( Entrez.elink( dbfrom = "pubmed", db="pmc", LinkName = "pubmed_pmc_refs", id=str(pm_id) ) )
c = c[0]['LinkSetDb']
if len(c) == 0:
# no citations found for the current paper
c = []
else:
c = [ l['Id'] for l in c[0]['Link'] ]
# convert citations from PubMed Central IDs to PubMed IDs
p = []
retrieve_per_query = 500
for start in range( 0, len(c), retrieve_per_query ):
query_ids = ','.join( c[start : start+retrieve_per_query] )
r = Entrez.read( Entrez.elink( dbfrom="pmc", db="pubmed", LinkName="pmc_pubmed", from_uid=query_ids ) )
# select the IDs. If no matching PubMed ID was found, [] is returned instead
p.extend( [] if r[0]['LinkSetDb']==[] else [ int(link['Id']) for link in r[0]['LinkSetDb'][0]['Link'] ] )
c = p
except http.client.BadStatusLine:
# Presumably, the server closed the connection before sending a valid response. Retry until we have the data.
print('r')
continue
break
Citations.append( (pm_id, c) )
if (i % 10000 == 0):
print('')
print(i, end='')
if (i % 100 == 0):
print('.', end='')
i += 1
if i % checkpoint == 0:
print('\tsaving at checkpoint', i)
pickle.dump( Citations, bz2.BZ2File( Citations_file, 'wb' ) )
print('\n done.')
# Save Citations, as a dictionary indexed by Ids
Citations = dict( Citations )
pickle.dump( Citations, bz2.BZ2File( Citations_file, 'wb' ) )
Explanation: At least one server query will be issued per paper in Ids. Because NCBI allows for at most 3 queries per second (see the NCBI E-utilities usage guidelines), this dataset will take a long time to assemble. Should you need to interrupt it for some reason, or should the connection fail at some point, it is safe to just rerun the cell below until all data is collected.
End of explanation
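If the server starts rejecting requests, a minimal mitigation sketch is to identify yourself and throttle the loop (hedged: Entrez.email is expected by NCBI's usage policy, Entrez.api_key is only available in recent Biopython versions, and the key below is a placeholder):

```python
import time

Entrez.email = "your.name@example.org"   # identify yourself to NCBI
Entrez.api_key = "YOUR_NCBI_API_KEY"     # optional: raises the rate limit (placeholder value)

# placed inside the per-paper loop above, this keeps us under ~3 requests per second
time.sleep(0.35)
```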
Citations[24853675]
Explanation: To see that we have indeed obtained the data we expected, you can match the ids below, with the ids listed at the end of last section.
End of explanation |
13,292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MaxPooling2D
[pooling.MaxPooling2D.0] input 6x6x3, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last'
Step1: [pooling.MaxPooling2D.1] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last'
Step2: [pooling.MaxPooling2D.2] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last'
Step3: [pooling.MaxPooling2D.3] input 6x6x3, pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last'
Step4: [pooling.MaxPooling2D.4] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last'
Step5: [pooling.MaxPooling2D.5] input 6x6x3, pool_size=(2, 2), strides=None, padding='same', data_format='channels_last'
Step6: [pooling.MaxPooling2D.6] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last'
Step7: [pooling.MaxPooling2D.7] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last'
Step8: [pooling.MaxPooling2D.8] input 6x6x3, pool_size=(3, 3), strides=None, padding='same', data_format='channels_last'
Step9: [pooling.MaxPooling2D.9] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last'
Step10: [pooling.MaxPooling2D.10] input 5x6x3, pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first'
Step11: [pooling.MaxPooling2D.11] input 5x6x3, pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first'
Step12: [pooling.MaxPooling2D.12] input 4x6x4, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first'
Step13: export for Keras.js tests | Python Code:
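The cells below assume a setup cell that is not shown in this excerpt: it provides the imports, the DATA dictionary and the format_decimal helper. A minimal sketch of what that setup presumably looks like (the exact rounding behaviour of format_decimal is an assumption):

```python
import json
import numpy as np
from keras.models import Model
from keras.layers import Input, MaxPooling2D

DATA = {}  # collects the generated test fixtures, keyed by test name

def format_decimal(values, places=6):
    # round to a fixed number of decimals so the exported JSON fixtures are stable (assumed helper)
    return [round(float(v), places) for v in values]
```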
data_in_shape = (6, 6, 3)
L = MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(270)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: MaxPooling2D
[pooling.MaxPooling2D.0] input 6x6x3, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(271)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.1] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 7, 3)
L = MaxPooling2D(pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(272)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.2] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = MaxPooling2D(pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(273)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.3] input 6x6x3, pool_size=(3, 3), strides=None, padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = MaxPooling2D(pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(274)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.4] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='valid', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = MaxPooling2D(pool_size=(2, 2), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(275)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.5] input 6x6x3, pool_size=(2, 2), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = MaxPooling2D(pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(276)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.6'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.6] input 6x6x3, pool_size=(2, 2), strides=(1, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 7, 3)
L = MaxPooling2D(pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(277)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.7'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.7] input 6x7x3, pool_size=(2, 2), strides=(2, 1), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = MaxPooling2D(pool_size=(3, 3), strides=None, padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(278)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.8'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.8] input 6x6x3, pool_size=(3, 3), strides=None, padding='same', data_format='channels_last'
End of explanation
data_in_shape = (6, 6, 3)
L = MaxPooling2D(pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(279)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.9'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.9] input 6x6x3, pool_size=(3, 3), strides=(3, 3), padding='same', data_format='channels_last'
End of explanation
data_in_shape = (5, 6, 3)
L = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(280)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.10'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.10] input 5x6x3, pool_size=(3, 3), strides=(2, 2), padding='valid', data_format='channels_first'
End of explanation
data_in_shape = (5, 6, 3)
L = MaxPooling2D(pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(281)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.11'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.11] input 5x6x3, pool_size=(3, 3), strides=(1, 1), padding='same', data_format='channels_first'
End of explanation
data_in_shape = (4, 6, 4)
L = MaxPooling2D(pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(282)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['pooling.MaxPooling2D.12'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
Explanation: [pooling.MaxPooling2D.12] input 4x6x4, pool_size=(2, 2), strides=None, padding='valid', data_format='channels_first'
End of explanation
import os
filename = '../../../test/data/layers/pooling/MaxPooling2D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
Explanation: export for Keras.js tests
End of explanation |
13,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to Inference in Pyro
Much of modern machine learning can be cast as approximate inference and expressed succinctly in a language like Pyro. To motivate the rest of this tutorial, let's build a generative model for a simple physical problem so that we can use Pyro's inference machinery to solve it. However, we will first import the required modules for this tutorial
Step1: A Simple Example
Suppose we are trying to figure out how much something weighs, but the scale we're using is unreliable and gives slightly different answers every time we weigh the same object. We could try to compensate for this variability by integrating the noisy measurement information with a guess based on some prior knowledge about the object, like its density or material properties. The following model encodes this process
Step2: Conditioning
The real utility of probabilistic programming is in the ability to condition generative models on observed data and infer the latent factors that might have produced that data. In Pyro, we separate the expression of conditioning from its evaluation via inference, making it possible to write a model once and condition it on many different observations. Pyro supports constraining a model's internal sample statements to be equal to a given set of observations.
Consider scale once again. Suppose we want to sample from the distribution of weight given input guess = 8.5, but now we have observed that measurement == 9.5. That is, we wish to infer the distribution
Step3: Because it behaves just like an ordinary Python function, conditioning can be deferred or parametrized with Python's lambda or def
Step4: In some cases it might be more convenient to pass observations directly to individual pyro.sample statements instead of using pyro.condition. The optional obs keyword argument is reserved by pyro.sample for that purpose
Step5: Finally, in addition to pyro.condition for incorporating observations, Pyro also contains pyro.do, an implementation of Pearl's do-operator used for causal inference with an identical interface to pyro.condition. condition and do can be mixed and composed freely, making Pyro a powerful tool for model-based causal inference.
Flexible Approximate Inference With Guide Functions
Let's return to conditioned_scale. Now that we have conditioned on an observation of measurement, we can use Pyro's approximate inference algorithms to estimate the distribution over weight given guess and measurement == data.
Inference algorithms in Pyro, such as pyro.infer.SVI, allow us to use arbitrary stochastic functions, which we will call guide functions or guides, as approximate posterior distributions. Guide functions must satisfy these two criteria to be valid approximations for a particular model
Step6: Parametrized Stochastic Functions and Variational Inference
Although we could write out the exact posterior distribution for scale, in general it is intractable to specify a guide that is a good approximation to the posterior distribution of an arbitrary conditioned stochastic function. In fact, stochastic functions for which we can determine the true posterior exactly are the exception rather than the rule. For example, even a version of our scale example with a nonlinear function in the middle may be intractable
Step7: What we can do instead is use the top-level function pyro.param to specify a family of guides indexed by named parameters, and search for the member of that family that is the best approximation according to some loss function. This approach to approximate posterior inference is called variational inference.
pyro.param is a frontend for Pyro's key-value parameter store, which is described in more detail in the documentation. Like pyro.sample, pyro.param is always called with a name as its first argument. The first time pyro.param is called with a particular name, it stores its argument in the parameter store and then returns that value. After that, when it is called with that name, it returns the value from the parameter store regardless of any other arguments. It is similar to simple_param_store.setdefault here, but with some additional tracking and management functionality.
```python
simple_param_store = {}
a = simple_param_store.setdefault("a", torch.randn(1))
```
For example, we can parametrize a and b in scale_posterior_guide instead of specifying them by hand
Step8: As an aside, note that in scale_parametrized_guide, we had to apply torch.abs to parameter b because the standard deviation of a normal distribution has to be positive; similar restrictions also apply to parameters of many other distributions. The PyTorch distributions library, which Pyro is built on, includes a constraints module for enforcing such restrictions, and applying constraints to Pyro parameters is as easy as passing the relevant constraint object to pyro.param
Step9: Pyro is built to enable stochastic variational inference, a powerful and widely applicable class of variational inference algorithms with three key characteristics | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import torch
import pyro
import pyro.infer
import pyro.optim
import pyro.distributions as dist
pyro.set_rng_seed(101)
Explanation: An Introduction to Inference in Pyro
Much of modern machine learning can be cast as approximate inference and expressed succinctly in a language like Pyro. To motivate the rest of this tutorial, let's build a generative model for a simple physical problem so that we can use Pyro's inference machinery to solve it. However, we will first import the required modules for this tutorial:
End of explanation
def scale(guess):
weight = pyro.sample("weight", dist.Normal(guess, 1.0))
return pyro.sample("measurement", dist.Normal(weight, 0.75))
Explanation: A Simple Example
Suppose we are trying to figure out how much something weighs, but the scale we're using is unreliable and gives slightly different answers every time we weigh the same object. We could try to compensate for this variability by integrating the noisy measurement information with a guess based on some prior knowledge about the object, like its density or material properties. The following model encodes this process:
$${\sf weight} \, | \, {\sf guess} \sim \cal {\sf Normal}({\sf guess}, 1) $$
$${\sf measurement} \, | \, {\sf guess}, {\sf weight} \sim {\sf Normal}({\sf weight}, 0.75)$$
Note that this is a model not only for our belief over weight, but also for the result of taking a measurement of it. The model corresponds to the following stochastic function:
End of explanation
conditioned_scale = pyro.condition(scale, data={"measurement": torch.tensor(9.5)})
Explanation: Conditioning
The real utility of probabilistic programming is in the ability to condition generative models on observed data and infer the latent factors that might have produced that data. In Pyro, we separate the expression of conditioning from its evaluation via inference, making it possible to write a model once and condition it on many different observations. Pyro supports constraining a model's internal sample statements to be equal to a given set of observations.
Consider scale once again. Suppose we want to sample from the distribution of weight given input guess = 8.5, but now we have observed that measurement == 9.5. That is, we wish to infer the distribution:
$$({\sf weight} \, | \, {\sf guess}, {\sf measurement} = 9.5) \sim \, ? $$
Pyro provides the function pyro.condition to allow us to constrain the values of sample statements. pyro.condition is a higher-order function that takes a model and a dictionary of observations and returns a new model that has the same input and output signatures but always uses the given values at observed sample statements:
End of explanation
def deferred_conditioned_scale(measurement, guess):
return pyro.condition(scale, data={"measurement": measurement})(guess)
Explanation: Because it behaves just like an ordinary Python function, conditioning can be deferred or parametrized with Python's lambda or def:
End of explanation
def scale_obs(guess): # equivalent to conditioned_scale above
weight = pyro.sample("weight", dist.Normal(guess, 1.))
# here we condition on measurement == 9.5
return pyro.sample("measurement", dist.Normal(weight, 0.75), obs=torch.tensor(9.5))
Explanation: In some cases it might be more convenient to pass observations directly to individual pyro.sample statements instead of using pyro.condition. The optional obs keyword argument is reserved by pyro.sample for that purpose:
End of explanation
def perfect_guide(guess):
loc = (0.75**2 * guess + 9.5) / (1 + 0.75**2) # 9.14
scale = np.sqrt(0.75**2 / (1 + 0.75**2)) # 0.6
return pyro.sample("weight", dist.Normal(loc, scale))
Explanation: Finally, in addition to pyro.condition for incorporating observations, Pyro also contains pyro.do, an implementation of Pearl's do-operator used for causal inference with an identical interface to pyro.condition. condition and do can be mixed and composed freely, making Pyro a powerful tool for model-based causal inference.
Flexible Approximate Inference With Guide Functions
Let's return to conditioned_scale. Now that we have conditioned on an observation of measurement, we can use Pyro's approximate inference algorithms to estimate the distribution over weight given guess and measurement == data.
Inference algorithms in Pyro, such as pyro.infer.SVI, allow us to use arbitrary stochastic functions, which we will call guide functions or guides, as approximate posterior distributions. Guide functions must satisfy these two criteria to be valid approximations for a particular model:
1. all unobserved (i.e., not conditioned) sample statements that appear in the model appear in the guide.
2. the guide has the same input signature as the model (i.e., takes the same arguments)
Guide functions can serve as programmable, data-dependent proposal distributions for importance sampling, rejection sampling, sequential Monte Carlo, MCMC, and independent Metropolis-Hastings, and as variational distributions or inference networks for stochastic variational inference. Currently, importance sampling, MCMC, and stochastic variational inference are implemented in Pyro, and we plan to add other algorithms in the future.
Although the precise meaning of the guide is different across different inference algorithms, the guide function should generally be chosen so that, in principle, it is flexible enough to closely approximate the distribution over all unobserved sample statements in the model.
In the case of scale, it turns out that the true posterior distribution over weight given guess and measurement is actually ${\sf Normal}(9.14, 0.6)$. As the model is quite simple, we are able to determine our posterior distribution of interest analytically (for derivation, see for example Section 3.4 of these notes).
End of explanation
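For completeness, the numbers used in perfect_guide follow from the standard conjugate-normal update (a quick derivation added here; it matches the loc and scale computed in the code):

$${\sf weight} \, | \, {\sf guess}, {\sf measurement} = 9.5 \;\sim\; {\sf Normal}\!\left(\frac{0.75^2 \cdot {\sf guess} + 1^2 \cdot 9.5}{1^2 + 0.75^2},\; \sqrt{\frac{0.75^2 \cdot 1^2}{1^2 + 0.75^2}}\right)$$

With guess = 8.5 this gives a mean of about 9.14 and a standard deviation of 0.6.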
def intractable_scale(guess):
weight = pyro.sample("weight", dist.Normal(guess, 1.0))
return pyro.sample("measurement", dist.Normal(some_nonlinear_function(weight), 0.75))
Explanation: Parametrized Stochastic Functions and Variational Inference
Although we could write out the exact posterior distribution for scale, in general it is intractable to specify a guide that is a good approximation to the posterior distribution of an arbitrary conditioned stochastic function. In fact, stochastic functions for which we can determine the true posterior exactly are the exception rather than the rule. For example, even a version of our scale example with a nonlinear function in the middle may be intractable:
End of explanation
def scale_parametrized_guide(guess):
a = pyro.param("a", torch.tensor(guess))
b = pyro.param("b", torch.tensor(1.))
return pyro.sample("weight", dist.Normal(a, torch.abs(b)))
Explanation: What we can do instead is use the top-level function pyro.param to specify a family of guides indexed by named parameters, and search for the member of that family that is the best approximation according to some loss function. This approach to approximate posterior inference is called variational inference.
pyro.param is a frontend for Pyro's key-value parameter store, which is described in more detail in the documentation. Like pyro.sample, pyro.param is always called with a name as its first argument. The first time pyro.param is called with a particular name, it stores its argument in the parameter store and then returns that value. After that, when it is called with that name, it returns the value from the parameter store regardless of any other arguments. It is similar to simple_param_store.setdefault here, but with some additional tracking and management functionality.
```python
simple_param_store = {}
a = simple_param_store.setdefault("a", torch.randn(1))
```
For example, we can parametrize a and b in scale_posterior_guide instead of specifying them by hand:
End of explanation
from torch.distributions import constraints
def scale_parametrized_guide_constrained(guess):
a = pyro.param("a", torch.tensor(guess))
b = pyro.param("b", torch.tensor(1.), constraint=constraints.positive)
return pyro.sample("weight", dist.Normal(a, b)) # no more torch.abs
Explanation: As an aside, note that in scale_parametrized_guide, we had to apply torch.abs to parameter b because the standard deviation of a normal distribution has to be positive; similar restrictions also apply to parameters of many other distributions. The PyTorch distributions library, which Pyro is built on, includes a constraints module for enforcing such restrictions, and applying constraints to Pyro parameters is as easy as passing the relevant constraint object to pyro.param:
End of explanation
guess = 8.5
pyro.clear_param_store()
svi = pyro.infer.SVI(model=conditioned_scale,
guide=scale_parametrized_guide,
optim=pyro.optim.Adam({"lr": 0.003}),
loss=pyro.infer.Trace_ELBO())
losses, a, b = [], [], []
num_steps = 2500
for t in range(num_steps):
losses.append(svi.step(guess))
a.append(pyro.param("a").item())
b.append(pyro.param("b").item())
plt.plot(losses)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
print('a = ',pyro.param("a").item())
print('b = ', pyro.param("b").item())
plt.subplot(1,2,1)
plt.plot([0,num_steps],[9.14,9.14], 'k:')
plt.plot(a)
plt.ylabel('a')
plt.subplot(1,2,2)
plt.ylabel('b')
plt.plot([0,num_steps],[0.6,0.6], 'k:')
plt.plot(b)
plt.tight_layout()
Explanation: Pyro is built to enable stochastic variational inference, a powerful and widely applicable class of variational inference algorithms with three key characteristics:
Parameters are always real-valued tensors
We compute Monte Carlo estimates of a loss function from samples of execution histories of the model and guide
We use stochastic gradient descent to search for the optimal parameters.
Combining stochastic gradient descent with PyTorch's GPU-accelerated tensor math and automatic differentiation allows us to scale variational inference to very high-dimensional parameter spaces and massive datasets.
Pyro's SVI functionality is described in detail in the SVI tutorial. Here is a very simple example applying it to scale:
End of explanation |
13,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What "simultaneous revisions" means
"In each period t, all N players can change their action (if they want to)."
Coordination game
Consider the case where each player chooses a strategy under the following payoff table.
[(4, 4), (0, 3)]
[(3, 0), (2, 2)]
The mixed-strategy Nash equilibrium profiles are:
(1, 0), (1, 0)
(2/3, 1/3), (2/3, 1/3)
(0, 1), (0, 1)
Suppose the total number of players (N) is 5 and that 3 of them are currently playing strategy 1.
Then, if we also allow a player to be matched against himself, 3/5 > 1/3 implies that playing strategy 1 is the better choice for every player.
Therefore, if everyone is given the chance to revise their strategy at the same time,
the probability that all 5 players choose strategy 1 in the next period is
${}_5 C _5 (1-\frac{\epsilon}{2})^{5} (\frac{\epsilon}{2})^{0}$ (that is, $(1-\frac{\epsilon}{2})^{5}$),
the probability that 4 players choose strategy 1 is
${}_5 C _4 (1-\frac{\epsilon}{2})^{4} (\frac{\epsilon}{2})^{1}$,
...
and the probability that 0 players choose strategy 1 is
${}_5 C _0 (1-\frac{\epsilon}{2})^{0} (\frac{\epsilon}{2})^{5}$ (that is, $(\frac{\epsilon}{2})^{5}$).
It follows that the distribution of the number of players taking strategy 1 is a binomial distribution.
A convenient function for working with the binomial distribution in code is scipy.stats.binom.
Quoted from the documentation:
```
Notes
The probability mass function for binom is
Step1: Consider the situation in which 3 players take strategy 1 in period t, and assume epsilon is 0.1.
The probability $(1-\frac{\epsilon}{2})^{5}$ that all 5 players take strategy 1 in period t+1 is:
Step2: The probability ${}_5 C _4 (1-\frac{\epsilon}{2})^{4} (\frac{\epsilon}{2})^{1}$ that 4 players take strategy 1 in period t+1 is:
Step3: The probability ${}_5 C _3 (1-\frac{\epsilon}{2})^{3} (\frac{\epsilon}{2})^{2}$ that 3 players take strategy 1 in period t+1 is:
Step4: Plotting the probability of each possible number of players taking strategy 1 in period t+1: | Python Code:
%matplotlib inline
from scipy.stats import binom
import matplotlib.pyplot as plt
Explanation: What "simultaneous revisions" means
"In each period t, all N players can change their action (if they want to)."
Coordination game
Consider the case where each player chooses a strategy under the following payoff table.
[(4, 4), (0, 3)]
[(3, 0), (2, 2)]
The mixed-strategy Nash equilibrium profiles are:
(1, 0), (1, 0)
(2/3, 1/3), (2/3, 1/3)
(0, 1), (0, 1)
Suppose the total number of players (N) is 5 and that 3 of them are currently playing strategy 1.
Then, if we also allow a player to be matched against himself, 3/5 > 1/3 implies that playing strategy 1 is the better choice for every player.
Therefore, if everyone is given the chance to revise their strategy at the same time,
the probability that all 5 players choose strategy 1 in the next period is
${}_5 C _5 (1-\frac{\epsilon}{2})^{5} (\frac{\epsilon}{2})^{0}$ (that is, $(1-\frac{\epsilon}{2})^{5}$),
the probability that 4 players choose strategy 1 is
${}_5 C _4 (1-\frac{\epsilon}{2})^{4} (\frac{\epsilon}{2})^{1}$,
...
and the probability that 0 players choose strategy 1 is
${}_5 C _0 (1-\frac{\epsilon}{2})^{0} (\frac{\epsilon}{2})^{5}$ (that is, $(\frac{\epsilon}{2})^{5}$).
It follows that the distribution of the number of players taking strategy 1 is a binomial distribution.
A convenient function for working with the binomial distribution in code is scipy.stats.binom.
Quoted from the documentation:
```
Notes
The probability mass function for binom is:
binom.pmf(k) = choose(n, k) * p**k * (1-p)**(n-k)
for k in {0, 1,..., n}.
binom takes n and p as shape parameters.
```
In other words, it is used in the form binom.pmf(k, n, p), with the arguments defined as above.
Let's actually try it.
End of explanation
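As a quick sanity check on the first call below (with epsilon = 0.1, so $1-\frac{\epsilon}{2} = 0.95$): $(1-\frac{\epsilon}{2})^{5} = 0.95^5 \approx 0.774$, which is the value binom.pmf(5, 5, 0.95) should return.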
epsilon = 0.1
binom.pmf(5, 5, 1-epsilon/2)
Explanation: Consider the situation in which 3 players take strategy 1 in period t, and assume epsilon is 0.1.
The probability $(1-\frac{\epsilon}{2})^{5}$ that all 5 players take strategy 1 in period t+1 is:
End of explanation
binom.pmf(4, 5, 1-epsilon/2)
Explanation: The probability ${}_5 C _4 (1-\frac{\epsilon}{2})^{4} (\frac{\epsilon}{2})^{1}$ that 4 players take strategy 1 in period t+1 is:
End of explanation
binom.pmf(3, 5, 1-epsilon/2)
Explanation: The probability ${}_5 C _3 (1-\frac{\epsilon}{2})^{3} (\frac{\epsilon}{2})^{2}$ that 3 players take strategy 1 in period t+1 is:
End of explanation
P = [binom.pmf(i, 5, 1-epsilon/2) for i in range(5+1)]
plt.plot(range(5+1), P)
Explanation: Plotting the probability of each possible number of players taking strategy 1 in period t+1:
End of explanation |
13,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images belonging to one of the following classes
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images belonging to one of the following classes:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
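If you want to answer those questions directly, a small sketch you could run (the features and labels names below are placeholders for whatever arrays you load from a batch, not identifiers defined in this notebook):

```python
import numpy as np
# features: image array of shape (N, 32, 32, 3), labels: integer class ids 0-9 (assumed names)
print('pixel value range:', features.min(), '-', features.max())
print('distinct labels:', np.unique(labels))
```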
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
used min-max normalization
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
max_value = 255
min_value = 0
return (x - min_value) / (max_value - min_value)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
from sklearn import preprocessing
lb=preprocessing.LabelBinarizer()
lb.fit(range(10))
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
return lb.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
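For example, with the encoder above fit to range(10), a call like one_hot_encode([0, 9]) should return a (2, 10) array with a single 1 per row:

```python
one_hot_encode([0, 9])
# array([[1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
#        [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]])
```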
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a bach of image input
: image_shape: Shape of the images
: return: Tensor for image input.
shape = [x for x in image_shape]
shape.insert(0, None)
return tf.placeholder(tf.float32, shape=shape, name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, shape=[None, n_classes], name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allows for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
x_tensor_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor_shape[-1], conv_num_outputs], stddev=0.05))
bias = tf.Variable(tf.truncated_normal([conv_num_outputs], stddev=0.05))
conv_layer = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias=bias)
conv_layer = tf.nn.relu(conv_layer)
conv_layer = tf.nn.max_pool(conv_layer,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1],
padding='SAME')
return conv_layer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
return tf.contrib.layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
x_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([x_shape[1], num_outputs], stddev=0.05))
bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.05))
return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
x_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([x_shape[1], num_outputs], stddev=0.05))
bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.05))
return tf.add(tf.matmul(x_tensor, weights), bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds the dropout keep probability
: return: Tensor that represents logits
conv_output_depth = {
'layer1': 32,
'layer2': 64,
'layer3': 128
}
conv_ksize = (3, 3)
conv_strides = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_layer1 = conv2d_maxpool(x, conv_output_depth['layer1'], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_layer2 = conv2d_maxpool(conv_layer1, conv_output_depth['layer2'], conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_layer3 = conv2d_maxpool(conv_layer2, conv_output_depth['layer3'], conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flattened_layer = flatten(conv_layer3)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fc_layer1 = fully_conn(flattened_layer, num_outputs=512)
fc_layer1 = tf.nn.dropout(fc_layer1, keep_prob=keep_prob)
fc_layer2 = fully_conn(fc_layer1, num_outputs=256)
fc_layer2 = tf.nn.dropout(fc_layer2, keep_prob=keep_prob)
fc_layer3 = fully_conn(fc_layer2, num_outputs=128)
fc_layer3 = tf.nn.dropout(fc_layer3, keep_prob=keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
logits = output(fc_layer3, 10)
# TODO: return output
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={x: feature_batch, y:label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Training Loss: {:>10.4f} Accuracy: {:.6f}'.format(loss, valid_accuracy))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 10
batch_size = 128
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set it to a common size such as:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you've gotten good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
13,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 20
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License
Step1: So far the differential equations we've worked with have been first
order, which means they involve only first derivatives. In this
chapter, we turn our attention to second order ODEs, which can involve
both first and second derivatives.
We'll revisit the falling penny example from
Chapter xxx, and use run_solve_ivp to find the position and velocity of the penny as it falls, with and without air resistance.
Newton's second law of motion
First order ODEs can be written
$$\frac{dy}{dx} = G(x, y)$$
where $G$ is some function of $x$ and $y$ (see http
Step2: where y is height above the sidewalk and v is velocity.
The units m and s are from the units object provided by Pint
Step3: In addition, we'll specify the duration of the simulation and the step
size
Step4: With these parameters, the number of time steps is 100, which is good
enough for many problems. Once we have a solution, we will increase the
number of steps and see what effect it has on the results.
We need a System object to store the parameters
Step5: Now we need a slope function, and here's where things get tricky. As we have seen, run_solve_ivp can solve systems of first order ODEs, but Newton's law is a second order ODE. However, if we recognize that
Velocity, $v$, is the derivative of position, $dy/dt$, and
Acceleration, $a$, is the derivative of velocity, $dv/dt$,
we can rewrite Newton's law as a system of first order ODEs
Step6: The first parameter, state, contains the position and velocity of the
penny. The last parameter, system, contains the system parameter g,
which is the magnitude of acceleration due to gravity.
The second parameter, t, is time. It is not used in this slope
function because none of the factors of the model are time dependent. I include it anyway because this function will be called by run_solve_ivp, which always provides the same arguments,
whether they are needed or not.
The rest of the function is a straightforward translation of the
differential equations, with the substitution $a = -g$, which indicates that acceleration due to gravity is in the direction of decreasing $y$. slope_func returns a sequence containing the two derivatives.
Before calling run_solve_ivp, it is a good idea to test the slope
function with the initial conditions
Step7: The result is 0 m/s for velocity and 9.8 m/s$^2$ for acceleration. Now we call run_solve_ivp like this
Step8: results is a TimeFrame with two columns
Step9: Since acceleration is constant, velocity increases linearly and position decreases quadratically; as a result, the height curve is a parabola.
The last value of results.y is negative, which means we ran the simulation too long.
Step10: One way to solve this problem is to use the results to
estimate the time when the penny hits the sidewalk.
The ModSim library provides crossings, which takes a TimeSeries and a value, and returns a sequence of times when the series passes through the value. We can find the time when the height of the penny is 0 like this
Step11: The result is an array with a single value, 8.818 s. Now, we could run
the simulation again with t_end = 8.818, but there's a better way.
Events
As an option, run_solve_ivp can take an event function, which
detects an "event", like the penny hitting the sidewalk, and ends the
simulation.
Event functions take the same parameters as slope functions, state,
t, and system. They should return a value that passes through 0
when the event occurs. Here's an event function that detects the penny
hitting the sidewalk
Step12: The return value is the height of the penny, y, which passes through
0 when the penny hits the sidewalk.
We pass the event function to run_solve_ivp like this
Step13: Then we can get the flight time and final velocity like this
Step16: If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.
So it's a good thing there is air resistance.
Summary
But air resistance...
Exercises
Exercise
Step17: Under the hood
solve_ivp
Here is the source code for crossings so you can see what's happening under the hood | Python Code:
# install Pint if necessary
try:
import pint
except ImportError:
!pip install pint
# download modsim.py if necessary
from os.path import exists
filename = 'modsim.py'
if not exists(filename):
from urllib.request import urlretrieve
url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'
local, _ = urlretrieve(url+filename, filename)
print('Downloaded ' + local)
# import functions from modsim
from modsim import *
Explanation: Chapter 20
Modeling and Simulation in Python
Copyright 2021 Allen Downey
License: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International
End of explanation
init = State(y=381, v=0)
Explanation: So far the differential equations we've worked with have been first
order, which means they involve only first derivatives. In this
chapter, we turn our attention to second order ODEs, which can involve
both first and second derivatives.
We'll revisit the falling penny example from
Chapter xxx, and use run_solve_ivp to find the position and velocity of the penny as it falls, with and without air resistance.
Newton's second law of motion
First order ODEs can be written
$$\frac{dy}{dx} = G(x, y)$$
where $G$ is some function of $x$ and $y$ (see http://modsimpy.com/ode). Second order ODEs can be written
$$\frac{d^2y}{dx^2} = H(x, y, \frac{dy}{dt})$$
where $H$ is a function of $x$, $y$, and $dy/dx$.
In this chapter, we will work with one of the most famous and useful
second order ODEs, Newton's second law of motion:
$$F = m a$$
where $F$ is a force or the total of a set of forces, $m$ is the mass of a moving object, and $a$ is its acceleration.
Newton's law might not look like a differential equation, until we
realize that acceleration, $a$, is the second derivative of position,
$y$, with respect to time, $t$. With the substitution
$$a = \frac{d^2y}{dt^2}$$
Newton's law can be written
$$\frac{d^2y}{dt^2} = F / m$$
And that's definitely a second order ODE.
In general, $F$ can be a function of time, position, and velocity.
Of course, this "law" is really a model in the sense that it is a
simplification of the real world. Although it is often approximately
true:
It only applies if $m$ is constant. If mass depends on time,
position, or velocity, we have to use a more general form of
Newton's law (see http://modsimpy.com/varmass).
It is not a good model for very small things, which are better
described by another model, quantum mechanics.
And it is not a good model for things moving very fast, which are
better described by yet another model, relativistic mechanics.
However, for medium-sized things with constant mass, moving at
medium-sized speeds, Newton's model is extremely useful. If we can
quantify the forces that act on such an object, we can predict how it
will move.
Dropping pennies
As a first example, let's get back to the penny falling from the Empire State Building, which we considered in
Chapter xxx. We will implement two models of this system: first without air resistance, then with.
Given that the Empire State Building is 381 m high, and assuming that
the penny is dropped from a standstill, the initial conditions are:
End of explanation
g = 9.8
Explanation: where y is height above the sidewalk and v is velocity.
The units m and s are from the units object provided by Pint:
The only system parameter is the acceleration of gravity:
End of explanation
t_end = 10
dt = 0.1
Explanation: In addition, we'll specify the duration of the simulation and the step
size:
End of explanation
system = System(init=init, g=g, t_end=t_end, dt=dt)
Explanation: With these parameters, the number of time steps is 100, which is good
enough for many problems. Once we have a solution, we will increase the
number of steps and see what effect it has on the results.
We need a System object to store the parameters:
End of explanation
def slope_func(t, state, system):
y, v = state
dydt = v
dvdt = -system.g
return dydt, dvdt
Explanation: Now we need a slope function, and here's where things get tricky. As we have seen, run_solve_ivp can solve systems of first order ODEs, but Newton's law is a second order ODE. However, if we recognize that
Velocity, $v$, is the derivative of position, $dy/dt$, and
Acceleration, $a$, is the derivative of velocity, $dv/dt$,
we can rewrite Newton's law as a system of first order ODEs:
$$\frac{dy}{dt} = v$$
$$\frac{dv}{dt} = a$$
And we can translate those
equations into a slope function:
End of explanation
dydt, dvdt = slope_func(0, system.init, system)
print(dydt)
print(dvdt)
Explanation: The first parameter, state, contains the position and velocity of the
penny. The last parameter, system, contains the system parameter g,
which is the magnitude of acceleration due to gravity.
The second parameter, t, is time. It is not used in this slope
function because none of the factors of the model are time dependent. I include it anyway because this function will be called by run_solve_ivp, which always provides the same arguments,
whether they are needed or not.
The rest of the function is a straightforward translation of the
differential equations, with the substitution $a = -g$, which indicates that acceleration due to gravity is in the direction of decreasing $y$. slope_func returns a sequence containing the two derivatives.
Before calling run_solve_ivp, it is a good idea to test the slope
function with the initial conditions:
End of explanation
results, details = run_solve_ivp(system, slope_func)
details
results.head()
Explanation: The result is 0 m/s for velocity and 9.8 m/s$^2$ for acceleration. Now we call run_solve_ivp like this:
End of explanation
results.y.plot()
decorate(xlabel='Time (s)',
ylabel='Position (m)')
Explanation: results is a TimeFrame with two columns: y contains the height of
the penny; v contains its velocity.
We can plot the results like this:
End of explanation
t_end = results.index[-1]
results.y[t_end]
Explanation: Since acceleration is constant, velocity increases linearly and position decreases quadratically; as a result, the height curve is a parabola.
The last value of results.y is negative, which means we ran the simulation too long.
End of explanation
t_crossings = crossings(results.y, 0)
t_crossings
Explanation: One way to solve this problem is to use the results to
estimate the time when the penny hits the sidewalk.
The ModSim library provides crossings, which takes a TimeSeries and a value, and returns a sequence of times when the series passes through the value. We can find the time when the height of the penny is 0 like this:
End of explanation
def event_func(t, state, system):
y, v = state
return y
Explanation: The result is an array with a single value, 8.818 s. Now, we could run
the simulation again with t_end = 8.818, but there's a better way.
Events
As an option, run_solve_ivp can take an event function, which
detects an "event", like the penny hitting the sidewalk, and ends the
simulation.
Event functions take the same parameters as slope functions, state,
t, and system. They should return a value that passes through 0
when the event occurs. Here's an event function that detects the penny
hitting the sidewalk:
End of explanation
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details
Explanation: The return value is the height of the penny, y, which passes through
0 when the penny hits the sidewalk.
We pass the event function to run_solve_ivp like this:
End of explanation
t_end = results.index[-1]
t_end
y, v = results.iloc[-1]
print(y)
print(v)
Explanation: Then we can get the flight time and final velocity like this:
End of explanation
# Solution
r_0 = 150e9 # 150 million km in m
v_0 = 0
init = State(r=r_0,
v=v_0)
# Solution
radius_earth = 6.37e6 # meters
radius_sun = 696e6 # meters
r_final = radius_sun + radius_earth
r_final
r_0 / r_final
t_end = 1e7 # seconds
system = System(init=init,
G=6.674e-11, # N m^2 / kg^2
m1=1.989e30, # kg
m2=5.972e24, # kg
r_final=radius_sun + radius_earth,
t_end=t_end)
# Solution
def universal_gravitation(state, system):
Computes gravitational force.
state: State object with distance r
system: System object with m1, m2, and G
r, v = state
G, m1, m2 = system.G, system.m1, system.m2
force = G * m1 * m2 / r**2
return force
# Solution
universal_gravitation(init, system)
# Solution
def slope_func(t, state, system):
Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `m2`
returns: derivatives of y and v
y, v = state
m2 = system.m2
force = universal_gravitation(state, system)
dydt = v
dvdt = -force / m2
return dydt, dvdt
# Solution
slope_func(0, system.init, system)
# Solution
def event_func(t, state, system):
r, v = state
return r - system.r_final
# Solution
event_func(0, init, system)
# Solution
results, details = run_solve_ivp(system, slope_func,
events=event_func)
details
# Solution
t_event = results.index[-1]
t_event
# Solution
seconds = t_event * units.second
days = seconds.to(units.day)
# Solution
results.index /= 60 * 60 * 24
# Solution
results.r /= 1e9
# Solution
results.r.plot(label='r')
decorate(xlabel='Time (day)',
ylabel='Distance from sun (million km)')
Explanation: If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.
So it's a good thing there is air resistance.
Summary
But air resistance...
Exercises
Exercise: Here's a question from the web site Ask an Astronomer:
"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."
Use run_solve_ivp to answer this question.
Here are some suggestions about how to proceed:
Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.
When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.
Express your answer in days, and plot the results as millions of kilometers versus days.
If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.
You might also be interested to know that it's actually not that easy to get to the Sun.
End of explanation
%psource crossings
Explanation: Under the hood
solve_ivp
Here is the source code for crossings so you can see what's happening under the hood:
End of explanation |
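Since %psource only renders its output interactively, the listing is not reproduced here. As a rough, hedged sketch of the idea (not the actual ModSim source), crossings can be thought of as shifting the series so the target value becomes zero, interpolating it, and returning the roots of the interpolant:
from scipy.interpolate import InterpolatedUnivariateSpline

def crossings_sketch(series, value):
    # Shift the series so that crossings of `value` become zero-crossings,
    # interpolate, and return the interpolant's roots (the crossing times).
    interp = InterpolatedUnivariateSpline(series.index, series.values - value)
    return interp.roots()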
13,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Character-level Language Modeling with LSTMs
This notebook is adapted from Keras' lstm_text_generation.py.
Steps
Step1: Loading some text data
Let's use some publicly available philosophy
Step2: Building a vocabulary of all possible symbols
To simplify things, we build a vocabulary by extracting the list of all possible characters from the full datasets (train and validation).
In a more realistic setting we would need to take into account that the test data can hold symbols never seen in the training set. This issue is limited when we work at the character level though.
Let's build the list of all possible characters and sort it to assign a unique integer to each possible symbol in the corpus
Step3: char_indices is a mapping from characters to integer identifiers
Step4: indices_char holds the reverse mapping
Step5: While not strictly required to build a language model, it's a good idea to have a look at the distribution of relative frequencies of each symbol in the corpus
Step6: Let's cut the dataset into fake sentences at random with some overlap. Instead of cutting at random we could use an English-specific sentence tokenizer. This is explained at the end of this notebook. In the meantime, random substrings will be good enough to train a first language model.
Step7: Let's shuffle the sequences to break some of the dependencies
Step8: Converting the training data to one-hot vectors
Unfortunately the LSTM implementation in Keras does not (yet?) accept integer indices to slice columns from an input embedding by itself. Let's use one-hot encoding. This is slightly less space and time efficient than integer coding but should be good enough when using a small character-level vocabulary.
Exercise
Step10: Measuring per-character perplexity
The NLP community measures the quality of a probabilistic model using perplexity.
In practice perplexity is just a base 2 exponentiation of the average negative log2 likelihoods
Step11: A perfect model has a minimal perplexity of 1.0 (a negative log likelihood of 0.0 bits)
Step12: Building recurrent model
Let's build a first model and train it on a very small subset of the data to check that it works as expected
Step13: Let's measure the perplexity of the randomly initialized model
Step14: Let's train the model for one epoch on a very small subset of the training set to check that it's well defined
Step17: Sampling random text from the model
Recursively generate one character at a time by sampling from the distribution parameterized by the model
Step18: The temperature parameter makes it possible to increase or decrease the entropy of the multinoulli distribution parametrized by the output of the model.
Temperatures lower than 1 will yield very regular text (biased towards the most frequent patterns of the training set). Temperatures higher than 1 will render the model "more creative" but also noisier (with a large fraction of meaningless words). A temperature of 1 is neutral (the noise of the generated text only stems from the imperfection of the model).
Step19: Training the model
Let's train the model and monitor the perplexity after each epoch and sample some text to qualitatively evaluate the model
Step20: Beam search for deterministic decoding
It is possible to improve the generation using a beam search, which will be presented in the following lab.
Better handling of sentence boundaries
To simplify things we used the lower case version of the text and we ignored any sentence boundaries. This prevents our model from learning when to stop generating characters. If we want to train a model that can start generating text at the beginning of a sentence and stop at the end of a sentence, we need to provide it with sentence boundary markers in the training set and use those special markers when sampling.
The following gives an example of how to use NLTK to detect sentence boundaries in English text.
This could be used to insert an explicit "end_of_sentence" (EOS) symbol to mark separation between two consecutive sentences. This should make it possible to train a language model that explicitly generates complete sentences from start to end.
Use the following command (in a terminal) to install nltk before importing it in the notebook
Step21: The first few sentences detected by NLTK are too short to be considered real sentences. Let's have a look at short sentences with at least 20 characters
Step22: Some long sentences
Step23: The NLTK sentence tokenizer seems to do a reasonable job despite the weird casing and '--' signs scattered around the text.
Note that here we use the original case information because it can help the NLTK sentence boundary detection model make better split decisions. Our text corpus is probably too small to train a good sentence-aware language model though, especially with full case information. Consider using larger corpora instead, such as a large collection of public domain books or Wikipedia dumps. The NLTK toolkit also comes with corpus loading utilities.
The following loads a selection of famous books from the Gutenberg project archive
Step24: Let's do an arbitrary split. Note the training set will have a majority of text that is not authored by the author(s) of the validation set | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: Character-level Language Modeling with LSTMs
This notebook is adapted from Keras' lstm_text_generation.py.
Steps:
Download a small text corpus and preprocess it.
Extract a character vocabulary and use it to vectorize the text.
Train an LSTM-based character level language model.
Use the trained model to sample random text with varying entropy levels.
Implement a beam-search deterministic decoder.
Note: fitting language models is very computation intensive. It is recommended to do this notebook on a server with a GPU or powerful CPUs that you can leave running for several hours at once.
End of explanation
from tensorflow.keras.utils import get_file
URL = "https://s3.amazonaws.com/text-datasets/nietzsche.txt"
corpus_path = get_file('nietzsche.txt', origin=URL)
text = open(corpus_path).read().lower()
print('Corpus length: %d characters' % len(text))
print(text[:600], "...")
text = text.replace("\n", " ")
split = int(0.9 * len(text))
train_text = text[:split]
test_text = text[split:]
Explanation: Loading some text data
Let's use some publicly available philosophy:
End of explanation
chars = sorted(list(set(text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
Explanation: Building a vocabulary of all possible symbols
To simplify things, we build a vocabulary by extracting the list of all possible characters from the full datasets (train and validation).
In a more realistic setting we would need to take into account that the test data can hold symbols never seen in the training set. This issue is limited when we work at the character level though.
Let's build the list of all possible characters and sort it to assign a unique integer to each possible symbol in the corpus:
End of explanation
len(char_indices)
sorted(char_indices.items())[:15]
Explanation: char_indices is a mapping from characters to integer identifiers:
End of explanation
len(indices_char)
indices_char[52]
Explanation: indices_char holds the reverse mapping:
End of explanation
from collections import Counter
counter = Counter(text)
chars, counts = zip(*counter.most_common())
indices = np.arange(len(counts))
plt.figure(figsize=(14, 3))
plt.bar(indices, counts, 0.8)
plt.xticks(indices, chars);
Explanation: While not strictly required to build a language model, it's a good idea to have a look at the distribution of relative frequencies of each symbol in the corpus:
End of explanation
max_length = 40
step = 3
def make_sequences(text, max_length=max_length, step=step):
sequences = []
next_chars = []
for i in range(0, len(text) - max_length, step):
sequences.append(text[i: i + max_length])
next_chars.append(text[i + max_length])
return sequences, next_chars
sequences, next_chars = make_sequences(train_text)
sequences_test, next_chars_test = make_sequences(test_text, step=10)
print('nb train sequences:', len(sequences))
print('nb test sequences:', len(sequences_test))
Explanation: Let's cut the dataset into fake sentences at random with some overlap. Instead of cutting at random we could use an English-specific sentence tokenizer. This is explained at the end of this notebook. In the meantime, random substrings will be good enough to train a first language model.
End of explanation
from sklearn.utils import shuffle
sequences, next_chars = shuffle(sequences, next_chars,
random_state=42)
sequences[0]
next_chars[0]
Explanation: Let's shuffle the sequences to break some of the dependencies:
End of explanation
n_sequences = len(sequences)
n_sequences_test = len(sequences_test)
voc_size = len(chars)
X = np.zeros((n_sequences, max_length, voc_size),
dtype=np.float32)
y = np.zeros((n_sequences, voc_size), dtype=np.float32)
X_test = np.zeros((n_sequences_test, max_length, voc_size),
dtype=np.float32)
y_test = np.zeros((n_sequences_test, voc_size), dtype=np.float32)
# TODO
# %load solutions/language_model_one_hot_data.py
X.shape
y.shape
X[0]
y[0]
Explanation: Converting the training data to one-hot vectors
Unfortunately the LSTM implementation in Keras does not (yet?) accept integer indices to slice columns from an input embedding by itself. Let's use one-hot encoding. This is slightly less space and time efficient than integer coding but should be good enough when using a small character-level vocabulary.
Exercise:
One-hot encode the training data sequences as X and next_chars as y:
End of explanation
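One possible way to fill in the TODO above (an illustrative sketch, not necessarily identical to the solution file loaded by %load): loop over the sequences and switch on the indicator of each character.
for n, sequence in enumerate(sequences):
    for t, char in enumerate(sequence):
        X[n, t, char_indices[char]] = 1
    # Target: the character that follows the sequence
    y[n, char_indices[next_chars[n]]] = 1

for n, sequence in enumerate(sequences_test):
    for t, char in enumerate(sequence):
        X_test[n, t, char_indices[char]] = 1
    y_test[n, char_indices[next_chars_test[n]]] = 1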
def perplexity(y_true, y_pred):
Compute the per-character perplexity of model predictions.
y_true is one-hot encoded ground truth.
y_pred is predicted likelihoods for each class.
2 ** -mean(log2(p))
# TODO
return 1.
# %load solutions/language_model_perplexity.py
y_true = np.array([
[0, 1, 0],
[0, 0, 1],
[0, 0, 1],
])
y_pred = np.array([
[0.1, 0.9, 0.0],
[0.1, 0.1, 0.8],
[0.1, 0.2, 0.7],
])
perplexity(y_true, y_pred)
Explanation: Measuring per-character perplexity
The NLP community measures the quality of a probabilistic model using perplexity.
In practice perplexity is just a base 2 exponentiation of the average negative log2 likelihoods:
$$perplexity_\theta = 2^{-\frac{1}{n} \sum_{i=1}^{n} log_2 (p_\theta(x_i))}$$
Note: here we define the per-character perplexity (because our model naturally makes per-character predictions). It is more common to report per-word perplexity. Note that it is not as easy to compute per-word perplexity, as we would need to tokenize the strings into a sequence of words and discard whitespace and punctuation character predictions. In practice the whitespace character is the most frequent character by far, which makes our naive per-character perplexity lower than it would be if we ignored those easy predictions.
Exercise: implement a Python function that computes the per-character perplexity with model predicted probabilities y_pred and y_true for the encoded ground truth:
End of explanation
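Here is one way the TODO above could be completed (a sketch consistent with the formula, not necessarily the solution file loaded by %load):
def perplexity(y_true, y_pred):
    # Likelihood assigned by the model to the true class of each sample
    likelihoods = np.sum(y_pred * y_true, axis=1)
    # Base-2 exponentiation of the average negative log2 likelihood
    return 2 ** -np.mean(np.log2(likelihoods))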
perplexity(y_true, y_true)
Explanation: A perfect model has a minimal perplexity of 1.0 (a negative log likelihood of 0.0 bits):
End of explanation
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import RMSprop
model = Sequential()
model.add(LSTM(128, input_shape=(max_length, voc_size)))
model.add(Dense(voc_size, activation='softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(optimizer=optimizer, loss='categorical_crossentropy')
Explanation: Building recurrent model
Let's build a first model and train it on a very small subset of the data to check that it works as expected:
End of explanation
def model_perplexity(model, X, y):
predictions = model(X)
return perplexity(y, predictions)
model_perplexity(model, X_test, y_test)
Explanation: Let's measure the perplexity of the randomly initialized model:
End of explanation
small_train = slice(0, None, 40)
model.fit(X[small_train], y[small_train], validation_split=0.1,
batch_size=128, epochs=1)
model_perplexity(model, X[small_train], y[small_train])
model_perplexity(model, X_test, y_test)
Explanation: Let's train the model for one epoch on a very small subset of the training set to check that it's well defined:
End of explanation
def sample_one(preds, temperature=1.0):
Sample the next character according to the network output.
Use a lower temperature to force the model to output more
confident predictions: more peaky distribution.
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
# Draw a single sample (size=1) from a multinoulli distribution
# parameterized by the output of the softmax layer of our
# network. A multinoulli distribution is a multinomial
# distribution with a single trial with n_classes outcomes.
probs = np.random.multinomial(1, preds, size=1)
return np.argmax(probs)
def generate_text(model, seed_string, length=300, temperature=1.0):
Recursively sample a sequence of chars, one char at a time.
Each prediction is concatenated to the past string of predicted
chars so as to condition the next prediction.
Feed seed string as a sequence of characters to condition the
first predictions recursively. If seed_string is shorter than
max_length, pad the input with zeros at the beginning of the
conditioning string.
generated = seed_string
prefix = seed_string
for i in range(length):
# Vectorize prefix string to feed as input to the model:
x = np.zeros((1, max_length, voc_size), dtype="float32")
shift = max_length - len(prefix)
for t, char in enumerate(prefix):
x[0, t + shift, char_indices[char]] = 1.
preds = model(x)[0]
next_index = sample_one(preds, temperature)
next_char = indices_char[next_index]
generated += next_char
prefix = prefix[1:] + next_char
return generated
Explanation: Sampling random text from the model
Recursively generate one character at a time by sampling from the distribution parameterized by the model:
$$
p_{\theta}(c_n | c_{n-1}, c_{n-2}, \ldots, c_0) \cdot p_{\theta}(c_{n-1} | c_{n-2}, \ldots, c_0) \cdot \ldots \cdot p_{\theta}(c_{0})
$$
This way of parametrizing the joint probability of a set of sequentially structured random variables is called auto-regressive modeling.
End of explanation
generate_text(model, 'philosophers are ', temperature=0.1)
generate_text(model, 'atheism is the root of ', temperature=0.8)
Explanation: The temperature parameter makes it possible to increase or decrease the entropy of the multinoulli distribution parametrized by the output of the model.
Temperatures lower than 1 will yield very regular text (biased towards the most frequent patterns of the training set). Temperatures higher than 1 will render the model "more creative" but also noisier (with a large fraction of meaningless words). A temperature of 1 is neutral (the noise of the generated text only stems from the imperfection of the model).
End of explanation
nb_epoch = 30
seed_strings = [
'philosophers are ',
'atheism is the root of ',
]
for epoch in range(nb_epoch):
print("# Epoch %d/%d" % (epoch + 1, nb_epoch))
print("Training on one epoch takes ~90s on a K80 GPU")
model.fit(X, y, validation_split=0.1, batch_size=128, epochs=1,
verbose=2)
print("Computing perplexity on the test set:")
test_perplexity = model_perplexity(model, X_test, y_test)
print("Perplexity: %0.3f\n" % test_perplexity)
for temperature in [0.1, 0.5, 1]:
print("Sampling text from model at %0.2f:\n" % temperature)
for seed_string in seed_strings:
print(generate_text(model, seed_string, temperature=temperature))
print()
Explanation: Training the model
Let's train the model and monitor the perplexity after each epoch and sample some text to qualitatively evaluate the model:
End of explanation
with open(corpus_path, 'rb') as f:
text_with_case = f.read().decode('utf-8').replace("\n", " ")
%pip install nltk
import nltk
nltk.download('punkt')
from nltk.tokenize import sent_tokenize
sentences = sent_tokenize(text_with_case)
plt.hist([len(s.split()) for s in sentences], bins=30);
plt.title('Distribution of sentence lengths')
plt.xlabel('Approximate number of words');
Explanation: Beam search for deterministic decoding
It is possible to improve the generation using a beam search, which will be presented in the following lab (see the sketch below).
Better handling of sentence boundaries
To simplify things we used the lower case version of the text and we ignored any sentence boundaries. This prevents our model from learning when to stop generating characters. If we want to train a model that can start generating text at the beginning of a sentence and stop at the end of a sentence, we need to provide it with sentence boundary markers in the training set and use those special markers when sampling.
The following gives an example of how to use NLTK to detect sentence boundaries in English text.
This could be used to insert an explicit "end_of_sentence" (EOS) symbol to mark separation between two consecutive sentences. This should make it possible to train a language model that explicitly generates complete sentences from start to end.
Use the following command (in a terminal) to install nltk before importing it in the notebook:
$ pip install nltk
End of explanation
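As a teaser for that lab, here is a minimal, hedged sketch of what a beam-search decoder over this character model could look like (illustrative only, not the lab's reference implementation):
def beam_search(model, seed_string, length=50, beam_size=3):
    # Each candidate is a (generated_text, cumulative_log_likelihood) pair.
    candidates = [(seed_string, 0.0)]
    for _ in range(length):
        expansions = []
        for text, score in candidates:
            prefix = text[-max_length:]
            x = np.zeros((1, max_length, voc_size), dtype="float32")
            shift = max_length - len(prefix)
            for t, char in enumerate(prefix):
                x[0, t + shift, char_indices[char]] = 1.
            preds = np.asarray(model(x)[0]).astype("float64")
            # Expand with the beam_size most likely next characters.
            for idx in np.argsort(preds)[-beam_size:]:
                expansions.append((text + indices_char[idx],
                                   score + np.log(preds[idx] + 1e-12)))
        # Keep only the beam_size best candidates overall.
        candidates = sorted(expansions, key=lambda c: c[1])[-beam_size:]
    return max(candidates, key=lambda c: c[1])[0]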
sorted_sentences = sorted([s for s in sentences if len(s) > 20], key=len)
for s in sorted_sentences[:5]:
print(s)
Explanation: The first few sentences detected by NLTK are too short to be considered real sentences. Let's have a look at short sentences with at least 20 characters:
End of explanation
for s in sorted_sentences[-3:]:
print(s)
Explanation: Some long sentences:
End of explanation
import nltk
nltk.download('gutenberg')
book_selection_text = nltk.corpus.gutenberg.raw().replace("\n", " ")
print(book_selection_text[:300])
print("Book corpus length: %d characters" % len(book_selection_text))
Explanation: The NLTK sentence tokenizer seems to do a reasonable job despite the weird casing and '--' signs scattered around the text.
Note that here we use the original case information because it can help the NLTK sentence boundary detection model make better split decisions. Our text corpus is probably too small to train a good sentence-aware language model though, especially with full case information. Consider using larger corpora instead, such as a large collection of public domain books or Wikipedia dumps. The NLTK toolkit also comes with corpus loading utilities.
The following loads a selection of famous books from the Gutenberg project archive:
End of explanation
split = int(0.9 * len(book_selection_text))
book_selection_train = book_selection_text[:split]
book_selection_validation = book_selection_text[split:]
Explanation: Let's do an arbitrary split. Note the training set will have a majority of text that is not authored by the author(s) of the validation set:
End of explanation |
13,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TOC trends - October 2018 (Part 2
Step1: 1. 1990 to 2012
The code below is just for testing purposes
Step2: 2. 1990 to 2016
Step3: 3. 2002 to 2016
Step4: 4. 1990 to 2004
Step5: 5. All data
Step6: 6. Basic checking
6.1. Boxplots
The plot below can be compared to the previous results here. Overall, the two plots are very similar, which is reassuring.
Step7: 7. Data restructuring
The code below is taken from here. It is used to generate output files in the format requested by Heleen.
7.1. Combine datasets
Step8: 7.2. Check record completeness
See e-mail from Heleen received 25/10/2016 at 15
Step9: 7.3. SO4 at Abiskojaure
SO4 for this station ('station_id=38335') should be removed. See here.
Step10: 7.4. Relative slope
Step11: 7.5. Tidy
Step12: 7.6. Convert to "wide" format | Python Code:
# User input
# Specify projects of interest
proj_list = ['ICPW_TOCTRENDS_2018',]
# Specify results folder
res_fold = (r'../../update_autumn_2018/results')
Explanation: TOC trends - October 2018 (Part 2: Chemistry trend analysis)
The previous notebook created a new dataset for the trends analysis spanning the period from 1990 to 2016. This notebook takes the new dataset and applies the same trends workflow as previously (from 2016 - see here for details).
End of explanation
# Specify period of interest
st_yr, end_yr = 1990, 2012
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_%s-%s' % (st_yr, end_yr))
res_csv = os.path.join(res_fold, 'res_%s-%s.csv' % (st_yr, end_yr))
dup_csv = os.path.join(res_fold, 'dup_%s-%s.csv' % (st_yr, end_yr))
nd_csv = os.path.join(res_fold, 'nd_%s-%s.csv' % (st_yr, end_yr))
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list,
eng,
st_yr=st_yr,
end_yr=end_yr,
plot=False,
fold=None)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
if nd_df:
nd_df.to_csv(nd_csv, index=False)
Explanation: 1. 1990 to 2012
The code below is just for testing purposes: it runs the analysis from 1990 to 2012, which should be directly comparable to the data used during the 2016 analysis. The output generated below can therefore be compared to
C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\Results\res_1990-2012.csv
and any differences should be due to changes in the data (which are hopefully due to finding and fixing errors).
End of explanation
# Specify period of interest
st_yr, end_yr = 1990, 2016
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_%s-%s' % (st_yr, end_yr))
res_csv = os.path.join(res_fold, 'res_%s-%s.csv' % (st_yr, end_yr))
dup_csv = os.path.join(res_fold, 'dup_%s-%s.csv' % (st_yr, end_yr))
nd_csv = os.path.join(res_fold, 'nd_%s-%s.csv' % (st_yr, end_yr))
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list,
eng,
st_yr=st_yr,
end_yr=end_yr,
plot=False,
fold=None)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
if nd_df:
nd_df.to_csv(nd_csv, index=False)
Explanation: 2. 1990 to 2016
End of explanation
# Specify period of interest
st_yr, end_yr = 2002, 2016
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_%s-%s' % (st_yr, end_yr))
res_csv = os.path.join(res_fold, 'res_%s-%s.csv' % (st_yr, end_yr))
dup_csv = os.path.join(res_fold, 'dup_%s-%s.csv' % (st_yr, end_yr))
nd_csv = os.path.join(res_fold, 'nd_%s-%s.csv' % (st_yr, end_yr))
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list,
eng,
st_yr=st_yr,
end_yr=end_yr,
plot=False,
fold=None)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
if nd_df:
nd_df.to_csv(nd_csv, index=False)
Explanation: 3. 2002 to 2016
End of explanation
# Specify period of interest
st_yr, end_yr = 1990, 2004
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_%s-%s' % (st_yr, end_yr))
res_csv = os.path.join(res_fold, 'res_%s-%s.csv' % (st_yr, end_yr))
dup_csv = os.path.join(res_fold, 'dup_%s-%s.csv' % (st_yr, end_yr))
nd_csv = os.path.join(res_fold, 'nd_%s-%s.csv' % (st_yr, end_yr))
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list,
eng,
st_yr=st_yr,
end_yr=end_yr,
plot=False,
fold=None)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
if nd_df:
nd_df.to_csv(nd_csv, index=False)
Explanation: 4. 1990 to 2004
End of explanation
# Specify period of interest
st_yr, end_yr = None, None
# Build output paths
plot_fold = os.path.join(res_fold, 'trends_plots_all_years')
res_csv = os.path.join(res_fold, 'res_all_years.csv')
dup_csv = os.path.join(res_fold, 'dup_all_years.csv')
nd_csv = os.path.join(res_fold, 'nd_all_years.csv')
# Run analysis
res_df, dup_df, nd_df = resa2_trends.run_trend_analysis(proj_list,
eng,
st_yr=st_yr,
end_yr=end_yr,
plot=False,
fold=None)
# Delete mk_std_dev col as not relevant here
del res_df['mk_std_dev']
# Write output
res_df.to_csv(res_csv, index=False)
dup_df.to_csv(dup_csv, index=False)
if nd_df:
nd_df.to_csv(nd_csv, index=False)
Explanation: 5. All data
End of explanation
# Set up plot
fig = plt.figure(figsize=(20,10))
sn.set(style="ticks", palette="muted",
color_codes=True, font_scale=2)
# Horizontal boxplots
ax = sn.boxplot(x="mean", y="par_id", data=res_df,
whis=np.inf, color="c")
# Add "raw" data points for each observation, with some "jitter"
# to make them visible
sn.stripplot(x="mean", y="par_id", data=res_df, jitter=True,
size=3, color=".3", linewidth=0)
# Remove axis lines
sn.despine(trim=True)
Explanation: 6. Basic checking
6.1. Boxplots
The plot below can be compared to the previous results here. Overall, the two plots are very similar, which is reassuring.
End of explanation
# Read results files and concatenate
# Container for data
df_list = []
# Loop over periods
for per in ['1990-2016', '1990-2004', '2002-2016', 'all_years']:
res_path = os.path.join(res_fold, 'res_%s.csv' % per)
df = pd.read_csv(res_path)
# Change 'period' col to 'data_period' and add 'analysis_period'
df['data_period'] = df['period']
del df['period']
df['analysis_period'] = per
df_list.append(df)
# Concat
df = pd.concat(df_list, axis=0)
# Read station data
stn_path = r'../../update_autumn_2018/toc_trends_oct18_stations.xlsx'
stn_df = pd.read_excel(stn_path, sheet_name='Data', keep_default_na=False)
# Join
df = pd.merge(df, stn_df, how='left', on='station_id')
# Read projects table
sql = ("SELECT project_id, project_name "
"FROM resa2.projects "
"WHERE project_name = 'ICPW_TOCTRENDS_2018'")
proj_df = pd.read_sql_query(sql, eng)
# Get associated stations
sql = ("SELECT station_id, project_id "
"FROM resa2.projects_stations "
"WHERE project_id = 4390")
proj_stn_df = pd.read_sql_query(sql, eng)
# Join proj details
proj_df = pd.merge(proj_stn_df, proj_df, how='left', on ='project_id')
# Join to results
df = pd.merge(df, proj_df, how='left', on='station_id')
# Re-order columns
df = df[['project_id', 'project_name', 'station_id',
'station_code', 'station_name', 'nfc_code', 'type',
'latitude', 'longitude', 'continent', 'country',
'region', 'subregion', 'analysis_period', 'data_period',
'par_id', 'non_missing', 'n_start', 'n_end', 'mean', 'median',
'std_dev', 'mk_stat', 'norm_mk_stat', 'mk_p_val', 'trend',
'sen_slp']]
df.head()
Explanation: 7. Data restructuring
The code below is taken from here. It is used to generate output files in the format requested by Heleen.
7.1. Combine datasets
End of explanation
def include(row):
if ((row['analysis_period'] == '1990-2016') &
(row['n_start'] >= 2) &
(row['n_end'] >= 2) &
(row['non_missing'] >= 18)):
return 'yes'
elif ((row['analysis_period'] == '1990-2004') &
(row['n_start'] >= 2) &
(row['n_end'] >= 2) &
(row['non_missing'] >= 10)):
return 'yes'
elif ((row['analysis_period'] == '2002-2016') &
(row['n_start'] >= 2) &
(row['n_end'] >= 2) &
(row['non_missing'] >= 10)):
return 'yes'
else:
return 'no'
df['include'] = df.apply(include, axis=1)
Explanation: 7.2. Check record completeness
See e-mail from Heleen received 25/10/2016 at 15:56. The 'non_missing' threshold is based on 65% of the data period (e.g. 65% of 27 years for 1990 to 2016).
End of explanation
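The hard-coded cut-offs in include() follow from that rule; a small sketch of the arithmetic (the 1990-2016 window spans 27 years and the two shorter windows 15 years each), consistent with rounding 65% of the window length up to the nearest whole year:
import math
math.ceil(0.65 * 27)  # 18 -> minimum non-missing years for 1990-2016
math.ceil(0.65 * 15)  # 10 -> minimum non-missing years for 1990-2004 and 2002-2016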
# Remove sulphate-related series at Abiskojaure
df = df.query('not((station_id==38335) and ((par_id=="ESO4") or '
'(par_id=="ESO4X") or '
'(par_id=="ESO4_ECl")))')
Explanation: 7.3. SO4 at Abiskojaure
SO4 for this station ('station_id=38335') should be removed. See here.
End of explanation
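As a quick sanity check (a sketch, assuming df is the combined results frame built above), one can confirm that no sulphate series remain for this station:
# Should be empty after the removal above
assert df.query('station_id == 38335 and par_id in ["ESO4", "ESO4X", "ESO4_ECl"]').empty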
# Relative slope
df['rel_sen_slp'] = df['sen_slp'] / df['median']
Explanation: 7.4. Relative slope
End of explanation
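For a purely hypothetical illustration of the ratio computed above: a Sen's slope of -0.5 units per year on a series with a median of 25 units gives a relative slope of -0.02, i.e. a decline of roughly 2% of the median per year.
# Hypothetical numbers, only to illustrate the ratio
sen_slp, median = -0.5, 25.0
rel_sen_slp = sen_slp / median  # -0.02, i.e. about -2 % per year relative to the median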
# Remove unwanted cols
df.drop(labels=['mean', 'n_end', 'n_start', 'mk_stat', 'norm_mk_stat'],
axis=1, inplace=True)
# Reorder columns
df = df[['project_id', 'project_name', 'station_id', 'station_code',
'station_name', 'nfc_code', 'type', 'continent', 'country',
'region', 'subregion', 'latitude', 'longitude', 'analysis_period',
'data_period', 'par_id', 'non_missing', 'median', 'std_dev',
'mk_p_val', 'trend', 'sen_slp', 'rel_sen_slp', 'include']]
# Write to output
out_path = r'../../update_autumn_2018/results/toc_trends_long_format.csv'
df.to_csv(out_path, index=False, encoding='utf-8')
df.head()
Explanation: 7.5. Tidy
End of explanation
del df['data_period']
# Melt to "long" format
melt_df = pd.melt(df,
id_vars=['project_id', 'project_name', 'station_id', 'station_code',
'station_name', 'nfc_code', 'type', 'continent', 'country',
'region', 'subregion', 'latitude', 'longitude',
'analysis_period', 'par_id', 'include'],
var_name='stat')
# Get only values where include='yes'
melt_df = melt_df.query('include == "yes"')
del melt_df['include']
# Build multi-index on everything except "value"
melt_df.set_index(['project_id', 'project_name', 'station_id', 'station_code',
'station_name', 'nfc_code', 'type', 'continent', 'country',
'region', 'subregion', 'latitude', 'longitude', 'par_id',
'analysis_period',
'stat'], inplace=True)
# Unstack levels of interest to columns
wide_df = melt_df.unstack(level=['par_id', 'analysis_period', 'stat'])
# Drop unwanted "value" level in index
wide_df.columns = wide_df.columns.droplevel(0)
# Replace multi-index with separate components concatenated with '_'
wide_df.columns = ["_".join(item) for item in wide_df.columns]
# Reset multiindex on rows
wide_df = wide_df.reset_index()
# Save output
out_path = os.path.join(res_fold, 'toc_trends_wide_format.csv')
wide_df.to_csv(out_path, index=False, encoding='utf-8')
wide_df.head()
Explanation: 7.6. Convert to "wide" format
End of explanation |
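The melt/set_index/unstack chain above is fairly dense; a minimal sketch on a toy frame (hypothetical values) shows the same idea of collapsing (par_id, analysis_period, stat) combinations into concatenated column names:
import pandas as pd
toy = pd.DataFrame({'station_id': [1, 1],
                    'par_id': ['TOC', 'TOC'],
                    'analysis_period': ['1990-2016', '2002-2016'],
                    'stat': ['sen_slp', 'sen_slp'],
                    'value': [0.1, 0.2]})
wide = toy.set_index(['station_id', 'par_id', 'analysis_period', 'stat']).unstack(
    level=['par_id', 'analysis_period', 'stat'])
wide.columns = wide.columns.droplevel(0)                # drop the 'value' level
wide.columns = ['_'.join(col) for col in wide.columns]  # e.g. 'TOC_1990-2016_sen_slp'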
13,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST SGD
Get the 'pickled' MNIST dataset from http
Step1: In lesson2-sgd we did these things ourselves | Python Code:
path = Config().data/'mnist'
path.ls()
with gzip.open(path/'mnist.pkl.gz', 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
plt.imshow(x_train[0].reshape((28,28)), cmap="gray")
x_train.shape
x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid))
n,c = x_train.shape
x_train.shape, y_train.min(), y_train.max()
Explanation: MNIST SGD
Get the 'pickled' MNIST dataset from http://deeplearning.net/data/mnist/mnist.pkl.gz. We're going to treat it as a standard flat dataset with fully connected layers, rather than using a CNN.
End of explanation
from torch.utils.data import TensorDataset
bs=64
train_ds = TensorDataset(x_train, y_train)
valid_ds = TensorDataset(x_valid, y_valid)
train_dl = TfmdDL(train_ds, bs=bs, shuffle=True)
valid_dl = TfmdDL(valid_ds, bs=2*bs)
dls = DataLoaders(train_dl, valid_dl)
x,y = dls.one_batch()
x.shape,y.shape
class Mnist_Logistic(Module):
def __init__(self): self.lin = nn.Linear(784, 10, bias=True)
def forward(self, xb): return self.lin(xb)
model = Mnist_Logistic().cuda()
model
model.lin
model(x).shape
[p.shape for p in model.parameters()]
lr=2e-2
loss_func = nn.CrossEntropyLoss()
def update(x,y,lr):
wd = 1e-5
y_hat = model(x)
# weight decay
w2 = 0.
for p in model.parameters(): w2 += (p**2).sum()
# add to regular loss
loss = loss_func(y_hat, y) + w2*wd
loss.backward()
with torch.no_grad():
for p in model.parameters():
p.sub_(lr * p.grad)
p.grad.zero_()
return loss.item()
losses = [update(x,y,lr) for x,y in dls.train]
plt.plot(losses);
class Mnist_NN(Module):
def __init__(self):
self.lin1 = nn.Linear(784, 50, bias=True)
self.lin2 = nn.Linear(50, 10, bias=True)
def forward(self, xb):
x = self.lin1(xb)
x = F.relu(x)
return self.lin2(x)
model = Mnist_NN().cuda()
losses = [update(x,y,lr) for x,y in dls.train]
plt.plot(losses);
model = Mnist_NN().cuda()
def update(x,y,lr):
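    # Note: a fresh Adam optimizer (and therefore fresh optimizer state) is built on every call to update()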
opt = torch.optim.Adam(model.parameters(), lr)
y_hat = model(x)
loss = loss_func(y_hat, y)
loss.backward()
opt.step()
opt.zero_grad()
return loss.item()
losses = [update(x,y,1e-3) for x,y in dls.train]
plt.plot(losses);
learn = Learner(dls, Mnist_NN(), loss_func=loss_func, metrics=accuracy)
from fastai.callback.all import *
learn.lr_find()
learn.fit_one_cycle(1, 1e-2)
learn.recorder.plot_sched()
learn.recorder.plot_loss()
Explanation: In lesson2-sgd we did these things ourselves:
python
x = torch.ones(n,2)
def mse(y_hat, y): return ((y_hat-y)**2).mean()
y_hat = x@a
Now instead we'll use PyTorch's functions to do it for us, and also to handle mini-batches (which we didn't do last time, since our dataset was so small).
End of explanation |
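As a small, self-contained illustration of that swap (hypothetical shapes, not part of the lesson code): the hand-made parameter tensor and mse() from lesson2-sgd are replaced by nn.Linear and a built-in loss function:
import torch
from torch import nn
lin = nn.Linear(2, 1)        # replaces the hand-made weight tensor and x@a
mse_loss = nn.MSELoss()      # replaces the hand-written mse()
xb = torch.ones(4, 2)
loss = mse_loss(lin(xb), torch.zeros(4, 1))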