Dataset columns: Unnamed: 0 (int64, values 0 to 16k), text_prompt (string, lengths 110 to 62.1k), code_prompt (string, lengths 37 to 152k)
12,600
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Logistic Regression Learning Objectives Create Seaborn plots for Exploratory Data Analysis Train a Logistic Regression Model using Scikit-Learn Introduction This lab is an introduction to logistic regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. In this lab, we will use a synthetic advertising data set, indicating whether or not a particular internet user clicked on an Advertisement on a company website. We will try to create a model that will predict whether or not they will click on an ad based on the features of that user. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Import Libraries Step1: Load the Dataset We will use a synthetic advertising dataset. This data set contains the following features Step2: Check the head of ad_data Step3: Use info and describe() on ad_data Step4: Let's check for any null values. Step5: Exploratory Data Analysis (EDA) Let's use seaborn to explore the data! Try recreating the plots shown below! TODO 1 Step6: TODO 1 Step7: TODO 2 Step8: TODO 1 Step9: Logistic Regression Logistic regression is a supervised machine learning process. It is similar to linear regression, but rather than predict a continuous value, we try to estimate probabilities by using a logistic function. Note that even though it has regression in the name, it is for classification. While linear regression is acceptable for estimating values, logistic regression is best for predicting the class of an observation. Now it's time to do a train test split, and train our model! You'll have the freedom here to choose columns that you want to train on! Step10: Next, let's define the features and label. Briefly, feature is input; label is output. This applies to both classification and regression problems. Step11: TODO 2 Step12: Train and fit a logistic regression model on the training set. Step13: Predictions and Evaluations Now predict values for the testing data. Step14: Create a classification report for the model.
Python Code: !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst import os import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns %matplotlib inline Explanation: Introduction to Logistic Regression Learning Objectives Create Seaborn plots for Exploratory Data Analysis Train a Logistic Regression Model using Scikit-Learn Introduction This lab is in introduction to logistic regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. In this lab, we will use a synthetic advertising data set, indicating whether or not a particular internet user clicked on an Advertisement on a company website. We will try to create a model that will predict whether or not they will click on an ad based off the features of that user. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Import Libraries End of explanation # TODO 1: Read in the advertising.csv file and set it to a data frame called ad_data. # TODO: Your code goes here Explanation: Load the Dataset We will use a synthetic advertising dataset. This data set contains the following features: 'Daily Time Spent on Site': consumer time on site in minutes 'Age': customer age in years 'Area Income': Avg. Income of geographical area of consumer 'Daily Internet Usage': Avg. minutes a day consumer is on the internet 'Ad Topic Line': Headline of the advertisement 'City': City of consumer 'Male': Whether or not consumer was male 'Country': Country of consumer 'Timestamp': Time at which consumer clicked on Ad or closed window 'Clicked on Ad': 0 or 1 indicated clicking on Ad End of explanation ad_data.head() Explanation: Check the head of ad_data End of explanation ad_data.info() ad_data.describe() Explanation: Use info and describe() on ad_data End of explanation ad_data.isnull().sum() Explanation: Let's check for any null values. End of explanation # TODO: Your code goes here Explanation: Exploratory Data Analysis (EDA) Let's use seaborn to explore the data! Try recreating the plots shown below! TODO 1: Create a histogram of the Age End of explanation # TODO: Your code goes here Explanation: TODO 1: Create a jointplot showing Area Income versus Age. End of explanation # TODO: Your code goes here Explanation: TODO 2: Create a jointplot showing the kde distributions of Daily Time spent on site vs. Age. End of explanation # TODO: Your code goes here Explanation: TODO 1: Create a jointplot of 'Daily Time Spent on Site' vs. 'Daily Internet Usage' End of explanation from sklearn.model_selection import train_test_split Explanation: Logistic Regression Logistic regression is a supervised machine learning process. It is similar to linear regression, but rather than predict a continuous value, we try to estimate probabilities by using a logistic function. Note that even though it has regression in the name, it is for classification. While linear regression is acceptable for estimating values, logistic regression is best for predicting the class of an observation Now it's time to do a train test split, and train our model! You'll have the freedom here to choose columns that you want to train on! End of explanation X = ad_data[ [ "Daily Time Spent on Site", "Age", "Area Income", "Daily Internet Usage", "Male", ] ] y = ad_data["Clicked on Ad"] Explanation: Next, let's define the features and label. 
Briefly, feature is input; label is output. This applies to both classification and regression problems. End of explanation # TODO: Your code goes here Explanation: TODO 2: Split the data into training set and testing set using train_test_split End of explanation from sklearn.linear_model import LogisticRegression logmodel = LogisticRegression() logmodel.fit(X_train, y_train) Explanation: Train and fit a logistic regression model on the training set. End of explanation predictions = logmodel.predict(X_test) Explanation: Predictions and Evaluations Now predict values for the testing data. End of explanation from sklearn.metrics import classification_report print(classification_report(y_test, predictions)) Explanation: Create a classification report for the model. End of explanation
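For reference, here is a minimal end-to-end sketch of how the TODO cells in this record could fit together. It assumes the advertising.csv file and the column names listed in the description; the test_size and random_state values are arbitrary illustrative choices, not part of the original lab.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Assumes advertising.csv (with the columns named in the description) is in
# the working directory.
ad_data = pd.read_csv("advertising.csv")

X = ad_data[["Daily Time Spent on Site", "Age", "Area Income",
             "Daily Internet Usage", "Male"]]
y = ad_data["Clicked on Ad"]

# Hold out 30% of the rows for evaluation; random_state just fixes the shuffle.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

logmodel = LogisticRegression()
logmodel.fit(X_train, y_train)

predictions = logmodel.predict(X_test)
print(classification_report(y_test, predictions))
```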
12,601
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1 align="center">Mention Detection</h1> <h4 align="center">Jiarui Xu - [email protected]</h4> Step3: 1 Wikidata Alias In this section, we use Wikidata to build a lexicon; if an n-gram exists in the lexicon, then we consider it a mention candidate. 1.1 Load Wikidata json file Step4: Load Wikidata and extract aliases Step5: The file format is as follows Step7: (1) reverse mapping Step8: (2) to lower case Step9: c. Experiment with Golden Standards load tweet corpus Step10: (1) First experiment Step11: Lower the cases
Python Code: import json import pyprind import sys import pickle data_folder = "/Volumes/backup/ccg_tweet_wikifier_data/" wikidata_file = "/Volumes/backup/ccg_tweet_wikifier_data/wikidata/wikidata-20160404-all.json" entity_alias_output_file = data_folder+"wikidata/entity_alias.txt" from corenlp import * corenlp = StanfordCoreNLP() Explanation: <h1 align="center">Mention Detection</h1> <h4 align="center">Jiarui Xu - [email protected]</h4> End of explanation def join_by_tab(dic): Join each items in input val = "" val += dic.keys()[0] # english label val += "\t" val += "\t".join(dic.values()[0]) # aliases val += "\n" return val def find_en_aliases(entity): Return a list [label, alias_0, alias_1 ... ] for a given entity ret = {} entity_id = entity[u'id'] try: ret[entity_id]= [entity[u'labels'][u'en'][u'value']] except: ret[entity_id] = ["NONE_EN_LABEL"] try: ret[entity_id].extend([element['value'] for element in entity[u'aliases'][u'en']]) except: pass return ret def load_wikidata(wikidata_file, output_file): line_count = 20951710 # line count of 04_04 wikidata # for progress bar bar = pyprind.ProgBar(line_count, width=70, monitor = True) # set up error statistics errors = {} json_errors = [] count = 0 # write to file with open(output_file, "w") as g: with open(wikidata_file, "rb") as f: for line in f: # update progress bar bar.update() try: # load entity from the line being reading entity_content = json.loads(line.strip()[:-1]) try: # get aliases and connect them by tab output = join_by_tab(find_en_aliases(entity_content)) g.write(output.encode('utf8')) except: errors[entity_content[u'id']] = sys.exc_info()[0] except: json_errors.append(sys.exc_info()[0]) print json_errors, errors Explanation: 1 Wikidata Alias In this section, we use wikidata to build lexicon, if a n-gram exists in lexicon, then we consider it a mention candidate. 1.1 Load Wikidata json file End of explanation # Unblock to load # load_wikidata(wikidata_file, entity_alias_output_file) entity_alias_file = entity_alias_output_file Explanation: Load Wikidata and extract aliases End of explanation alias_entity_file = data_folder + "wikidata/alias_entity.txt" Explanation: file_formate is as follows: wikidata_id label alias_1 .... alias_n b. 
Build Alias Mapping End of explanation def reverse_mapping(src_file): Build a mapping from aliasn to entity_list # for progress bar line_count = 20951708 bar = pyprind.ProgBar(line_count, width=70, monitor = True) a2e = {} with open(src_file, "rb") as f: for line in f: bar.update() segments = line.strip().split("\t") entity = segments[0] for seg in segments[1:]: if seg not in a2e: a2e[seg] = set() a2e[seg].add(entity) return a2e alias_to_entity = reverse_mapping(entity_alias_file) entity_alias_output_txt_file = data_folder+"wikidata/alias2entity.txt" bar = pyprind.ProgBar(len(alias_to_entity), width=70, monitor = True) with open(entity_alias_output_txt_file, "wb") as f: for key in alias_to_entity.keys(): bar.update() line = [key] line.extend(alias_to_entity[key]) text = "\t".join(line) f.write(text+"\n") len(alias_to_entity) entity_alias_output_file = data_folder+"wikidata/alias2entity.pickle" with open(entity_alias_output_file, "wb") as f: pickle.dump(alias_to_entity, f) Explanation: (1) reverse mapping End of explanation alias_to_entity_lower = {} for als in alias_to_entity.keys(): als_lower = als.lower() if als_lower in alias_to_entity_lower: alias_to_entity_lower[als_lower] |= alias_to_entity[als] else: alias_to_entity_lower[als_lower] = alias_to_entity[als] # dump the mapping to file entity_alias_output_file = data_folder+"wikidata/alias2entity_lower.pickle" with open(wikidata_file + "alias2entity.pickle", "wb") as f: pickle.dump(alias_to_entity_lower, f) Explanation: (2) to lower case End of explanation with open(data_folder+"Tweet/NEEL_tweets(with_grams).pickle", "rb") as f: tweet_corpus = pickle.load(f) from stop_words import get_stop_words stop_words = get_stop_words('en') stop_words = get_stop_words('english') from stop_words import safe_get_stop_words stop_words = safe_get_stop_words('unsupported language') stop_words = get_stop_words('en') stop_words Explanation: c. 
Experiment with Golden Standards load tweet corpus End of explanation def remove_special(text): if text[0] in ['$', '#', "@"]: try: return text[1:] except: return text else: return text def experiment_gram_matching(tweets): total = 0 match = 0 try: for tweet in tweets.values(): goldens = tweet['goldens'] for g in goldens: total += 1 mention = g['mention'] gram_set = set() for grams in tweet['ngrams'].values(): for gram in grams: gram_set.add(remove_special(gram)) if mention in gram_set: match += 1 else: # pass print tweet['tweet_info']['id'] print tweet['tweet_info']['text'] print "MENTION:", mention print(type(mention)) print "======" except: print tweet['tweet_info']['id'] return [match, total] res = experiment_gram_matching(tweet_corpus) res for tweet in tweet_corpus.values(): tweet["gram_set"] = set() for gram_set in tweet["ngrams"].values(): tweet["gram_set"] |= set(gram_set) tweet["mention_set"] = set([item['mention'].lower() for item in tweet['goldens']]) stats = {"tp":0., "fp":0., "tn":0., "fn":0.} for tweet in tweet_corpus.values(): for gram in tweet['gram_set']: gram_low = gram.lower() if len(gram_low) < 2: continue if gram_low in stop_words: continue if gram_low in alias_to_entity_lower: if gram_low in tweet['mention_set']: stats['tp'] +=1 else: stats['fp'] +=1 print gram_low else: if gram_low in tweet['mention_set']: stats['fn'] +=1 else: stats['tn'] +=1 def check_upper(text): for c in text: if c.isupper(): return True return False for tweet in tweet_corpus.values(): tweet["gram_set"] = set() for gram_set in tweet["ngrams"].values(): tweet["gram_set"] |= set(gram_set) tweet["mention_set"] = set([item['mention'] for item in tweet['goldens']]) stats = {"tp":0., "fp":0., "tn":0., "fn":0.} for tweet in tweet_corpus.values(): print "=======" print tweet['tweet_info']['id'] print tweet['tweet_info']['text'] print tweet['mention_set'] for gram in tweet['gram_set']: gram_low = gram if len(gram_low) <= 3: continue if gram_low in stop_words: continue # if check_upper(gram_low) == False: elif gram_low in alias_to_entity and check_upper(gram_low): if gram_low in tweet['mention_set']: stats['tp'] +=1 else: stats['fp'] +=1 print gram_low else: if gram_low in tweet['mention_set']: stats['fn'] +=1 else: stats['tn'] +=1 stats precision = stats['tp']/(stats['tp']+stats['fp']) recall = stats['tp']/(stats['tp']+stats['fn']) F = 2*precision*recall/(precision+recall) print precision, recall, F alias_to_entity['screenwriter'] recall def experiment_alias(tweets, alias_mapper): length_sum = 0 match = 0 hit = 0 total = 0 for tweet in tweets.values(): goldens = tweet['goldens'] for g in goldens: total += 1 mention = g['mention'] real_mention = mention if mention in tweet['cashtag_mapping']: real_mention = tweet['cashtag_mapping'][mention]['text'] elif mention in tweet['hashtag_mapping']: real_mention = tweet['hashtag_mapping'][mention]['text'] elif mention in tweet['url_mapping']: real_mention = tweet['url_mapping'][mention]['url'] elif mention in tweet['usermention_mapping']: real_mention = tweet['usermention_mapping'][mention]['name'] low = real_mention.lower() if low in alias_mapper: match += 1 length_sum += len(alias_mapper[low]) else: print tweet['tweet_info']['id'], real_mention, "|", g['wiki_title'] print total, match, hit, length_sum Explanation: (1) First experiment: How n-grams match with our golden mentions End of explanation alias_to_entity_lower = {} for als in alias_to_entity.keys(): als_lower = als.lower() if als_lower in alias_to_entity_lower: alias_to_entity_lower[als_lower] |= 
alias_to_entity[als] else: alias_to_entity_lower[als_lower] = alias_to_entity[als] experiment_alias(tweet_corpus, alias_to_entity_lower) for s in alias_to_entity_lower: pass 15124./1981 Explanation: Lower the cases End of explanation
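To make the lexicon-lookup idea above concrete, here is a small self-contained sketch of flagging n-grams that appear as (lower-cased) Wikidata aliases. The alias dictionary and the example n-grams are toy stand-ins, not the real Wikidata dump or NEEL tweets used in the notebook.

```python
# Toy alias -> entity mapping standing in for the dictionary built from Wikidata.
alias_to_entity_lower = {
    "new york": {"Q60"},
    "barack obama": {"Q76"},
    "obama": {"Q76"},
}

STOP_WORDS = {"the", "and"}

def mention_candidates(ngrams, lexicon, stop_words=STOP_WORDS):
    candidates = {}
    for gram in ngrams:
        key = gram.lower()
        # Mirror the notebook's filters: skip very short grams and stop words.
        if len(key) <= 3 or key in stop_words:
            continue
        if key in lexicon:
            candidates[gram] = lexicon[key]
    return candidates

print(mention_candidates(["Obama", "visited", "New York"], alias_to_entity_lower))
# {'Obama': {'Q76'}, 'New York': {'Q60'}}
```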
12,602
Given the following text description, write Python code to implement the functionality described below step by step Description: MODICE v04 area by country, 2000-2014 Step1: Reorder columns, descending area. Use the first row of data to order the columns in descending area order Step2: Get the 1-strike and 2-strike data as well Step3: Look at the detail on the three lowest lines
Python Code: import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import pandas as pd %pylab inline filename = 'modice_v4_3strikes_by_country_by_yr.txt' df3 = pd.read_csv( filename, delim_whitespace=True, index_col=0 ) df3 Explanation: MODICE v04 area by country, 2000-2014 End of explanation sorted_col_index = df3.ix[df3.first_valid_index()].argsort()[::-1] print sorted_col_index df3 = df3[sorted_col_index] df3 Explanation: Reorder columns, descending area Use the first row of data to order the columns in descending area order End of explanation filename = 'modice_v4_2strikes_by_country_by_yr.txt' df2 = pd.read_csv( filename, delim_whitespace=True, index_col=0 ) df2 = df2[sorted_col_index] filename = 'modice_v4_1strikes_by_country_by_yr.txt' df1 = pd.read_csv( filename, delim_whitespace=True, index_col=0 ) df1 = df1[sorted_col_index] print df1 print df2 my_colors = list(['b','g','r','c','m']) fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(10,10)) df3.plot(ax=axes[0], style='-o', color=my_colors) axes[0].legend(bbox_to_anchor=(1.3,1.0)) axes[0].set(title="MODICE(3strike) by Country", ylabel='MODICE area ($km^2$)' ) df2.plot(ax=axes[1], style='-o', color=my_colors) axes[1].legend(bbox_to_anchor=(1.3,1.0)) axes[1].set(title="MODICE(2strike) by Country", ylabel='MODICE area ($km^2$)' ) df1.plot(ax=axes[2], style='-o', color=my_colors) axes[2].legend(bbox_to_anchor=(1.3,1.0)) axes[2].set(title="MODICE(1strike) by Country", ylabel='MODICE area ($km^2$)' ) Explanation: Get 1 and 2strike data also: End of explanation subdf3 = df3[['Kazakhstan','Uzbekistan','Turkmenistan']] subdf2 = df2[['Kazakhstan','Uzbekistan','Turkmenistan']] subdf1 = df1[['Kazakhstan','Uzbekistan','Turkmenistan']] my_sub_colors = my_colors[2:] fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True, figsize=(10,10)) subdf3.plot(ax=axes[0], style='-o', color=my_sub_colors) axes[0].legend(bbox_to_anchor=(1.3,1.0)) axes[0].set(title="MODICE(3strike) by Country", ylabel='MODICE area ($km^2$)' ) subdf2.plot(ax=axes[1], style='-o', color=my_sub_colors) axes[1].legend(bbox_to_anchor=(1.3,1.0)) axes[1].set(title="MODICE(2strike) by Country", ylabel='MODICE area ($km^2$)' ) subdf1.plot(ax=axes[2], style='-o', color=my_sub_colors) axes[2].legend(bbox_to_anchor=(1.3,1.0)) axes[2].set(title="MODICE(1strike) by Country", ylabel='MODICE area ($km^2$)' ) Explanation: Just look at detail on three lowest lines: End of explanation
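One note on the column-reordering step above: it relies on the df.ix indexer, which has since been removed from pandas. Here is a hedged sketch of the same idea (sort the columns by the value in the first row, descending) in current pandas; the toy table and its numbers are invented placeholders, not real MODICE areas.

```python
import pandas as pd

# Toy stand-in for the MODICE table: rows are years, columns are countries.
df3 = pd.DataFrame(
    {"Afghanistan": [38000, 37000], "Tajikistan": [12000, 11500], "Pakistan": [25000, 24000]},
    index=[2000, 2001],
)

# Order the columns by the first row's values, largest area first.
# df.loc replaces the removed df.ix accessor; sort_values replaces argsort()[::-1].
first_row = df3.loc[df3.first_valid_index()]
df3 = df3[first_row.sort_values(ascending=False).index]
print(df3.columns.tolist())  # ['Afghanistan', 'Pakistan', 'Tajikistan']
```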
12,603
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Authors. Step1: 用 tf.data 加载 CSV 数据 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: 加载数据 开始的时候,我们通过打印 CSV 文件的前几行来了解文件的格式。 Step3: 正如你看到的那样,CSV 文件的每列都会有一个列名。dataset 的构造函数会自动识别这些列名。如果你使用的文件的第一行不包含列名,那么需要将列名通过字符串列表传给 make_csv_dataset 函数的 column_names 参数。 ```python CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone'] dataset = tf.data.experimental.make_csv_dataset( ..., column_names=CSV_COLUMNS, ...) ``` 这个示例使用了所有的列。如果你需要忽略数据集中的某些列,创建一个包含你需要使用的列的列表,然后传给构造器的(可选)参数 select_columns。 ```python dataset = tf.data.experimental.make_csv_dataset( ..., select_columns = columns_to_use, ...) ``` 对于包含模型需要预测的值的列是你需要显式指定的。 Step4: 现在从文件中读取 CSV 数据并且创建 dataset。 (完整的文档,参考 tf.data.experimental.make_csv_dataset) Step5: dataset 中的每个条目都是一个批次,用一个元组(多个样本,多个标签)表示。样本中的数据组织形式是以列为主的张量(而不是以行为主的张量),每条数据中包含的元素个数就是批次大小(这个示例中是 12)。 阅读下面的示例有助于你的理解。 Step6: 数据预处理 分类数据 CSV 数据中的有些列是分类的列。也就是说,这些列只能在有限的集合中取值。 使用 tf.feature_column API 创建一个 tf.feature_column.indicator_column 集合,每个 tf.feature_column.indicator_column 对应一个分类的列。 Step7: 这将是后续构建模型时处理输入数据的一部分。 连续数据 连续数据需要标准化。 写一个函数标准化这些值,然后将这些值改造成 2 维的张量。 Step8: 现在创建一个数值列的集合。tf.feature_columns.numeric_column API 会使用 normalizer_fn 参数。在传参的时候使用 functools.partial,functools.partial 由使用每个列的均值进行标准化的函数构成。 Step9: 这里使用标准化的方法需要提前知道每列的均值。如果需要计算连续的数据流的标准化的值可以使用 TensorFlow Transform。 创建预处理层 将这两个特征列的集合相加,并且传给 tf.keras.layers.DenseFeatures 从而创建一个进行预处理的输入层。 Step10: 构建模型 从 preprocessing_layer 开始构建 tf.keras.Sequential。 Step11: 训练、评估和预测 现在可以实例化和训练模型。 Step12: 当模型训练完成的时候,你可以在测试集 test_data 上检查准确性。 Step13: 使用 tf.keras.Model.predict 推断一个批次或多个批次的标签。
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 The TensorFlow Authors. End of explanation import functools import numpy as np import tensorflow as tf import tensorflow_datasets as tfds TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv" TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv" train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL) test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL) # 让 numpy 数据更易读。 np.set_printoptions(precision=3, suppress=True) Explanation: 用 tf.data 加载 CSV 数据 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/csv"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />在 Tensorflow.org 上查看</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/csv.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />在 Google Colab 运行</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/csv.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />在 Github 上查看源代码</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/csv.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />下载此 notebook</a> </td> </table> Note: 我们的 TensorFlow 社区翻译了这些文档。因为社区翻译是尽力而为, 所以无法保证它们是最准确的,并且反映了最新的 官方英文文档。如果您有改进此翻译的建议, 请提交 pull request 到 tensorflow/docs GitHub 仓库。要志愿地撰写或者审核译文,请加入 [email protected] Google Group。 这篇教程通过一个示例展示了怎样将 CSV 格式的数据加载进 tf.data.Dataset。 这篇教程使用的是泰坦尼克号乘客的数据。模型会根据乘客的年龄、性别、票务舱和是否独自旅行等特征来预测乘客生还的可能性。 设置 End of explanation !head {train_file_path} Explanation: 加载数据 开始的时候,我们通过打印 CSV 文件的前几行来了解文件的格式。 End of explanation LABEL_COLUMN = 'survived' LABELS = [0, 1] Explanation: 正如你看到的那样,CSV 文件的每列都会有一个列名。dataset 的构造函数会自动识别这些列名。如果你使用的文件的第一行不包含列名,那么需要将列名通过字符串列表传给 make_csv_dataset 函数的 column_names 参数。 ```python CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone'] dataset = tf.data.experimental.make_csv_dataset( ..., column_names=CSV_COLUMNS, ...) ``` 这个示例使用了所有的列。如果你需要忽略数据集中的某些列,创建一个包含你需要使用的列的列表,然后传给构造器的(可选)参数 select_columns。 ```python dataset = tf.data.experimental.make_csv_dataset( ..., select_columns = columns_to_use, ...) 
``` 对于包含模型需要预测的值的列是你需要显式指定的。 End of explanation def get_dataset(file_path): dataset = tf.data.experimental.make_csv_dataset( file_path, batch_size=12, # 为了示例更容易展示,手动设置较小的值 label_name=LABEL_COLUMN, na_value="?", num_epochs=1, ignore_errors=True) return dataset raw_train_data = get_dataset(train_file_path) raw_test_data = get_dataset(test_file_path) Explanation: 现在从文件中读取 CSV 数据并且创建 dataset。 (完整的文档,参考 tf.data.experimental.make_csv_dataset) End of explanation examples, labels = next(iter(raw_train_data)) # 第一个批次 print("EXAMPLES: \n", examples, "\n") print("LABELS: \n", labels) Explanation: dataset 中的每个条目都是一个批次,用一个元组(多个样本,多个标签)表示。样本中的数据组织形式是以列为主的张量(而不是以行为主的张量),每条数据中包含的元素个数就是批次大小(这个示例中是 12)。 阅读下面的示例有助于你的理解。 End of explanation CATEGORIES = { 'sex': ['male', 'female'], 'class' : ['First', 'Second', 'Third'], 'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'], 'embark_town' : ['Cherbourg', 'Southhampton', 'Queenstown'], 'alone' : ['y', 'n'] } categorical_columns = [] for feature, vocab in CATEGORIES.items(): cat_col = tf.feature_column.categorical_column_with_vocabulary_list( key=feature, vocabulary_list=vocab) categorical_columns.append(tf.feature_column.indicator_column(cat_col)) # 你刚才创建的内容 categorical_columns Explanation: 数据预处理 分类数据 CSV 数据中的有些列是分类的列。也就是说,这些列只能在有限的集合中取值。 使用 tf.feature_column API 创建一个 tf.feature_column.indicator_column 集合,每个 tf.feature_column.indicator_column 对应一个分类的列。 End of explanation def process_continuous_data(mean, data): # 标准化数据 data = tf.cast(data, tf.float32) * 1/(2*mean) return tf.reshape(data, [-1, 1]) Explanation: 这将是后续构建模型时处理输入数据的一部分。 连续数据 连续数据需要标准化。 写一个函数标准化这些值,然后将这些值改造成 2 维的张量。 End of explanation MEANS = { 'age' : 29.631308, 'n_siblings_spouses' : 0.545455, 'parch' : 0.379585, 'fare' : 34.385399 } numerical_columns = [] for feature in MEANS.keys(): num_col = tf.feature_column.numeric_column(feature, normalizer_fn=functools.partial(process_continuous_data, MEANS[feature])) numerical_columns.append(num_col) # 你刚才创建的内容。 numerical_columns Explanation: 现在创建一个数值列的集合。tf.feature_columns.numeric_column API 会使用 normalizer_fn 参数。在传参的时候使用 functools.partial,functools.partial 由使用每个列的均值进行标准化的函数构成。 End of explanation preprocessing_layer = tf.keras.layers.DenseFeatures(categorical_columns+numerical_columns) Explanation: 这里使用标准化的方法需要提前知道每列的均值。如果需要计算连续的数据流的标准化的值可以使用 TensorFlow Transform。 创建预处理层 将这两个特征列的集合相加,并且传给 tf.keras.layers.DenseFeatures 从而创建一个进行预处理的输入层。 End of explanation model = tf.keras.Sequential([ preprocessing_layer, tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(1, activation='sigmoid'), ]) model.compile( loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) Explanation: 构建模型 从 preprocessing_layer 开始构建 tf.keras.Sequential。 End of explanation train_data = raw_train_data.shuffle(500) test_data = raw_test_data model.fit(train_data, epochs=20) Explanation: 训练、评估和预测 现在可以实例化和训练模型。 End of explanation test_loss, test_accuracy = model.evaluate(test_data) print('\n\nTest Loss {}, Test Accuracy {}'.format(test_loss, test_accuracy)) Explanation: 当模型训练完成的时候,你可以在测试集 test_data 上检查准确性。 End of explanation predictions = model.predict(test_data) # 显示部分结果 for prediction, survived in zip(predictions[:10], list(test_data)[0][1][:10]): print("Predicted survival: {:.2%}".format(prediction[0]), " | Actual outcome: ", ("SURVIVED" if bool(survived) else "DIED")) Explanation: 使用 tf.keras.Model.predict 推断一个批次或多个批次的标签。 End of explanation
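The normalization pattern in this record (binding each column's mean into its normalizer with functools.partial) can be illustrated standalone. This is a simplified sketch: it drops the tf.cast and reshape steps from the tutorial's process_continuous_data and only shows how partial fixes the mean argument; the mean value is the tutorial's own 'age' mean.

```python
import functools

def process_continuous_data(mean, data):
    # Scale the raw value by 1 / (2 * mean), as in the tutorial (casting and
    # reshaping omitted to keep the sketch framework-free).
    return data * 1 / (2 * mean)

# functools.partial fixes `mean`, leaving a one-argument callable that can be
# passed as normalizer_fn to tf.feature_column.numeric_column.
normalize_age = functools.partial(process_continuous_data, 29.631308)
print(normalize_age(30.0))  # roughly 0.5062
```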
12,604
Given the following text description, write Python code to implement the functionality described below step by step Description: Components We store our component functions inside the pp.components module. Each function there returns a Component object. You can use dir or help over the pp.c module to see all the available components. Some of them are just shapes, but we call them components as they all inherit from the component class in pp.Component Step1: Phidl components Gdsfactory extends phidl. Therefore all phidl components can be easily used in gdsfactory. Step2: You can see all the components available in gdsfactory
Python Code: import pp c = pp.c.mzi() pp.qp(c) c.ports c = pp.c.ring_single_bus() pp.qp(c) Explanation: Components We store our component functions inside the pp.components module. Each function there returns a Component object You can use dir or help over the pp.c module to see the all available components. Some of which are just shapes, but we call them components as they all inherit from the component class in pp.Component End of explanation import phidl.geometry as pg components = [ pg.tee(size=(4, 2), stub_size=(2, 1), taper_type=None, layer=0), pg.optimal_hairpin( width=0.2, pitch=0.6, length=10, turn_ratio=4, num_pts=50, layer=0 ), pg.optimal_step( start_width=10, end_width=22, num_pts=50, width_tol=1e-3, anticrowding_factor=1.2, symmetric=False, layer=0, ), pg.optimal_90deg(width=100.0, num_pts=15, length_adjust=1, layer=0), pg.ytron_round( rho=1, arm_lengths=(500, 300), source_length=500, arm_widths=(200, 200), theta=2.5, theta_resolution=10, layer=0, ), ] for c in components: pp.qp(c) c2 = pp.import_phidl_component(component=c) pp.show(c2) Explanation: Phidl components Gdsfactory extends phidl. Therefore all phidl components can be easily used in gdsfactory. End of explanation help(pp.c) Explanation: You can see all the components available in gdsfactory End of explanation
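As a small follow-up to the dir/help suggestion above, one way to list the available component factories programmatically (assuming the same pp import used in this record; the underscore filter is just one way to hide private attributes):

```python
import pp  # gdsfactory's top-level import, as used in this record

# Collect the public names exposed by the components module (aliased as pp.c).
component_names = [name for name in dir(pp.c) if not name.startswith("_")]
print(len(component_names))
print(component_names[:10])
```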
12,605
Given the following text description, write Python code to implement the functionality described below step by step Description: Bloques Faltantes Busco el nombre de los bloques faltantes Step1: Bloques Faltantes Primero, veo si puedo sacar el bloque de los otros años Step2: Scrapeo del Sitio de Senadores Los bloques faltantes los vamos a obtener del sitio oficial del Senado. En particular, el sitio tiene este lugar que nos permite buscar por el nombre del senador. Problemas Step3: "API" Senadores Es bastante simple. Dado un nombre de Sendanor, buscamos el ID correspondiente. Si no encontramos el nombre del Senador, devolvemos los 3 nombres más cercanos. Si lo encuentra, devuelve un DataFrame de Pandas con los datos de sus periodos
Python Code: import difflib import requests import pandas as pd from bs4 import BeautifulSoup Explanation: Bloques Faltantes Busco el nombre de los bloques faltantes End of explanation # Join de todos los otros años csvs = ['../viajes_2012.csv', '../viajes_2015.csv', '../viajes_2016.csv', '../viajes_2017.csv'] for cnt, csv in enumerate(csvs): if cnt == 0: df = pd.read_csv(csv) fecha = csv.split('_')[1].split('.csv')[0] df['Año'] = [fecha for _ in range(df.shape[0])] else: df_temp = pd.read_csv(csv) fecha = csv.split('_')[1].split('.csv')[0] df_temp['Año'] = [fecha for _ in range(df_temp.shape[0])] df = pd.concat([df, df_temp], ignore_index=True) df.head() def search_block(ele): df_filter = df[df['Autoridad'] == ele] amount_blocks = len(set(df_filter['Bloque'])) if amount_blocks == 1: return df_filter['Bloque'].iloc[0] elif amount_blocks == 0: return "No encontrado" else: return False block = df_2013['Autoridad'].apply(search_block) print("Senadores sin bloque en el 2013: {0}. Senadores encontrados: {1}".format(df_2013.shape[0], block[block == "No encontrado"].shape[0])) Explanation: Bloques Faltantes Primero, veo si puedo sacar el bloque de los otros años End of explanation # Header para el request para parecer un usuario "comun" headers = { 'Accept-Language': 'en-US,en;q=0.5', 'Accept-Encoding': 'gzip, deflate', 'User-Agent': ('Mozilla/5.0 (X11; Linux x86_64; rv:45.0)' ' Gecko/20100101 Firefox/45.0'), } base_url = "http://www.senado.gov.ar/senadores/Historico/PeriodoResultado" # Session for the delicious cookies session = requests.session() session.headers = headers # Get the ID of each senador response = session.get(base_url) soup = BeautifulSoup(response.text, 'html.parser') senadors_id = {} select = soup.find('select', {'id': 'senado_senadoresbundle_busquedahistoricostype_senador'}) for option in select: if option.attrs['value']: name = ' '.join(option.text.replace(',', ' ').split()).lower() senadors_id[name] = option.attrs['value'] Explanation: Scrapeo del Sitio de Senadores Los bloques faltantes los vamos a obtener del sitio oficial del Senado. En particular, el sitio tiene este lugar que nos permite buscar por el nombre del senador. Problemas: * El sitio anda mucho más lento con https (no se por que) * El server, medio seguido, no te devuelve nada y tenes que hacer otro get (?) 
Lo que vamos a hacer, es una pequeña "API" para utilizar esta herramienta de busquede de los senadores End of explanation name = ' '.join('MARINO, Juan Carlos'.lower().replace(',', ' ').split()) if name in senadors_id.keys(): # Senador encontrado senador_id = senadors_id[name] print(senador_id) else: print(difflib.get_close_matches(name, senadors_id.keys())) def get_block(row): name = ' '.join(row['Autoridad'].lower().replace(',', ' ').split()) if name in senadors_id.keys(): # Senador encontrado senador_id = senadors_id[name] r = session.post(base_url, {'senado_senadoresbundle_busquedahistoricostype[senador]': senador_id}) d = pd.read_html(r.text)[0] # Paso las fehcas a datetime legal_split = d['Período Legal'].str.split('al', expand=True) d['Período Legal Comienzo'] = legal_split[0] d['Período Legal Fin'] = legal_split[1] d['Período Legal Comienzo'] = pd.to_datetime(d['Período Legal Comienzo'], infer_datetime_format=True) d['Período Legal Fin'] = pd.to_datetime(d['Período Legal Fin'], infer_datetime_format=True) # Busco la fecha de interes date = pd.to_datetime(row['Fecha_salida'], infer_datetime_format=True) block = d[(d['Período Legal Comienzo'] < date) & (date < d['Período Legal Fin'])]['Partido Político'] return block else: return (difflib.get_close_matches(name, senadors_id.keys())) df_2013[:5].apply(get_block, axis=1) d Explanation: "API" Senadores Es bastante simple. Dado un nombre de Sendanor, buscamos el ID correspondiente. Si no encontramos el nombre del Senador, devolvemos los 3 nombres más cercanos. Si lo encuentra, devuelve un DataFrame de Pandas con los datos de sus periodos End of explanation
12,606
Given the following text description, write Python code to implement the functionality described below step by step Description: Combine a Matplotlib Basemap with IPython Widgets This is an experiment in creating a Jupyter notebook showing a world map with different parameters (including map projection) by combining a Matplotlib Basemap and IPython widgets. Tested on Python 3.5, basemap 1.0.7, and ipywidgets 4.1.1. Step1: Get a list of supported projection names (no, there seems to be no single ready-made list) Step2: Create sliders without continuous update, since creating a map can take a few seconds (this effect shows only when replacing the @interact_manual decorator below with @interact) Step3: This function does the real work. Notice that some projections will create warnings or even errors when they need additional parameters!
Python Code: # Make plots appear inline (inside the Jupyter notebook). %matplotlib inline import datetime import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap, supported_projections from ipywidgets import interact, interact_manual, FloatSlider Explanation: Combine a Matplotlib Basemap with IPython Widgets This is an experiment in creating a Jupyter notebook showing a world map with different parameters (including map projection) by combining a Matplotlib Basemap and IPython widgets. Tested on Python 3.5, basemap 1.0.7, and ipywidgets 4.1.1. End of explanation lines = supported_projections.strip().split('\n') proj_names = [line.strip().split()[0] for line in lines] print(sorted(proj_names)) Explanation: Get a list of supported projection names (no, there seems to be no single ready-made list): End of explanation lat_slider = FloatSlider(min=-90, max=90, step=0.1, continuous_update=False) lon_slider = FloatSlider(min=-180, max=180, step=0.1, continuous_update=False) hour_slider = FloatSlider(min=-12, max=12, step=1/60, continuous_update=False) Explanation: Create sliders without continuous update, since creating a map can take a few seconds (this effect shows only when replacing the @interact_manual decorator below with @interact): End of explanation @interact_manual(lat_0=lat_slider, lon_0=lon_slider, delta_hours=hour_slider, projection=proj_names, title='Sample Title') def show_map(lat_0=0, lon_0=0, delta_hours=0, projection='mill', title=''): "Show a world map." # Resolutions: c (crude), l (low), i (intermediate), h (high), f (full) or None. map = Basemap(projection=projection, lat_0=lat_0, lon_0=lon_0, resolution='c') # Plot coastlines, draw label meridians and parallels. map.drawcoastlines() # linewidth=0.5, linestyle='solid', color='k', antialiased=1, ax=None, zorder=None) # Plot countries. map.drawcountries() # linewidth=0.5, linestyle='solid', color='k', antialiased=1, ax=None, zorder=None) # Plot parallels and meridians. map.drawparallels(np.arange(-90, 90, 30), labels=[1, 0, 0, 0]) map.drawmeridians(np.arange(map.lonmin, map.lonmax + 30, 60), labels=[0, 0, 0, 1]) # Fill continents 'coral' (with zorder=0), color wet areas 'aqua' map.drawmapboundary(fill_color='aqua') map.fillcontinents(color='coral', lake_color='aqua') # Shade the night areas, with alpha transparency so the # map shows through. Use current time in UTC + delta. date = datetime.datetime.utcnow().timestamp() + delta_hours * 3600 date = datetime.datetime.fromtimestamp(date) map.nightshade(date, alpha=0.35) plt.title('%s %s (UTC)' % (title, date.isoformat()[:19])) plt.show() Explanation: This function does the real work. Notice that some projections will create warnings or even errors when they need additional parameters! End of explanation
12,607
Given the following text description, write Python code to implement the functionality described below step by step Description: Timestamps are contained in the Space Packet secondary header time code field. They are encoded as big-endian 32-bit integers counting the number of seconds elapsed since the J2000 epoch (2000-01-01T12 Step1: AOS frames Telemetry is in Virtual Channel 1. Virtual channel 63 contains Only Idle Data. Step2: Virtual Channel 63 (Only Idle Data) Virtual channel 63 corresponds to Only Idle Data. The transfer frame data field includes an M_PDU header with a first header pointer equal to 0x7fe, which indicates that the packet zone contains only idle data. The packet zone is filled with 0xaa's. Step3: Virtual channel 0 Virtual channel 0 contains telemetry. There are a few active APIDs sending CCSDS Space Packets using the AOS M_PDU protocol. Step4: APID 5 As found by r00t this APID has frames of fixed size containing a number of fields in tag-value format. Tags are 2 bytes, and values have different formats and sizes depending on the tag.
Python Code: def timestamps(packets): epoch = np.datetime64('2000-01-01T12:00:00') t = np.array([struct.unpack('>I', p[ccsds.SpacePacketPrimaryHeader.sizeof():][:4])[0] for p in packets], 'uint32') return epoch + t * np.timedelta64(1, 's') def load_frames(path): frame_size = 223 * 5 - 2 frames = np.fromfile(path, dtype = 'uint8') frames = frames[:frames.size//frame_size*frame_size].reshape((-1, frame_size)) return frames frames = load_frames('lucy_frames_bochum_20211024_214614.u8') frames.shape[0] Explanation: Timestamps are contained in the Space Packet secondary header time code field. They are encoded as big-endian 32-bit integers counting the number of seconds elapsed since the J2000 epoch (2000-01-01T12:00:00). Looking at the idle APID packets, the next byte might indicate fractional seconds (since it is still part of the secondary header rather than idle data), but it is difficult to be sure. End of explanation aos = [AOSFrame.parse(f) for f in frames] collections.Counter([a.primary_header.transfer_frame_version_number for a in aos]) collections.Counter([a.primary_header.spacecraft_id for a in aos]) collections.Counter([a.primary_header.virtual_channel_id for a in aos]) Explanation: AOS frames Telemetry is in Virtual Channel 1. Virtual channel 63 contains Only Idle Data. End of explanation vc63 = [a for a in aos if a.primary_header.virtual_channel_id == 63] [a.primary_header for a in vc63[:10]] vc63[0] vc63_frames = np.array([f for f, a in zip(frames, aos) if a.primary_header.virtual_channel_id == 63]) np.unique(vc63_frames[:, 6:8], axis = 0) bytes(vc63_frames[0, 6:8]).hex() np.unique(vc63_frames[:, 8:]) hex(170) fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc63]) plt.figure(figsize = (10, 5), facecolor = 'w') plt.plot(fc[1:], np.diff(fc)-1, '.') plt.title("Lucy virtual channel 63 (OID) frame loss") plt.xlabel('Virtual channel frame counter') plt.ylabel('Lost frames'); fc.size/(fc[-1]-fc[0]+1) Explanation: Virtual Channel 63 (Only Idle Data) Virtual channel 63 corresponds to Only Idle Data. The transfer frame data field includes an M_PDU header with a first header pointer equal to 0x7fe, which indicates that the packet zone contains only idle data. The packet zone is filled with 0xaa's. End of explanation vc0 = [a for a in aos if a.primary_header.virtual_channel_id == 0] [a.primary_header for a in vc0[:10]] fc = np.array([a.primary_header.virtual_channel_frame_count for a in vc0]) plt.figure(figsize = (10, 5), facecolor = 'w') plt.plot(fc[1:], np.diff(fc)-1, '.') plt.title("Lucy virtual channel 0 (telemetry) frame loss") plt.xlabel('Virtual channel frame counter') plt.ylabel('Lost frames'); fc.size/(fc[-1]-fc[0]+1) vc0_packets = list(ccsds.extract_space_packets(vc0, 49, 0)) vc0_t = timestamps(vc0_packets) vc0_sp_headers = [ccsds.SpacePacketPrimaryHeader.parse(p) for p in vc0_packets] vc0_apids = collections.Counter([p.APID for p in vc0_sp_headers]) vc0_apids apid_axis = {a : k for k, a in enumerate(sorted(vc0_apids))} plt.figure(figsize = (10, 5), facecolor = 'w') plt.plot(vc0_t, [apid_axis[p.APID] for p in vc0_sp_headers], '.') plt.yticks(ticks=range(len(apid_axis)), labels=apid_axis) plt.xlabel('Space Packet timestamp') plt.ylabel('APID') plt.title('Lucy Virtual Channel 0 APID distribution'); vc0_by_apid = {apid : [p for h,p in zip(vc0_sp_headers, vc0_packets) if h.APID == apid] for apid in vc0_apids} plot_apids(vc0_by_apid) Explanation: Virtual channel 0 Virtual channel 0 contains telemetry. 
There are a few active APIDs sending CCSDS Space Packets using the AOS M_PDU protocol. End of explanation tags = {2: Int16ub, 3: Int16ub, 15: Int32ub, 31: Int16ub, 32: Int16ub, 1202: Float64b, 1203: Float64b, 1204: Float64b, 1205: Float64b, 1206: Float64b, 1208: Float32b, 1209: Float32b, 1210: Float32b, 1601: Float32b, 1602: Float32b, 1603: Float32b, 1630: Float32b, 1631: Float32b, 1632: Float32b, 17539: Float32b, 17547: Float32b, 17548: Float32b, 21314: Int32sb, 21315: Int32sb, 21316: Int32sb, 21317: Int32sb, 46555: Int32sb, 46980: Int16ub, 46981: Int16ub, 46982: Int16ub, 47090: Int16ub, 47091: Int16ub, 47092: Int16ub, } values = list() for packet in vc0_by_apid[5]: t = timestamps([packet])[0] packet = packet[6+5:] # skip primary and secondary headers while True: tag = Int16ub.parse(packet) packet = packet[2:] value = tags[tag].parse(packet) packet = packet[tags[tag].sizeof():] values.append((tag, value, t)) if len(packet) == 0: break values_keys = {v[0] for v in values} values = {k: [(v[2], v[1]) for v in values if v[0] == k] for k in values_keys} for k in sorted(values_keys): vals = values[k] plt.figure() plt.title(f'Key {k}') plt.plot([v[0] for v in vals], [v[1] for v in vals], '.') Explanation: APID 5 As found by r00t this APID has frames of fixed size containing a number of fields in tag-value format. Tags are 2 bytes, and values have different formats and sizes depending on the tag. End of explanation
12,608
Given the following text description, write Python code to implement the functionality described below step by step Description: <table align="left"> <td> <a href="https Step1: Restart the kernel After you install the additional packages, you need to restart the notebook kernel so it can find the packages. Step2: Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API, Cloud Build API, Cloud Storage API, and Container Registry API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note Step3: Otherwise, set your project ID here. Step4: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial. Step5: Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps Step6: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job, Vertex AI saves all resources to the given GCS bucket. We will also use the same bucket to download and host the input data. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI. Step7: Only if your bucket doesn't already exist Step8: Finally, validate access to your Cloud Storage bucket by examining its contents Step9: Import libraries and define constants Step10: Define the anomaly detection components Here you will load components from the anomaly_detection folder in the Google Cloud Pipeline Components SDK. You can also save and modify the original Python component file. For example, for tfp_anomaly_detection.py Step13: Define the pipeline Here you will define the relationship between the components and how data is passed. In this pipeline a Google Cloud Storage csv is imported, the data is preprocessed, anomalies are flagged, and the results are postprocessed so that the output csv is scoreable by the Numenta Anomaly Benchmark. Step14: Download the data Here you will download the Numenta Anomaly Benchmark and upload the dataset to your GCS bucket. We will then find the exact GCS file url associated with the chosen task to pass as the input url into the pipeline. Step15: Run the pipeline Finally, we run the pipeline. Please wait until the run has completed before proceeding to the next steps. Step16: Download the results locally Copy the GCS file path from the final postprocess step of the pipeline below. Here we will save this output locally for visualization and scoring. 
Step18: Visualize the results Here we will plot the forecast distribution outputted by the pipeline, the points flagged as anomalies (red), and the ground truth targets (green). The graph is plotted with daily granularity due to the resampling done during preprocessing. Note how the algorithm correctly identifies December 25th as an anomaly. Step21: Run scoring Here we quantitatively score the algorithm's performance on the Numenta Anomaly Benchmark. The benchmark uses a custom scoring mechanism described in their paper. Unlike precision and recall which do not reward for early detection, this scoring mechanism rewards based on windows around anomalous points rather than the exact points themselves. We will run the scoring script with the --optimize flag, which uses the anomaly_scores column to score and optimizes the decision threshold. If this flag is omitted, then the script will only use the label column originally outputted by the component. Step22: NAB also provides scores for three profile settings
Python Code: import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # Google Cloud Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" ! pip3 install {USER_FLAG} --upgrade kfp ! pip3 install {USER_FLAG} --upgrade google-cloud-pipeline-components ! pip3 install {USER_FLAG} --upgrade tensorflow ! pip3 install {USER_FLAG} --upgrade matplotlib ! pip3 install {USER_FLAG} --upgrade numpy ! pip3 install {USER_FLAG} --upgrade pandas Explanation: <table align="left"> <td> <a href="https://colab.research.google.com/github/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/tfp_anomaly_detection.ipynb""> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/kubeflow/pipelines/blob/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/tfp_anomaly_detection.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> Anomaly Detection with TensorFlow Probability STS on Kubeflow Pipelines Overview This notebook demonstrates how to use TensorFlow Probability and Kubeflow Pipelines for anomaly detection in time series data. It uses structural time series (STS), a class of Bayesian statistical models, to decompose a time series into interpretable seasonal and trend components. This algorithm fits an STS model to the time series, generates a forecast of acceptable values for each timestep, and flags any points outside of the forecast as an anomaly. To learn more about STS models, check out this demo on Structural Time Series Modeling Case Studies. This demo is most relevant for those who would like to automatically flag anomalies in time series data and can be used for applications like network monitoring, infrastructure maintenance, and sales tracking. Dataset This demo uses the Numenta Anomaly Benchmark, a popular benchmark of time series data with labeled anomalies. More specifically, our demo uses nyc_taxi.csv which reports the total number of passengers in NYC taxis from July 2014 to January 2015 in 30-minute increments. Objective You will go through the following steps: * Define and launch an anomaly detection algorithm on Kubeflow Pipelines. * Retrieve and visualize results. * Benchmark predictions using the Numenta Anomaly Benchmark scoring method. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Google Cloud SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. 
The following steps provide a condensed set of instructions: Install and initialize the Cloud SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Install additional packages Install additional package dependencies not installed in your notebook environment. End of explanation # Automatically restart kernel after installs import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) Explanation: Restart the kernel After you install the additional packages, you need to restart the notebook kernel so it can find the packages. End of explanation import os # Get your Google Cloud project ID from gcloud if not os.getenv("IS_TESTING"): shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID: ", PROJECT_ID) Explanation: Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API, Cloud Build API, Cloud Storage API, and Container Registry API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud. End of explanation if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "[your-project-id]" # @param {type:"string"} !gcloud config set project {PROJECT_ID} Explanation: Otherwise, set your project ID here. End of explanation from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") Explanation: Timestamp If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial. End of explanation import os import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # If on Google Cloud Notebooks, then don't execute this code if not IS_GOOGLE_CLOUD_NOTEBOOK: if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. 
elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' Explanation: Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. End of explanation BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} REGION = "[your-region]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP Explanation: Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job, Vertex AI saves all resources to the given GCS bucket. We will also use the same bucket to download and host the input data. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI. End of explanation ! gsutil mb -l $REGION $BUCKET_NAME Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. End of explanation ! 
gsutil ls -al $BUCKET_NAME Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents: End of explanation PIPELINE_NAME = '{0}-{1}'.format('tfp-anomaly-detection', TIMESTAMP) PIPELINE_ROOT = '{0}/{1}'.format(BUCKET_NAME, PIPELINE_NAME) from typing import Callable, Optional, Mapping, Any import kfp from kfp.v2 import compiler from kfp.v2 import dsl from kfp.v2.google.client import AIPlatformClient from kfp.v2.dsl import Input, Output, Dataset Explanation: Import libraries and define constants End of explanation preprocess_op = kfp.components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/preprocess.yaml') anomaly_detection_op = kfp.components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/component.yaml') postprocess_op = kfp.components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/master/components/google-cloud/google_cloud_pipeline_components/experimental/tensorflow_probability/anomaly_detection/postprocess.yaml') Explanation: Define the anomaly detection components Here you will load components from the anomaly_detection folder in the Google Cloud Pipeline Components SDK. You can also save and modify the original Python component file. For example, for tfp_anomaly_detection.py: Call generate_component_file() which creates a yaml file. Replace the next cell with anomaly_detection_op = kfp.components.load_component_from_file('component.yaml') The components do the following: * preprocess: Regularizes and resamples a time series. * tfp_anomaly_detection: Infers the structure of the time series, fits the model, and identifies anomalies based on the predictive distribution of acceptable values at each timestep. * postprocess: Fills missing values from regularizing and resampling. End of explanation @dsl.pipeline( pipeline_root=PIPELINE_ROOT, name=PIPELINE_NAME) def pipeline(input_url: str, memory_limit: str, seed: int) -> None: Train model and return detected anomalies. input_task = kfp.dsl.importer( artifact_uri=input_url, artifact_class=Dataset) preprocess_task = preprocess_op(input_dataset=input_task.output) anomaly_detection_task = anomaly_detection_op(input_dataset=preprocess_task.output, seed=seed).set_memory_limit(memory_limit) postprocess_op(input_dataset=input_task.output, predictions_dataset=anomaly_detection_task.output) def run_pipeline(pipeline: Callable, parameter_values: Optional[Mapping[str, Any]] = {}, enable_caching: bool = False) -> None: Runs a given pipeline function using Kubeflow Pipelines. Args: pipeline: The function to run. parameter_values: Parameters passed to the pipeline function when run. enable_caching: Whether to used cached results from previous runs. compiler.Compiler().compile( pipeline_func=pipeline, package_path='{}_pipeline.json'.format(PIPELINE_NAME)) api_client = AIPlatformClient( project_id=PROJECT_ID, region=REGION, ) _ = api_client.create_run_from_job_spec( job_spec_path='{}_pipeline.json'.format(PIPELINE_NAME), pipeline_root=PIPELINE_ROOT, parameter_values=parameter_values, enable_caching=enable_caching) Explanation: Define the pipeline Here you will define the relationship between the components and how data is passed. 
In this pipeline a Google Cloud Storage csv is imported, the data is preprocessed, anomalies are flagged, and the results are postprocessed so that the output csv is scoreable by the Numenta Anomaly Benchmark. End of explanation import os NAB_DATA_BLOB = '{0}/NAB'.format(BUCKET_NAME) if not os.path.exists('content/NAB'): !git clone https://github.com/numenta/NAB !gsutil cp -r NAB/data $NAB_DATA_BLOB # Find the full file path in gcs for the chosen task import tensorflow as tf chosen_task_folder = 'realKnownCause' chosen_task = 'nyc_taxi' nab_files = tf.io.gfile.glob('{0}/*/*.csv'.format(NAB_DATA_BLOB)) chosen_task_file = [file for file in nab_files if chosen_task in file][0] print('The pipeline will be run on the task: {0}'.format(chosen_task)) Explanation: Download the data Here you will download the Numenta Anomaly Benchmark and upload the dataset to your GCS bucket. We will then find the exact GCS file url associated with the chosen task to pass as the input url into the pipeline. End of explanation parameter_values = { 'input_url': chosen_task_file, 'memory_limit': '50G', 'seed': 0, } run_pipeline(pipeline, parameter_values=parameter_values) Explanation: Run the pipeline Finally, we run the pipeline. Please wait until the run has completed before proceeding to the next steps. End of explanation import pandas as pd import numpy as np import json gcs_file = '[your-pipeline-output]' # @param {type:'string'} output_file = '/content/{0}-{1}.csv'.format(chosen_task, TIMESTAMP) !gsutil cp $gcs_file $output_file # Collect targets specifically for the chosen task targets = json.load(open('/content/NAB/labels/combined_labels.json')) chosen_task_targets = [targets[key] for key in targets if chosen_task in key][0] Explanation: Download the results locally Copy the GCS file path from the final postprocess step of the pipeline below. Here we will save this output locally for visualization and scoring. End of explanation #@title Plotting setup from matplotlib import pylab as plt from matplotlib.lines import Line2D def plot_predictions(predictions: pd.DataFrame, annotation_fn: Callable = lambda timestamp: timestamp) -> None: Plots the time series, forecast, detected anomalies, and residuals. Args: predictions: The output of the anomaly detection algorithm. 
# Drop NaN values during plotting predictions = predictions.dropna(how='any') predictions = predictions.reset_index() timestamp = pd.to_datetime(predictions['timestamp'], format='%Y-%m-%d') # Plot the value from predictions which may be # an aggregation of the original value value = np.array(predictions['value_predictions']) lower_limit = np.array(predictions['lower_limit']) upper_limit = np.array(predictions['upper_limit']) mean = np.array(predictions['mean']) anomalies = np.array(predictions['label']).nonzero()[0] targets = [] if 'target' in predictions: targets = np.array(predictions['target']).nonzero()[0] fig = plt.figure(figsize=(10, 5), constrained_layout=True) spec = fig.add_gridspec(ncols=1, nrows=2, height_ratios=[2., 1.]) series_ax = fig.add_subplot(spec[0, 0]) residuals_ax = fig.add_subplot(spec[1, 0], sharex=series_ax) # Plot anomalies on series_ax series_ax.plot( timestamp, value, color='black', alpha=0.6) series_ax.fill_between( timestamp, lower_limit, upper_limit, color='tab:blue', alpha=0.3) for anomaly_idx in anomalies: x = timestamp[anomaly_idx] y = value[anomaly_idx] series_ax.scatter(x, y, s=100, alpha=0.4, c='red') for target_idx in targets: x = timestamp[target_idx] y = value[target_idx] series_ax.scatter(x, y, s=100, alpha=0.4, c='green') series_ax.annotate(annotation_fn(x), (x, y)) # Plot residuals on residuals_ax time_delta = timestamp[1] - timestamp[0] residuals_ax.bar( timestamp, height=upper_limit - lower_limit, bottom=lower_limit - mean, width=time_delta, align='center', color='tab:blue', alpha=0.3) residuals_ax.bar( timestamp, width=time_delta, height=value - mean, align='center', color='black', alpha=0.6) # Set up grid styling series_ax.set_ylabel('Original series') residuals_ax.set_ylabel('Residuals') series_ax.grid(True, color='whitesmoke') residuals_ax.grid(True, color='whitesmoke') series_ax.set_axisbelow(True) residuals_ax.set_axisbelow(True) # Add title and legend series_ax.set_title('TFP STS model forecast, anomalies, and residuals for {0}'.format(chosen_task)) create_legend_label = lambda label, color: Line2D([0], [0], marker='o', color='w', label=label, markerfacecolor=color, markersize=10) legend_elements = [create_legend_label(label, color) for label, color in [('predicted anomaly', 'red'), ('target', 'green')]] series_ax.legend(handles=legend_elements, loc='lower right') # Round target timestamps to day for plotting round_to_day = lambda timestamp: timestamp.split()[0] rounded_targets = [round_to_day(timestamp) for timestamp in chosen_task_targets] rounded_targets = set(rounded_targets) predictions = pd.read_csv(output_file) predictions['target'] = predictions.apply(lambda df: round_to_day(df['timestamp']) in rounded_targets, axis=1) # Change the start and end to view different slices of the prediction start, end = 8000, 9000 round_annotation = lambda timestamp: timestamp.date() plot_predictions(predictions.iloc[start:end], round_annotation) Explanation: Visualize the results Here we will plot the forecast distribution outputted by the pipeline, the points flagged as anomalies (red), and the ground truth targets (green). The graph is plotted with daily granularity due to the resampling done during preprocessing. Note how the algorithm correctly identifies December 25th as an anomaly. End of explanation # Set up NAB folder for running scoring %cd /content/NAB !pip install . 
--user !python scripts/create_new_detector.py --detector $PIPELINE_NAME # Move gcs output into the NAB results folder structure results_file = 'results/{0}/{1}/{0}_{2}.csv'.format(PIPELINE_NAME, chosen_task_folder, chosen_task) !cp $output_file $results_file # Run the scoring script !python run.py -d $PIPELINE_NAME --optimize --score --normalize #@title Score collection and normalization setup import glob def collect_scores(profile_name: str, chosen_task: str) -> pd.DataFrame: Crawls through results files for all detectors in NAB to get results for the chosen task. Args: profile_name: One of 'standard', 'low_FP_rate', 'low_FN_rate'. chosen_task: The chosen benchmark task. Returns: all_scores_df: A pandas DataFrame of results for the task sorted by highest to lowest score. all_scores = [] for scores_file in glob.glob('/content/NAB/results/**/*_{0}_scores.csv'.format(profile_name)): scores_df = pd.read_csv(scores_file) chosen_task_row = scores_df[scores_df['File'].str.contains(chosen_task).fillna(False)] all_scores.append(chosen_task_row) all_scores_df = pd.concat(all_scores) all_scores_df = all_scores_df.sort_values(by=['Score'], ascending=False) all_scores_df = all_scores_df.reset_index().drop('index', axis=1) return all_scores_df def normalize_scores(results: pd.DataFrame, profile_name: str, profiles: dict, tpCount: int) -> pd.DataFrame: Normalizes scores with the max from a perfect detector and the min from a null detector. Args: results: Pandas DataFrame with score results. profile_name: One of 'standard', 'low_FP_rate', 'low_FN_rate'. profiles: Dictionary containing cost matrix for each profile. tpCount: The number of true positives in the ground truth targets. Returns: The results DataFrame with an added column of normalized scores. perfect = tpCount * profiles[profile_name]["CostMatrix"]["tpWeight"] # Note that the null detector's name is NaN in the `Detector` column base = results[pd.isna(results['Detector'])]['Score'].iloc[0] scores = results['Score'] results['Normalized_Score'] = 100 * (scores - base) / (perfect - base) # Reindex column order for more organized table columns = results.columns.to_list() columns.remove('Score') columns.remove('Normalized_Score') columns += ['Score', 'Normalized_Score'] results = results.reindex(columns=columns) print('Normalization used min raw score: {0} and max raw score: {1}'.format(base, perfect)) return results Explanation: Run scoring Here we quantitatively score the algorithm's performance on the Numenta Anomaly Benchmark. The benchmark uses a custom scoring mechanism described in their paper. Unlike precision and recall which do not reward for early detection, this scoring mechanism rewards based on windows around anomalous points rather than the exact points themselves. We will run the scoring script with the --optimize flag, which uses the anomaly_scores column to score and optimizes the decision threshold. If this flag is omitted, then the script will only use the label column originally outputted by the component. End of explanation tpCount = len(chosen_task_targets) profile_name = 'standard' profiles = json.load(open('/content/NAB/config/profiles.json')) profiles results = collect_scores(profile_name, chosen_task) results = normalize_scores(results, profile_name, profiles, tpCount) results Explanation: NAB also provides scores for three profile settings: standard, reward_low_FN_rate, and reward_low_FP_rate. 
If you run the cell below you can see the cost matrix for each profile, where reward_low_FN_rate penalizes false negatives more heavily and reward_low_FP_rate penalizes false positives more heavily. For example, if the NYC Taxi & Limousine Commission considers it worse to have too few taxis during a big event than to have too many, it may want to score with the reward_low_FN_rate profile. For the purposes of this demo we will only display results for the standard profile. End of explanation
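To make the score normalization described above concrete, here is a minimal, self-contained sketch of the same formula. The numbers are made up for illustration and are not output from the pipeline; only the arithmetic mirrors normalize_scores.

# Minimal sketch of NAB-style score normalization (illustrative numbers only).
def normalize(raw_score, null_score, perfect_score):
    # Map a raw score onto a 0-100 scale where the null detector lands at 0
    # and a perfect detector lands at 100.
    return 100.0 * (raw_score - null_score) / (perfect_score - null_score)

# Hypothetical values: a null-detector baseline, a perfect-detector ceiling,
# and one detector's raw score.
null_score, perfect_score, raw_score = -11.0, 1.0, 0.35
print(normalize(raw_score, null_score, perfect_score))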
12,609
Given the following text description, write Python code to implement the functionality described below step by step Description: Aufgaben zur personal-Datenbank Step1: Welcher Mitarbeiter steht in einer alphabetisch sortierten Liste an letzter Stelle? Es sollen die Mitarbeiternummer, der Nachname und der Vorname angezeigt werden. Step2: Welche Mitarbeiter haben an welchen Projekten welcher Kunden wieviele Stunden gearbeitet? Zeigen sie Mitarbeiternummer (Sortierkriterium), Nachname und Vorname des Mitarbeiters, Projektnummer, Projektname, Stunden und Firma an Step3: Die Unternehmensleitung möchte die Summe der Monatsgehälter für jede Abteilung wissen. Die Spalten Abteilungsnummer, Abteilungsname und Summe der Monatsgehälter sollen angezeigt werden. Step4: Es soll das durchschnittliche Alter aller Mitarbeiter, das Alter des ältesten Mitarbeiters und Alter des jüngsten Mitarbeiters ermittelt werden. Es genügt ein Näherungswert in Jahren Step5: Wie viele Mitarbeiter arbeiten in der Abteilung 3? Step6: Alle Mitarbeiter in der Abteilung 4, die mehr verdienen als der Mitarbeiter der Abteilung 5 mit dem höchsten Monatsgehalt, sind mit der Mitarbeiternummer, dem Nachnamen und dem Monatsgehalt anzuzeigen. Step7: Welche Mitarbeiter haben ein kleineres Monatsgehalt als das durchschnittliche Monatsgehalt aller Mitarbeiter? Nachname, Vorname und Monatsgehalt dieser Mitarbeiter sollen angezeigt werden. Step8: In welchen Abteilungen arbeiten mehr als vier Mitarbeiter? Die Abteilungsnummer und die Anzahl der Mitarbeiter sollen angezeigt werden. Step9: Sie wollen wissen, wieviel Kosten (Stunden * Stundensatz) bisher für das Projekt PKR aufgelaufen sind.
Python Code: %load_ext sql %sql mysql://steinam:steinam@localhost/personal Explanation: Aufgaben zur personal-Datenbank End of explanation %%sql select MNr, MName, MVorname from Mitarbeiter order by MName desc limit 1; Explanation: Welcher Mitarbeiter steht in einer alphabetisch sortierten Liste an letzter Stelle? Es sollen die Mitarbeiternummer, der Nachname und der Vorname angezeigt werden. End of explanation %%sql select Mitarbeiter.MNr, MName, Stunden, Projektname, Firma from Mitarbeiter inner join Projektbearbeitung on MItarbeiter.MNr = Projektbearbeitung.MNr inner join Projekte on Projektbearbeitung.ProjNr = Projekte.ProjektNr inner join Kunden on Kunden.KundenNr = Projekte.KundenCode Explanation: Welche Mitarbeiter haben an welchen Projekten welcher Kunden wieviele Stunden gearbeitet? Zeigen sie Mitarbeiternummer (Sortierkriterium), Nachname und Vorname des Mitarbeiters, Projektnummer, Projektname, Stunden und Firma an End of explanation %%sql select Abteilung.AbtName, sum(Monatsgehalt) from Abteilung inner join Mitarbeiter on Mitarbeiter.AbtNr = Abteilung.AbtNr inner join Gehalt on Mitarbeiter.MNr = Gehalt.MNr group by Abteilung.Abtname Explanation: Die Unternehmensleitung möchte die Summe der Monatsgehälter für jede Abteilung wissen. Die Spalten Abteilungsnummer, Abteilungsname und Summe der Monatsgehälter sollen angezeigt werden. End of explanation %%sql select avg((year(now()) - year(MGeburtsdatum))) from Mitarbeiter Explanation: Es soll das durchschnittliche Alter aller Mitarbeiter, das Alter des ältesten Mitarbeiters und Alter des jüngsten Mitarbeiters ermittelt werden. Es genügt ein Näherungswert in Jahren End of explanation %%sql /* geht select count(MNr) from Mitarbeiter where Mitarbeiter.AbtNr = 3 */ /* gehjt auch, weil wir wegen dem where vor dem Groupen nur noch Datensätze haben, die die AbtNr 3 haben; dann kann das group by auch wegbleiben select Abteilung.AbtNr, Abteilung.AbtName, count(Mitarbeiter.MNr) from Abteilung inner join Mitarbeiter on Abteilung.AbtNr = Mitarbeiter.AbtNr where Abteilung.AbtNr = 3 */ Explanation: Wie viele Mitarbeiter arbeiten in der Abteilung 3? End of explanation %%sql select Mitarbeiter.MName, Monatsgehalt from Mitarbeiter inner join Gehalt on Mitarbeiter.MNr = Gehalt.MNr where Mitarbeiter.AbtNr = 4 and MOnatsgehalt > ( select max(Monatsgehalt) from Gehalt inner join Mitarbeiter on Mitarbeiter.MNr = Gehalt.MNr where Mitarbeiter.AbtNr = 5 ) Explanation: Alle Mitarbeiter in der Abteilung 4, die mehr verdienen als der Mitarbeiter der Abteilung 5 mit dem höchsten Monatsgehalt, sind mit der Mitarbeiternummer, dem Nachnamen und dem Monatsgehalt anzuzeigen. End of explanation %%sql select Mitarbeiter.MName, Monatsgehalt from Mitarbeiter inner join Gehalt on Mitarbeiter.MNr = Gehalt.MNr where Monatsgehalt < ( select avg(Monatsgehalt) from Gehalt ) Explanation: Welche Mitarbeiter haben ein kleineres Monatsgehalt als das durchschnittliche Monatsgehalt aller Mitarbeiter? Nachname, Vorname und Monatsgehalt dieser Mitarbeiter sollen angezeigt werden. End of explanation %%sql select AbtNr, count(*) as Anzahl from Mitarbeiter M group by AbtNr having Anzahl > 4 Explanation: In welchen Abteilungen arbeiten mehr als vier Mitarbeiter? Die Abteilungsnummer und die Anzahl der Mitarbeiter sollen angezeigt werden. 
End of explanation %%sql select sum(Stundensatz * Stunden), ProjNr from Projektbearbeitung inner join Mitarbeiter on Projektbearbeitung.MNr = Mitarbeiter.MNr inner join Stundensatz on Mitarbeiter.StundensatzNr = Stundensatz.StundensatzNr where ProjNr = 'A1' Explanation: Sie wollen wissen, wieviel Kosten (Stunden * Stundensatz) bisher für das Projekt PKR aufgelaufen sind. End of explanation
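The department-size query above hard-codes the threshold of four employees. As a rough sketch of how the same GROUP BY / HAVING pattern can be parameterized, the snippet below builds a tiny in-memory stand-in for the Mitarbeiter table with sqlite3; the rows are made up purely for illustration and do not come from the course's personal database.

import sqlite3

# Toy in-memory stand-in for Mitarbeiter (made-up rows), used only to show
# the GROUP BY / HAVING pattern with a bound threshold parameter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Mitarbeiter (MNr INTEGER, AbtNr INTEGER)")
conn.executemany(
    "INSERT INTO Mitarbeiter VALUES (?, ?)",
    [(1, 3), (2, 3), (3, 3), (4, 3), (5, 3), (6, 4), (7, 4)],
)

min_size = 4  # threshold passed as a bind parameter instead of spliced into the SQL
rows = conn.execute(
    "SELECT AbtNr, COUNT(*) AS Anzahl FROM Mitarbeiter "
    "GROUP BY AbtNr HAVING COUNT(*) > ?",
    (min_size,),
).fetchall()
print(rows)  # -> [(3, 5)] for the toy rows above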
12,610
Given the following text description, write Python code to implement the functionality described below step by step Description: Exporting data from BigQuery to Google Cloud Storage In this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data. Step1: Please ignore any incompatibility warnings and errors. Restart the kernel to use updated packages. (On the Notebook menu, select Kernel > Restart Kernel > Restart). Step2: Change the following cell as necessary Step3: Create BigQuery tables If you haven not already created a BigQuery dataset for our data, run the following cell Step4: Let's create a table with 1 million examples. Note that the order of columns is exactly what was in our CSV files. Step5: Make the validation dataset be 1/10 the size of the training dataset. Step6: Export the tables as CSV files
Python Code: !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst %pip install google-cloud-bigquery==1.25.0 Explanation: Exporting data from BigQuery to Google Cloud Storage In this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data. End of explanation # Importing necessary tensorflow library and printing the TF version. import tensorflow as tf print("Tensorflow version: ",tf.__version__) import os from google.cloud import bigquery Explanation: Please ignore any incompatibility warnings and errors. Restart the kernel to use updated packages. (On the Notebook menu, select Kernel > Restart Kernel > Restart). End of explanation # Change with your own bucket and project below: BUCKET = "<BUCKET>" PROJECT = "<PROJECT>" OUTDIR = "gs://{bucket}/taxifare/data".format(bucket=BUCKET) os.environ['BUCKET'] = BUCKET os.environ['OUTDIR'] = OUTDIR os.environ['PROJECT'] = PROJECT Explanation: Change the following cell as necessary: End of explanation bq = bigquery.Client(project = PROJECT) dataset = bigquery.Dataset(bq.dataset("taxifare")) try: bq.create_dataset(dataset) print("Dataset created") except: print("Dataset already exists") Explanation: Create BigQuery tables If you haven not already created a BigQuery dataset for our data, run the following cell: End of explanation %%bigquery CREATE OR REPLACE TABLE taxifare.feateng_training_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'unused' AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 Explanation: Let's create a table with 1 million examples. Note that the order of columns is exactly what was in our CSV files. End of explanation %%bigquery CREATE OR REPLACE TABLE taxifare.feateng_valid_data AS SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'unused' AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 Explanation: Make the validation dataset be 1/10 the size of the training dataset. 
End of explanation %%bash echo "Deleting current contents of $OUTDIR" gsutil -m -q rm -rf $OUTDIR echo "Extracting training data to $OUTDIR" bq --location=US extract \ --destination_format CSV \ --field_delimiter "," --noprint_header \ taxifare.feateng_training_data \ $OUTDIR/taxi-train-*.csv echo "Extracting validation data to $OUTDIR" bq --location=US extract \ --destination_format CSV \ --field_delimiter "," --noprint_header \ taxifare.feateng_valid_data \ $OUTDIR/taxi-valid-*.csv gsutil ls -l $OUTDIR !gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2 Explanation: Export the tables as CSV files End of explanation
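The %%bash cell above drives the export with the bq CLI. The same export can also be issued from Python through the google-cloud-bigquery client installed at the top of this notebook; the sketch below is one way to do that for the training table only, and it assumes the PROJECT and OUTDIR variables defined earlier are still in scope.

from google.cloud import bigquery

# Sketch: programmatic equivalent of `bq extract` for the training table.
client = bigquery.Client(project=PROJECT)

job_config = bigquery.ExtractJobConfig()
job_config.destination_format = bigquery.DestinationFormat.CSV
job_config.print_header = False  # matches --noprint_header in the CLI call

extract_job = client.extract_table(
    "{}.taxifare.feateng_training_data".format(PROJECT),
    OUTDIR + "/taxi-train-*.csv",  # wildcard shards the output like the CLI call
    job_config=job_config,
    location="US",
)
extract_job.result()  # block until the extract job finishes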
12,611
Given the following text description, write Python code to implement the functionality described below step by step Description: Benchmarking MLDB This notebook contains the code to run "The Absolute Minimum Benchmark" for a machine learning tool. First we load the Python MLDB helper library Step1: Next we create the datasets directly from the remote files. Step4: Now we create the experimental setup. Step5: Finally, we run the experiment inside a timing block. On an otherwise-unloaded AWS EC2 r3.8xlarge instance (32 cores, 240GB of RAM) it takes around 20 seconds to reach an AUC of more than 0.74.
Python Code: from pymldb import Connection mldb = Connection("http://localhost/") Explanation: Benchmarking MLDB This notebook contains the code to run "The Absolute Minimum Benchmark" for a machine learning tool. First we load the Python MLDB helper library End of explanation mldb.put('/v1/procedures/import_bench_train_1m', { "type": "import.text", "params": { "dataFileUrl": "https://s3.amazonaws.com/benchm-ml--main/train-1m.csv", "outputDataset":"bench_train_1m", "runOnCreation": True } }) mldb.put('/v1/procedures/import_bench_test', { "type": "import.text", "params": { "dataFileUrl": "https://s3.amazonaws.com/benchm-ml--main/test.csv", "outputDataset":"bench_test", "runOnCreation": True } }) print "Datasets loaded." Explanation: Next we create the datasets directly from the remote files. End of explanation mldb.put('/v1/procedures/benchmark', { "type": "classifier.experiment", "params": { "experimentName": "benchm_ml", "inputData": select {* EXCLUDING(dep_delayed_15min)} as features, dep_delayed_15min = 'Y' as label from bench_train_1m , "testingDataOverride": select {* EXCLUDING(dep_delayed_15min)} as features, dep_delayed_15min = 'Y' as label from bench_test , "configuration": { "type": "bagging", "num_bags": 100, "validation_split": 0, "weak_learner": { "type": "decision_tree", "max_depth": 20, "random_feature_propn": 0.3 } }, "modelFileUrlPattern": "file:///mldb_data/models/benchml_$runid.cls", "mode": "boolean" } }) print "Ready to go!" Explanation: Now we create the experimental setup. End of explanation import time start_time = time.time() result = mldb.post('/v1/procedures/benchmark/runs') run_time = time.time() - start_time auc = result.json()["status"]["folds"][0]["resultsTest"]["auc"] print "\n\nAUC = %0.10f, time = %0.4f\n\n" % (auc, run_time) Explanation: Finally, we run the experiment inside a timing block. On an otherwise-unloaded AWS EC2 r3.8xlarge instance (32 cores, 240GB of RAM) it takes around 20 seconds to reach an AUC of more than 0.74. End of explanation
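If the experiment needs to be repeated, for example to average out run-to-run noise on a busy machine, the timing and AUC extraction above can be wrapped in a small helper. This is only a sketch: it assumes the same /v1/procedures/benchmark/runs endpoint and the same result JSON layout used above, and the n_runs parameter is a made-up knob for illustration.

import time

def run_benchmark(connection, n_runs=3):
    # Trigger the benchmark procedure repeatedly and collect (auc, seconds) pairs,
    # reading the AUC from the same JSON path as the single-run cell above.
    results = []
    for _ in range(n_runs):
        start = time.time()
        response = connection.post('/v1/procedures/benchmark/runs')
        elapsed = time.time() - start
        auc = response.json()["status"]["folds"][0]["resultsTest"]["auc"]
        results.append((auc, elapsed))
    return results

# Usage sketch, reusing the `mldb` connection created earlier:
# for auc, seconds in run_benchmark(mldb):
#     print("AUC %.4f in %.1f s" % (auc, seconds))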
12,612
Given the following text description, write Python code to implement the functionality described below step by step Description: Plot point-spread functions (PSFs) and cross-talk functions (CTFs) Visualise PSF and CTF at one vertex for sLORETA. Step1: Visualize PSF Step2: CTF
Python Code: # Authors: Olaf Hauk <[email protected]> # Alexandre Gramfort <[email protected]> # # License: BSD-3-Clause import mne from mne.datasets import sample from mne.minimum_norm import (make_inverse_resolution_matrix, get_cross_talk, get_point_spread) print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects/' fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif' fname_evo = data_path + '/MEG/sample/sample_audvis-ave.fif' # read forward solution forward = mne.read_forward_solution(fname_fwd) # forward operator with fixed source orientations mne.convert_forward_solution(forward, surf_ori=True, force_fixed=True, copy=False) # noise covariance matrix noise_cov = mne.read_cov(fname_cov) # evoked data for info evoked = mne.read_evokeds(fname_evo, 0) # make inverse operator from forward solution # free source orientation inverse_operator = mne.minimum_norm.make_inverse_operator( info=evoked.info, forward=forward, noise_cov=noise_cov, loose=0., depth=None) # regularisation parameter snr = 3.0 lambda2 = 1.0 / snr ** 2 method = 'MNE' # can be 'MNE' or 'sLORETA' # compute resolution matrix for sLORETA rm_lor = make_inverse_resolution_matrix(forward, inverse_operator, method='sLORETA', lambda2=lambda2) # get PSF and CTF for sLORETA at one vertex sources = [1000] stc_psf = get_point_spread(rm_lor, forward['src'], sources, norm=True) stc_ctf = get_cross_talk(rm_lor, forward['src'], sources, norm=True) del rm_lor Explanation: Plot point-spread functions (PSFs) and cross-talk functions (CTFs) Visualise PSF and CTF at one vertex for sLORETA. End of explanation # Which vertex corresponds to selected source vertno_lh = forward['src'][0]['vertno'] verttrue = [vertno_lh[sources[0]]] # just one vertex # find vertices with maxima in PSF and CTF vert_max_psf = vertno_lh[stc_psf.data.argmax()] vert_max_ctf = vertno_lh[stc_ctf.data.argmax()] brain_psf = stc_psf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir) brain_psf.show_view('ventral') brain_psf.add_text(0.1, 0.9, 'sLORETA PSF', 'title', font_size=16) # True source location for PSF brain_psf.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh', color='green') # Maximum of PSF brain_psf.add_foci(vert_max_psf, coords_as_verts=True, scale_factor=1., hemi='lh', color='black') Explanation: Visualize PSF: End of explanation brain_ctf = stc_ctf.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir) brain_ctf.add_text(0.1, 0.9, 'sLORETA CTF', 'title', font_size=16) brain_ctf.show_view('ventral') brain_ctf.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh', color='green') # Maximum of CTF brain_ctf.add_foci(vert_max_ctf, coords_as_verts=True, scale_factor=1., hemi='lh', color='black') Explanation: CTF: End of explanation
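One simple way to quantify what the green and black markers show is the peak localization error, i.e. the distance between the true source vertex and the vertex where the PSF or CTF peaks. The sketch below assumes the variables from the code above (forward, verttrue, vert_max_psf, vert_max_ctf) are still in scope and reads the left-hemisphere source positions, in meters, from the forward solution.

import numpy as np

# Left-hemisphere source-space coordinates; rows are indexed by vertex number.
rr_lh = forward['src'][0]['rr']

def peak_error_mm(vert_a, vert_b):
    # Euclidean distance between two source-space vertices, in millimeters.
    return 1e3 * np.linalg.norm(rr_lh[vert_a] - rr_lh[vert_b])

print('PSF peak localization error: %.1f mm' % peak_error_mm(verttrue[0], vert_max_psf))
print('CTF peak localization error: %.1f mm' % peak_error_mm(verttrue[0], vert_max_ctf))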
12,613
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction This tutorial gives a short overview of the AD-module included in PorePy. For an example where the AD module has been used to solve non-linear compressible flow, see the tutorial Step1: Scalar AD-variables We initiate a variable $x = 2$ by giving a pair (val, jac) to the Ad_array class. val is the value at which the function will be evaluated and jac =1 since $\frac{d x}{dx} = 1$. Step2: We can now define a function $y=x^2 + 3$ Step3: To obtain the function value and the derivative we can call .val and .jac Step4: $y$ is also an AD variable as a function of $x$. We can use it to declare further functions, e.g., $h(x) = e^{y(x)}$. To take the exponential of an Ad_array we need to call the exponential function found in the AD module Step5: If we knew the value and jacobian of $y$ we could alternatively skip initiating $x$ and initiate $y$ directly Step6: Arrays of AD-variables The Ad_array class also support arrays. Step7: As for the scalar case, it is straight forward to define functions using normal Python programming. Let us declare the function $$y = Ax + x^2$$ which has the jacobian $$ J(y) = A + 2 \text{diag}(x)$$ With this notation we mean $x^2 = [x_1^2, x_2^2, x_3^2]$, and $\text{diag}(x)$ is a matrix with $x$ on the diagonal and zeros elsewhere.
Python Code: import numpy as np import scipy.sparse as sps from porepy.numerics.ad.forward_mode import Ad_array import porepy.numerics.ad.functions as af Explanation: Introduction This tutorial gives a short overview of the AD-module included in PorePy. For an example where the AD module has been used to solve non-linear compressible flow, see the tutorial: "compressible_flow_with_automatic_differentiation" End of explanation x = Ad_array(2, 1) Explanation: Scalar AD-variables We initiate a variable $x = 2$ by giving a pair (val, jac) to the Ad_array class. val is the value at which the function will be evaluated and jac =1 since $\frac{d x}{dx} = 1$. End of explanation y = x**2 + 3 Explanation: We can now define a function $y=x^2 + 3$ End of explanation print('y value is: ', y.val) print('dy/dx is: ', y.jac) Explanation: To obtain the function value and the derivative we can call .val and .jac End of explanation h = af.exp(y) print('h value is: ', h.val) print('dh/dx is: ', h.jac) Explanation: $y$ is also an AD variable as a function of $x$. We can use it to declare further functions, e.g., $h(x) = e^{y(x)}$. To take the exponential of an Ad_array we need to call the exponential function found in the AD module End of explanation y = Ad_array(7, 4) h = af.exp(y) print('h value is: ', h.val) print('dh/dx is: ', h.jac) Explanation: If we knew the value and jacobian of $y$ we could alternatively skip initiating $x$ and initiate $y$ directly: End of explanation x = Ad_array(np.array([1,2,3]), sps.diags([1,1,1])) Explanation: Arrays of AD-variables The Ad_array class also support arrays. End of explanation A = sps.csc_matrix(np.array([[0,2,3],[4,0,6],[7,8,0]])) y = A*x + x**2 print('Analytic y value: ') print(np.array([14, 26, 32])) print('Analytic y jacobian:') print(np.array([[2,2,3],[4,4,6],[7,8,6]]),'\n') print('Ad y value: ') print(y.val) print('Ad y jacobian:') print(y.jac.A) Explanation: As for the scalar case, it is straight forward to define functions using normal Python programming. Let us declare the function $$y = Ax + x^2$$ which has the jacobian $$ J(y) = A + 2 \text{diag}(x)$$ With this notation we mean $x^2 = [x_1^2, x_2^2, x_3^2]$, and $\text{diag}(x)$ is a matrix with $x$ on the diagonal and zeros elsewhere. End of explanation
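A quick way to sanity-check a forward-mode Jacobian like the one above is to compare it with a finite-difference approximation. The sketch below reuses the same matrix A and the same function y = Ax + x^2 and needs only numpy; the step size eps is an arbitrary illustrative choice.

import numpy as np

# Central-difference check of the Jacobian of f(x) = A x + x^2 at x = [1, 2, 3].
A = np.array([[0, 2, 3], [4, 0, 6], [7, 8, 0]])
f = lambda v: A.dot(v) + v**2

x0 = np.array([1.0, 2.0, 3.0])
eps = 1e-6
J_fd = np.column_stack([
    (f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)  # one column per coordinate
    for e in np.eye(3)
])

J_exact = A + 2 * np.diag(x0)      # analytic Jacobian from the text
print(np.allclose(J_fd, J_exact))  # expected: True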
12,614
Given the following text description, write Python code to implement the functionality described below step by step Description: Markov decision processes (MDPs) This IPy notebook acts as supporting material for topics covered in Chapter 17 Making Complex Decisions of the book Artificial Intelligence Step1: CONTENTS Overview MDP Grid MDP Value Iteration Visualization OVERVIEW Before we start playing with the actual implementations let us review a couple of things about MDPs. A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it. -- Source Step2: The _init _ method takes in the following parameters Step3: Finally we instantize the class with the parameters for our MDP in the picture. Step4: With this we have sucessfully represented our MDP. Later we will look at ways to solve this MDP. GRID MDP Now we look at a concrete implementation that makes use of the MDP as base class. The GridMDP class in the mdp module is used to represent a grid world MDP like the one shown in in Fig 17.1 of the AIMA Book. The code should be easy to understand if you have gone through the CustomMDP example. Step5: The _init _ method takes grid as an extra parameter compared to the MDP class. The grid is a nested list of rewards in states. go method returns the state by going in particular direction by using vector_add. T method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to list of possible state by taking action a in state s. actions method returns list of actions possible in each state. By default it returns all actions for states other than terminal states. to_arrows are used for representing the policy in a grid like format. We can create a GridMDP like the one in Fig 17.1 as follows Step6: Value Iteration Now that we have looked how to represent MDPs. Let's aim at solving them. Our ultimate goal is to obtain an optimal policy. We start with looking at Value Iteration and a visualisation that should help us understanding it better. We start by calculating Value/Utility for each of the states. The Value of each state is the expected sum of discounted future rewards given we start in that state and follow a particular policy pi.The algorithm Value Iteration (Fig. 17.4 in the book) relies on finding solutions of the Bellman's Equation. The intuition Value Iteration works is because values propagate. This point will we more clear after we encounter the visualisation. For more information you can refer to Section 17.2 of the book. Step7: It takes as inputs two parameters, an MDP to solve and epsilon the maximum error allowed in the utility of any state. It returns a dictionary containing utilities where the keys are the states and values represent utilities. Let us solve the sequencial_decision_enviornment GridMDP. Step8: The pseudocode for the algorithm Step9: VALUE ITERATION VISUALIZATION To illustrate that values propagate out of states let us create a simple visualisation. We will be using a modified version of the value_iteration function which will store U over time. We will also remove the parameter epsilon and instead add the number of iterations we want. Step10: Next, we define a function to create the visualisation from the utilities returned by value_iteration_instru. 
The reader need not concern himself with the code that immediately follows, as it is just the usage of Matplotlib with IPython Widgets. If you are interested in reading more about these, visit ipywidgets.readthedocs.io
Python Code: from mdp import * from notebook import psource, pseudocode Explanation: Markov decision processes (MDPs) This IPy notebook acts as supporting material for topics covered in Chapter 17 Making Complex Decisions of the book Artificial Intelligence: A Modern Approach. We makes use of the implementations in mdp.py module. This notebook also includes a brief summary of the main topics as a review. Let us import everything from the mdp module to get started. End of explanation %psource MDP Explanation: CONTENTS Overview MDP Grid MDP Value Iteration Visualization OVERVIEW Before we start playing with the actual implementations let us review a couple of things about MDPs. A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it. -- Source: Wikipedia Often it is possible to model many different phenomena as a Markov process by being flexible with our definition of state. MDPs help us deal with fully-observable and non-deterministic/stochastic environments. For dealing with partially-observable and stochastic cases we make use of generalization of MDPs named POMDPs (partially observable Markov decision process). Our overall goal to solve a MDP is to come up with a policy which guides us to select the best action in each state so as to maximize the expected sum of future rewards. MDP To begin with let us look at the implementation of MDP class defined in mdp.py The docstring tells us what all is required to define a MDP namely - set of states,actions, initial state, transition model, and a reward function. Each of these are implemented as methods. Do not close the popup so that you can follow along the description of code below. End of explanation # Transition Matrix as nested dict. State -> Actions in state -> States by each action -> Probabilty t = { "A": { "X": {"A":0.3, "B":0.7}, "Y": {"A":1.0} }, "B": { "X": {"End":0.8, "B":0.2}, "Y": {"A":1.0} }, "End": {} } init = "A" terminals = ["End"] rewards = { "A": 5, "B": -10, "End": 100 } class CustomMDP(MDP): def __init__(self, transition_matrix, rewards, terminals, init, gamma=.9): # All possible actions. actlist = [] for state in transition_matrix.keys(): actlist.extend(transition_matrix.keys()) actlist = list(set(actlist)) MDP.__init__(self, init, actlist, terminals=terminals, gamma=gamma) self.t = transition_matrix self.reward = rewards for state in self.t: self.states.add(state) def T(self, state, action): return [(new_state, prob) for new_state, prob in self.t[state][action].items()] Explanation: The _init _ method takes in the following parameters: init: the initial state. actlist: List of actions possible in each state. terminals: List of terminal states where only possible action is exit gamma: Discounting factor. This makes sure that delayed rewards have less value compared to immediate ones. R method returns the reward for each state by using the self.reward dict. T method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to list of possible state by taking action a in state s. actions method returns list of actions possible in each state. By default it returns all actions for states other than terminal states. Now let us implement the simple MDP in the image below. States A, B have actions X, Y available in them. Their probabilities are shown just above the arrows. 
We start with using MDP as base class for our CustomMDP. Obviously we need to make a few changes to suit our case. We make use of a transition matrix as our transitions are not very simple. <img src="files/images/mdp-a.png"> End of explanation our_mdp = CustomMDP(t, rewards, terminals, init, gamma=.9) Explanation: Finally we instantize the class with the parameters for our MDP in the picture. End of explanation %psource GridMDP Explanation: With this we have sucessfully represented our MDP. Later we will look at ways to solve this MDP. GRID MDP Now we look at a concrete implementation that makes use of the MDP as base class. The GridMDP class in the mdp module is used to represent a grid world MDP like the one shown in in Fig 17.1 of the AIMA Book. The code should be easy to understand if you have gone through the CustomMDP example. End of explanation sequential_decision_environment Explanation: The _init _ method takes grid as an extra parameter compared to the MDP class. The grid is a nested list of rewards in states. go method returns the state by going in particular direction by using vector_add. T method is not implemented and is somewhat different from the text. Here we return (probability, s') pairs where s' belongs to list of possible state by taking action a in state s. actions method returns list of actions possible in each state. By default it returns all actions for states other than terminal states. to_arrows are used for representing the policy in a grid like format. We can create a GridMDP like the one in Fig 17.1 as follows: GridMDP([[-0.04, -0.04, -0.04, +1], [-0.04, None, -0.04, -1], [-0.04, -0.04, -0.04, -0.04]], terminals=[(3, 2), (3, 1)]) In fact the sequential_decision_environment in mdp module has been instantized using the exact same code. End of explanation psource(value_iteration) Explanation: Value Iteration Now that we have looked how to represent MDPs. Let's aim at solving them. Our ultimate goal is to obtain an optimal policy. We start with looking at Value Iteration and a visualisation that should help us understanding it better. We start by calculating Value/Utility for each of the states. The Value of each state is the expected sum of discounted future rewards given we start in that state and follow a particular policy pi.The algorithm Value Iteration (Fig. 17.4 in the book) relies on finding solutions of the Bellman's Equation. The intuition Value Iteration works is because values propagate. This point will we more clear after we encounter the visualisation. For more information you can refer to Section 17.2 of the book. End of explanation value_iteration(sequential_decision_environment) Explanation: It takes as inputs two parameters, an MDP to solve and epsilon the maximum error allowed in the utility of any state. It returns a dictionary containing utilities where the keys are the states and values represent utilities. Let us solve the sequencial_decision_enviornment GridMDP. End of explanation pseudocode("Value-Iteration") Explanation: The pseudocode for the algorithm: End of explanation def value_iteration_instru(mdp, iterations=20): U_over_time = [] U1 = {s: 0 for s in mdp.states} R, T, gamma = mdp.R, mdp.T, mdp.gamma for _ in range(iterations): U = U1.copy() for s in mdp.states: U1[s] = R(s) + gamma * max([sum([p * U[s1] for (p, s1) in T(s, a)]) for a in mdp.actions(s)]) U_over_time.append(U) return U_over_time Explanation: VALUE ITERATION VISUALIZATION To illustrate that values propagate out of states let us create a simple visualisation. 
We will be using a modified version of the value_iteration function which will store U over time. We will also remove the parameter epsilon and instead add the number of iterations we want. End of explanation columns = 4 rows = 3 U_over_time = value_iteration_instru(sequential_decision_environment) %matplotlib inline from notebook import make_plot_grid_step_function plot_grid_step = make_plot_grid_step_function(columns, rows, U_over_time) import ipywidgets as widgets from IPython.display import display from notebook import make_visualize iteration_slider = widgets.IntSlider(min=1, max=15, step=1, value=0) w=widgets.interactive(plot_grid_step,iteration=iteration_slider) display(w) visualize_callback = make_visualize(iteration_slider) visualize_button = widgets.ToggleButton(desctiption = "Visualize", value = False) time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0']) a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select) display(a) Explanation: Next, we define a function to create the visualisation from the utilities returned by value_iteration_instru. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit ipywidgets.readthedocs.io End of explanation
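Once value iteration has converged, the utilities are usually turned into a policy by acting greedily with respect to them. The following is a minimal sketch of that extraction step written against the same MDP interface used above (T, actions and the utilities dictionary U); it is not the library's own implementation, just one way to spell it out.

def expected_utility(a, s, U, mdp):
    # Expected utility of taking action a in state s under utilities U.
    return sum(p * U[s1] for (p, s1) in mdp.T(s, a))

def greedy_policy(mdp, U):
    # Map every state to the action with the highest expected utility.
    return {s: max(mdp.actions(s), key=lambda a: expected_utility(a, s, U, mdp))
            for s in mdp.states}

# Usage sketch with the grid world solved earlier:
# U = value_iteration(sequential_decision_environment)
# pi = greedy_policy(sequential_decision_environment, U)
# print(sequential_decision_environment.to_arrows(pi))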
12,615
Given the following text description, write Python code to implement the functionality described below step by step Description: interp-acf demo Generate time series fluxes with two oscillation periods, and missing data Step1: Now we'll use two interpacf methods on these simulated fluxes Step2: Comparing with McQuillan, Aigrain & Mazeh (2013) ...for my favorite star, HAT-P-11. McQuillan et al. find a rotation period of 29.472 d. What do we find? This example makes use of the kplr package to download Kepler data. You'll need to install it to run this example, which you can do with Step3: Now measure the peak in the autocorrelation function for each quarter's light curve Step4: Compare with McQuillan+ 2013
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np # Make flux time-series with random noise, and # two periodic oscillations, one 70% the amplitude # of the other: np.random.seed(42) n_points = 1000 primary_period = 2.5*np.pi secondary_period = 1.3*np.pi all_times = np.linspace(0, 6*np.pi, n_points) all_fluxes = 10 + (0.1*np.random.randn(len(all_times)) + np.sin(2*np.pi/primary_period * all_times) + 0.7*np.cos(2*np.pi/secondary_period * (all_times - 2.5))) # Remove some fluxes, times from those data: n_points_missing = 200 # This number is approximate missing_indices = np.unique(np.random.randint(0, n_points, size=n_points_missing)) mask = list(set(np.arange(len(all_times))).difference(set(missing_indices))) times_incomplete = all_times[mask] fluxes_incomplete = all_fluxes[mask] # Plot these fluxes before and after data are removed: fig, ax = plt.subplots(1, 2, figsize=(14, 5)) ax[0].plot(all_times, all_fluxes, '.') ax[0].set(title='All fluxes (N={0})'.format(len(all_fluxes))) ax[1].plot(times_incomplete, fluxes_incomplete, '.') ax[1].set(title='With fluxes missing (N={0})'.format(len(fluxes_incomplete))) plt.show() Explanation: interp-acf demo Generate time series fluxes with two oscillation periods, and missing data: End of explanation from interpacf import interpolated_acf, dominant_period # Need zero-mean fluxes: fluxes_incomplete -= np.mean(fluxes_incomplete) # Compute autocorrelation function lag, acf = interpolated_acf(times_incomplete, fluxes_incomplete) # Find dominant period in autocorrelation function detected_period = dominant_period(lag, acf, plot=True) print("Actual dominant period: {0:.3f}\nDetected dominant period: " "{1:.3f}\nDifference: {2:.3f}%" .format(primary_period, detected_period, (primary_period - detected_period)/primary_period)) Explanation: Now we'll use two interpacf methods on these simulated fluxes: interpacf.interpolated_acf will interpolate over the missing fluxes and compute the autocorrelation function. Don't forget to subtract the flux its mean! interpacf.dominant_period returns the lag with the highest peak in the smoothed autocorrelation function. The default smoothing kernel matches that of McQuillan, Aigrain & Mazeh (2013) End of explanation import numpy as np import kplr client = kplr.API() # Find the target KOI. koi = client.koi(3.01) # Get a list of light curve datasets. lcs = koi.get_light_curves(short_cadence=False) # Loop over the datasets and read in the data. time, flux, ferr, quality = [], [], [], [] for lc in lcs[1:]: with lc.open() as f: # The lightcurve data are in the first FITS HDU. hdu_data = f[1].data time.append(hdu_data["time"]) flux.append(hdu_data["sap_flux"]) ferr.append(hdu_data["sap_flux_err"]) quality.append(hdu_data["sap_quality"]) time = np.array(time) # Median normalize each quarter of observations flux = np.array([f/np.nanmedian(f) - 1 for f in flux]) Explanation: Comparing with McQuillan, Aigrain & Mazeh (2013) ...for my favorite star, HAT-P-11. McQuillan et al. find a rotation period of 29.472 d. What do we find? This example makes use of the kplr package to download Kepler data. 
You'll need to install it to run this example, which you can do with: pip install kplr First download and normalize each quarter of the HAT-P-11 Kepler light curve: End of explanation %matplotlib inline periods = [] for i, t, f in zip(range(len(time)), time, flux): lag, acf = interpolated_acf(t[~np.isnan(f)], f[~np.isnan(f)]) period = dominant_period(lag, acf) periods.append(period) print("HAT-P-11 period in Q{0}: {1} d".format(i, period)) Explanation: Now measure the peak in the autocorrelation function for each quarter's light curve: End of explanation print("Median period (interpacf): {0};\n" "Period McQuillan+ 2013: 29.472" .format(np.median(periods)) Explanation: Compare with McQuillan+ 2013: End of explanation
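Because the autocorrelation peak is measured once per quarter, the scatter between quarters gives a rough, informal uncertainty on the rotation period. The sketch below only summarizes the periods list computed above with numpy; the MAD-to-sigma factor 1.4826 is the usual scaling for roughly Gaussian scatter.

import numpy as np

# Spread of the per-quarter period measurements computed above.
periods_arr = np.asarray(periods, dtype=float)
median_period = np.median(periods_arr)
scatter = 1.4826 * np.median(np.abs(periods_arr - median_period))  # MAD scaled to ~1 sigma

print("Median period: {0:.3f} d (+/- {1:.3f} d scatter over {2} quarters)"
      .format(median_period, scatter, len(periods_arr)))
print("Difference from McQuillan+ 2013 (29.472 d): {0:.3f} d"
      .format(median_period - 29.472))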
12,616
Given the following text description, write Python code to implement the functionality described below step by step Description: a) Which thrillers were directed by Steven Spielberg? Step1: b) Who acted in at least 20 different films? Step2: c) List all shows of “Alice in Wonderland”. Step3: d) Who acted in his/her own movie? Step4: e) Which cinemas show films with Kate Winslet? Step5: f) Which films have more than one director? Step6: g) Which films have not been presented in a cinema yet? Step7: h) Who hasn’t participated in a film yet? Step8: i) Who directed at least two different films in the same year? Step9: k) Are there persons having the same name (name and first name)? Step10: Question 2 What is the meaning of the following SQL queries over the film schema. Provid the corresponding realational algebra expressions. a) SELECT DISTINCT title FROM (film JOIN show ON ID = film) JOIN cinema ON cinema.ID = cinema WHERE name = ’Metropol’ b) SELECT DISTINCT person.name, person.firstname FROM film, person, cinema, participation, show WHERE film.ID = participation.film AND film.ID = show.film AND person.ID = person AND cinema.ID = cinema AND date = ’2016-11-16’ Sheet 5 Question 3
Python Code: cur.execute('''SELECT film.title FROM film, person, participation WHERE film.genre LIKE '%Thriller%' AND film.id = participation.film AND person.id = participation.person AND participation.function = "director" AND person.name= "Spielberg" AND person.firstname = "Steven" ''') for row in cur.fetchall(): print(row[0]) Explanation: a) Which thrillers were directed by Steven Spielberg? End of explanation cur.execute('''SELECT DISTINCT person.firstname, person.name FROM person, participation WHERE person.id = participation.person AND participation.person IN (SELECT participation.person FROM participation GROUP BY participation.person HAVING count(*)>20);''') cur.execute('''SELECT person.firstname, person.name FROM (person JOIN participation ON participation.person=person.ID) GROUP BY person.ID HAVING COUNT(participation.film) > 20;''') # Both statements work. for row in cur.fetchall(): print(row[0], row[1]) Explanation: b) Who acted in at least 20 different films? End of explanation cur.execute('''SELECT show.date, cinema.name, cinema.city FROM show, film, cinema WHERE show.film = film.id AND show.cinema = cinema.id AND film.title="Alice in Wonderland";''') for row in cur.fetchall(): print(row[0], row[1], row[2]) Explanation: c) List all shows of “Alice in Wonderland”. End of explanation cur.execute('''SELECT p.firstname, p.name, f.title FROM (((person p INNER JOIN participation par ON p.ID=par.person AND par.function="director") INNER JOIN film f ON par.film=f.ID) INNER JOIN participation par2 ON p.ID=par2.person AND par2.film=par.film AND par2.function="actor") ORDER BY p.name;''') ''' Some of the results seem to be counter intuitive as actors in animation movies or severall people acting and directing in the same movie. But this is due to co-directors and the voice cast of animation movies ''' for row in cur.fetchall()[:20]: print(row[0], row[1], row[2]) Explanation: d) Who acted in his/her own movie? End of explanation cur.execute('''SELECT DISTINCT c.name, c.city FROM (cinema c INNER JOIN show s ON c.ID=s.cinema) WHERE s.film IN (SELECT f.ID FROM (film f INNER JOIN participation par ON f.ID= par.film) WHERE par.person= (SELECT p.ID FROM person p WHERE p.name="Winslet" AND p.firstname="Kate")) ;''') cur.execute('''SELECT DISTINCT c.name, c.city FROM (((cinema c JOIN show s ON c.ID=s.cinema) JOIN participation par ON s.film = par.film) JOIN person p ON p.ID=person) WHERE p.name="Winslet" AND p.firstname="Kate" ;''') for row in cur.fetchall()[:20]: print(row) Explanation: e) Which cinemas show films with Kate Winslet? End of explanation cur.execute('''SELECT DISTINCT f.title FROM ((film f INNER JOIN participation par ON f.ID = par.film AND par.function='director') INNER JOIN participation par1 ON f.ID = par1.film AND par1.function='director' AND par.person IS NOT par1.person) ;''') cur.execute('''SELECT f.title FROM (film f JOIN participation par ON f.ID = par.film) WHERE par.function='director' GROUP BY ID HAVING COUNT(*) > 1 ORDER BY f.title asc ;''') for row in cur.fetchall()[:20]: print(*row) Explanation: f) Which films have more than one director? End of explanation cur.execute('''SELECT f.title FROM (film f JOIN show s ON f.ID=s.film) WHERE s.date > '2015-05-30' ;''') '''The dates have been assigned randomly between 1980-01-01 and 2016-01-01 during database creation. ''' for row in cur.fetchall()[:20]: print(*row) Explanation: g) Which films have not been presented in a cinema yet? 
End of explanation cur.execute('''SELECT p.firstname, p.name FROM person p EXCEPT SELECT p.firstname, p.name FROM (person p JOIN participation par ON p.ID=par.person) ORDER BY p.name, p.firstname ;''') # It seems that for severall actors no participation record was written for row in cur.fetchall()[:20]: print(*row) cur.execute('''SELECT p.firstname, p.name FROM person p WHERE p.ID NOT IN (SELECT par.person FROM participation par) ORDER BY p.name, p.firstname ;''') for row in cur.fetchall()[:20]: print(*row) Explanation: h) Who hasn’t participated in a film yet? End of explanation cur.execute('''SELECT p.firstname, p.name FROM person p WHERE p.ID IN ( SELECT x.person FROM ( (SELECT * FROM (film f JOIN participation par ON f.ID=par.film) WHERE par.function="director" ) as x JOIN (SELECT * FROM (film f1 JOIN participation par1 ON f1.ID=par1.film) WHERE par1.function="director" ) as y ON x.year=y.year AND x.person = y.person AND x.film <> y.film )) ;''') for row in cur.fetchall()[:100]: print(*row) cur.execute('''SELECT f.year, f.title FROM (( film f JOIN participation par ON f.ID=par.film) JOIN person p ON p.ID=par.person) WHERE p.name = "Donner" AND p.firstname = "Richard" AND par.function='director' Order By f.year ;''') # Just to see which movies where made in the same year for row in cur.fetchall()[:100]: print(*row) Explanation: i) Who directed at least two different films in the same year? End of explanation cur.execute('''SELECT DISTINCT p.firstname, p.name FROM person p JOIN person p1 ON p.name = p1.name AND p.firstname = p1.firstname AND p.ID <> p1.ID ORDER BY p.name, p.firstname ;''') # Just to see which movies where made in the same year for row in cur.fetchall()[:20]: print(*row) Explanation: k) Are there persons having the same name (name and first name)? End of explanation cur.execute('''SELECT DISTINCT p.firstname, p.name FROM (((person p JOIN participation par ON p.ID=par.person) JOIN film f ON par.film=f.ID) JOIN show s ON f.ID=s.film) WHERE s.date<"2017-01-01" ;''') # Just to see which movies where made in the same year for row in cur.fetchall()[:20]: print(*row) cur.execute('''SELECT DISTINCT p.firstname, p.name FROM person p WHERE EXISTS (SELECT par.person FROM (participation par JOIN show s ON s.film=par.film ) WHERE s.date<"2017-01-01") ;''') # Just to see which movies where made in the same year for row in cur.fetchall()[:20]: print(*row) Explanation: Question 2 What is the meaning of the following SQL queries over the film schema. Provid the corresponding realational algebra expressions. a) SELECT DISTINCT title FROM (film JOIN show ON ID = film) JOIN cinema ON cinema.ID = cinema WHERE name = ’Metropol’ b) SELECT DISTINCT person.name, person.firstname FROM film, person, cinema, participation, show WHERE film.ID = participation.film AND film.ID = show.film AND person.ID = person AND cinema.ID = cinema AND date = ’2016-11-16’ Sheet 5 Question 3 End of explanation
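For Question 2a, one possible reading is that the query returns the titles of all films that have been shown at the cinema named 'Metropol'. A way to check that reading is to restate the query with fully qualified join conditions and run it with the same cur cursor used throughout this sheet; the qualification below is an interpretation of the unqualified column names in the original statement, not a change to its meaning.

# Question 2a restated with qualified column names to make the joins explicit.
cur.execute('''SELECT DISTINCT film.title
               FROM (film JOIN show ON film.ID = show.film)
                    JOIN cinema ON cinema.ID = show.cinema
               WHERE cinema.name = 'Metropol' ''')
for (title,) in cur.fetchall():
    print(title)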
12,617
Given the following text description, write Python code to implement the functionality described below step by step Description: Lab 3 - Basic Artificial Neural Network In this lab we will build a very rudimentary Artificial Neural Network (ANN) and use it to solve some basic classification problems. This example is implemented with only basic math and linear algebra functions using Python's scientific computing library numpy. This will allow us to study how each aspect of the network works, and to gain an intuitive understanding of its functions. In future labs we will use higher-level libraries such as Keras and Tensorflow which automate and optimize most of these functions, making the network much faster and easier to use. The code and MNIST test data is taken directly from http Step9: Next, we will build the artificial neural network by defining a new class called Network. This class will contain all the data for our neural network, as well as all the methods we need to compute activations between each layer, and train the network through backpropagation and stochastic gradient descent (SGD). Step10: Finally, we define two helper functions which compute the sigmoid activation function and it's derivative which is used in backpropagation. Step11: Iris dataset example Now we will test our basic artificial neural network on a very simple classification problem. First we will use the seaborn data visualization library to load the 'iris' dataset, which consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor), with four features measuring the length and the width of each flower's sepals and petals. After we load the data we will vizualize it using a pairwise plot using a buit-in function in seaborn. A pairwise plot is a kind of exploratory data analysis that helps us to find relationships between pairs of features within a multi-dimensional data set. In this case, we can use it to understand which features might be most useful for determining the species of the flower. Step12: Next, we will prepare the data set for training in our ANN. Here is a list of operations we need to perform on the data set so that it will work with the Network class we created above Step13: MNIST dataset example Next, we will test our ANN on another, slightly more difficult classification problem. The data set we'll be using is called MNIST, which contains tens of thousands of scanned images of handwritten digits, classified according to the digit type from 0-9. The name MNIST comes from the fact that it is a Modified (M) version of a dataset originally developed by the United States' National Institute of Standards and Technology (NIST). This is a very popular dataset used to measure the effectiveness of Machine Learning models for image recongnition. This time we don't have to do as much data management since the data is already provided in the right format here. We will get into more details about working with images and proper data formats for image data in later labs, but you can already use this data to test the effectiveness of our network. With the default settings you should be able to get a classification accuracy of 95% in the test set. note Step14: We can use the matplotlib library to visualize one of the training images. 
In the data set, the pixel values of each 28x28 pixel image is encoded in a straight list of 784 numbers, so before we visualize it we have to use numpy's reshape function to convert it back to a 2d matrix form Step15: Assignment 3 - classification Now that you have a basic understanding of how an artificial neural network works and have seen it applied to a classification task using two types of data, see if you can use the network to solve another classification problem using another data set. In the week-3 folder there is a data set called wine.csv which is another common data set used to test classification capabilities of machine learning algorithms. You can find a description of the data set here
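Before diving into the full Network class below, it helps to see the core feedforward step in isolation: each layer multiplies the incoming activations by a weight matrix, adds a bias vector, and squashes the result with a sigmoid. The toy sketch below uses made-up layer sizes and random weights purely for illustration.

import numpy as np

def sigmoid(z):
    # Logistic activation applied elementwise.
    return 1.0 / (1.0 + np.exp(-z))

# Toy layer: 4 inputs feeding 3 neurons (sizes chosen only for illustration).
rng = np.random.RandomState(0)
w = rng.randn(3, 4)  # weight matrix of the layer
b = rng.randn(3, 1)  # bias column vector
a = rng.randn(4, 1)  # activations coming in from the previous layer

a_next = sigmoid(np.dot(w, a) + b)  # the per-layer feedforward step
print(a_next.shape)  # -> (3, 1)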
Python Code: %matplotlib inline import random import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set(style="ticks", color_codes=True) from sklearn.preprocessing import OneHotEncoder from sklearn.utils import shuffle Explanation: Lab 3 - Basic Artificial Neural Network In this lab we will build a very rudimentary Artificial Neural Network (ANN) and use it to solve some basic classification problems. This example is implemented with only basic math and linear algebra functions using Python's scientific computing library numpy. This will allow us to study how each aspect of the network works, and to gain an intuitive understanding of its functions. In future labs we will use higher-level libraries such as Keras and Tensorflow which automate and optimize most of these functions, making the network much faster and easier to use. The code and MNIST test data is taken directly from http://neuralnetworksanddeeplearning.com/ by Michael Nielsen. Please review the first chapter of the book for a thorough explanation of the code. First we import the Python libraries we will be using, including the random library for generating random numbers, numpy for scientific computing, matplotlib and seaborn for creating data visualizations, and several helpful modules from the sci-kit learn machine learning library: End of explanation class Network(object): def __init__(self, sizes): The list ``sizes`` contains the number of neurons in the respective layers of the network. For example, if the list was [2, 3, 1] then it would be a three-layer network, with the first layer containing 2 neurons, the second layer 3 neurons, and the third layer 1 neuron. The biases and weights for the network are initialized randomly, using a Gaussian distribution with mean 0, and variance 1. Note that the first layer is assumed to be an input layer, and by convention we won't set any biases for those neurons, since biases are only ever used in computing the outputs for later layers. self.num_layers = len(sizes) self.sizes = sizes self.biases = [np.random.randn(y, 1) for y in sizes[1:]] self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])] def feedforward (self, a): Return the output of the network if "a" is input. The np.dot() function computes the matrix multiplication between the weight and input matrices for each set of layers. When used with numpy arrays, the '+' operator performs matrix addition. for b, w in zip(self.biases, self.weights): a = sigmoid(np.dot(w, a)+b) return a def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None): Train the neural network using mini-batch stochastic gradient descent. The "training_data" is a list of tuples "(x, y)" representing the training inputs and the desired outputs. The other non-optional parameters specify the number of epochs, size of each mini-batch, and the learning rate. If "test_data" is provided then the network will be evaluated against the test data after each epoch, and partial progress printed out. This is useful for tracking progress, but slows things down substantially. 
# create an empty array to store the accuracy results from each epoch results = [] n = len(training_data) if test_data: n_test = len(test_data) # this is the code for one training step, done once for each epoch for j in xrange(epochs): # before each epoch, the data is randomly shuffled random.shuffle(training_data) # training data is broken up into individual mini-batches mini_batches = [ training_data[k:k+mini_batch_size] for k in xrange(0, n, mini_batch_size) ] # then each mini-batch is used to update the parameters of the # network using backpropagation and the specified learning rate for mini_batch in mini_batches: self.update_mini_batch(mini_batch, eta) # if a test data set is provided, the accuracy results # are displayed and stored in the 'results' array if test_data: num_correct = self.evaluate(test_data) accuracy = "%.2f" % (100 * (float(num_correct) / n_test)) print "Epoch", j, ":", num_correct, "/", n_test, "-", accuracy, "% acc" results.append(accuracy) else: print "Epoch", j, "complete" return results def update_mini_batch(self, mini_batch, eta): Update the network's weights and biases by applying gradient descent using backpropagation to a single mini batch. The "mini_batch" is a list of tuples "(x, y)", and "eta" is the learning rate. nabla_b = [np.zeros(b.shape) for b in self.biases] nabla_w = [np.zeros(w.shape) for w in self.weights] for x, y in mini_batch: delta_nabla_b, delta_nabla_w = self.backprop(x, y) nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)] nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)] self.weights = [w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)] self.biases = [b-(eta/len(mini_batch))*nb for b, nb in zip(self.biases, nabla_b)] def backprop(self, x, y): Return a tuple ``(nabla_b, nabla_w)`` representing the gradient for the cost function C_x. ``nabla_b`` and ``nabla_w`` are layer-by-layer lists of numpy arrays, similar to ``self.biases`` and ``self.weights``. nabla_b = [np.zeros(b.shape) for b in self.biases] nabla_w = [np.zeros(w.shape) for w in self.weights] # feedforward activation = x activations = [x] # list to store all the activations, layer by layer zs = [] # list to store all the z vectors, layer by layer for b, w in zip(self.biases, self.weights): z = np.dot(w, activation)+b zs.append(z) activation = sigmoid(z) activations.append(activation) # backward pass delta = self.cost_derivative(activations[-1], y) * \ sigmoid_prime(zs[-1]) nabla_b[-1] = delta nabla_w[-1] = np.dot(delta, activations[-2].transpose()) Note that the variable l in the loop below is used a little differently to the notation in Chapter 2 of the book. Here, l = 1 means the last layer of neurons, l = 2 is the second-last layer, and so on. It's a renumbering of the scheme in the book, used here to take advantage of the fact that Python can use negative indices in lists. for l in xrange(2, self.num_layers): z = zs[-l] sp = sigmoid_prime(z) delta = np.dot(self.weights[-l+1].transpose(), delta) * sp nabla_b[-l] = delta nabla_w[-l] = np.dot(delta, activations[-l-1].transpose()) return (nabla_b, nabla_w) def evaluate(self, test_data): Return the number of test inputs for which the neural network outputs the correct result. Note that the neural network's output is assumed to be the index of whichever neuron in the final layer has the highest activation. Numpy's argmax() function returns the position of the largest element in an array. 
We first create a list of predicted value and target value pairs, and then count the number of times those values match to get the total number correct. test_results = [(np.argmax(self.feedforward(x)), y) for (x, y) in test_data] return sum(int(x == y) for (x, y) in test_results) def cost_derivative(self, output_activations, y): Return the vector of partial derivatives \partial C_x / \partial a for the output activations. return (output_activations-y) Explanation: Next, we will build the artificial neural network by defining a new class called Network. This class will contain all the data for our neural network, as well as all the methods we need to compute activations between each layer, and train the network through backpropagation and stochastic gradient descent (SGD). End of explanation def sigmoid(z): # The sigmoid activation function. return 1.0/(1.0 + np.exp(-z)) def sigmoid_prime(z): # Derivative of the sigmoid function. return sigmoid(z)*(1-sigmoid(z)) Explanation: Finally, we define two helper functions which compute the sigmoid activation function and it's derivative which is used in backpropagation. End of explanation iris_data = sns.load_dataset("iris") # randomly shuffle data iris_data = shuffle(iris_data) # print first 5 data points print iris_data[:5] # create pairplot of iris data g = sns.pairplot(iris_data, hue="species") Explanation: Iris dataset example Now we will test our basic artificial neural network on a very simple classification problem. First we will use the seaborn data visualization library to load the 'iris' dataset, which consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor), with four features measuring the length and the width of each flower's sepals and petals. After we load the data we will vizualize it using a pairwise plot using a buit-in function in seaborn. A pairwise plot is a kind of exploratory data analysis that helps us to find relationships between pairs of features within a multi-dimensional data set. In this case, we can use it to understand which features might be most useful for determining the species of the flower. 
End of explanation # convert iris data to numpy format iris_array = iris_data.as_matrix() # split data into feature and target sets X = iris_array[:, :4].astype(float) y = iris_array[:, -1] # normalize the data per feature by dividing by the maximum value in each column X = X / X.max(axis=0) # convert the textual category data to integer using numpy's unique() function _, y = np.unique(y, return_inverse=True) # convert the list of targets to a vertical matrix with the dimensions [1 x number of samples] # this is necessary for later computation y = y.reshape(-1,1) # combine feature and target data into a new python array data = [] for i in range(X.shape[0]): data.append(tuple([X[i].reshape(-1,1), y[i][0]])) # split data into training and test sets trainingSplit = int(.7 * len(data)) training_data = data[:trainingSplit] test_data = data[trainingSplit:] # create an instance of the one-hot encoding function from the sci-kit learn library enc = OneHotEncoder() # use the function to figure out how many categories exist in the data enc.fit(y) # convert only the target data in the training set to one-hot encoding training_data = [[_x, enc.transform(_y.reshape(-1,1)).toarray().reshape(-1,1)] for _x, _y in training_data] # define the network net = Network([4, 32, 3]) # train the network using SGD, and output the results results = net.SGD(training_data, 30, 10, 0.2, test_data=test_data) # visualize the results plt.plot(results) plt.ylabel('accuracy (%)') plt.ylim([0,100.0]) plt.show() Explanation: Next, we will prepare the data set for training in our ANN. Here is a list of operations we need to perform on the data set so that it will work with the Network class we created above: Convert data to numpy format Normalize the data so that each features is scaled from 0 to 1 Split data into feature and target data sets by extracting specific rows from the numpy array. In this case the features are in the first four columns, and the target is in the last column, which in Python we can access with a negative index Recombine the data into a single Python array, so that each entry in the array represents one sample, and each sample is composed of two numpy arrays, one for the feature data, and one for the target Split this data set into training and testing sets Finally, we also need to convert the targets of the training set to 'one-hot' encoding (OHE). OHE takes each piece of categorical data and converts it to a list of binary values the length of which is equal to the number of categories, and the position of the current category denoted with a '1' and '0' for all others. For example, in our dataset we have 3 possible categories: versicolor, virginica, and setosa. After applying OHE, versicolor becomes [1,0,0], virginica becomes [0,1,0], and setosa becomes [0,0,1]. OHE is often used to represent target data in neural networks because it allows easy comparison to the output coming from the network's final layer. End of explanation import mnist_loader training_data, validation_data, test_data = mnist_loader.load_data_wrapper() Explanation: MNIST dataset example Next, we will test our ANN on another, slightly more difficult classification problem. The data set we'll be using is called MNIST, which contains tens of thousands of scanned images of handwritten digits, classified according to the digit type from 0-9. The name MNIST comes from the fact that it is a Modified (M) version of a dataset originally developed by the United States' National Institute of Standards and Technology (NIST). 
This is a very popular dataset used to measure the effectiveness of Machine Learning models for image recongnition. This time we don't have to do as much data management since the data is already provided in the right format here. We will get into more details about working with images and proper data formats for image data in later labs, but you can already use this data to test the effectiveness of our network. With the default settings you should be able to get a classification accuracy of 95% in the test set. note: since this is a much larger data set than the Iris data, the training will take substantially more time. End of explanation img = training_data[0][0][:,0].reshape((28,28)) fig = plt.figure() plt.imshow(img, interpolation='nearest', vmin = 0, vmax = 1, cmap=plt.cm.gray) plt.axis('off') plt.show() net = Network([784, 30, 10]) results = net.SGD(training_data, 30, 10, 3.0, test_data=test_data) plt.plot(results) plt.ylabel('accuracy (%)') plt.ylim([0,100.0]) plt.show() Explanation: We can use the matplotlib library to visualize one of the training images. In the data set, the pixel values of each 28x28 pixel image is encoded in a straight list of 784 numbers, so before we visualize it we have to use numpy's reshape function to convert it back to a 2d matrix form End of explanation %matplotlib inline import random import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set(style="ticks", color_codes=True) from sklearn.preprocessing import OneHotEncoder from sklearn.utils import shuffle class Network(object): def __init__(self, sizes): self.num_layers = len(sizes) self.sizes = sizes self.biases = [np.random.randn(y, 1) for y in sizes[1:]] self.weights = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])] def feedforward (self, a): for b, w in zip(self.biases, self.weights): a = sigmoid(np.dot(w, a)+b) return a def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None): results = [] n = len(training_data) if test_data: n_test = len(test_data) for j in xrange(epochs): random.shuffle(training_data) mini_batches = [ training_data[k:k+mini_batch_size] for k in xrange(0, n, mini_batch_size) ] for mini_batch in mini_batches: self.update_mini_batch(mini_batch, eta) if test_data: num_correct = self.evaluate(test_data) accuracy = "%.2f" % (100 * (float(num_correct) / n_test)) print "Epoch", j, ":", num_correct, "/", n_test, "-", accuracy, "% acc" results.append(accuracy) else: print "Epoch", j, "complete" return results def update_mini_batch(self, mini_batch, eta): nabla_b = [np.zeros(b.shape) for b in self.biases] nabla_w = [np.zeros(w.shape) for w in self.weights] for x, y in mini_batch: delta_nabla_b, delta_nabla_w = self.backprop(x, y) nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)] nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)] self.weights = [w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)] self.biases = [b-(eta/len(mini_batch))*nb for b, nb in zip(self.biases, nabla_b)] def backprop(self, x, y): nabla_b = [np.zeros(b.shape) for b in self.biases] nabla_w = [np.zeros(w.shape) for w in self.weights] activation = x activations = [x] zs = [] for b, w in zip(self.biases, self.weights): z = np.dot(w, activation)+b zs.append(z) activation = sigmoid(z) activations.append(activation) delta = self.cost_derivative(activations[-1], y) * \ sigmoid_prime(zs[-1]) nabla_b[-1] = delta nabla_w[-1] = np.dot(delta, activations[-2].transpose()) for l in xrange(2, self.num_layers): z = zs[-l] sp = 
sigmoid_prime(z) delta = np.dot(self.weights[-l+1].transpose(), delta) * sp nabla_b[-l] = delta nabla_w[-l] = np.dot(delta, activations[-l-1].transpose()) return (nabla_b, nabla_w) def evaluate(self, test_data): test_results = [(np.argmax(self.feedforward(x)), y) for (x, y) in test_data] return sum(int(x == y) for (x, y) in test_results) def cost_derivative(self, output_activations, y): return (output_activations-y) def sigmoid(z): return 1.0/(1.0 + np.exp(-z)) def sigmoid_prime(z): return sigmoid(z)*(1-sigmoid(z)) wine_data = np.loadtxt(open("./data/wine.csv","rb"),delimiter=",") wine_data = shuffle(wine_data) X = wine_data[:,1:] y = wine_data[:, 0] X = X / X.max(axis=0) _, y = np.unique(y, return_inverse=True) y = y.reshape(-1,1) data = [] for i in range(X.shape[0]): data.append(tuple([X[i].reshape(-1,1), y[i][0]])) trainingSplit = int(.8 * len(data)) training_data = data[:trainingSplit] test_data = data[trainingSplit:] enc = OneHotEncoder() enc.fit(y) training_data = [[_x, enc.transform(_y.reshape(-1,1)).toarray().reshape(-1,1)] for _x, _y in training_data] net = Network([13, 30, 3]) results = net.SGD(training_data, 30, 2, 1.5, test_data=test_data) plt.plot(results) plt.ylabel('accuracy (%)') plt.ylim([0,100.0]) plt.show() Explanation: Assignment 3 - classification Now that you have a basic understanding of how an artificial neural network works and have seen it applied to a classification task using two types of data, see if you can use the network to solve another classification problem using another data set. In the week-3 folder there is a data set called wine.csv which is another common data set used to test classification capabilities of machine learning algorithms. You can find a description of the data set here: https://archive.ics.uci.edu/ml/datasets/Wine The code below uses numpy to import this .csv file as a 2d numpy array. As before, we first shuffle the data set, and then split it into feature and target sets. This time, the target is in the first column of the data, with the rest of the columns representing the 13 features. From there you should be able to go through and format the data set in a similar way as we did for the Iris data above. Remember to split the data into both training and test sets, and encode the training targets as one-hot vectors. When you create the network, make sure to specify the proper dimensions for the input and output layer so that it matches the number of features and target categories in the data set. You can also experiment with different sizes for the hidden layer. If you are not achieving good results, try changing some of the hyper-parameters, including the size and quantity of hidden layers in the network specification, and the number of epochs, the size of a mini-batch, and the learning rate in the SGD function call. With a training/test split of 80/20 you should be able to achieve 100% accuracy Within 30 epochs. Remeber to commit your changes and submit a pull request when you are done. Hint: do not be fooled by the category labels that come with this data set! Even though the labels are already integers (1,2,3) we need to always make sure that our category labels are sequential integers and start with 0. To make sure this is the case you should always use the np.unique() function on the target data as we did with the Iris example above. End of explanation
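As a hedged aside to the hint above (not part of the assignment solution), this is the np.unique(..., return_inverse=True) trick in isolation: it maps arbitrary class labels, such as the wine classes 1, 2 and 3, to sequential integers starting at 0, which is the labelling the Network class expects.
import numpy as np

raw_labels = np.array([1, 3, 2, 3, 1])
classes, y = np.unique(raw_labels, return_inverse=True)
print(classes)  # [1 2 3] - the distinct labels, sorted
print(y)        # [0 2 1 2 0] - each label replaced by its index into `classes`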
12,618
Given the following text description, write Python code to implement the functionality described below step by step Description: <h2 align="center">Click the icons below to run HanLP online</h2> <div align="center"> <a href="https Step1: Loading the model HanLP's workflow is to first load a model; model identifiers are stored in the hanlp.pretrained package, organised by NLP task. Step2: Call hanlp.load to load it; the model is downloaded automatically to a local cache. Natural language processing is made up of many tasks, and tokenization is only the most basic one. Rather than creating a separate model for each task, it is better to use HanLP's joint model to complete several tasks in one pass: Step3: Semantic dependency parsing The fewer the tasks, the faster the run. For example, to perform semantic dependency parsing only: Step4: The return value is a Document Step5: The doc['sdp'] field holds the semantic dependency graph in array form: the i-th sub-array of the array describes the semantic dependencies of the i-th word, and each pair inside a sub-array has the format [index of the head word, semantic relation to the head]. Each word may have zero, one or several (any number of) semantic dependencies. Converting to the CoNLLSentence format makes it easier to inspect: Step6: Perform semantic dependency parsing on already-tokenized sentences:
Python Code: !pip install hanlp -U Explanation: <h2 align="center">Click the icons below to run HanLP online</h2> <div align="center"> <a href="https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/sdp_mtl.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <a href="https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Fsdp_mtl.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Open In Binder"/></a> </div> Installation Whether you are on Windows, Linux or macOS, installing HanLP takes just one line: End of explanation import hanlp hanlp.pretrained.mtl.ALL # MTL multi-task models; the tasks are listed in the model name, and the language is the last field of the name (or see the corresponding corpus) Explanation: Loading the model HanLP's workflow is to first load a model; model identifiers are stored in the hanlp.pretrained package, organised by NLP task. End of explanation HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_BASE_ZH) Explanation: Call hanlp.load to load it; the model is downloaded automatically to a local cache. Natural language processing is made up of many tasks, and tokenization is only the most basic one. Rather than creating a separate model for each task, it is better to use HanLP's joint model to complete several tasks in one pass: End of explanation doc = HanLP('2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。', tasks='sdp') Explanation: Semantic dependency parsing The fewer the tasks, the faster the run. For example, to perform semantic dependency parsing only: End of explanation print(doc) Explanation: The return value is a Document: End of explanation print(doc.to_conll()) Explanation: The doc['sdp'] field holds the semantic dependency graph in array form: the i-th sub-array of the array describes the semantic dependencies of the i-th word, and each pair inside a sub-array has the format [index of the head word, semantic relation to the head]. Each word may have zero, one or several (any number of) semantic dependencies. Converting to the CoNLLSentence format makes it easier to inspect: End of explanation print(HanLP([ ["HanLP", "为", "生产", "环境", "带来", "次世代", "最", "先进", "的", "多语种", "NLP", "技术", "。"], ["我", "的", "希望", "是", "希望", "张晚霞", "的", "背影", "被", "晚霞", "映红", "。"] ], tasks='sdp', skip_tasks='tok*').to_conll()) Explanation: Perform semantic dependency parsing on already-tokenized sentences: End of explanation
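As a hedged addendum (not part of the original notebook): one way the doc['sdp'] structure described above could be walked in plain Python. It assumes doc is the Document returned by the HanLP(...) call with the 'sdp' task and that the tokens are exposed under a key such as 'tok/fine'; the exact key depends on the loaded model, so treat this as a sketch rather than the library's documented API.
tokens = doc.get('tok/fine', [])
for i, arcs in enumerate(doc['sdp']):
    word = tokens[i] if i < len(tokens) else i
    for head, relation in arcs:
        # `head` is the index of the head word and `relation` the semantic label
        print(word, head, relation)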
12,619
Given the following text description, write Python code to implement the functionality described below step by step Description: Calculate quantities for analysis These notebooks describe how to calculate the data and how to produce figures in the manuscript "Barnaba Step1: Now we calculate all the quantities described above and save the data to pickle files. Note that heavy_atoms=True in the RMSD calculation. The default mode is backbone-only.
Python Code: import barnaba as bb import pickle top = "topology.pdb" traj = "trajectory.dcd" native = "2KOC.pdb" Explanation: Calculate quantities for analysis These notebooks describe how to calculate the data and how to produce figures in the manuscript "Barnaba: Software for Analysis of Nucleic Acids Structures and Trajectories". Here, we calculate different quantities over the entire trajectory: - eRMSD from reference structure - RMSD from reference structure - Base-pairing and base-stacking detection (annotation) - Dot-bracket annotation - Backbone torsion angles - $^3$J scalar couplings - Relative position and orientation between nucleobases as G-vectors, as defined in Bottaro, Di Palma, Bussi. NAR 2014. All the data is saved to pickle files for later analysis. The MD trajectory is taken from the paper "RNA force field with accuracy comparable to state-of- the-art protein force fields", PNAS, 2017. First, we import the modules barnaba and pickles, and define the location of the topology/trajectory file, as well as the location of the native, reference structure. End of explanation # calculate ermsd and store in a pickle file fname = "ermsd.p" ermsd = bb.ermsd(native,traj,topology=top) pickle.dump(ermsd[1:],open(fname, "w")) # calculate rmsd and store in a pickle file fname = "rmsd.p" print "# calculate %s" % fname rmsd = bb.rmsd(native,traj,topology=top,heavy_atom=True) pickle.dump(rmsd[1:],open(fname, "w")) # calculate annotation and store in pickle file fname = "pairs.p" print "# calculate %s" % fname stackings, pairings, res = bb.annotate(traj,topology=top) pickle.dump([pairings[1:], res],open(fname, "w")) # calculate dot-bracket annotation and store in pickle file fname = "dotbracket.p" dotbr,ss = bb.dot_bracket(pairings,res) pickle.dump([dotbr[1:], res],open(fname, "w")) # Calculate torsion angles fname = "angles.p" print "# calculate %s" % fname angles,res = bb.backbone_angles(traj,topology=top) pickle.dump([angles[1:],res],open(fname, "w")) # calculate couplings and save to pickle fname = "couplings.p" print "# calculate %s" % fname couplings,res = bb.jcouplings(traj,topology=top) pickle.dump([couplings[1:],res],open(fname, "w")) # calculate couplings and save to pickle fname = "gvec.p" print "# calculate %s" % fname gvec,seq = bb.dump_gvec(traj,topology=top) pickle.dump([gvec[1:],seq],open(fname, "w")) Explanation: Now we calculate all the quantitites described above and save the data to pickle file. Note that heavy_atoms=True in RMSD calculation. Default mode is backbone-only. End of explanation
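As a hedged follow-up sketch (not part of the original notebook): reading one of the pickle files written above back in for the later analysis notebooks. The file name is one produced above; the open mode may need to be "rb" on Python 3, since the notebook writes the pickles in text mode ("w") under Python 2.
import pickle

with open("ermsd.p", "r") as fh:  # use "rb" if reading the files under Python 3
    ermsd = pickle.load(fh)
print("number of frames: %d" % len(ermsd))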
12,620
Given the following text description, write Python code to implement the functionality described below step by step Description: Keen readers of this blog (hi Mom!) might have noticed my recent focus on neural networks and deep learning. It's good for popularity, as deep learning posts are automatically cool (I'm really big in China now). Well, I'm going to leave the AI alone this time. In fact, this post won't even really constitute data science. Instead, I'm going to explore a topic that has been on my mind and maybe produce a few graphs. These days, my main interaction with modern music is through the radio at the gym. It wasn't always like this. I mean I used to be with it, but then they changed what it was. I wouldn't go so far as to say that modern music is weird and scary, but it's certainly getting harder to keep up. It doesn't help that songs now have about 5 people on them. Back in my day, you might include a brief rapper cameo to appear more edgy. So I thought I'd explore how song collaborations have come to dominate the charts. Note that the accompanying Jupyter notebook can be viewed here. Let's get started! Scrapy In my research, I came across a similar post. That one looked at the top 10 of the Billboard charts going back to 1990. Just to be different, I'll primarily focus on the UK singles chart, though I'll also pull data from the Billboard chart. From what I can tell, there's no public API. But it's not too hard to scrape the data off the official site. I'm going to use Scrapy. We'll set up a spider to pull the relevant data and then navigate to the previous week's chart and repeat that process until it finally reaches the first chart in November 1952. This is actually the first time I've ever used Scrapy (hence the motivation for this post), so check out its extensive documentation if you have any issues. Scrapy isn't the only option for web scraping with Python (others reviewed here, but I like how easy it is to deploy and automate your spiders for larger projects. Step1: Briefly explaining what happened there Step2: Pandas If that all went to plan, we can now load in the json file as pandas dataframe (unless you changed the file path, it should be sitting in your working directory). If you can't wait for the spider to conclude, then you can import the file directly from github (you can also find the corresponding Billboard Hot 100 file there- you might prefer downloading the files and importing them locally). Step3: That table shows the top 5 singles in the UK for week starting 8st December 2017. I think I recognise two of those songs. As we're interested in collaborations, you'll notice that we have a few in this top 5 alone, which are marked with an 'FT' in the artist name. Unfortunately, there's no consistent nomenclature to denote collaborations on the UK singles chart (the Billboard chart isn't as bad). Step4: Okay, we've identified various terms that denote collaborations of some form. Not too bad. We just need to count the number of instances where the artist name includes one of these terms. Right? Maybe not. Step5: I'm a firm believer that domain expertise is a fundamental component of data science, so good data scientists must always be mindful of AC/DC and Bob Marley. Obviously, these songs shouldn't be considered collaborations, so we need to exclude them from the analysis. 
Rather than manually evaluating each case, we'll discount artists that include '&', 'AND', 'WITH', 'VS' that registered more than one song on the chart ('FT' and 'FEATURING' are pretty reliable- please let me know if I'm overlooking some brilliant 1980s post-punk new wave synth-pop group called 'THE FT FEATURING FT'). Obviously, we'll still have some one hit wonders mistaken as collaborations. For example, Derek and the Dominoes had only one hit single (Layla); though we're actually lucky in this instance, as the song was rereleased in 1982 under a slight different name. Step6: We've appended a column denoting whether that song represents that artist's only ever entry in the charts. We can use a few more tricks to weed out mislabelled collaborations. We'll ignore entries where the artist name contains 'AND THE' or '& THE'. Again, it's not perfect, but it should get us most of the way (data science in a nutshell). For example, 'Ariana Grande & The Weeknd' would be overlooked, so I'll crudely include a clause to allow The Weeknd related collaborations. With those caveats, let's plot the historical frequency of these various collaboration terms. Step7: In the 1960s, 70s and 80s, colloborations were relatively rare (~5% of charted singles) and generally took the form of duets. Things changed in the mid 90s, when the number of colloborations increases significantly, with duets dying off and featured artists taking over. I blame rap music. Comparing the two charts, the UK and US prefer 'ft' and 'featuring', repsectively (two nations divided by a common language). The Billboard chart doesn't seem to like the '/' notation, while the UK is generally much more eclectic. Finally, we can plot the proportion of songs that were collobarations (satisfied any of these conditions).
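As a small, hedged sketch of the raw term-matching idea described above (the artist names here are invented, and the real logic, including the one-hit-wonder filter, is in the code section that follows):
import pandas as pd

artists = pd.Series(["ARTIST A FT ARTIST B", "AC/DC", "SOME BAND & THE OTHERS"])
collab_terms = " FT | FEAT | FEATURING | & | AND | WITH | VS "
print(artists.str.contains(collab_terms).sum())  # naive count of matches, before any filtering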
Python Code: import scrapy import re # for text parsing import logging class ChartSpider(scrapy.Spider): name = 'ukChartSpider' # page to scrape start_urls = ['http://www.officialcharts.com/charts/'] # if you want to impose a delay between sucessive scrapes # download_delay = 0.5 def parse(self, response): self.logger.info('Scraping page: %s', response.url) chart_week = re.sub(' -.*', '', response.css('.article-heading+ .article-date::text').extract_first().strip()) for (artist, chart_pos, artist_num, track, label, lastweek, peak_pos, weeks_on_chart) in \ zip(response.css('#main .artist a::text').extract(), response.css('.position::text').extract(), response.css('#main .artist a::attr(href)').extract(), response.css('.track .title a::text').extract(), response.css('.label-cat .label::text').extract(), response.css('.last-week::text').extract(), response.css('td:nth-child(4)::text').extract(), response.css('td:nth-child(5)::text').extract()): yield {'chart_week': chart_week, 'chart_pos':chart_pos, 'track': track, 'artist': artist, 'artist_num':re.sub('/.*', '', re.sub('/artist/', '', artist_num)), 'label':label, 'last_week':re.findall('\d+|$', lastweek)[0], 'peak_pos':re.findall('\d+|$', peak_pos)[0], 'weeks_on_chart':re.findall('\d+|$', weeks_on_chart)[0]} # move onto next page (if it exists) for next_page in response.css('.charts-header-panel:nth-child(1) .chart-date-directions'): if next_page.css("a::text").extract_first()=='prev': yield response.follow(next_page, self.parse) import scrapy import re # for text parsing import logging class ChartSpider(scrapy.Spider): name = 'usChartSpider' # page to scrape start_urls = ['https://www.billboard.com/charts/hot-100/'] # if you want to impose a delay between sucessive scrapes # download_delay = 1.0 def parse(self, response): self.logger.info('Scraping page: %s', response.url) chart_week = response.xpath('.//time/@datetime').extract_first() for num, (artist, track, lastweek, peak_pos, weeks_on_chart) in \ enumerate(zip(response.css('.chart-row__artist::text').extract(), response.css('.chart-row__song::text').extract(), response.css('.chart-row__rank .chart-row__last-week::text').extract(), response.css('.chart-row__top-spot .chart-row__value::text').extract(), response.css('.chart-row__weeks-on-chart .chart-row__value::text').extract())): yield {'chart_week': chart_week, 'chart_pos':num+1, 'track': track, 'artist': artist.strip(), 'last_week':re.findall('\d+|$', lastweek)[0], 'peak_pos':re.findall('\d+|$', peak_pos)[0], 'weeks_on_chart':re.findall('\d+|$', weeks_on_chart)[0]} # move onto next page (if it exists) for next_page in response.css('.chart-nav__link'): if next_page.css('a::attr(title)').extract_first() == 'Previous Week': yield response.follow(next_page, self.parse) Explanation: Keen readers of this blog (hi Mom!) might have noticed my recent focus on neural networks and deep learning. It's good for popularity, as deep learning posts are automatically cool (I'm really big in China now). Well, I'm going to leave the AI alone this time. In fact, this post won't even really constitute data science. Instead, I'm going to explore a topic that has been on my mind and maybe produce a few graphs. These days, my main interaction with modern music is through the radio at the gym. It wasn't always like this. I mean I used to be with it, but then they changed what it was. I wouldn't go so far as to say that modern music is weird and scary, but it's certainly getting harder to keep up. It doesn't help that songs now have about 5 people on them. 
Back in my day, you might include a brief rapper cameo to appear more edgy. So I thought I'd explore how song collaborations have come to dominate the charts. Note that the accompanying Jupyter notebook can be viewed here. Let's get started! Scrapy In my research, I came across a similar post. That one looked at the top 10 of the Billboard charts going back to 1990. Just to be different, I'll primarily focus on the UK singles chart, though I'll also pull data from the Billboard chart. From what I can tell, there's no public API. But it's not too hard to scrape the data off the official site. I'm going to use Scrapy. We'll set up a spider to pull the relevant data and then navigate to the previous week's chart and repeat that process until it finally reaches the first chart in November 1952. This is actually the first time I've ever used Scrapy (hence the motivation for this post), so check out its extensive documentation if you have any issues. Scrapy isn't the only option for web scraping with Python (others reviewed here, but I like how easy it is to deploy and automate your spiders for larger projects. End of explanation from scrapy.crawler import CrawlerProcess process = CrawlerProcess({ 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'FEED_FORMAT': 'json', 'FEED_URI': 'uk_charts.json' }) # minimising the information presented on the scrapy log logging.getLogger('scrapy').setLevel(logging.WARNING) process.crawl(ChartSpider) process.start() Explanation: Briefly explaining what happened there: We create a class called ChartSpider, essentially our customised spider (called ukChartSpider). We specify the page we want to scrape (start_urls). The spider then selects specific CSS elements (response.css()) within the page that contain the information we want (e.g. #main .artist a represents the artist's name). These tags may seem complicated, but they're actually quite easy to retrieve with a tool like Selector Gadget. Isolate the elements you want to extract and copy the css elements highlighted with the tool (see image below). Finally, we'll opt to write the spider output to a json file called uk_charts.json. Scrapy accepts numerous file formats (including CSV), but I went with JSON as it's easier to append to this file type, which may be useful if your spider unexpectedly terminates. We're now ready to launch ukChartSpider. Note that the process for the US Billboard chart is very similar. That code can be found in the accompanying Jupyter notebook. End of explanation import pandas as pd uk_charts = pd.read_json('https://raw.githubusercontent.com/dashee87/blogScripts/master/files/uk_charts.json') # convert the date column to the correct date format uk_charts = uk_charts.assign(chart_week=pd.to_datetime(uk_charts['chart_week'])) uk_charts.head(5) Explanation: Pandas If that all went to plan, we can now load in the json file as pandas dataframe (unless you changed the file path, it should be sitting in your working directory). If you can't wait for the spider to conclude, then you can import the file directly from github (you can also find the corresponding Billboard Hot 100 file there- you might prefer downloading the files and importing them locally). 
End of explanation pd.concat((uk_charts[uk_charts['artist'].str.contains(' FEAT\\.')][0:1], uk_charts[uk_charts['artist'].str.contains(' FEATURING ')][0:1], uk_charts[uk_charts['artist'].str.contains(' FEAT ')][0:1], uk_charts[uk_charts['artist'].str.contains('/')][0:1], uk_charts[uk_charts['artist'].str.contains(' AND ')][0:1], uk_charts[uk_charts['artist'].str.contains(' & ')][0:1], uk_charts[uk_charts['artist'].str.contains(' WITH ')][0:1], uk_charts[uk_charts['artist'].str.contains(' VS ')][0:1], uk_charts[uk_charts['artist'].str.contains(' VS. ')][0:1])) Explanation: That table shows the top 5 singles in the UK for week starting 8st December 2017. I think I recognise two of those songs. As we're interested in collaborations, you'll notice that we have a few in this top 5 alone, which are marked with an 'FT' in the artist name. Unfortunately, there's no consistent nomenclature to denote collaborations on the UK singles chart (the Billboard chart isn't as bad). End of explanation pd.concat((uk_charts[uk_charts['artist'].str.contains('AC/DC')].tail(1), uk_charts[uk_charts['artist'].str.contains('BOB MARLEY AND')].tail(1), uk_charts[uk_charts['artist'].str.contains('BOB MARLEY &')].tail(1))) Explanation: Okay, we've identified various terms that denote collaborations of some form. Not too bad. We just need to count the number of instances where the artist name includes one of these terms. Right? Maybe not. End of explanation uk_charts[(uk_charts['artist'].str.contains('DEREK AND THE DOMINOES')) & (uk_charts['weeks_on_chart']==1)] uk_charts = pd.merge(uk_charts, uk_charts.groupby('artist').track.nunique().reset_index().rename( columns={'track': 'one_hit'}).assign(one_hit = lambda x: x.one_hit==1)).sort_values( ['chart_week', 'chart_pos'], ascending=[0, 1]).reset_index(drop=True) uk_charts.head() # doing all the same stuff for the scraped Billboard chart data us_charts =pd.read_json('https://raw.githubusercontent.com/dashee87/blogScripts/master/files/us_charts.json') us_charts = us_charts.assign(chart_week=pd.to_datetime(us_charts['chart_week'])) us_charts['artist'] = us_charts['artist'].str.upper() us_charts = pd.merge(us_charts, us_charts.groupby('artist').track.nunique().reset_index().rename( columns={'track': 'one_hit'}).assign(one_hit = lambda x: x.one_hit==1)).sort_values( ['chart_week', 'chart_pos'], ascending=[0, 1]).reset_index(drop=True) us_charts.head() Explanation: I'm a firm believer that domain expertise is a fundamental component of data science, so good data scientists must always be mindful of AC/DC and Bob Marley. Obviously, these songs shouldn't be considered collaborations, so we need to exclude them from the analysis. Rather than manually evaluating each case, we'll discount artists that include '&', 'AND', 'WITH', 'VS' that registered more than one song on the chart ('FT' and 'FEATURING' are pretty reliable- please let me know if I'm overlooking some brilliant 1980s post-punk new wave synth-pop group called 'THE FT FEATURING FT'). Obviously, we'll still have some one hit wonders mistaken as collaborations. For example, Derek and the Dominoes had only one hit single (Layla); though we're actually lucky in this instance, as the song was rereleased in 1982 under a slight different name. 
End of explanation import seaborn import matplotlib.pyplot as plt import datetime fig, (ax1, ax2) = plt.subplots(2,1) # we're just going to do the same operation twice # it's lazy; you could set up a loop or # combine the two dataframes into one (with a grouping column to tell which country it is) uk_charts = uk_charts.assign(FT=((uk_charts['artist'].str.contains(' FT '))), FEAT=((uk_charts['artist'].str.contains(' FEAT | FEAT\\. '))), FEATURING=((uk_charts['artist'].str.contains(' FEATURING '))), AND=((uk_charts['artist'].str.contains(' AND ')) & ~(uk_charts['artist'].str.contains(' AND THE ') & ~uk_charts['artist'].str.contains(' THE WEEKND')) & (uk_charts['one_hit'])), AMPERSAND=((uk_charts['artist'].str.contains(' & ')) & ~(uk_charts['artist'].str.contains(' & THE ') & ~uk_charts['artist'].str.contains(' THE WEEKND')) & (uk_charts['one_hit'])), SLASH=((uk_charts['artist'].str.contains('/')) & (uk_charts['one_hit'])), WITH=((uk_charts['artist'].str.contains(' WITH ')) & (uk_charts['one_hit'])), X=((uk_charts['artist'].str.contains(' X ')) & ~(uk_charts['artist'].str.contains('LIBERTY X|TWISTED X|MALCOLM X|RICHARD X|X MEN')) & (uk_charts['one_hit'])), VS=((uk_charts['artist'].str.contains(' VS | VS\\. ')) & (uk_charts['one_hit']))).assign( collab = lambda x: x.FT | x.FEATURING | x.AND | x.AMPERSAND | x.SLASH| x.WITH | x.VS| x.FEAT | x.X) us_charts = us_charts.assign(FT=((us_charts['artist'].str.contains(' FT '))), FEATURING=((us_charts['artist'].str.contains(' FEATURING '))), FEAT=((us_charts['artist'].str.contains(' FEAT | FEAT\\. '))), AND=((us_charts['artist'].str.contains(' AND ')) & ~(us_charts['artist'].str.contains(' AND THE ') & ~us_charts['artist'].str.contains(' THE WEEKND')) & (us_charts['one_hit'])), AMPERSAND=((us_charts['artist'].str.contains(' & ')) & ~(us_charts['artist'].str.contains(' & THE ') & ~us_charts['artist'].str.contains(' THE WEEKND')) & (us_charts['one_hit'])), SLASH=((us_charts['artist'].str.contains('/')) & (us_charts['one_hit'])), WITH=((us_charts['artist'].str.contains(' WITH ')) & (us_charts['one_hit'])), X=((us_charts['artist'].str.contains(' X ')) & ~(us_charts['artist'].str.contains('LIBERTY X|TWISTED X|MALCOLM X|RICHARD X|X MEN')) & (us_charts['one_hit'])), VS=((us_charts['artist'].str.contains(' VS | VS\\. ')) & (us_charts['one_hit']))).assign( collab = lambda x: x.FT | x.FEATURING | x.FEAT | x.AND | x.AMPERSAND | x.SLASH| x.WITH | x.VS | x.X) uk_charts.groupby(['chart_week'])['FT','FEATURING', 'FEAT', 'AMPERSAND', 'SLASH', 'AND', 'WITH', 'X', 'VS'].mean().plot( linewidth=1.5, ax=ax1) us_charts.groupby(['chart_week'])['FT','FEATURING', 'FEAT', 'AMPERSAND', 'SLASH', 'AND', 'WITH', 'X', 'VS'].mean().plot( linewidth=1.5, ax=ax2) ax1.set_xticklabels('') ax1.set_title('UK Singles Chart 1952-2017') ax2.set_title('Billboard Hot 100 1958-2017') for ax in [ax1, ax2]: ax.set_ylim([0, 0.43]) ax.set_xlabel('') ax.set_ylabel('') ax.set_yticklabels(['{:3.0f}'.format(x*100) for x in ax.get_yticks()]) ax.set_xlim([datetime.date(1952,10,1), datetime.date(2018,1,1)]) ax1.legend(bbox_to_anchor=(0.1, 1), loc=2, borderaxespad=0., prop={'size': 10}) fig.text(0.0, 0.5,'Songs on Chart (%)', va='center', rotation='vertical',fontsize=12) fig.set_size_inches(9, 6) fig.tight_layout() ax2.legend_.remove() plt.show() Explanation: We've appended a column denoting whether that song represents that artist's only ever entry in the charts. We can use a few more tricks to weed out mislabelled collaborations. We'll ignore entries where the artist name contains 'AND THE' or '& THE'. 
Again, it's not perfect, but it should get us most of the way (data science in a nutshell). For example, 'Ariana Grande & The Weeknd' would be overlooked, so I'll crudely include a clause to allow The Weeknd related collaborations. With those caveats, let's plot the historical frequency of these various collaboration terms. End of explanation fig, ax1 = plt.subplots(1,1) uk_charts.groupby(['chart_week'])['collab'].mean().plot(ax=ax1, color='#F38181') us_charts.groupby(['chart_week'])['collab'].mean().plot(ax=ax1, color='#756C83') ax1.set_xlabel('') ax1.set_ylabel('Collaborative Songs (%)') ax1.set_yticklabels(['{:3.0f}'.format(x*100) for x in ax1.get_yticks()]) ax1.set_xlim([datetime.date(1952,10,1), datetime.date(2018,1,1)]) fig.set_size_inches(9, 3.5) ax1.legend(["UK Singles Chart", "Billboard Hot 100"], bbox_to_anchor=(0.07, 1), loc=2, borderaxespad=0., prop={'size': 12}) fig.tight_layout() plt.show() Explanation: In the 1960s, 70s and 80s, colloborations were relatively rare (~5% of charted singles) and generally took the form of duets. Things changed in the mid 90s, when the number of colloborations increases significantly, with duets dying off and featured artists taking over. I blame rap music. Comparing the two charts, the UK and US prefer 'ft' and 'featuring', repsectively (two nations divided by a common language). The Billboard chart doesn't seem to like the '/' notation, while the UK is generally much more eclectic. Finally, we can plot the proportion of songs that were collobarations (satisfied any of these conditions). End of explanation
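As a hedged follow-up sketch (not in the original post): the same 'collab' flag aggregated by calendar year rather than by chart week, which can make the long-run trend easier to read. It assumes the uk_charts dataframe built above is still in scope.
yearly_collab = uk_charts.groupby(uk_charts['chart_week'].dt.year)['collab'].mean()
print(yearly_collab.tail())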
12,621
Given the following text description, write Python code to implement the functionality described below step by step Description: TensorFlow Tutorial #13-B Visual Analysis (MNIST) by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube Introduction Tutorial #13 showed how to find input images that maximized the response of individual neurons inside the Inception model, so as to find the images that the neuron liked to see. But because the Inception model is so large and complex the images were just complex wavy patterns. This tutorial uses a much simpler Convolutional Neural Network with the MNIST data-set for recognizing hand-written digits. The code is spliced together from Tutorial #03-B for constructing the neural network and Tutorial #13 for finding input images that maximize individual neuron responses inside the neural network, so a lot of this code may look familiar to you. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. Note that there are two separate optimization loops here Step1: This was developed using Python 3.6 (Anaconda) and TensorFlow version Step2: Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path. Step3: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial. Step4: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now. Step5: Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below. Step6: Helper-functions for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image. Step7: Function used to plot 10 images in a 2x5 grid. Step8: Function used to plot a single image. Step9: Plot a few images to see if data is correct Step10: TensorFlow Graph The neural network is constructed as a computational graph in TensorFlow using the tf.layers API, which is described in detail in Tutorial #03-B. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat. Step11: The convolutional layers expect x to be encoded as a 4-rank tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. 
So the reshape operation is Step12: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case. Step13: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point. Step14: Neural Network We now implement the Convolutional Neural Network using the Layers API. We use the net-variable to refer to the last layer while building the neural network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the net-variable to the reshaped input image. Step15: The input image is then input to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation-function is the Rectified Linear Unit (ReLU) described in more detail in Tutorial #02. Step16: After the convolution we do a max-pooling which is also described in Tutorial #02. Step17: Then we make a second convolutional layer, also with max-pooling. Step18: The output then needs to be flattened so it can be used in fully-connected (aka. dense) layers. Step19: We can now add fully-connected (or dense) layers to the neural network. Step20: We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has num_classes=10 output neurons. Step21: The outputs of the final fully-connected layer are sometimes called logits, so we have a convenience variable with that name which we will also use further below. Step22: We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one. Step23: This tells us how likely the neural network thinks the input image is of each possible class. The one that has the highest value is considered the most likely so its index is taken to be the class-number. Step24: Loss-Function to be Optimized To make the model better at classifying the input images, we must somehow change the variables of the neural network. The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model. TensorFlow has a function for calculating the cross-entropy, which uses the values of the logits-layer because it also calculates the softmax internally, so as to to improve numerical stability. Step25: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications. Step26: Optimization Method Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4. Note that optimization is not performed at this point. 
In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution. Step27: Classification Accuracy We need to calculate the classification accuracy so we can report progress to the user. First we create a vector of booleans telling us whether the predicted class equals the true class of each image. Step28: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers. Step29: Optimize the Neural Network Create TensorFlow session Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph. Step30: Initialize variables The variables for the TensorFlow graph must be initialized before we start optimizing them. Step31: Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer. If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to do more optimization iterations. Step32: This function performs a number of optimization iterations so as to gradually improve the variables of the neural network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations. Step33: Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified. Step34: Helper-function to plot confusion matrix Step35: Helper-function for showing the performance Below is a function for printing the classification accuracy on the test-set. It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function. Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size. Step36: Performance before any optimization The accuracy on the test-set is very low because the variables for the neural network have only been initialized and not optimized at all, so it just classifies the images randomly. Step37: Performance after 10,000 optimization iterations After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%. Step38: Optimizing the Input Images Now that the neural network has been optimized so it can recognize hand-written digits with about 99% accuracy, we will then find the input images that maximize certain features inside the neural network. This will show us what images the neural network likes to see the most. We will do this by creating another form of optimization for the neural network, and we need several helper functions for doing this. Helper-function for getting the names of convolutional layers Function for getting the names of all the convolutional layers in the neural network. We could have made this list manually, but for larger neural networks it is easier to do this with a function. 
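As a hedged sketch only (the tutorial's own helper appears in the code section further down), such a function could be written with the TF1 graph API by filtering operations on their type:
def get_conv_layer_names():
    # collect the names of all 2-d convolution operations in the default graph
    graph = tf.get_default_graph()
    return [op.name for op in graph.get_operations() if op.type == 'Conv2D']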
Step40: Helper-function for finding the input image This function finds the input image that maximizes a given feature in the network. It essentially just performs optimization with gradient ascent. The image is initialized with small random values and is then iteratively updated using the gradient for the given feature with regard to the image. Step42: This next function finds the images that maximize the first 10 features of a layer, by calling the above function 10 times. Step43: First Convolutional Layer These are the input images that maximize the features in the first convolutional layer, so these are the images that it likes to see. Step44: Note how these are very simple shapes such as lines and angles. Some of these images may be completely white, which suggests that those features of the neural network are perhaps unused, so the number of features could be reduced in this layer. Second Convolutional Layer This shows the images that maximize the features or neurons in the second convolutional layer, so these are the input images it likes to see. Note how these are more complex lines and patterns compared to the first convolutional layer. Step45: Final output layer Now find the image for the 2nd feature of the final output of the neural network. That is, we want to find an image that makes the neural network classify that image as the digit 2. This is the image that the neural network likes to see the most for the digit 2. Step46: Note how the predicted class indeed becomes 2 already within the first few iterations so the optimization is working as intended. Also note how the loss-measure is increasing rapidly until it apparently converges. This is because the loss-measure is actually just the value of the feature or neuron that we are trying to maximize. Because this is the logits-layer prior to the softmax, these values can potentially be infinitely high, but they are limited because we limit the image-values between 0 and 1. Now plot the image that was found. This is the image that the neural network believes looks most like the digit 2. Step47: Although some of the curves do hint somewhat at the digit 2, it is hard for a human to see why the neural network believes this is the optimal image for the digit 2. This can only be understood when the optimal images for the remaining digits are also shown. Step48: These images may vary each time you run the optimization. Some of the images can be seen to somewhat resemble the hand-written digits. But the other images are often impossible to recognize and it is hard to understand why the neural network thinks these are the optimal input images for those digits. The reason is perhaps that the neural network tries to recognize all digits simultaneously, and it has found that certain pixels often determine whether the image shows one digit or another. So the neural network has learned to differentiate those pixels that it has found to be important, but not the underlying curves and shapes of the digits, in the same way that a human recognizes the digits. Another possibility is that the data-set contains mis-classified digits which may confuse the neural network during training. We have previously seen how some of the digits in the data-set are very hard to read even for humans, and this may cause the neural network to become distorted and trying to recognize strange artifacts in the images. Yet another possibility is that the optimization process has stagnated in a local optimum. 
One way to test this would be to run the optimization 50 times for the digits that are unclear, and see if some of the resulting images become clearer. Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
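As a hedged aside, and not how the code below is organized, the clean-up could also be delegated to a context manager so that the resources are released automatically:
# Sketch only: the notebook keeps an explicit session object instead,
# so the trained variables remain available for the input-image optimization later on.
with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    # ... run the optimizer and the evaluations here ...
# the session is closed automatically when the with-block ends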
Python Code: %matplotlib inline import matplotlib.pyplot as plt import tensorflow as tf import numpy as np from sklearn.metrics import confusion_matrix import math Explanation: TensorFlow Tutorial #13-B Visual Analysis (MNIST) by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube Introduction Tutorial #13 showed how to find input images that maximized the response of individual neurons inside the Inception model, so as to find the images that the neuron liked to see. But because the Inception model is so large and complex the images were just complex wavy patterns. This tutorial uses a much simpler Convolutional Neural Network with the MNIST data-set for recognizing hand-written digits. The code is spliced together from Tutorial #03-B for constructing the neural network and Tutorial #13 for finding input images that maximize individual neuron responses inside the neural network, so a lot of this code may look familiar to you. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. Note that there are two separate optimization loops here: First the weights of the neural network are optimized by inputting images and their true classes to the network so as to improve the classification accuracy. Afterwards a second optimization is performed which finds the input image that maximizes a given feature or neuron inside the network. This finds an image that the network likes to see. Imports End of explanation tf.__version__ Explanation: This was developed using Python 3.6 (Anaconda) and TensorFlow version: End of explanation from tensorflow.examples.tutorials.mnist import input_data data = input_data.read_data_sets('data/MNIST/', one_hot=True) Explanation: Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path. End of explanation print("Size of:") print("- Training-set:\t\t{}".format(len(data.train.labels))) print("- Test-set:\t\t{}".format(len(data.test.labels))) print("- Validation-set:\t{}".format(len(data.validation.labels))) Explanation: The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial. End of explanation data.test.cls = np.argmax(data.test.labels, axis=1) Explanation: The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now. End of explanation # We know that MNIST images are 28 pixels in each dimension. img_size = 28 # Images are stored in one-dimensional arrays of this length. img_size_flat = img_size * img_size # Tuple with height and width of images used to reshape arrays. img_shape = (img_size, img_size) # Number of colour channels for the images: 1 channel for gray-scale. num_channels = 1 # Number of classes, one class for each of 10 digits. num_classes = 10 Explanation: Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below. 
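As a quick illustration, a single flattened image can be brought back to its 2D form with these constants (a small sketch that only uses names defined in this notebook):
# img_size_flat == 28 * 28, so any flat image vector reshapes cleanly to img_shape.
first_image = data.train.images[0].reshape(img_shape)
print(first_image.shape)   # (28, 28)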
End of explanation def plot_images(images, cls_true, cls_pred=None): assert len(images) == len(cls_true) == 9 # Create figure with 3x3 sub-plots. fig, axes = plt.subplots(3, 3) fig.subplots_adjust(hspace=0.3, wspace=0.3) for i, ax in enumerate(axes.flat): # Plot image. ax.imshow(images[i].reshape(img_shape), cmap='binary') # Show true and predicted classes. if cls_pred is None: xlabel = "True: {0}".format(cls_true[i]) else: xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i]) # Show the classes as the label on the x-axis. ax.set_xlabel(xlabel) # Remove ticks from the plot. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() Explanation: Helper-functions for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image. End of explanation def plot_images10(images, smooth=True): # Interpolation type. if smooth: interpolation = 'spline16' else: interpolation = 'nearest' # Create figure with sub-plots. fig, axes = plt.subplots(2, 5) # Adjust vertical spacing. fig.subplots_adjust(hspace=0.1, wspace=0.1) # For each entry in the grid. for i, ax in enumerate(axes.flat): # Get the i'th image and only use the desired pixels. img = images[i, :, :] # Plot the image. ax.imshow(img, interpolation=interpolation, cmap='binary') # Remove ticks. ax.set_xticks([]) ax.set_yticks([]) # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() Explanation: Function used to plot 10 images in a 2x5 grid. End of explanation def plot_image(image): plt.imshow(image, interpolation='nearest', cmap='binary') plt.xticks([]) plt.yticks([]) Explanation: Function used to plot a single image. End of explanation # Get the first images from the test-set. images = data.test.images[0:9] # Get the true classes for those images. cls_true = data.test.cls[0:9] # Plot the images and labels using our helper-function above. plot_images(images=images, cls_true=cls_true) Explanation: Plot a few images to see if data is correct End of explanation x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x') Explanation: TensorFlow Graph The neural network is constructed as a computational graph in TensorFlow using the tf.layers API, which is described in detail in Tutorial #03-B. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat. End of explanation x_image = tf.reshape(x, [-1, img_size, img_size, num_channels]) Explanation: The convolutional layers expect x to be encoded as a 4-rank tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. 
So the reshape operation is: End of explanation y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true') Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case. End of explanation y_true_cls = tf.argmax(y_true, axis=1) Explanation: We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point. End of explanation net = x_image Explanation: Neural Network We now implement the Convolutional Neural Network using the Layers API. We use the net-variable to refer to the last layer while building the neural network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the net-variable to the reshaped input image. End of explanation net = tf.layers.conv2d(inputs=net, name='layer_conv1', padding='same', filters=16, kernel_size=5, activation=tf.nn.relu) Explanation: The input image is then input to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation-function is the Rectified Linear Unit (ReLU) described in more detail in Tutorial #02. End of explanation net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2) Explanation: After the convolution we do a max-pooling which is also described in Tutorial #02. End of explanation net = tf.layers.conv2d(inputs=net, name='layer_conv2', padding='same', filters=36, kernel_size=5, activation=tf.nn.relu) net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2) Explanation: Then we make a second convolutional layer, also with max-pooling. End of explanation net = tf.contrib.layers.flatten(net) # This should eventually be replaced by: # net = tf.layers.flatten(net) Explanation: The output then needs to be flattened so it can be used in fully-connected (aka. dense) layers. End of explanation net = tf.layers.dense(inputs=net, name='layer_fc1', units=128, activation=tf.nn.relu) Explanation: We can now add fully-connected (or dense) layers to the neural network. End of explanation net = tf.layers.dense(inputs=net, name='layer_fc_out', units=num_classes, activation=None) Explanation: We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has num_classes=10 output neurons. End of explanation logits = net Explanation: The outputs of the final fully-connected layer are sometimes called logits, so we have a convenience variable with that name which we will also use further below. End of explanation y_pred = tf.nn.softmax(logits=logits) Explanation: We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one. End of explanation y_pred_cls = tf.argmax(y_pred, axis=1) Explanation: This tells us how likely the neural network thinks the input image is of each possible class. The one that has the highest value is considered the most likely so its index is taken to be the class-number. End of explanation cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits) Explanation: Loss-Function to be Optimized To make the model better at classifying the input images, we must somehow change the variables of the neural network. 
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model. TensorFlow has a function for calculating the cross-entropy, which uses the values of the logits-layer because it also calculates the softmax internally, so as to to improve numerical stability. End of explanation loss = tf.reduce_mean(cross_entropy) Explanation: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications. End of explanation optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss) Explanation: Optimization Method Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4. Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution. End of explanation correct_prediction = tf.equal(y_pred_cls, y_true_cls) Explanation: Classification Accuracy We need to calculate the classification accuracy so we can report progress to the user. First we create a vector of booleans telling us whether the predicted class equals the true class of each image. End of explanation accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) Explanation: The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers. End of explanation session = tf.Session() Explanation: Optimize the Neural Network Create TensorFlow session Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph. End of explanation session.run(tf.global_variables_initializer()) Explanation: Initialize variables The variables for the TensorFlow graph must be initialized before we start optimizing them. End of explanation train_batch_size = 64 Explanation: Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer. If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to do more optimization iterations. End of explanation # Counter for total number of iterations performed so far. total_iterations = 0 def optimize(num_iterations): # Ensure we update the global variable rather than a local copy. global total_iterations for i in range(total_iterations, total_iterations + num_iterations): # Get a batch of training examples. # x_batch now holds a batch of images and # y_true_batch are the true labels for those images. 
x_batch, y_true_batch = data.train.next_batch(train_batch_size) # Put the batch into a dict with the proper names # for placeholder variables in the TensorFlow graph. feed_dict_train = {x: x_batch, y_true: y_true_batch} # Run the optimizer using this batch of training data. # TensorFlow assigns the variables in feed_dict_train # to the placeholder variables and then runs the optimizer. session.run(optimizer, feed_dict=feed_dict_train) # Print status every 100 iterations. if i % 100 == 0: # Calculate the accuracy on the training-set. acc = session.run(accuracy, feed_dict=feed_dict_train) # Message for printing. msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}" # Print it. print(msg.format(i + 1, acc)) # Update the total number of iterations performed. total_iterations += num_iterations Explanation: This function performs a number of optimization iterations so as to gradually improve the variables of the neural network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations. End of explanation def plot_example_errors(cls_pred, correct): # This function is called from print_test_accuracy() below. # cls_pred is an array of the predicted class-number for # all images in the test-set. # correct is a boolean array whether the predicted class # is equal to the true class for each image in the test-set. # Negate the boolean array. incorrect = (correct == False) # Get the images from the test-set that have been # incorrectly classified. images = data.test.images[incorrect] # Get the predicted classes for those images. cls_pred = cls_pred[incorrect] # Get the true classes for those images. cls_true = data.test.cls[incorrect] # Plot the first 9 images. plot_images(images=images[0:9], cls_true=cls_true[0:9], cls_pred=cls_pred[0:9]) Explanation: Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified. End of explanation def plot_confusion_matrix(cls_pred): # This is called from print_test_accuracy() below. # cls_pred is an array of the predicted class-number for # all images in the test-set. # Get the true classifications for the test-set. cls_true = data.test.cls # Get the confusion matrix using sklearn. cm = confusion_matrix(y_true=cls_true, y_pred=cls_pred) # Print the confusion matrix as text. print(cm) # Plot the confusion matrix as an image. plt.matshow(cm) # Make various adjustments to the plot. plt.colorbar() tick_marks = np.arange(num_classes) plt.xticks(tick_marks, range(num_classes)) plt.yticks(tick_marks, range(num_classes)) plt.xlabel('Predicted') plt.ylabel('True') # Ensure the plot is shown correctly with multiple plots # in a single Notebook cell. plt.show() Explanation: Helper-function to plot confusion matrix End of explanation # Split the test-set into smaller batches of this size. test_batch_size = 256 def print_test_accuracy(show_example_errors=False, show_confusion_matrix=False): # Number of images in the test-set. num_test = len(data.test.images) # Allocate an array for the predicted classes which # will be calculated in batches and filled into this array. cls_pred = np.zeros(shape=num_test, dtype=np.int) # Now calculate the predicted classes for the batches. # We will just iterate through all the batches. # There might be a more clever and Pythonic way of doing this. # The starting index for the next batch is denoted i. 
i = 0 while i < num_test: # The ending index for the next batch is denoted j. j = min(i + test_batch_size, num_test) # Get the images from the test-set between index i and j. images = data.test.images[i:j, :] # Get the associated labels. labels = data.test.labels[i:j, :] # Create a feed-dict with these images and labels. feed_dict = {x: images, y_true: labels} # Calculate the predicted class using TensorFlow. cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict) # Set the start-index for the next batch to the # end-index of the current batch. i = j # Convenience variable for the true class-numbers of the test-set. cls_true = data.test.cls # Create a boolean array whether each image is correctly classified. correct = (cls_true == cls_pred) # Calculate the number of correctly classified images. # When summing a boolean array, False means 0 and True means 1. correct_sum = correct.sum() # Classification accuracy is the number of correctly classified # images divided by the total number of images in the test-set. acc = float(correct_sum) / num_test # Print the accuracy. msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})" print(msg.format(acc, correct_sum, num_test)) # Plot some examples of mis-classifications, if desired. if show_example_errors: print("Example errors:") plot_example_errors(cls_pred=cls_pred, correct=correct) # Plot the confusion matrix, if desired. if show_confusion_matrix: print("Confusion Matrix:") plot_confusion_matrix(cls_pred=cls_pred) Explanation: Helper-function for showing the performance Below is a function for printing the classification accuracy on the test-set. It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function. Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size. End of explanation print_test_accuracy() Explanation: Performance before any optimization The accuracy on the test-set is very low because the variables for the neural network have only been initialized and not optimized at all, so it just classifies the images randomly. End of explanation %%time optimize(num_iterations=10000) print_test_accuracy(show_example_errors=True, show_confusion_matrix=True) Explanation: Performance after 10,000 optimization iterations After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%. End of explanation def get_conv_layer_names(): graph = tf.get_default_graph() # Create a list of names for the operations in the graph # for the Inception model where the operator-type is 'Conv2D'. names = [op.name for op in graph.get_operations() if op.type=='Conv2D'] return names conv_names = get_conv_layer_names() conv_names len(conv_names) Explanation: Optimizing the Input Images Now that the neural network has been optimized so it can recognize hand-written digits with about 99% accuracy, we will then find the input images that maximize certain features inside the neural network. This will show us what images the neural network likes to see the most. We will do this by creating another form of optimization for the neural network, and we need several helper functions for doing this. 
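Before the individual helpers, here is the core update they all build on, written as a minimal sketch. Gradient ascent is performed on the input image rather than on the weights; here loss stands for the feature value being maximized, and image and feed_dict hold the current input guess, exactly as in the functions below:
gradient = tf.gradients(loss, x_image)                  # gradient of the feature w.r.t. the input image
grad = np.array(session.run(gradient, feed_dict)).squeeze()
step_size = 1.0 / (grad.std() + 1e-8)                   # adaptive step, guards against division by zero
image = np.clip(image + step_size * grad, 0.0, 1.0)     # ascend the gradient, keep pixels in [0, 1]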
Helper-function for getting the names of convolutional layers Function for getting the names of all the convolutional layers in the neural network. We could have made this list manually, but for larger neural networks it is easier to do this with a function. End of explanation def optimize_image(conv_id=None, feature=0, num_iterations=30, show_progress=True): Find an image that maximizes the feature given by the conv_id and feature number. Parameters: conv_id: Integer identifying the convolutional layer to maximize. It is an index into conv_names. If None then use the last fully-connected layer before the softmax output. feature: Index into the layer for the feature to maximize. num_iteration: Number of optimization iterations to perform. show_progress: Boolean whether to show the progress. # Create the loss-function that must be maximized. if conv_id is None: # If we want to maximize a feature on the last layer, # then we use the fully-connected layer prior to the # softmax-classifier. The feature no. is the class-number # and must be an integer between 1 and 1000. # The loss-function is just the value of that feature. loss = tf.reduce_mean(logits[:, feature]) else: # If instead we want to maximize a feature of a # convolutional layer inside the neural network. # Get the name of the convolutional operator. conv_name = conv_names[conv_id] # Get the default TensorFlow graph. graph = tf.get_default_graph() # Get a reference to the tensor that is output by the # operator. Note that ":0" is added to the name for this. tensor = graph.get_tensor_by_name(conv_name + ":0") # The loss-function is the average of all the # tensor-values for the given feature. This # ensures that we generate the whole input image. # You can try and modify this so it only uses # a part of the tensor. loss = tf.reduce_mean(tensor[:,:,:,feature]) # Get the gradient for the loss-function with regard to # the input image. This creates a mathematical # function for calculating the gradient. gradient = tf.gradients(loss, x_image) # Generate a random image of the same size as the raw input. # Each pixel is a small random value between 0.45 and 0.55, # which is the middle of the valid range between 0 and 1. image = 0.1 * np.random.uniform(size=img_shape) + 0.45 # Perform a number of optimization iterations to find # the image that maximizes the loss-function. for i in range(num_iterations): # Reshape the array so it is a 4-rank tensor. img_reshaped = image[np.newaxis,:,:,np.newaxis] # Create a feed-dict for inputting the image to the graph. feed_dict = {x_image: img_reshaped} # Calculate the predicted class-scores, # as well as the gradient and the loss-value. pred, grad, loss_value = session.run([y_pred, gradient, loss], feed_dict=feed_dict) # Squeeze the dimensionality for the gradient-array. grad = np.array(grad).squeeze() # The gradient now tells us how much we need to change the # input image in order to maximize the given feature. # Calculate the step-size for updating the image. # This step-size was found to give fast convergence. # The addition of 1e-8 is to protect from div-by-zero. step_size = 1.0 / (grad.std() + 1e-8) # Update the image by adding the scaled gradient # This is called gradient ascent. image += step_size * grad # Ensure all pixel-values in the image are between 0 and 1. image = np.clip(image, 0.0, 1.0) if show_progress: print("Iteration:", i) # Convert the predicted class-scores to a one-dim array. pred = np.squeeze(pred) # The predicted class for the Inception model. 
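            # (In this notebook that means the MNIST network built above; the
            #  wording is carried over from Tutorial #13, which used the same loop.)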
pred_cls = np.argmax(pred) # The score (probability) for the predicted class. cls_score = pred[pred_cls] # Print the predicted score etc. msg = "Predicted class: {0}, score: {1:>7.2%}" print(msg.format(pred_cls, cls_score)) # Print statistics for the gradient. msg = "Gradient min: {0:>9.6f}, max: {1:>9.6f}, stepsize: {2:>9.2f}" print(msg.format(grad.min(), grad.max(), step_size)) # Print the loss-value. print("Loss:", loss_value) # Newline. print() return image.squeeze() Explanation: Helper-function for finding the input image This function finds the input image that maximizes a given feature in the network. It essentially just performs optimization with gradient ascent. The image is initialized with small random values and is then iteratively updated using the gradient for the given feature with regard to the image. End of explanation def optimize_images(conv_id=None, num_iterations=30): Find 10 images that maximize the 10 first features in the layer given by the conv_id. Parameters: conv_id: Integer identifying the convolutional layer to maximize. It is an index into conv_names. If None then use the last layer before the softmax output. num_iterations: Number of optimization iterations to perform. # Which layer are we using? if conv_id is None: print("Final fully-connected layer before softmax.") else: print("Layer:", conv_names[conv_id]) # Initialize the array of images. images = [] # For each feature do the following. for feature in range(0,10): print("Optimizing image for feature no.", feature) # Find the image that maximizes the given feature # for the network layer identified by conv_id (or None). image = optimize_image(conv_id=conv_id, feature=feature, show_progress=False, num_iterations=num_iterations) # Squeeze the dim of the array. image = image.squeeze() # Append to the list of images. images.append(image) # Convert to numpy-array so we can index all dimensions easily. images = np.array(images) # Plot the images. plot_images10(images=images) Explanation: This next function finds the images that maximize the first 10 features of a layer, by calling the above function 10 times. End of explanation optimize_images(conv_id=0) Explanation: First Convolutional Layer These are the input images that maximize the features in the first convolutional layer, so these are the images that it likes to see. End of explanation optimize_images(conv_id=1) Explanation: Note how these are very simple shapes such as lines and angles. Some of these images may be completely white, which suggests that those features of the neural network are perhaps unused, so the number of features could be reduced in this layer. Second Convolutional Layer This shows the images that maximize the features or neurons in the second convolutional layer, so these are the input images it likes to see. Note how these are more complex lines and patterns compared to the first convolutional layer. End of explanation image = optimize_image(conv_id=None, feature=2, num_iterations=10, show_progress=True) Explanation: Final output layer Now find the image for the 2nd feature of the final output of the neural network. That is, we want to find an image that makes the neural network classify that image as the digit 2. This is the image that the neural network likes to see the most for the digit 2. End of explanation plot_image(image) Explanation: Note how the predicted class indeed becomes 2 already within the first few iterations so the optimization is working as intended. 
Also note how the loss-measure is increasing rapidly until it apparently converges. This is because the loss-measure is actually just the value of the feature or neuron that we are trying to maximize. Because this is the logits-layer prior to the softmax, these values can potentially be infinitely high, but they are limited because we limit the image-values between 0 and 1. Now plot the image that was found. This is the image that the neural network believes looks most like the digit 2. End of explanation optimize_images(conv_id=None) Explanation: Although some of the curves do hint somewhat at the digit 2, it is hard for a human to see why the neural network believes this is the optimal image for the digit 2. This can only be understood when the optimal images for the remaining digits are also shown. End of explanation # This has been commented out in case you want to modify and experiment # with the Notebook without having to restart it. # session.close() Explanation: These images may vary each time you run the optimization. Some of the images can be seen to somewhat resemble the hand-written digits. But the other images are often impossible to recognize and it is hard to understand why the neural network thinks these are the optimal input images for those digits. The reason is perhaps that the neural network tries to recognize all digits simultaneously, and it has found that certain pixels often determine whether the image shows one digit or another. So the neural network has learned to differentiate those pixels that it has found to be important, but not the underlying curves and shapes of the digits, in the same way that a human recognizes the digits. Another possibility is that the data-set contains mis-classified digits which may confuse the neural network during training. We have previously seen how some of the digits in the data-set are very hard to read even for humans, and this may cause the neural network to become distorted and trying to recognize strange artifacts in the images. Yet another possibility is that the optimization process has stagnated in a local optimum. One way to test this, would be to run the optimization 50 times for the digits that are unclear, and see if some of the resulting images become more clear. Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources. End of explanation
12,622
Given the following text description, write Python code to implement the functionality described below step by step
Description: <figure> <IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right"> </figure> IHE Python course, 2017 Time series manipulation T.N.Olsthoorn, April 18, 2017 Most scientists and engineers, including hydrologists, physicists, electronic engineers, social scientists and economists, are often faced with time series that bear information that is to be extracted or to be used in predictions. Pandas has virtually all the tools that are required to handle time series, while keeping dates and data strictly connected. These time series, loaded into pandas, then form the basis of further analysis. Loading into pandas can be done with pd.read_csv, pd.read_table and pd.read_excel, as we used before, as well as with numerous other functions ready to use in pandas. Just use tab-completion to see all the possibilities
Step1: Show which reading functions pandas has as onboard methods. We can use a comprehension to select what we want
Step2: Hence there's a large number of possibilities. Move to the directory with the examples. Then print pwd to see if you're there. Notice that the first part of the pwd command will be different on your computer.
Step3: See if we have a csv datafile, which is a long-year groundwater head series in the south of the Netherlands (chosen more or less at random for its length).
Step4: It's not a bad habit to use os to verify that the file exists.
Step5: Ok, now we will naively try to read it in using pd.read_csv. This may fail or not. If it fails, we sharpen the knife by adding or using one or more options provided by pd.read_csv.
Step6: Obviously, the read_csv above failed. Upon inspection of the file in an editor, we see that the top is a mess. Not really, but at least we want to skip this part and get to the actual time series data of interest further down in the file. So let's skip a few rows (too few, but we can correct step by step)
Step7: Ok, we got some top table in the file. See which line pd thought was the header. Ok, skip a few more lines.
Step8: Now we really got the first table in the file, but this is not the one we need. On line 3 we see the desired header line. So skip 3 more lines to get there.
Step9: This is fine. At least a good start. But we want "Peildatum" as our index. So
Step10: Better, but the index still consists of strings and not of dates. Therefore, tell read_csv to parse the dates
Step11: The problem is that some dates will be messed up, as pandas will by default interpret dates as mm-dd-yyyy, while we have dd-mm-yyyy. For some dates this does not matter, but for other dates this is ambiguous unless it is specified that the dates start with the day instead of the month.
Step12: So far so good. Now do some clean-up, as we only need the 6th column with the head above national datum. We can tell read_csv which columns to use by specifying a list of headers. First trial
Step13: This failed, because we now have to specify all columns we want to use. This should include the column "Peildatum". So add it to the list.
Step14: This is fine. We now have a one-column dataFrame with the proper index. For English speakers, change the column header for better readability.
Step15: Check that pb is still a data frame, and only when we select one column from a dataFrame does it become a series.
Step16: So select this column to get a time series.
Step17: Dataframes and series can immediately be plotted.
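For instance, as a minimal sketch (pb is the head series constructed in the steps above; the full call appears in the code further down):
pb.plot()      # pandas picks up the DatetimeIndex for the x-axis automatically
plt.show()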
Of course, you may also plot titles on the axes and above the plot. But because of laziness, I leave this out for this exercise.
Step18: The next problem is to get the mean of the highest three measurements within each hydrological year, which starts on April 1 and ends on March 31. This requires resampling the data per hydrologic year, which can be done with aliases put in the rule of the resample function of pandas series and dataFrames. Here are the options
Step19: This uses Groupby functionality, which we'll inspect next. In fact, pb.resample(...) yields a DatetimeIndexResampler
Step20: This resampler has its own functionality that can be used. This functionality is shown here
Step21: It's now easy to plot the resampled data using several of the functions, like so
Step22: Also interesting is the agg function (which is an abbreviation of aggregate function). Here is its documentation
Step25: So we can use any function and apply it on the data grouped by the resampler. These data are the time series consisting of the data that fall in the interval between the last resample moment and the current one. Let's try this out to get the three highest values of any hydrological year. For this we define our own function, called highest3. It works by taking z, which should be a time series consisting of any one of the hydrological years in your long-year time series. We use argsort to get the indices of the ordered values (we could also directly use the values themselves, but it's good to know argsort exists). The series is sorted from low to high, so we take the last 3 values, i.e. the highest 3. Notice that this also works in Python if the total number of values is less than three, so we don't need to check this. Then we return the mean of these highest three values. That's all.
Step26: This, of course, solves the problem, which means we could just as well also compute the lowest 3 at the same time. And why not also remember the highest and lowest 3
Step28: The above functions all reduce, that is, they all aggregate the data held by the resampler for each sampling interval to a single value (or a tuple)
Step29: This does indeed give a tuple of the three highest values within each sampling interval, but we can't plot these values easily on the graph of the time series. Other functionality of the resampler is the indices, i.e. Z.indices. This yields a dictionary with the indices into the overall time series that belong to each resampled timestamp. Therefore we can readily find the values that belong to each hydrological year.
Step31: So apply() works the same as agg(), at least here. If we want to plot the three highest points in each hydrological year, we could make a list with the sub time series that consist of the three highest points, with their timestamps as index. Then each item in this list is a series consisting of three values, which we may plot one after the other.
Step32: The next step is to plot them. But we first plot the entire data set as a line. Then we plot each sub time series as small circles. The adjacent hydrological years then have a different color.
Step34: If we want to color the data in the same hydrological year in the same color, then we also make a list of all data in each sampling interval next to the list of the three highest values. Each item in dd has the complete time series of the interval, each item in dd3 has a time series of the three highest values alone. The append within the function is a way of using a side effect to get things done. It's a bit sneaky, not very elegant.
But it works
Step35: Show that the argsort works to get the indices that sort the time series
Step36: Instead of appending to the lists dd and dd3 sneakily behind the scenes (hidden inside the function, that is, as a side effect of the function), we can also aim at achieving the same thing head-on. This can be done using the indices of each sub-timeseries, which is also a functionality of the resampler.
Step37: The resampler object Z also has a method indices, which yields a dictionary with the indices of the values that fall in each sampling interval. The indices are the absolute indices, i.e. they point into the large, original time series. Let's see how this works. First generate the dictionary.
Step38: A dict has keys. So let's show one item in this dict like so
Step39: This implies that we can now plot each sub time series like so
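A compact sketch of that final step (hedged: the code below uses the older pb.ix indexer, which has since been removed from pandas, so .iloc is the current positional equivalent; pb and Z are the series and resampler from the steps above):
for timestamp, I in Z.indices.items():    # I holds the positions of this hydrological year's samples
    pb.iloc[I].plot(marker='.')           # plot the sub-series for that year
plt.show()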
Python Code: import pandas as pd import matplotlib.pyplot as plt import numpy as np Explanation: <figure> <IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right"> </figure> IHE Python course, 2017 Time series manipulation T.N.Olsthoorn, April 18, 2017 Most scientists and engineers, including hydrologists, physisists, electronic engineers, social scientists and economists are often faced with time series that bear information that is to be extracted or to be used in predictions. Pandas has virtually all the tools that are required to handle time series, while keeping dates and data strictly connected. These time series loaded into pandas then form the basis of further analysis. Loading into pandas can be done with pd.read_csv, pd.read_table, pd.read_excel as we used before as well as with numerous other functions ready to be using in pandas. Just use tab-complition to see al the possibilities End of explanation [d for d in dir(pd) if d.startswith("read")] pd.read_table() [d for d in dir(pd) if d.startswith("read")] Explanation: Show which reading functions pandas has as onboard methods. We can use a coprehension to select what we want: End of explanation cd python/IHEcourse2017/exercises/Apr18/ pwd Explanation: Hence there's a large number of possibilities. Move to the directory with the examples. Then print pwd to see if you're there. Notice, the first part of the pwd command will be different on your computer. End of explanation ls Explanation: See if we have a csv datafile, which is a long year groundwater head series in the south of the Netherlands (chosen more or less at random for its length). End of explanation import os os.path.isfile("B50E0133001_1.csv") Explanation: It's not a bad habit to use os to verify that the file exists. End of explanation pb = pd.read_csv("B50E0133001_1.csv") pb.head() Explanation: Ok, now we will naively try to read it in using pd.read_csv. This may fail or not. If it fails we sharpen the knife by adding or using one or more options provided by pd.read_csv. End of explanation pb = pd.read_csv("B50E0133001_1.csv", skiprows=9) pb.head() Explanation: Obviously, the read_csv above failed. Upon inspection of the file in an editor, we see that the top is a mess. Not really, but at least we want to sktip this part and get to the actual time series data of interest further down in the file. So let's skip a few rows (too few, but we can correct step by step) End of explanation pb = pd.read_csv("B50E0133001_1.csv", skiprows=11) pb.head() Explanation: Ok, we got some top table in the file. See which line pd thought was the header. Ok. skip a few more lines. End of explanation pb = pd.read_csv("B50E0133001_1.csv", skiprows=15) pb.head() Explanation: Now we really got the first table in the file, but this is not the one we need. On line 3 we see the desired header line. So skip 3 more lines to get there. End of explanation pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum") pb.head() Explanation: This is fine. At least a good start. But we want "Peildatum" as our index. So: End of explanation pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True) pb.head() Explanation: Better, but the idex still consists of strings and not of dates. 
Therefore, tell read_csv to part the dates: End of explanation pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True, dayfirst=True) pb.head() pb.head() Explanation: Problem is that some dates will be messed up as pandas will by default interprete dates as mm-dd-yyyyy, while we have dd-mm-yyyy. For some dates this does not matter but for other dates this is ambiguous unless it is specified that the dates start with the day instead of the month. End of explanation pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True, dayfirst=True, usecols=["Stand (cm t.o.v. NAP)"]) pb.head() Explanation: So far so good. Now do some clean-up as we only need the 6th column with the head above national datum. We can tell read_csv what columns to use by specifying a list of headers. First trial End of explanation pb = pd.read_csv("B50E0133001_1.csv", skiprows=15, index_col="Peildatum", parse_dates=True, dayfirst=True, usecols=["Peildatum", "Stand (cm t.o.v. NAP)"]) pb.head() Explanation: This failed, because we now have to specify all columns we want to use. This should include the columne "Peildatum". So add it to the list. End of explanation pb.columns = ["NAP"] pb.head() Explanation: This is fine. We now have a one-column dataFrame with the proper index. For English speakers, change the column header for better readability. End of explanation print(type(pb)) print(type(pb['NAP'])) Explanation: Check that pb is still a data frame, and only when we select one column from a dataFrame it becomes a series. End of explanation pb = pb['NAP'] print(type(pb)) Explanation: So select this column to get a time series. End of explanation pb.plot() plt.show() # default color is blue, and default plot is line. Explanation: Dataframes and series can immediately be plotted. Of course, you may also plot titles on the axes and above the plot. But because of lazyness, I leave this out for this exercise. End of explanation pb.resample("AS").mean().head() pb.resample("AS-APR").mean().head() Explanation: The next problem is to get the mean of the highest three measurements within each hydrological year, which starts on April 1 and ends at March 31. This requires resampling the data per hydrologic year. Which can be done with aliases put in the rule of the resample function of pandas series and dataFrames. Here are options: Offset aliases (previously alled time rules) that can be used or resampling a time series or a dataFrame http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases B business day frequency C custom business day frequency (experimental) D calendar day frequency W weekly frequency M month end frequency SM semi-month end frequency (15th and end of month) BM business month end frequency CBM custom business month end frequency MS month start frequency SMS semi-month start frequency (1st and 15th) BMS business month start frequency CBMS custom business month start frequency Q quarter end frequency BQ business quarter endfrequency QS quarter start frequency BQS business quarter start frequency A year end frequency BA business year end frequency AS year start frequency BAS business year start frequency BH business hour frequency H hourly frequency T minutely frequency S secondly frequency L milliseonds U microseconds N nanoseconds But fo sample at some arbitrary interval we need anchored offsets as the resample rule. Here are the options. 
Anchored offsets http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases For some frequencies you can specify an anchoring suffix: Alias Description W-SUN weekly frequency (sundays). Same as ‘W’ W-MON weekly frequency (mondays) W-TUE weekly frequency (tuesdays) W-WED weekly frequency (wednesdays) W-THU weekly frequency (thursdays) W-FRI weekly frequency (fridays) W-SAT weekly frequency (saturdays) (B)Q(S)-DEC quarterly frequency, year ends in December. Same as ‘Q’ (B)Q(S)-JAN quarterly frequency, year ends in January (B)Q(S)-FEB quarterly frequency, year ends in February (B)Q(S)-MAR quarterly frequency, year ends in March (B)Q(S)-APR quarterly frequency, year ends in April (B)Q(S)-MAY quarterly frequency, year ends in May (B)Q(S)-JUN quarterly frequency, year ends in June (B)Q(S)-JUL quarterly frequency, year ends in July (B)Q(S)-AUG quarterly frequency, year ends in August (B)Q(S)-SEP quarterly frequency, year ends in September (B)Q(S)-OCT quarterly frequency, year ends in October (B)Q(S)-NOV quarterly frequency, year ends in November (B)A(S)-DEC annual frequency, anchored end of December. Same as ‘A’ (B)A(S)-JAN annual frequency, anchored end of January (B)A(S)-FEB annual frequency, anchored end of February (B)A(S)-MAR annual frequency, anchored end of March (B)A(S)-APR annual frequency, anchored end of April (B)A(S)-MAY annual frequency, anchored end of May (B)A(S)-JUN annual frequency, anchored end of June (B)A(S)-JUL annual frequency, anchored end of July (B)A(S)-AUG annual frequency, anchored end of August (B)A(S)-SEP annual frequency, anchored end of September (B)A(S)-OCT annual frequency, anchored end of October (B)A(S)-NOV annual frequency, anchored end of November To see this at work. Resample the time series by hydrological year and compute the mean head in every hydrological year. This can be done as follows: End of explanation Z = pb.resample("AS-APR") type(Z) Explanation: This uses Groupby functionality. Which we'll inspect next. In fact, pb.resample(...) yields a DatetimeIndexResampler End of explanation [z for z in dir(Z) if not z.startswith("_")] Explanation: This resampler has its own functinality that can be used. This fucntionality is shown here: End of explanation Z.max().plot(label="max") Z.mean().plot(label="mean") Z.min().plot(label="min") plt.title("The max, mean and min of the head in each hydrological year") plt.legend(loc='best') plt.show() Z.max() for z in Z: print(z) Explanation: It's now easy to plot the resampled data using several of the functions, like so: Notice that Z.mean() is a pandas series so that Z.mean().plot() is plot method of the pandas series. End of explanation print(Z.agg.__doc__) Explanation: Insteresting is the agg function (which is an abbreviation of aggregate function). Here is its documentation: End of explanation def highest3(z): returns mean of highest 3 values using np.argsort I = np.argsort(z)[-3:] return z[I].mean() def highest3a(z): returns mean of highest 3 values using np.sort z = np.sort(z) return z[-3:].mean() # Apply print("Using np.argsort") highest = pb.resample("AS-APR").agg(highest3) highest.columns = ["mean_highest_value"] print(highest.head()) print("\nUsing np.sort") highesta = pb.resample("AS-APR").agg(highest3a) highesta.columns = ["mean_highest_value"] print(highesta.head()) Explanation: So we can use any function and apply it on the data grouped by the resampler. These data are the time series consisting of the data that fall in the interval between the last resample moment and the currrent one. 
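As a quick hedged check that agg accepts plain functions, this one-liner should reproduce the Z.mean() result from the earlier cell:
Z.agg(np.mean).head()   # same aggregation as Z.mean().head() above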
Let's try this out to get the three highest values of any hydrological year. For this we define our own function, called highest3. It works by taking z which should be a time series, one consisting of any of the hydrological years in your long-year time series. We use argsort to the indices of the ordered values (we could also directly use the values themselves, but it's good to know argsort exists). The series is sorted from low to high, so we take the last 3 values, i.e. the highest 3. Notice that this also works in Python of the total number of values is less than three, so we don't need to check this. Then we return the mean of these highest three values. That's all. End of explanation def h_and_l_3(z): z = np.sort(z) # rounding off for a nicer list, but is not necessary return (np.round(z[ :3].mean()), np.round(z[-3:].mean())) # Apply h_and_l = pb.resample("AS-APR").agg(h_and_l_3) h_and_l.columns = ["mean_lowest_and_highest_values"] h_and_l.head() Explanation: This, of course, solves the problem. Which means we could just as well also compute the lowest 3 at the same time. And why not also remember the highest and lowest 3 End of explanation def h3(z): Returns a tuple of the three highest value within sampling interval return (z[np.argsort(z)[-3:]],) Z.agg(h3).head() Explanation: The above functions all reduce, that is, they all aggreate the data held by the resampler for each sampling interval to a single value (or a tuple) End of explanation Z.apply(h3).head() Explanation: This does indeed give a tuple of the three highest values within each sampling interval, but we can't plot these values easily on the graph of the time series. Other functionality of the sampler are the indices, i.e. Z.indices. This yields a dictionary with the indices into the overall time series that belong to each resampled timestamp. Therefore we can readily find the values that belong to each hydrological year. End of explanation dd = list() def h33(z): Returns a tuple of the three highest value within sampling interval #print(type(z)) dd.append(z[z.argsort()[-3:]]) return # the time series are put in the list dd Z.apply(h33).head() #Z.agg(h33).head() # alternative works just as well # for instance show dd[3] print(type(dd[3])) dd[3] Explanation: So appy() works the same as agg() at least here. If we want to plot the three highest points in each hydrlogical year, we could make a list with the sub time series that consist of the three highest points with there timestamp as index. Then, each item in this list is a series consisting of three values, which we may plot one after the other. End of explanation pb.plot() # plot all data as a line for d in dd: #plot sub time series of the three highest points d.plot(marker='o') plt.show() Explanation: The next step is to plot them. But we first plot the entire data set as a line. Then we plot each sub time series as small circles. The adjacent hydrological years then have a different color. End of explanation dd dd = list() # the entire time series in each sampling interval. dd3 = list() # only the three highest values in each sampling interval. def h33(z): Returns a tuple of the three highest value within sampling interval Notice that this function just used append() to generate a list as a side-effect. It effectively consists of two lines and returns nothing. # z is what the sampler Z yields while resampling the original time series # It isthe sub-time series that falls in the running interval. # With tis append we get a list of the sub time series. 
dd.append(z[:]) # z[:] forces a copy # you can do an argsort on z. This yields a time series with the same index # but with as values the index in the original series. You can see it if # you print it here or make a list of these index-time series. dd3.append(z[z.argsort()[-3:]]) return # Here we apply the function by calling the method .agg() of the sampler Z. # The method receives the just created function as input. It applies this function # on every iteration, that is on every sub-time series. # Each time the function h33 is called it appends to the lists dd and ddr. # The sampler Z method agg calls the funcion h33 for every sample interval. # You may be tempted to insert a print statement in the function to see that # this is what actually happens. Z.apply(h33) # Then plot the sub-time series in the lists in dd and dd3. # We make sure to use the same color for all points in the same # hydrological year in both dd and dd3. # The subseries in dd are plotted as a line, those in dd3 as small circles. clr = 'brgkmcy'; i=0 # colors to use for d3, d in zip(dd3, dd): d.plot(marker='.', color=clr[i]) # all data in hydrological year d3.plot(marker='o', color=clr[i]) # highest three i += 1 if i==len(clr): i=0 # set i to 0 when colors are exhausted. plt.title("measurements per hydr. yr with the 3 highest accentuated") plt.xlabel('time') plt.ylabel('cm above national datum NAP') plt.show() Explanation: If we want to color the data in the same hydrological year in the same color, then we also make a list of all data in each sampling interval next to the list of the three highest values. Each item in dd has the complete time series of the interval, each item in dd3 has a tiem series of the three highest values alone. The append within the function is away of using a side-effect to get things done. It's a bit sneaky, not very elegant. But it works: End of explanation print("The sub time series for 1964\n") print(dd[10]) print("\nThe indices that sort this sub series. It is itself a time series") dd[10].argsort() Explanation: Show that the argort works to get the indices that sort the time series End of explanation # Don't need this, but just to make sure refresh our sampler Z = pb.resample("AS-APR") Explanation: Instead of appending to the list dd and dd3 sneakyly behind the scene (hidden inside the function, that is as a side effect of the function), we can also aim in achieveing the same thing head-on. This can be done using the indices of each sub-timeseries, which is also a functionality of the sample. End of explanation Idict = Z.indices type(Idict) Explanation: The resampler object Z also has a method indices, which yields a dictionary with the indices of the values that fall in each sampling interval. The indices are the absolute indices, i.e. they point into the large, original time series. Let's see how this works. First generate the dictionary. End of explanation pb.ix[3] # Show the indices for one of the keys of the Idict for k in Idict.keys(): print(k) # the key print() print(Idict[k]) # the indices print() print(pb.ix[Idict[k]]) # the values beloning to these indices break Explanation: A dict has keys. 
So let's show one item in this dict like so: End of explanation I # Show the indices for one of the keys of the Idict fig, ax = plt.subplots() clr = "brgkmcy"; i=0 Idict = Z.indices for k in Idict.keys(): I = Idict[k] # The indices belonging to this key k ax.plot(pb.ix[I].index, pb.ix[I].values, color=clr[i]) # The values have dimension [1,n] so use values[0] to get a 1D array of indices J = np.argsort(pb.ix[I].values[0])[-3:] # Need a comprehension to get the indexes because # indexing like I[J] is not allowed for lists Idx = [I[j] for j in J] ax.plot(pb.index[Idx], pb.values[Idx], color=clr[i], marker='o') i += 1; if i==len(clr): i=0 # plot the hydrological year boundaries as vertical grey lines ylim = ax.get_ylim() for k in Idict.keys(): i = Idict[k][-1] ax.plot(pb.index[[i, i]], ylim, color=[0.8, 0.8, 0.8]) plt.show() #pb.ix[I].plot(ax=ax) # the values beloning to these indices (can't omit the legend) Explanation: This implies that we can now plot each sub time series like so: To plot them together with the boundaries of each hydrological year, we first plot the data as a colored line, within each hydrological year. Then we plot the vertical lines that separate the hydrological years. The lines are colored light grey using color=[R, G, B] where R, G and B are all 0.8. ax=get_ylim() gets the extremes of the vertical axis, which are then used to draw the vertical lines. End of explanation
12,623
Given the following text description, write Python code to implement the functionality described below step by step Description: Advanced Step1: MPI Modes As of the 2.1 release, PHOEBE officially support parallelization using MPI within run_compute. The 2.3 release introduced support for run_solver, including support for both MPI and multiprocessing. There are several "modes of operation" depending on your settings and whether you're running your script within python or mpirun. You can enable/disable MPI within phoebe by placing phoebe.mpi_on() or phoebe.mpi_off() at the top of your script. If you do not do this, MPI will be enabled by default if within mpirun and disabled otherwise. When MPI is enabled, PHOEBE will do the following Step2: PHOEBE determines whether the current script is running within an MPI environment by checking for environment variables set by mpirun/mpiexec. If you run into any issues with PHOEBE not behaving as expected, check to see whether PHOEBE thinks its within mpirun.
Python Code: #!pip install -I "phoebe>=2.3,<2.4" import phoebe Explanation: Advanced: Running PHOEBE in MPI Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation print(phoebe.mpi.enabled) print(phoebe.mpi.mode) phoebe.mpi_on() print(phoebe.mpi.enabled) print(phoebe.mpi.mode) print(phoebe.mpi.myrank) print(phoebe.mpi.nprocs) Explanation: MPI Modes As of the 2.1 release, PHOEBE officially support parallelization using MPI within run_compute. The 2.3 release introduced support for run_solver, including support for both MPI and multiprocessing. There are several "modes of operation" depending on your settings and whether you're running your script within python or mpirun. You can enable/disable MPI within phoebe by placing phoebe.mpi_on() or phoebe.mpi_off() at the top of your script. If you do not do this, MPI will be enabled by default if within mpirun and disabled otherwise. When MPI is enabled, PHOEBE will do the following: * if within mpirun: * run_compute: uses PHOEBE's built-in per-dataset or per-time parallelization. The main code you write in your script is executed on a single processor, but during run_compute the task is divided among the available resources. * run_solver: when applicable, PHOEBE will parallelize over the individual models within the solver and serialize run_compute. For sampler.emcee, this will only be done if nwalkers <= nprocs. * if not within mpirun (ie. in a serial python environment): will spawn a separate thread at run_compute or run_solver, using number of processors sent to phoebe.mpi_on (for example: phoebe.mpi_on(nprocs=4)). When MPI is disabled, PHOEBE will do the following: * if within mpirun: PHOEBE will run equally on all processors. The user can customize parallelization with access to phoebe.mpi.nprocs, phoebe.mpi.myrank. Your script runs equally on each processor, meaning you have multiple (separate) copies of the bundle. USE WITH CAUTION. * if not within mpirun (ie. in a serial python environment): * run_compute: If sample_from is used, PHOEBE will make use of multiprocessing across all available processors to parallize across sampled models. In all other cases, PHOEBE will run on a single processor in serial-mode. * run_solver: PHOEBE will make use of multiprocessing across all available processors whenever the solver backend support multiprocessing, otherwise will fallback on serial-model Accessing/Changing MPI Settings To check the currently adopted settings, as well as quickly access information needed for manually doing your own parallelization, access the phoebe.mpi object. End of explanation print(phoebe.mpi.within_mpirun) Explanation: PHOEBE determines whether the current script is running within an MPI environment by checking for environment variables set by mpirun/mpiexec. If you run into any issues with PHOEBE not behaving as expected, check to see whether PHOEBE thinks its within mpirun. End of explanation
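As a supplement to the modes described above, the fragment below sketches the "MPI disabled inside mpirun" case, where the script itself divides work across ranks using phoebe.mpi.myrank and phoebe.mpi.nprocs. The parameter sweep and the incl@binary twig are illustrative assumptions, not part of the original notebook, and the script is assumed to be launched with mpirun.

import numpy as np
import phoebe

phoebe.mpi_off()  # each rank runs PHOEBE serially; we split the work by hand

# Hypothetical sweep: rank k takes every nprocs-th inclination value.
inclinations = np.linspace(80, 90, 20)
my_values = inclinations[phoebe.mpi.myrank::phoebe.mpi.nprocs]

for incl in my_values:
    b = phoebe.default_binary()
    b.set_value('incl@binary', incl)
    # ... add datasets and call b.run_compute() here, then gather results ...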
12,624
Given the following text description, write Python code to implement the functionality described below step by step Description: comment data 가져오기 및 전처리 Step1: user dataframe 만들기 Step2: user, book 인덱스 및 처리 Step3: user * book matrix 만들기 Step5: user * user cosine similarity 매트릭스 만들기 1 권 169464 명 1분 59초 2 권 57555 명 40.6초 3 권 31808 명 22.4초 4 권 20470 명 14.5초 5 권 14393 명 10.2초 6 권 10630 명 7.58초 7 권 8074 명 5.8초 8 권 6306 명 4.54초 9 권 4995 명 3.56초 10 권 4052 명 2.91초
Python Code: episode_comment = pd.read_csv("data/webnovel/episode_comments.csv", index_col=0, encoding="cp949") episode_comment["ID"] = episode_comment["object_id"].apply(lambda x: x.split("-")[0]) episode_comment["volume"] = episode_comment["object_id"].apply(lambda x: x.split("-")[1]).astype("int") episode_comment["writer_nickname"].fillna("", inplace=True) def make_user_id(i): if episode_comment["writer_nickname"].loc[i] == "": return episode_comment["writer_ip"].loc[i] + episode_comment["writer_id"].loc[i] else: return episode_comment["writer_nickname"].loc[i] + episode_comment["writer_id"].loc[i] user_id = [ make_user_id(i) for i in range(len(episode_comment)) ] episode_comment["user_id"] = user_id episode_comment.drop( [ "contents", "down_count", "modified_ymdt", "registered_ymdt", "ticket", "up_count", "writer_ip", "writer_id", "writer_nickname", "writer_profile_type", "object_id", ], axis=1, inplace=True ) episode_comment.head() main_comment = pd.read_csv("data/webnovel/main_comments.csv", index_col=0, encoding="cp949") main_comment["ID"] = main_comment["object_id"].apply(lambda x: x.split("-")[1]) main_comment["volume"] = 0 main_comment["writer_nickname"].fillna("", inplace=True) def make_user_id(i): if main_comment["writer_nickname"].loc[i] == "": return main_comment["writer_ip"].loc[i] + main_comment["writer_id"].loc[i] else: return main_comment["writer_nickname"].loc[i] + main_comment["writer_id"].loc[i] user_id = [ make_user_id(i) for i in range(len(main_comment)) ] main_comment["user_id"] = user_id main_comment.drop( [ "contents", "down_count", "modified_ymdt", "registered_ymdt", "ticket", "up_count", "writer_ip", "writer_id", "writer_nickname", "writer_profile_type", "object_id", ], axis=1, inplace=True ) main_comment.head() Explanation: comment data 가져오기 및 전처리 End of explanation user_df = pd.concat([episode_comment, main_comment]).groupby(["user_id", "ID"], as_index=False).agg({"volume":np.size}) len(user_df) df = pd.read_csv("data/webnovel/main_df.csv", encoding="cp949", index_col=0) df["ID"] = df["ID"].astype("str") df = user_df.merge(df, on="ID")[["user_id", "genre", "volume"]].drop_duplicates() len(df["user_id"].unique()) romance = df[df["genre"] == 101] no_romance = df[df["genre"] != 101] len(romance.merge(no_romance, on="user_id")) Explanation: user dataframe 만들기 End of explanation user_size = len(user_df["user_id"].unique()) users = user_df["user_id"].unique() users_index = { user:index for index, user in enumerate(users) } book_df = pd.read_csv("data/webnovel/main_df.csv", encoding="cp949", index_col=0) book_size = len(book_df.ID.unique()) books = book_df.ID.unique() len(books) books_index = { str(book):index for index, book in enumerate(books) } user_df["book_index"] = user_df["ID"].apply(lambda x: books_index[x]) user_df["user_index"] = user_df["user_id"].apply(lambda x: users_index[x]) Explanation: user, book 인덱스 및 처리 End of explanation empty_matrix = np.zeros((user_size, book_size)) for index, i in user_df.iterrows(): empty_matrix[i["user_index"], i["book_index"]] = i["volume"] user_book_matrix = pd.DataFrame(empty_matrix, columns=books) user_book_matrix.index = users user_book_matrix Explanation: user * book matrix 만들기 End of explanation for i in range(15): print(i+1, "권 이상 읽은 사람은",len(user_book_matrix[user_book_matrix.sum(axis=1)>i]), "명 입니다.") from scipy.spatial import distance def cosine_distance(a, b): return 1 - distance.cosine(a, b) def make_score(books): MAE 스코어 계산 user_books_matrix_two = user_book_matrix[user_book_matrix.sum(axis=1)>books] empty_matrix 
= np.zeros((50, len(user_books_matrix_two))) # 샘플 10명 users_two_index = user_books_matrix_two.index user_books_matrix_two.index = range(len(user_books_matrix_two)) for index_1, i in user_books_matrix_two[:10].iterrows(): for index_2, j in user_books_matrix_two[index_1+1:].iterrows(): empty_matrix[index_1, index_2] = cosine_distance(i, j) score_list = [] for i in range(10): ID_index = [] while len(ID_index) < 11: if empty_matrix[i].argmax() >= 1: empty_matrix[i, empty_matrix[i].argmax()] = 0 else: ID_index.append(empty_matrix[i].argmax()) empty_matrix[i, empty_matrix[i].argmax()] = 0 data = user_books_matrix_two.loc[i] predict = user_books_matrix_two.loc[ID_index].mean() score = data[data > 0] - predict[data > 0] score_list.append(np.absolute(score).sum()/len(score)) print(np.array(score_list).mean()) return np.array(score_list).mean() scores = list(map(make_score, [0,1,2,3,4,5,6,7,8,9])) user_df[user_df["user_id"] == users_two_index[empty_matrix[0].argmax()]] user_df[user_df["user_id"] == users_two_index[0]] user_books_matrix_two Explanation: user * user cosine similarity 매트릭스 만들기 1 권 169464 명 1분 59초 2 권 57555 명 40.6초 3 권 31808 명 22.4초 4 권 20470 명 14.5초 5 권 14393 명 10.2초 6 권 10630 명 7.58초 7 권 8074 명 5.8초 8 권 6306 명 4.54초 9 권 4995 명 3.56초 10 권 4052 명 2.91초 End of explanation
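A quicker route to the user-user similarities computed above is a single vectorized call; the sketch below uses scikit-learn (an added dependency assumption) on a tiny hypothetical count matrix rather than the real user_book_matrix.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Tiny hypothetical user-by-book count matrix standing in for user_book_matrix.
demo = np.array([[1, 0, 2, 0],
                 [0, 1, 1, 0],
                 [1, 1, 0, 3]], dtype=float)

sims = cosine_similarity(demo)   # replaces the nested cosine_distance loops
np.fill_diagonal(sims, 0.0)      # ignore self-similarity
print(np.round(sims, 3))
print("most similar to user 0:", sims[0].argmax())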
12,625
Given the following text description, write Python code to implement the functionality described below step by step Description: Matplotlib tutorial 02 Step1: Checking and Defining the Range of Axes Step2: "linspace" to Define X Values linspace can be used to create evenly spaced numbers over a specified interval. linspace(start, stop, num=50, endpoint=True, retstep=False) Step3: Customizing Ticks Step4: Ticks actually hold two things: 1 - tick values, 2 - tick labels Let's change the values first Step5: Now changing the tick labels Step6: Changing the Spines the gca function returns the current Axes instance on the current figure.
Python Code: # import import numpy as np import matplotlib.pyplot as plt %matplotlib inline # generating some data points X = np.linspace(-np.pi, np.pi, 20, endpoint=True) C, S = np.cos(X), np.sin(X) # Simply plotting these in same plot plt.plot(X, C, X, S) plt.plot(X, C, X, C, 'oy', X, S, X, S, 'or') Explanation: Matplotlib tutorial 02 End of explanation plt.plot(X, C, X, C, 'oy', X, S, X, S, 'or') print(plt.axis()) # this will print the current plotting X and Y values # we can change it by assigning new one plt.plot(X, C, X, C, 'oy', X, S, X, S, 'or') print(plt.axis()) x1, x2, y1, y2 = (-5, 5, -1.5, 1.5) plt.axis([x1, x2, y1, y2]) print(plt.axis()) Explanation: Checking and Defining the Range of Axes End of explanation X = np.linspace(0, 2 * np.pi, 20, endpoint=True) F = np.sin(X) plt.plot(X,F) startx, endx = -0.1, 2*np.pi + 0.1 starty, endy = -1.1, 1.1 plt.axis([startx, endx, starty, endy]) plt.show() X = np.linspace(-2 * np.pi, 2 * np.pi, 20, endpoint=True) F1 = 3 * np.sin(X) F2 = np.sin(2*X) F3 = 0.3 * np.sin(X) startx, endx = -2 * np.pi - 0.1, 2*np.pi + 0.1 starty, endy = -3.1, 3.1 plt.axis([startx, endx, starty, endy]) plt.plot(X,F1) plt.plot(X,F2) plt.plot(X,F3) plt.show() X = np.linspace(-2 * np.pi, 2 * np.pi, 20, endpoint=True) F1 = 3 * np.sin(X) F2 = np.sin(2*X) F3 = 0.3 * np.sin(X) startx, endx = -2 * np.pi - 0.1, 2*np.pi + 0.1 starty, endy = -3.1, 3.1 plt.axis([startx, endx, starty, endy]) plt.plot(X,F1) plt.plot(X,F2) plt.plot(X,F3) plt.plot(X, F1, 'ro') plt.plot(X, F2, 'bx') plt.show() Explanation: "linspace" to Define X Values linspace can be used to create evenly spaced numbers over a specified interval. linspace(start, stop, num=50, endpoint=True, retstep=False) End of explanation plt.plot(X, C, X, S) print(plt.xticks()) print(plt.yticks()) Explanation: Customizing Ticks End of explanation plt.plot(X, C, X, S) plt.xticks([0, 1, 2, 4, 5]) plt.yticks([0, 1]) plt.plot(X, C, X, S) plt.xticks(np.arange(-10, 10, 2)) plt.yticks(np.arange(-2, 2, 0.5)) Explanation: Ticks actually holding the two things, 1 - ticks value , 2 - ticks label Let's change the values first End of explanation plt.plot(X, C, X, S) plt.xticks(np.arange(-10, 10, 2), ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j']) plt.yticks(np.arange(-2, 2, 0.5)) Explanation: Now changing the ticks lable End of explanation plt.plot(X, C, X, S) # getting current axis and spine instance ax = plt.gca() print(ax) # making the top and right spine invisible: ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') plt.plot(X, C, X, S) ax = plt.gca() # making the top and right spine invisible: ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') # moving all top ticks tp bottom and right ticks to left ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') plt.plot(X, C, X, S) ax = plt.gca() # making the top and right spine invisible: ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') # moving all top ticks tp bottom and right ticks to left ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') # moving bottom spine up to y=0 position: ax.spines['bottom'].set_position(('data',0)) # moving left spine to the right to position x == 0: ax.spines['left'].set_position(('data',0)) plt.plot(X, C, X, S) ax = plt.gca() # making the top and right spine invisible: ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') # moving all top ticks tp bottom and right ticks to left ax.xaxis.set_ticks_position('bottom') 
ax.yaxis.set_ticks_position('left') # moving bottom spine up to y=0 position: ax.spines['bottom'].set_position(('data',0)) # moving left spine to the right to position x == 0: ax.spines['left'].set_position(('data',0)) # setting tick values plt.xticks(np.arange(-8, 8, 2)) plt.yticks(np.arange(-2, 2, 0.5)) Explanation: Changing the Spines the gca function returns the current Axes instance on the current figure. End of explanation
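The spine-moving steps above are repeated for every figure; a small helper keeps them in one place. This is only a convenience sketch (the function name center_spines is my own), not part of the tutorial.

import numpy as np
import matplotlib.pyplot as plt

def center_spines(ax):
    # Hide top/right spines and move the remaining two onto the data origin.
    ax.spines['top'].set_color('none')
    ax.spines['right'].set_color('none')
    ax.xaxis.set_ticks_position('bottom')
    ax.yaxis.set_ticks_position('left')
    ax.spines['bottom'].set_position(('data', 0))
    ax.spines['left'].set_position(('data', 0))

X = np.linspace(-np.pi, np.pi, 50)
fig, ax = plt.subplots()
ax.plot(X, np.cos(X), X, np.sin(X))
center_spines(ax)
plt.show()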
12,626
Given the following text description, write Python code to implement the functionality described below step by step Description: Vector-space models Step1: Contents Overview Set-up The retrofitting model Examples Only node 0 has outgoing edges All nodes connected to all others As before, but now 2 has no outgoing edges All nodes connected to all others, but $\alpha = 0$ WordNet Background on WordNet WordNet and VSMs Reproducing the WordNet synonym graph experiment Other retrofitting models and ideas Overview Thus far, all of the information in our word vectors has come solely from co-occurrences patterns in text. This information is often very easy to obtain – though one does need a lot of text – and it is striking how rich the resulting representations can be. Nonetheless, it seems clear that there is important information that we will miss this way – relationships that just aren't encoded at all in co-occurrences or that get distorted by such patterns. For example, it is probably straightforward to learn representations that will support the inference that all puppies are dogs (puppy entails dog), but it might be difficult to learn that dog entails mammal because of the unusual way that very broad taxonomic terms like mammal are used in text. The question then arises Step2: Note Step3: If you decide to download the data to a different directory than the default, then you'll have to set NLTK_DATA in your shell profile. (If that doesn't make sense to you, then we recommend choosing the default download directory!) The retrofitting model For an an existing VSM $\widehat{Q}$ of dimension $m \times n$, and a set of edges $E$ (pairs of indices into rows in $\widehat{Q}$), the retrofitting objective is to obtain a new VSM $Q$ (also dimension $m \times n$) according to the following objective Step4: Only node 0 has outgoing edges Step5: All nodes connected to all others Step6: As before, but now 2 has no outgoing edges Step7: All nodes connected to all others, but $\alpha = 0$ Step8: WordNet Faruqui et al. conduct experiments on three knowledge graphs Step9: WordNet and VSMs A central challenge of working with WordNet is that one doesn't usually encounter lemmas or synsets in the wild. One probably gets just strings, or maybe strings with part-of-speech tags. Mapping these objects to lemmas is incredibly difficult. For our experiments with VSMs, we simply collapse together all the senses that a given string can have. This is expedient, of course. It might also be a good choice linguistically Step10: Reproducing the WordNet synonym graph experiment For our VSM, let's use the 300d file included in this distribution from the GloVe team, as it is close to or identical to the one used in the paper Step11: This is the initial embedding space $\widehat{Q}$ Step12: Now we just need to replace all of the strings in edges with indices into X_glove Step13: And now we can retrofit Step14: You can now evaluate X_retro using the homework/bake-off notebook hw_wordrelatedness.ipynb!
Python Code: __author__ = "Christopher Potts" __version__ = "CS224u, Stanford, Spring 2022" Explanation: Vector-space models: retrofitting End of explanation from collections import defaultdict from nltk.corpus import wordnet as wn import numpy as np import os import pandas as pd import retrofitting from retrofitting import Retrofitter import utils data_home = 'data' Explanation: Contents Overview Set-up The retrofitting model Examples Only node 0 has outgoing edges All nodes connected to all others As before, but now 2 has no outgoing edges All nodes connected to all others, but $\alpha = 0$ WordNet Background on WordNet WordNet and VSMs Reproducing the WordNet synonym graph experiment Other retrofitting models and ideas Overview Thus far, all of the information in our word vectors has come solely from co-occurrences patterns in text. This information is often very easy to obtain – though one does need a lot of text – and it is striking how rich the resulting representations can be. Nonetheless, it seems clear that there is important information that we will miss this way – relationships that just aren't encoded at all in co-occurrences or that get distorted by such patterns. For example, it is probably straightforward to learn representations that will support the inference that all puppies are dogs (puppy entails dog), but it might be difficult to learn that dog entails mammal because of the unusual way that very broad taxonomic terms like mammal are used in text. The question then arises: how can we bring structured information – labels – into our representations? If we can do that, then we might get the best of both worlds: the ease of using co-occurrence data and the refinement that comes from using labeled data. In this notebook, we look at one powerful method for doing this: the retrofitting model of Faruqui et al. 2016. In this model, one learns (or just downloads) distributed representations for nodes in a knowledge graph and then updates those representations to bring connected nodes closer to each other. This is an incredibly fertile idea; the final section of the notebook reviews some recent extensions, and new ones are likely appearing all the time. Set-up End of explanation import nltk nltk.download("wordnet") Explanation: Note: To make full use of this notebook, you will need the NLTK data distribution – or, at the very least, its WordNet files. Anaconda comes with NLTK but not with its data distribution. The following will download WordNet and make it available (if it's not already available): End of explanation Q_hat = pd.DataFrame( [[0.0, 0.0], [0.0, 0.5], [0.5, 0.0]], columns=['x', 'y']) Q_hat Explanation: If you decide to download the data to a different directory than the default, then you'll have to set NLTK_DATA in your shell profile. (If that doesn't make sense to you, then we recommend choosing the default download directory!) The retrofitting model For an an existing VSM $\widehat{Q}$ of dimension $m \times n$, and a set of edges $E$ (pairs of indices into rows in $\widehat{Q}$), the retrofitting objective is to obtain a new VSM $Q$ (also dimension $m \times n$) according to the following objective: $$\sum_{i=1}^{m} \left[ \alpha_{i}\|q_{i} - \widehat{q}{i}\|{2}^{2} + \sum_{j : (i,j) \in E}\beta_{ij}\|q_{i} - q_{j}\|_{2}^{2} \right]$$ The left term encodes a pressure to stay like the original vector. The right term encodes a pressure to be more like one's neighbors. 
In minimizing this objective, we should be able to strike a balance between old and new, VSM and graph. Definitions: $\|u - v\|_{2}^{2}$ gives the squared euclidean distance from $u$ to $v$. $\alpha$ and $\beta$ are weights we set by hand, controlling the relative strength of the two pressures. In the paper, they use $\alpha=1$ and $\beta = \frac{1}{{j : (i, j) \in E}}$. Examples To get a feel for what's happening, it's helpful to visualize the changes that occur in small, easily understood VSMs and graphs. The function retrofitting.plot_retro_path helps with this. End of explanation edges_0 = {0: {1, 2}, 1: set(), 2: set()} _ = retrofitting.plot_retro_path(Q_hat, edges_0) Explanation: Only node 0 has outgoing edges End of explanation edges_all = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}} _ = retrofitting.plot_retro_path(Q_hat, edges_all) Explanation: All nodes connected to all others End of explanation edges_isolated = {0: {1, 2}, 1: {0, 2}, 2: set()} _ = retrofitting.plot_retro_path(Q_hat, edges_isolated) Explanation: As before, but now 2 has no outgoing edges End of explanation _ = retrofitting.plot_retro_path( Q_hat, edges_all, retrofitter=Retrofitter(alpha=lambda x: 0)) Explanation: All nodes connected to all others, but $\alpha = 0$ End of explanation lems = wn.lemmas('crane', pos=None) for lem in lems: ss = lem.synset() print("="*70) print("Lemma name: {}".format(lem.name())) print("Lemma Synset: {}".format(ss)) print("Synset definition: {}".format(ss.definition())) Explanation: WordNet Faruqui et al. conduct experiments on three knowledge graphs: WordNet, FrameNet, and the Penn Paraphrase Database (PPDB). The repository for their paper includes the graphs that they derived for their experiments. Here, we'll reproduce just one of the two WordNet experiments they report, in which the graph is formed based on synonymy. Background on WordNet WordNet is an incredible, hand-built lexical resource capturing a wealth of information about English words and their inter-relationships. (Here is a collection of WordNets in other languages.) For a detailed overview using NLTK, see this tutorial. The core concepts: A lemma is something like our usual notion of word. Lemmas are highly sense-disambiguated. For instance, there are six lemmas that are consistent with the string crane: the bird, the machine, the poets, ... A synset is a collection of lemmas that are synonymous in the WordNet sense (which is WordNet-specific; words with intuitively different meanings might still be grouped together into synsets.). WordNet is a graph of relations between lemmas and between synsets, capturing things like hypernymy, antonymy, and many others. For the most part, the relations are defined between nouns; the graph is sparser for other areas of the lexicon. End of explanation def get_wordnet_edges(): edges = defaultdict(set) for ss in wn.all_synsets(): lem_names = {lem.name() for lem in ss.lemmas()} for lem in lem_names: edges[lem] |= lem_names return edges wn_edges = get_wordnet_edges() Explanation: WordNet and VSMs A central challenge of working with WordNet is that one doesn't usually encounter lemmas or synsets in the wild. One probably gets just strings, or maybe strings with part-of-speech tags. Mapping these objects to lemmas is incredibly difficult. For our experiments with VSMs, we simply collapse together all the senses that a given string can have. This is expedient, of course. 
It might also be a good choice linguistically: senses are flexible and thus hard to individuate, and we might hope that our vectors can model multiple senses at the same time. (That said, there is excellent work on creating sense-vectors; see Reisinger and Mooney 2010; Huang et al 2012.) The following code uses the NLTK WordNet API to create the edge dictionary we need for using the Retrofitter class: End of explanation glove_dict = utils.glove2dict( os.path.join(data_home, 'glove.6B', 'glove.6B.300d.txt')) Explanation: Reproducing the WordNet synonym graph experiment For our VSM, let's use the 300d file included in this distribution from the GloVe team, as it is close to or identical to the one used in the paper: http://nlp.stanford.edu/data/glove.6B.zip If you download this archive, place it in vsmdata, and unpack it, then the following will load the file into a dictionary for you: End of explanation X_glove = pd.DataFrame(glove_dict).T X_glove.T.shape Explanation: This is the initial embedding space $\widehat{Q}$: End of explanation def convert_edges_to_indices(edges, Q): lookup = dict(zip(Q.index, range(Q.shape[0]))) index_edges = defaultdict(set) for start, finish_nodes in edges.items(): s = lookup.get(start) if s: f = {lookup[n] for n in finish_nodes if n in lookup} if f: index_edges[s] = f return index_edges wn_index_edges = convert_edges_to_indices(wn_edges, X_glove) Explanation: Now we just need to replace all of the strings in edges with indices into X_glove: End of explanation wn_retro = Retrofitter(verbose=True) X_retro = wn_retro.fit(X_glove, wn_index_edges) Explanation: And now we can retrofit: End of explanation # Optionally write `X_retro` to disk for use elsewhere: # # X_retro.to_csv( # os.path.join(data_home, 'glove6B300d-retrofit-wn.csv.gz'), # compression='gzip') Explanation: You can now evaluate X_retro using the homework/bake-off notebook hw_wordrelatedness.ipynb! End of explanation
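To make the retrofitting objective above concrete, here is a rough NumPy sketch of the iterative update it implies (with alpha_i = 1 and beta_ij = 1/deg(i)), run on a toy three-node example rather than GloVe. This is my own illustration of the update rule, not the code inside the Retrofitter class.

import numpy as np

Q_hat = np.array([[0.0, 0.0], [0.0, 0.5], [0.5, 0.0]])
edges = {0: {1, 2}, 1: {0}, 2: {0}}

Q = Q_hat.copy()
for _ in range(50):
    for i, nbrs in edges.items():
        if not nbrs:
            continue
        beta = 1.0 / len(nbrs)
        # q_i <- (alpha * q_hat_i + sum_j beta * q_j) / (alpha + sum_j beta)
        Q[i] = (Q_hat[i] + beta * sum(Q[j] for j in nbrs)) / (1.0 + beta * len(nbrs))
print(np.round(Q, 3))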
12,627
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: 量化感知训练综合指南 <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https Step2: 定义量化感知模型 通过按以下方式定义模型,可以获得概述页面中所列后端的部署路径。默认情况下,使用 8 位量化。 注:量化感知模型实际上并未量化。创建量化模型是一个单独的步骤。 量化整个模型 您的用例: 不支持子类化模型。 提高模型准确率的提示: 尝试“量化某些层”以跳过量化对准确率影响最大的层。 与从头开始训练相比,使用量化感知训练进行微调的效果一般更好。 要使整个模型可以感知量化,请将 tfmot.quantization.keras.quantize_model 应用于模型。 Step3: 量化某些层 量化模型可能会对准确率造成负面影响。您可以选择性地量化模型的各个层来探索准确率、速度和模型大小之间的最佳平衡。 您的用例: 要部署到仅适用于完全量化模型(例如 EdgeTPU v1、大多数 DSP)的后端,请尝试“量化整个模型”。 提高模型准确率的提示: 与从头开始训练相比,使用量化感知训练进行微调的效果一般更好。 尝试量化后面的层而不是前面的层。 避免量化关键层(例如注意力机制)。 在下面的示例中,仅量化 Dense 层。 Step4: 尽管此示例使用层的类型来决定要量化的内容,但是量化特定层的最简单方式是设置其 name 属性,然后在 clone_function 中查找该名称。 Step5: 更具可读性,但模型准确率可能较低 这与通过量化感知训练进行的微调不兼容,这就是它的准确率可能低于上述示例的原因。 函数式模型示例 Step6: 序贯模型示例 Step7: 设置检查点和反序列化 您的用例:仅 HDF5 模型格式需要此代码(HDF5 权重或其他格式不需要)。 Step8: 创建并部署量化模型 通常,请参考将要使用的部署后端的文档。 下面是一个 TFLite 后端的示例。 Step9: 试验量化 您的用例:使用以下 API 意味着没有支持的部署路径。这些功能也是实验性功能,不具备向后兼容性。 tfmot.quantization.keras.QuantizeConfig tfmot.quantization.keras.quantizers.Quantizer tfmot.quantization.keras.quantizers.LastValueQuantizer tfmot.quantization.keras.quantizers.MovingAverageQuantizer 设置:DefaultDenseQuantizeConfig 要进行实验,需要使用 tfmot.quantization.keras.QuantizeConfig,它描述了如何量化层的权重、激活和输出。 以下示例定义了 API 默认值中用于 Dense 层的相同 QuantizeConfig。 在此示例的正向传播过程中,以 layer.kernel 作为输入调用了 get_weights_and_quantizers 中返回的 LastValueQuantizer,从而产生了输出。通过 set_quantize_weights 中定义的逻辑,输出将替换 Dense 层的原始正向传播中的 layer.kernel。同样的构想也适用于激活和输出。 Step10: 量化自定义 Keras 层 本示例使用 DefaultDenseQuantizeConfig 来量化 CustomLayer。 在“试验量化”用例中,应用的配置是相同的。 将 tfmot.quantization.keras.quantize_annotate_layer 应用于 CustomLayer 并在 QuantizeConfig 中传递。 通过 tfmot.quantization.keras.quantize_annotate_model 继续使用 API ​​默认值来量化模型的其余部分。 Step11: 修改量化参数 常见误区:将偏差量化为少于 32 位通常会严重影响模型准确率。 本示例将 Dense 层修改为将 4 位用于其权重,而不是默认的 8 位。模型的其余部分继续使用 API 默认值。 Step12: 在“试验量化”用例中,应用的配置是相同的。 将 tfmot.quantization.keras.quantize_annotate_layer 应用于 Dense 层并在 QuantizeConfig 中传递。 通过 tfmot.quantization.keras.quantize_annotate_model 继续使用 API ​​默认值来量化模型的其余部分。 Step13: 修改要量化的层的部分 本示例将 Dense 层修改为跳过量化激活。模型的其余部分继续使用 API 默认值。 Step14: 在“试验量化”用例中,应用的配置是相同的。 将 tfmot.quantization.keras.quantize_annotate_layer 应用于 Dense 层并在 QuantizeConfig 中传递。 通过 tfmot.quantization.keras.quantize_annotate_model 继续使用 API ​​默认值来量化模型的其余部分。 Step16: 使用自定义量化算法 tfmot.quantization.keras.quantizers.Quantizer 类是一个可调用对象,可以将任何算法应用于其输入。 在本示例中,输入是权重,我们将 FixedRangeQuantizer call 函数中的数学运算应用于权重。现在,FixedRangeQuantizer 的输出将代替原始权重值传递给使用这些权重的任何对象。 Step17: 在“试验量化”用例中,应用的配置是相同的。 将 tfmot.quantization.keras.quantize_annotate_layer 应用于 Dense 层并在 QuantizeConfig 中传递。 通过 tfmot.quantization.keras.quantize_annotate_model 继续使用 API ​​默认值来量化模型的其余部分。
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation ! pip uninstall -y tensorflow ! pip install -q tf-nightly ! pip install -q tensorflow-model-optimization import tensorflow as tf import numpy as np import tensorflow_model_optimization as tfmot import tempfile input_shape = [20] x_train = np.random.randn(1, 20).astype(np.float32) y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20) def setup_model(): model = tf.keras.Sequential([ tf.keras.layers.Dense(20, input_shape=input_shape), tf.keras.layers.Flatten() ]) return model def setup_pretrained_weights(): model= setup_model() model.compile( loss=tf.keras.losses.categorical_crossentropy, optimizer='adam', metrics=['accuracy'] ) model.fit(x_train, y_train) _, pretrained_weights = tempfile.mkstemp('.tf') model.save_weights(pretrained_weights) return pretrained_weights def setup_pretrained_model(): model = setup_model() pretrained_weights = setup_pretrained_weights() model.load_weights(pretrained_weights) return model setup_model() pretrained_weights = setup_pretrained_weights() Explanation: 量化感知训练综合指南 <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/model_optimization/guide/quantization/training_comprehensive_guide"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看 </a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/model_optimization/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/model_optimization/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 中查看源代码</a></td> <td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/model_optimization/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td> </table> 欢迎阅读 Keras 量化感知训练的综合指南。 本页面记录了各种用例,并展示了如何将 API 用于每种用例​​。了解需要哪些 API 后,可在 API 文档中找到参数和底层详细信息: 如果要查看量化感知训练的好处以及支持的功能,请参阅概述。 有关单个端到端示例,请参阅量化感知训练示例。 涵盖了以下用例: 按下列步骤操作,部署 8 位量化模型。 定义一个量化感知模型。 仅对于 Keras HDF5 模型,使用特殊的检查点和反序列化逻辑。否则,将使用标准训练。 通过量化感知模型创建量化模型。 试验量化。 实验的任何方面都没有支持的部署路径。 自定义 Keras 层处于实验阶段。 设置 如果只是查找您需要的 API 并了解其用途,您可以运行但不阅读本部分。 End of explanation base_model = setup_model() base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy quant_aware_model = tfmot.quantization.keras.quantize_model(base_model) quant_aware_model.summary() Explanation: 定义量化感知模型 通过按以下方式定义模型,可以获得概述页面中所列后端的部署路径。默认情况下,使用 8 位量化。 注:量化感知模型实际上并未量化。创建量化模型是一个单独的步骤。 量化整个模型 您的用例: 不支持子类化模型。 提高模型准确率的提示: 尝试“量化某些层”以跳过量化对准确率影响最大的层。 与从头开始训练相比,使用量化感知训练进行微调的效果一般更好。 要使整个模型可以感知量化,请将 tfmot.quantization.keras.quantize_model 应用于模型。 End 
of explanation # Create a base model base_model = setup_model() base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy # Helper function uses `quantize_annotate_layer` to annotate that only the # Dense layers should be quantized. def apply_quantization_to_dense(layer): if isinstance(layer, tf.keras.layers.Dense): return tfmot.quantization.keras.quantize_annotate_layer(layer) return layer # Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense` # to the layers of the model. annotated_model = tf.keras.models.clone_model( base_model, clone_function=apply_quantization_to_dense, ) # Now that the Dense layers are annotated, # `quantize_apply` actually makes the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) quant_aware_model.summary() Explanation: 量化某些层 量化模型可能会对准确率造成负面影响。您可以选择性地量化模型的各个层来探索准确率、速度和模型大小之间的最佳平衡。 您的用例: 要部署到仅适用于完全量化模型(例如 EdgeTPU v1、大多数 DSP)的后端,请尝试“量化整个模型”。 提高模型准确率的提示: 与从头开始训练相比,使用量化感知训练进行微调的效果一般更好。 尝试量化后面的层而不是前面的层。 避免量化关键层(例如注意力机制)。 在下面的示例中,仅量化 Dense 层。 End of explanation print(base_model.layers[0].name) Explanation: 尽管此示例使用层的类型来决定要量化的内容,但是量化特定层的最简单方式是设置其 name 属性,然后在 clone_function 中查找该名称。 End of explanation # Use `quantize_annotate_layer` to annotate that the `Dense` layer # should be quantized. i = tf.keras.Input(shape=(20,)) x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i) o = tf.keras.layers.Flatten()(x) annotated_model = tf.keras.Model(inputs=i, outputs=o) # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) # For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the # quantized model can take in float inputs instead of only uint8. quant_aware_model.summary() Explanation: 更具可读性,但模型准确率可能较低 这与通过量化感知训练进行的微调不兼容,这就是它的准确率可能低于上述示例的原因。 函数式模型示例 End of explanation # Use `quantize_annotate_layer` to annotate that the `Dense` layer # should be quantized. annotated_model = tf.keras.Sequential([ tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)), tf.keras.layers.Flatten() ]) # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) quant_aware_model.summary() Explanation: 序贯模型示例 End of explanation # Define the model. base_model = setup_model() base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy quant_aware_model = tfmot.quantization.keras.quantize_model(base_model) # Save or checkpoint the model. _, keras_model_file = tempfile.mkstemp('.h5') quant_aware_model.save(keras_model_file) # `quantize_scope` is needed for deserializing HDF5 models. with tfmot.quantization.keras.quantize_scope(): loaded_model = tf.keras.models.load_model(keras_model_file) loaded_model.summary() Explanation: 设置检查点和反序列化 您的用例:仅 HDF5 模型格式需要此代码(HDF5 权重或其他格式不需要)。 End of explanation base_model = setup_pretrained_model() quant_aware_model = tfmot.quantization.keras.quantize_model(base_model) # Typically you train the model here. 
converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model) converter.optimizations = [tf.lite.Optimize.DEFAULT] quantized_tflite_model = converter.convert() Explanation: 创建并部署量化模型 通常,请参考将要使用的部署后端的文档。 下面是一个 TFLite 后端的示例。 End of explanation LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig): # Configure how to quantize weights. def get_weights_and_quantizers(self, layer): return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))] # Configure how to quantize activations. def get_activations_and_quantizers(self, layer): return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))] def set_quantize_weights(self, layer, quantize_weights): # Add this line for each item returned in `get_weights_and_quantizers` # , in the same order layer.kernel = quantize_weights[0] def set_quantize_activations(self, layer, quantize_activations): # Add this line for each item returned in `get_activations_and_quantizers` # , in the same order. layer.activation = quantize_activations[0] # Configure how to quantize outputs (may be equivalent to activations). def get_output_quantizers(self, layer): return [] def get_config(self): return {} Explanation: 试验量化 您的用例:使用以下 API 意味着没有支持的部署路径。这些功能也是实验性功能,不具备向后兼容性。 tfmot.quantization.keras.QuantizeConfig tfmot.quantization.keras.quantizers.Quantizer tfmot.quantization.keras.quantizers.LastValueQuantizer tfmot.quantization.keras.quantizers.MovingAverageQuantizer 设置:DefaultDenseQuantizeConfig 要进行实验,需要使用 tfmot.quantization.keras.QuantizeConfig,它描述了如何量化层的权重、激活和输出。 以下示例定义了 API 默认值中用于 Dense 层的相同 QuantizeConfig。 在此示例的正向传播过程中,以 layer.kernel 作为输入调用了 get_weights_and_quantizers 中返回的 LastValueQuantizer,从而产生了输出。通过 set_quantize_weights 中定义的逻辑,输出将替换 Dense 层的原始正向传播中的 layer.kernel。同样的构想也适用于激活和输出。 End of explanation quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class CustomLayer(tf.keras.layers.Dense): pass model = quantize_annotate_model(tf.keras.Sequential([ quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()), tf.keras.layers.Flatten() ])) # `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope` # as well as the custom Keras layer. with quantize_scope( {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig, 'CustomLayer': CustomLayer}): # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(model) quant_aware_model.summary() Explanation: 量化自定义 Keras 层 本示例使用 DefaultDenseQuantizeConfig 来量化 CustomLayer。 在“试验量化”用例中,应用的配置是相同的。 将 tfmot.quantization.keras.quantize_annotate_layer 应用于 CustomLayer 并在 QuantizeConfig 中传递。 通过 tfmot.quantization.keras.quantize_annotate_model 继续使用 API ​​默认值来量化模型的其余部分。 End of explanation quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig): # Configure weights to quantize with 4-bit instead of 8-bits. 
def get_weights_and_quantizers(self, layer): return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))] Explanation: 修改量化参数 常见误区:将偏差量化为少于 32 位通常会严重影响模型准确率。 本示例将 Dense 层修改为将 4 位用于其权重,而不是默认的 8 位。模型的其余部分继续使用 API 默认值。 End of explanation model = quantize_annotate_model(tf.keras.Sequential([ # Pass in modified `QuantizeConfig` to modify this Dense layer. quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()), tf.keras.layers.Flatten() ])) # `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`: with quantize_scope( {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}): # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(model) quant_aware_model.summary() Explanation: 在“试验量化”用例中,应用的配置是相同的。 将 tfmot.quantization.keras.quantize_annotate_layer 应用于 Dense 层并在 QuantizeConfig 中传递。 通过 tfmot.quantization.keras.quantize_annotate_model 继续使用 API ​​默认值来量化模型的其余部分。 End of explanation quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig): def get_activations_and_quantizers(self, layer): # Skip quantizing activations. return [] def set_quantize_activations(self, layer, quantize_activations): # Empty since `get_activaations_and_quantizers` returns # an empty list. return Explanation: 修改要量化的层的部分 本示例将 Dense 层修改为跳过量化激活。模型的其余部分继续使用 API 默认值。 End of explanation model = quantize_annotate_model(tf.keras.Sequential([ # Pass in modified `QuantizeConfig` to modify this Dense layer. quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()), tf.keras.layers.Flatten() ])) # `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`: with quantize_scope( {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}): # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(model) quant_aware_model.summary() Explanation: 在“试验量化”用例中,应用的配置是相同的。 将 tfmot.quantization.keras.quantize_annotate_layer 应用于 Dense 层并在 QuantizeConfig 中传递。 通过 tfmot.quantization.keras.quantize_annotate_model 继续使用 API ​​默认值来量化模型的其余部分。 End of explanation quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer): Quantizer which forces outputs to be between -1 and 1. def build(self, tensor_shape, name, layer): # Not needed. No new TensorFlow variables needed. return {} def __call__(self, inputs, training, weights, **kwargs): return tf.keras.backend.clip(inputs, -1.0, 1.0) def get_config(self): # Not needed. No __init__ parameters to serialize. return {} class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig): # Configure weights to quantize with 4-bit instead of 8-bits. def get_weights_and_quantizers(self, layer): # Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer. 
return [(layer.kernel, FixedRangeQuantizer())] Explanation: 使用自定义量化算法 tfmot.quantization.keras.quantizers.Quantizer 类是一个可调用对象,可以将任何算法应用于其输入。 在本示例中,输入是权重,我们将 FixedRangeQuantizer call 函数中的数学运算应用于权重。现在,FixedRangeQuantizer 的输出将代替原始权重值传递给使用这些权重的任何对象。 End of explanation model = quantize_annotate_model(tf.keras.Sequential([ # Pass in modified `QuantizeConfig` to modify this `Dense` layer. quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()), tf.keras.layers.Flatten() ])) # `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`: with quantize_scope( {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}): # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(model) quant_aware_model.summary() Explanation: 在“试验量化”用例中,应用的配置是相同的。 将 tfmot.quantization.keras.quantize_annotate_layer 应用于 Dense 层并在 QuantizeConfig 中传递。 通过 tfmot.quantization.keras.quantize_annotate_model 继续使用 API ​​默认值来量化模型的其余部分。 End of explanation
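As a compact recap of the pieces shown above (a sketch only: the toy data, layer sizes, and single training epoch are placeholders), the usual flow is quantize_model, fine-tune, then convert with the TFLite converter.

import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

base = tf.keras.Sequential([
    tf.keras.layers.Dense(20, input_shape=(20,)),
    tf.keras.layers.Flatten(),
])
qat_model = tfmot.quantization.keras.quantize_model(base)
qat_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

x = np.random.randn(32, 20).astype(np.float32)
y = tf.keras.utils.to_categorical(np.random.randint(0, 20, 32), num_classes=20)
qat_model.fit(x, y, epochs=1, verbose=0)

converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()
print(len(tflite_bytes), "bytes")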
12,628
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Chapter 7 - Sets This chapter will introduce a different kind of container Step2: Curly brackets surround sets, and commas separate the elements in the set A set can be empty (use set() to create it) Sets do not allow duplicates sets are unordered (the order in which you add items is not important) A set can only contain immutable objects (for now that means only strings and integers can be added) A set can not contain mutable objects, hence no lists or sets Please note that sets do not allow duplicates. In the example below, the integer 1 will only be present once in the set. Step3: Please note that sets are unordered. This means that it can occur that if you print a set, it looks different than how you created it Step4: This also means that you can check if two sets are the same even if you don't know the order in which items were put in Step5: Please note that sets can only contain immutable objects. Hence the following examples will work, since we are adding immutable objects Step6: But the following example will result in an error, since we are trying to create a set with a mutable object Step7: 2. How to add items to a set The most common way of adding an item to a set is by using the add method. The add method has one positional parameter, namely what you are going to add to the set, and it returns None. Step8: 3. How to extract/inspect items in a set When you use sets, you usually want to compare the elements of different sets, for instance, to determine how much overlap there is or how many of the items in set1 are not members of set2. Sets can be used to carry out mathematical set operations like union, intersection, difference, and symmetric difference. Please take a look at this website if you prefer a more visual and more complete explanation. You can ask Python to show you all the set methods by using dir. All the methods that do not start with '__' are relevant for you. Step9: You observe that there are many methods defined for sets! Here we explain the two most common methods. We start with the union method. Step10: Python shows dots (...) for the parameters of the union method. Based on the docstring, we learn that we can provide any number of sets, and Python will return the union of them. Step11: The intersection method has works in a similar manner as the union method, but returns a new set containing only the intersection of the sets. Step12: Since sets are unordered, you can not use an index to extract an element from a set. Step13: 4. Using built-in functions on sets The same range of functions that operate on lists also work with sets. We can easily get some simple calculations done with these functions Step14: 5. An overview of set operations There are many more operations which we can perform on sets. Here is an overview of some of them. In order to get used to them, please call the help function on each of them (e.g., help(set.union)). This will give you the information about the positional parameters, keyword parameters, and what is returned by the method. Step15: Before diving into some exercises, you may want to the dir built-in function again to see an overview of all set methods Step16: Exercises Exercise 1
Python Code: %%capture !wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip !wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip !wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip !unzip Data.zip -d ../ !unzip images.zip -d ./ !unzip Extra_Material.zip -d ../ !rm Data.zip !rm Extra_Materil.zip !rm images.zip Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Chapters-colab/Chapter_07_Sets.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> End of explanation a_set = {1, 2, 3} a_set empty_set = set() # you have to use set() to create an empty set! (we will see why later) print(empty_set) Explanation: Chapter 7 - Sets This chapter will introduce a different kind of container: sets. Sets are unordered lists with no duplicate entries. You might wonder why we need different types of containers. We will postpone that discussion until chapter 8. At the end of this chapter, you will be able to: * create a set * add items to a set * extract/inspect items in a set If you want to learn more about these topics, you might find the following links useful: * Python documentation * A tutorial on sets If you have questions about this chapter, please contact us ([email protected]). 1. How to create a set It's quite simple to create a set. End of explanation a_set = {1, 2, 1, 1} print(a_set) Explanation: Curly brackets surround sets, and commas separate the elements in the set A set can be empty (use set() to create it) Sets do not allow duplicates sets are unordered (the order in which you add items is not important) A set can only contain immutable objects (for now that means only strings and integers can be added) A set can not contain mutable objects, hence no lists or sets Please note that sets do not allow duplicates. In the example below, the integer 1 will only be present once in the set. End of explanation a_set = {1, 3, 2} print(a_set) Explanation: Please note that sets are unordered. This means that it can occur that if you print a set, it looks different than how you created it End of explanation {1, 2, 3} == {2, 3, 1} Explanation: This also means that you can check if two sets are the same even if you don't know the order in which items were put in: End of explanation a_set = {1, 'a'} print(a_set) Explanation: Please note that sets can only contain immutable objects. Hence the following examples will work, since we are adding immutable objects End of explanation a_set = {1, []} Explanation: But the following example will result in an error, since we are trying to create a set with a mutable object End of explanation a_set = set() a_set.add(1) print(a_set) a_set = set() a_set = a_set.add(1) print(a_set) Explanation: 2. How to add items to a set The most common way of adding an item to a set is by using the add method. The add method has one positional parameter, namely what you are going to add to the set, and it returns None. End of explanation dir(set) Explanation: 3. How to extract/inspect items in a set When you use sets, you usually want to compare the elements of different sets, for instance, to determine how much overlap there is or how many of the items in set1 are not members of set2. Sets can be used to carry out mathematical set operations like union, intersection, difference, and symmetric difference. 
Please take a look at this website if you prefer a more visual and more complete explanation. You can ask Python to show you all the set methods by using dir. All the methods that do not start with '__' are relevant for you. End of explanation help(set.union) Explanation: You observe that there are many methods defined for sets! Here we explain the two most common methods. We start with the union method. End of explanation set1 = {1, 2, 3, 4, 5} set2 = {4, 5, 6, 7, 8} the_union = set1.union(set2) print(the_union) set1 = {1, 2, 3, 4, 5} set2 = {4, 5, 6, 7, 8} set3 = {5, 6, 7, 8, 9} the_union = set1.union(set2, set3) print(the_union) Explanation: Python shows dots (...) for the parameters of the union method. Based on the docstring, we learn that we can provide any number of sets, and Python will return the union of them. End of explanation help(set.intersection) set1 = {1, 2, 3, 4, 5} set2 = {4, 5, 6, 7, 8} the_intersection = set1.intersection(set2) print(the_intersection) set1 = {1, 2, 3, 4, 5} set2 = {4, 5, 6, 7, 8} set3 = {5, 8, 9, 10} the_intersection = set1.intersection(set2, set3) print(the_intersection) Explanation: The intersection method has works in a similar manner as the union method, but returns a new set containing only the intersection of the sets. End of explanation a_set = set() a_set.add(1) a_set.add(2) a_set[0] Explanation: Since sets are unordered, you can not use an index to extract an element from a set. End of explanation nums = {3, 41, 12, 9, 74, 15} print(len(nums)) # number of items in a set print(max(nums)) # highest value in a set print(min(nums)) # lowest value in a set print(sum(nums)) # sum of all values in a set Explanation: 4. Using built-in functions on sets The same range of functions that operate on lists also work with sets. We can easily get some simple calculations done with these functions: End of explanation set_a = {1, 2, 3} set_b = {4, 5, 6} an_element = 4 print(set_a) #do some operations set_a.add(an_element) # Add an_element to set_a print(set_a) set_a.update(set_b) # Add the elements of set_b to set_a print(set_a) set_a.pop() # Remove and return an arbitrary set element. How does this compare to the list method pop? print(set_a) set_a.remove(an_element) # Remove an_element from set_a print(set_a) Explanation: 5. An overview of set operations There are many more operations which we can perform on sets. Here is an overview of some of them. In order to get used to them, please call the help function on each of them (e.g., help(set.union)). This will give you the information about the positional parameters, keyword parameters, and what is returned by the method. End of explanation dir(set) Explanation: Before diving into some exercises, you may want to the dir built-in function again to see an overview of all set methods: End of explanation set_1 = {'just', 'some', 'words'} set_2 = {'some', 'other', 'words'} # your code here Explanation: Exercises Exercise 1: Please create an empty set and use the add method to add four items to it: 'a', 'set', 'is', 'born' Exercise 2: Please use a built-in method to count how many items your set has Exercise 3: How would you remove one item from the set? Exercise 4: Please check which items are in both sets: End of explanation
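One possible way to work through the four exercises is sketched below; these are my own sample solutions, not the chapter's official answers.

# Exercise 1: build a set with add()
new_set = set()
for item in ('a', 'set', 'is', 'born'):
    new_set.add(item)
print(new_set)

# Exercise 2: count the items
print(len(new_set))

# Exercise 3: remove one item (remove() raises KeyError if absent; discard() does not)
new_set.remove('born')
print(new_set)

# Exercise 4: items present in both sets
set_1 = {'just', 'some', 'words'}
set_2 = {'some', 'other', 'words'}
print(set_1.intersection(set_2))   # equivalently: set_1 & set_2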
12,629
Given the following text description, write Python code to implement the functionality described below step by step Description: DMP tutorial Introduction Dynamical movement primitives are dynamical systems that provide a means of robust, generalizable trajectory generation. I give an overview of their origins formally on my blog (https Step1: The point attractor Step2: The forcing function The second part of DMPs is the forcing function. For rhythmic DMPs we use an oscillator, and from that oscillator we'll decode a function. Let's look at how to program an oscillator and decode a function from it. First we'll generate the function we'd like from some arbitrary trajectory (can be anything!) Step3: Combining point attractor and forcing function Now that we can generate point attractors and decode rhythmic patterns off of oscillators, we have to put them together. Note that we want to generate a set of forces off of the oscillator, rather than a set of positions. So the function that we want to decode off of the oscillator can be calculated from the desired position trajectory by finding the desired acceleration trajectory, and subtracting out the effects of the point attractors. Once we have this function, we can simply connect the decoded oscillator output to the point attractory dynamics!
Python Code: import numpy as np import nengo model = nengo.Network() with model: # linearly increasing system with an oscillatory biased input ramp_osc = nengo.Ensemble(n_neurons=500, dimensions=2, radius=.01) # recurrent connections nengo.Connection(ramp_osc, ramp_osc, transform=np.eye(2) + \ np.array([[1, -1], [1, 1]])) # set the number of neurons = to the number of basis functions specified ramp = nengo.Ensemble(n_neurons=500, dimensions=1) # make first dimensions of forcing function ensemble an integrator nengo.Connection(ramp, ramp, synapse=.1) # set up the input to the integrating first dimensions nengo.Connection(ramp_osc, ramp, transform=.015, function=lambda x: x[0]+.5) from nengo_gui.ipython import IPythonViz IPythonViz(model, cfg='ramp.viz.cfg') def gen_oscillator(model, speed=.05): with model: # ------------------ Oscillator ------------------- osc = nengo.Ensemble(n_neurons=500, dimensions=2, label='oscillator') # recurrent connections nengo.Connection(osc, osc, transform=np.eye(2) + \ np.array([[1, -1], [1, 1]]) * speed) return osc import numpy as np import nengo model = nengo.Network('Oscillator') with m: osc = gen_oscillator(m, speed=.05) output = nengo.Ensemble(1, 1, neuron_type=nengo.Direct()) nengo.Connection(osc, output, function=lambda x: np.arctan2(x[0], x[1])) from nengo_gui.ipython import IPythonViz IPythonViz(m, cfg='osc.viz.cfg') Explanation: DMP tutorial Introduction Dynamical movement primitives are dynamical systems that provide a means of robust, generalizable trajectory generation. I give an overview of their origins formally on my blog (https://studywolf.wordpress.com/category/robotics/dynamic-movement-primitive/) and here we'll do a quick overview plus look at their implementation in neurons. They have two forms, discrete and rhythmic, and in this tutorial we'll be looking at using DMPs for rhythmic pattern generation. Basics There are two main parts to DMPs, the point attractors and the forcing function. For each degree-of-freedom that you would like to generate a pattern for a separate point attractor is required. In this notebook we'll be looking at generating 2D patterns, so we'll need two point attractors. Let's look at the code for generating point attractors! 
End of explanation def gen_point_attractor(model, goal, n_neurons=200, alpha=10, beta=10/4.): # create an ensemble with point attractor dynamics synapse = 1 with model: # set up two integrators to represent y and dy y = nengo.Ensemble(n_neurons=n_neurons, dimensions=1, radius=1.5, label='y') dy = nengo.Ensemble(n_neurons=n_neurons, dimensions=1, radius=5, label='dy') nengo.Connection(y, y, synapse=synapse) nengo.Connection(dy, dy, synapse=synapse) nengo.Connection(dy, y, synapse=synapse) # implement ddy = alpha * (beta * (goal - y) - dy) nengo.Connection(goal, dy, transform=alpha*beta, synapse=synapse) nengo.Connection(y, dy, transform=-alpha*beta, synapse=synapse) nengo.Connection(dy, dy, transform=-alpha, synapse=synapse) return y,dy m = nengo.Network('Point attractor') with m: # --------------------- Input -------------------------- goal = nengo.Node(output=[.8, -.8]) # ------------------- Point Attractors -------------------- y1,dy1 = gen_point_attractor(m, goal[0]) y2,dy2 = gen_point_attractor(m, goal[1]) # ------------------ Combine output ---------------------- combine = nengo.Ensemble(n_neurons=500, dimensions=2, radius=np.sqrt(2)) nengo.Connection(y1[0], combine[0], synapse=.01) nengo.Connection(y2[0], combine[1], synapse=.01) from nengo_gui.ipython import IPythonViz IPythonViz(m, cfg='point_attractor.viz.cfg') Explanation: The point attractor End of explanation import numpy as np from scipy import interpolate # our desired path heart_path = np.load('heart_traj.npz')['arr_0'][:,0] * 10 # generate range of values to assign our desired path to x = np.linspace(-np.pi, np.pi, len(heart_path)) # generate function to interpolate the desired trajectory path_gen = interpolate.interp1d(x, heart_path / 10.0) import matplotlib.pyplot as plt %matplotlib inline plt.plot(np.linspace(-np.pi, np.pi, len(heart_path)), path_gen(np.linspace(-np.pi, np.pi, len(heart_path)))) plt.show() import numpy as np import nengo m = nengo.Network('Oscillator') with m: osc = gen_oscillator(m, speed=.05) output1 = nengo.Ensemble(n_neurons=1, dimensions=1, neuron_type=nengo.Direct()) output2 = nengo.Ensemble(n_neurons=1, dimensions=1, neuron_type=nengo.Direct()) # decode out a rhythmic path from our oscillator def force(x, function, gain=1): # calculate the angle theta theta = np.arctan2(x[0], x[1]) # decode our function off of the theta value return function(theta) * gain nengo.Connection(osc, output1, function=lambda x: force(x, path_gen, -1)) nengo.Connection(osc, output2, function=lambda x: force(x, path_gen)) from nengo_gui.ipython import IPythonViz IPythonViz(m, cfg='oscillator.viz.cfg') Explanation: The forcing function The second part of DMPs is the forcing function. For rhythmic DMPs we use an oscillator, and from that oscillator we'll decode a function. Let's look at how to program an oscillator and decode a function from it. 
First we'll generate the function we'd like from some arbitrary trajectory (can be anything!): End of explanation import numpy as np from scipy import interpolate def gen_forcing_functions(y_des, dt=.001, alpha=10, beta=10/4.): # scale our trajectory and find the center point y_des = y_des.T / 1e5 goal = np.sum(y_des, axis=1) / y_des.shape[1] # interpolate our desired trajectory to smooth out the sampling num_samples = 10 path = np.zeros((y_des.shape[0], num_samples)) x = np.linspace(-np.pi, np.pi, y_des.shape[1]) for d in range(y_des.shape[0]): path_gen = interpolate.interp1d(x, y_des[d]) for ii,t in enumerate(np.linspace(-np.pi, np.pi, num_samples)): path[d, ii] = path_gen(t) y_des = path # calculate velocity of y_des dy_des = np.diff(y_des) / dt # add zero to the beginning of every row dy_des = np.hstack((np.zeros((y_des.shape[0], 1)), dy_des)) # calculate acceleration of y_des ddy_des = np.diff(dy_des) / dt # add zero to the beginning of every row ddy_des = np.hstack((np.zeros((y_des.shape[0], 1)), ddy_des)) forcing_functions = [] for d in range(y_des.shape[0]): # find the force required to move along this trajectory # by subtracting out the effects of the point attractor force = ddy_des[d] - alpha * \ (beta * (goal[d] - y_des[d]) - \ dy_des[d]) # generate another interpolation function we can use # to now train up our decoded oscillator output forcing_functions.append(lambda x, force=force: interpolate.interp1d(np.linspace(-np.pi, np.pi, num_samples), force)(x)) return forcing_functions import nengo m = nengo.Network() with m: # --------------------- Inputs -------------------------- in_goal = nengo.Node(output=[0,0]) # ------------------- Point Attractors -------------------- yz1 = gen_point_attractor(m, in_goal[0], n_neurons=1000) yz2 = gen_point_attractor(m, in_goal[1], n_neurons=1000) # -------------------- Oscillators ---------------------- osc = gen_oscillator(m, speed=.05) # generate our forcing function y_des = np.load('heart_traj.npz')['arr_0'] forcing_functions = gen_forcing_functions(y_des) def force(x, function, gain=1): # calculate the angle theta theta = np.arctan2(x[0], x[1]) # decode our function off of the theta value return function(theta) * gain # connect oscillator to point attractor nengo.Connection(osc, yz1[1], function=lambda x: force(x, forcing_functions[0])) nengo.Connection(osc, yz2[1], function=lambda x: force(x, forcing_functions[1])) # output for easy viewing output = nengo.Ensemble(n_neurons=1, dimensions=2, neuron_type=nengo.Direct()) nengo.Connection(yz1[0], output[0], synapse=.01) nengo.Connection(yz2[0], output[1], synapse=.01) from nengo_gui.ipython import IPythonViz IPythonViz(m, cfg='DMP.viz.cfg') Explanation: Combining point attractor and forcing function Now that we can generate point attractors and decode rhythmic patterns off of oscillators, we have to put them together. Note that we want to generate a set of forces off of the oscillator, rather than a set of positions. So the function that we want to decode off of the oscillator can be calculated from the desired position trajectory by finding the desired acceleration trajectory, and subtracting out the effects of the point attractors. Once we have this function, we can simply connect the decoded oscillator output to the point attractory dynamics! End of explanation
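As a quick, self-contained check of that decomposition, here is a NumPy-only sketch that needs neither Nengo nor the heart_traj.npz file; the sine-wave trajectory, time step, and gains below are illustrative assumptions rather than values taken from the notebook. It computes the rhythmic forcing term by subtracting the point-attractor dynamics from the desired acceleration, then Euler-integrates the attractor plus that force to confirm the desired path is approximately recovered.
import numpy as np

dt = 0.001
t = np.arange(0, 1, dt)
y_des = np.sin(2 * np.pi * t)       # made-up desired rhythmic path for one DOF
dy_des = np.gradient(y_des, dt)     # desired velocity
ddy_des = np.gradient(dy_des, dt)   # desired acceleration

alpha, beta = 10.0, 10.0 / 4.0
goal = y_des.mean()

# forcing term: what is left over after removing the point-attractor dynamics
forces = ddy_des - alpha * (beta * (goal - y_des) - dy_des)

# integrate ddy = alpha * (beta * (goal - y) - dy) + force with simple Euler steps
y, dy = y_des[0], dy_des[0]
y_out = []
for f in forces:
    ddy = alpha * (beta * (goal - y) - dy) + f
    dy += ddy * dt
    y += dy * dt
    y_out.append(y)

print("max reconstruction error:", np.max(np.abs(np.array(y_out) - y_des)))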
12,630
Given the following text description, write Python code to implement the functionality described below step by step Description: What-If Tool Image Smile Detection In this demo we demonstrate the use of the What-If Tool for image recognition models. Our task is to predict if a person is smiling or not. We provide a CNN that is trained on a subset of the CelebA dataset and visualize the results on a separate test subset. Copyright 2019 Google LLC. SPDX-License-Identifier: Apache-2.0 Step1: Re-run the below cell until you see the output Successfully installed h5py-2.10.0. Step2: Note: Please ignore any incompatibility ERROR that may appear for the packages visions as it will not affect the lab's functionality. In order to use the correct h5py version, you will need to restart the notebook's kernel (Kernel > Restart Kernel), then download the pretrained keras model files and the subset of CelebA images. Step3: Define helper functions for dataset conversion from csv to tf.Examples Step4: Load the csv file into pandas dataframe and process it for WIT Step5: Load the keras models Step6: Define the custom predict function for WIT Step7: Note that this particular model only uses images as input. Therefore, partial dependence plots are flat for all features. These features are provided for slicing and analysis purposes. Invoke What-If Tool for the data and model {display-mode: "form"}
Python Code: # Ensure the right version of Tensorflow is installed. !pip freeze | grep tensorflow==2.1 Explanation: What-If Tool Image Smile Detection In this demo we demonstrate the use of what-if-tool for image recognition models. Our task is to predict if a person is smiling or not. We provide a CNN that is trained on a subset of CelebA dataset and visualize the results on a separate test subset. Copyright 2019 Google LLC. SPDX-License-Identifier: Apache-2.0 End of explanation # Install version 2.10 of h5py import sys !{sys.executable} -m pip uninstall -y h5py !{sys.executable} -m pip install 'h5py < 3.0.0' Explanation: Re-run the below cell until you see the output Successfully installed h5py-2.10.0. End of explanation !curl -L https://storage.googleapis.com/what-if-tool-resources/smile-demo/smile-colab-model.hdf5 -o ./smile-model.hdf5 !curl -L https://storage.googleapis.com/what-if-tool-resources/smile-demo/test_subset.zip -o ./test_subset.zip !unzip -qq -o test_subset.zip Explanation: Note: Please ignore any incompatibility ERROR that may appear for the packages visions as it will not affect the lab's functionality. In order to use the correct h5py version, you will need to restart the notebook's kernel. To do this, select Kernel > Restart Kernel from the top menu. Download the pretrained keras model files and subset of celeba images End of explanation import numpy as np import tensorflow as tf import os from PIL import Image from io import BytesIO # Converts a dataframe into a list of tf.Example protos. # If images_path is specified, it assumes that the dataframe has a special # column "image_id" and the path "images_path/image_id" points to an image file. # Given this structure, this function loads and processes the images as png byte_lists # into tf.Examples so that they can be shown in WIT. Note that 'image/encoded' # is a reserved field in WIT for encoded image features. def df_to_examples(df, columns=None, images_path=''): examples = [] if columns == None: columns = df.columns.values.tolist() for index, row in df.iterrows(): example = tf.train.Example() for col in columns: if df[col].dtype is np.dtype(np.int64): example.features.feature[col].int64_list.value.append(int(row[col])) elif df[col].dtype is np.dtype(np.float64): example.features.feature[col].float_list.value.append(row[col]) elif row[col] == row[col]: example.features.feature[col].bytes_list.value.append(row[col].encode('utf-8')) if images_path: fname = row['image_id'] with open(os.path.join(images_path, fname), 'rb') as f: im = Image.open(f) buf = BytesIO() im.save(buf, format= 'PNG') im_bytes = buf.getvalue() example.features.feature['image/encoded'].bytes_list.value.append(im_bytes) examples.append(example) return examples # Converts a dataframe column into a column of 0's and 1's based on the provided test. # Used to force label columns to be numeric for binary classification using a TF estimator. 
def make_label_column_numeric(df, label_column, test): df[label_column] = np.where(test(df[label_column]), 1, 0) Explanation: Define helper functions for dataset conversion from csv to tf.Examples End of explanation import pandas as pd data = pd.read_csv('celeba/data_test_subset.csv') examples = df_to_examples(data, images_path='celeba/img_test_subset_resized/') Explanation: Load the csv file into pandas dataframe and process it for WIT End of explanation from tensorflow.keras.models import load_model model1 = load_model('smile-model.hdf5') Explanation: Load the keras models End of explanation # This function extracts 'image/encoded' field, which is a reserved key for the # feature that contains encoded image byte list. We read this feature into # BytesIO and decode it back to an image using PIL. # The model expects an array of images that are floats in range 0.0 to 1.0 and # outputs a numpy array of (n_samples, n_labels) def custom_predict(examples_to_infer): def load_byte_img(im_bytes): buf = BytesIO(im_bytes) return np.array(Image.open(buf), dtype=np.float64) / 255. ims = [load_byte_img(ex.features.feature['image/encoded'].bytes_list.value[0]) for ex in examples_to_infer] preds = model1.predict(np.array(ims)) return preds Explanation: Define the custom predict function for WIT End of explanation from witwidget.notebook.visualization import WitWidget, WitConfigBuilder, display num_datapoints = 250 tool_height_in_px = 700 # Decode an image from tf.example bytestring def decode_image(ex): im_bytes = ex.features.feature['image/encoded'].bytes_list.value[0] im = Image.open(BytesIO(im_bytes)) return im # Define the custom distance function that compares the average color of images def image_mean_distance(ex, exs, params): selected_im = decode_image(ex) mean_color = np.mean(selected_im, axis=(0,1)) image_distances = [np.linalg.norm(mean_color - np.mean(decode_image(e), axis=(0,1))) for e in exs] return image_distances # Setup the tool with the test examples and the trained classifier config_builder = WitConfigBuilder(examples[:num_datapoints]).set_custom_predict_fn( custom_predict).set_custom_distance_fn(image_mean_distance) wv = WitWidget(config_builder, height=tool_height_in_px) display(wv) Explanation: Note that this particular model only uses images as input. Therefore, partial dependence plots are flat for all features. These features are provided for slicing and analysis purposes. Invoke What-If Tool for the data and model {display-mode: "form"} End of explanation
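The two custom hooks wired into the widget above can also be spot-checked directly on a handful of examples; the short snippet below is an optional addition and assumes the cells above have already been run, so examples, custom_predict, and image_mean_distance are all defined.
# Spot-check the custom prediction and distance functions outside the widget
sample = examples[:5]

preds = custom_predict(sample)            # numpy array of shape (n_samples, n_labels)
print("prediction array shape:", preds.shape)
print("model output for the first image:", preds[0])

# distance from the first example to the next four, by mean image color
dists = image_mean_distance(sample[0], sample[1:], None)
print("mean-color distances:", [round(float(d), 2) for d in dists])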
12,631
Given the following text description, write Python code to implement the functionality described below step by step Description: https://www.dataquest.io/mission/113/challenge-summarizing-data/ 2: College Majors And Employment Load the FiveThirtyEight American Community Survey files all-ages.csv and recent-grads.csv into pandas DataFrames. Step1: 3: Summarizing Major Categories Compute the total number of people in each Major_category for both datasets. Step2: 4: Low Wage Jobs Rates Calculate the proportion of recent graduates who ended up in low-wage jobs. Step3: 5: Comparing Datasets For each major, determine whether recent graduates or the full population had the lower unemployment rate.
Python Code: # %sh # # download source file # wget https://raw.githubusercontent.com/fivethirtyeight/data/master/college-majors/all-ages.csv # wget https://raw.githubusercontent.com/fivethirtyeight/data/master/college-majors/recent-grads.csv # ls -l import pandas as pd all_ages = pd.read_csv("all-ages.csv") print all_ages.columns print all_ages.head(3) recent_grads = pd.read_csv("recent-grads.csv") print recent_grads.columns print recent_grads.head(3) Explanation: https://www.dataquest.io/mission/113/challenge-summarizing-data/ 2: College Majors And Employment The American Community Survey is a survey run by the US Census Bureau that collects data on everything from the affordability of housing to employment rates for different industries. For this challenge, you'll be using the data derived from the American Community Survey for years 2010-2012. The team at FiveThirtyEight has cleaned the dataset and made it available on their Github repo. Here's a quick overview of the files we'll be working with: all-ages.csv - employment data by major for all ages <br /> recent-grads.csv - employment data by major for just recent college graduates <br /> End of explanation all_ages_major_categories = {} recent_grads_major_categories = {} def calculate_major_cat_totals(df): counts_dictionary = {} for cat in df["Major_category"].value_counts().index: counts_dictionary[cat] = df["Total"][df["Major_category"] == cat].sum() return counts_dictionary all_ages_major_categories = calculate_major_cat_totals(all_ages) recent_grads_major_categories = calculate_major_cat_totals(recent_grads) print all_ages_major_categories print recent_grads_major_categories Explanation: 3: Summarizing Major Categories In both of these datasets, majors are grouped into categories. There are multiple rows with a common value for Major_category but different values for Major. We would like to know the total number of people in each Major_category for both datasets. End of explanation low_wage_percent = recent_grads["Low_wage_jobs"].astype(float).sum() / recent_grads["Total"].sum() print low_wage_percent Explanation: 4: Low Wage Jobs Rates The press likes to talk a lot about how many college grads are unable to get higher wage, skilled jobs and end up working lower wage, unskilled jobs instead. As a data person, it is your job to be skeptical of any broad claims and analyze relevant data to obtain a more nuanced view. Let's run some basic calculations to explore that idea further. End of explanation # All majors, common to both DataFrames majors = recent_grads['Major'].value_counts().index recent_grads_lower_emp_count = 0 all_ages_lower_emp_count = 0 for major in majors: recent_unemp = recent_grads["Unemployment_rate"][recent_grads["Major"] == major].values[0] all_unemp = all_ages["Unemployment_rate"][all_ages["Major"] == major].values[0] if recent_unemp < all_unemp: recent_grads_lower_emp_count += 1 elif recent_unemp > all_unemp: all_ages_lower_emp_count += 1 print "Recent grads fare better: ", recent_grads_lower_emp_count print "All ages fare better: ", all_ages_lower_emp_count Explanation: 5: Comparing Datasets Both all_ages and recent_grads datasets have 173 rows, corresponding to the 173 college major codes. This enables us to do some comparisons between the two datasets and perform some initial calculations to see how similar or different the statistics of recent college graduates are from those of the entire population. End of explanation
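One natural follow-up to the overall low-wage figure is to break it out by major category; the optional snippet below is a sketch along those lines and assumes the recent_grads DataFrame from the cells above is already loaded (the float() call keeps the division from truncating under Python 2).
# Low-wage share of recent graduates, by major category
low_wage_by_cat = recent_grads.groupby("Major_category").apply(
    lambda g: g["Low_wage_jobs"].sum() / float(g["Total"].sum()))
low_wage_by_cat = low_wage_by_cat.sort_values(ascending=False)
print(low_wage_by_cat.head(10))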
12,632
Given the following text description, write Python code to implement the functionality described. Description: Bitwise OR ( | ) of all even numbers from 1 to N Function to return the bitwise OR of all the even numbers up to N ; Initialize result as 2 ; Driver code
Python Code: def bitwiseOrTillN(n):
    # Initialize result as 2, then OR in every remaining even number up to n
    result = 2
    for i in range(4, n + 1, 2):
        result = result | i
    return result

n = 10
print(bitwiseOrTillN(n))
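A brief cross-check of the function above (an optional addition, assuming the snippet has been run): OR the even numbers together directly with functools.reduce and compare the results for a few values of n.
from functools import reduce

def bitwise_or_brute(n):
    # brute-force reference: OR every even number from 2 to n
    return reduce(lambda a, b: a | b, range(2, n + 1, 2), 0)

for n in range(2, 21, 2):
    assert bitwiseOrTillN(n) == bitwise_or_brute(n), n
print("bitwiseOrTillN matches the brute-force OR for n = 2..20")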
12,633
Given the following text description, write Python code to implement the functionality described below step by step Description: Getting ready to implement the Schelling model Goal for this assignment The goal of this assignment is to finish up the two functions that you started in class on the first day of this project, to ensure that you're ready to hit the ground running when you get back to together with your group. You are welcome to work with your group on this pre-class assignment - just make sure to list who you worked with below. Also, everybody needs to turn in their own solutions! Your name SOLUTIONS Function 1 Step1: Function 2 Step3: Assignment wrapup Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
Python Code: # Put your code here, using additional cells if necessary. import random import math def initialize_list(array_size=32, randseed=8675309): ''' This function optionally takes in an array size and random seed and returns the initial neighborhood that we're going to start from - a string of zeros and ones. If no arguments are given, it defaults to the values specified. ''' random.seed(randseed) initial_list = [] for i in range(array_size): initial_list.append(random.randint(0,1)) return initial_list def neighborhood_print(neighborhood, note=''): ''' This is a convenience function to take our neighborhood list, make a string of stars and zeros out of it, and print the string plus optional text at the end. It's not necessary but it looks pretty. ''' neighborstring='' for i in range(len(neighborhood)): if(neighborhood[i]) > 0: neighborstring += '*' else: neighborstring += '0' # make sure optional text is a string if type(note)!=str: note = str(note) # add an extra space to make it look nice! if note != '': note = ' ' + note neighborstring += note print(neighborstring) my_board = initialize_list() neighborhood_print(my_board) Explanation: Getting ready to implement the Schelling model Goal for this assignment The goal of this assignment is to finish up the two functions that you started in class on the first day of this project, to ensure that you're ready to hit the ground running when you get back to together with your group. You are welcome to work with your group on this pre-class assignment - just make sure to list who you worked with below. Also, everybody needs to turn in their own solutions! Your name SOLUTIONS Function 1: Creating a game board Function 1: Write a function that creates a one-dimensional game board composed of agents of two different types (0 and 1, X and O, stars and pluses... whatever you want), where the agents are assigned to spots randomly with a 50% chance of being either type. As arguments to the function, take in (1) the number of spots in the game board (setting the default to 32) and (2) a random seed that you will use to initialize the board (again with some default number), and return your game board. (Hint: which makes more sense to describe the game board, a list or a Numpy array? What are the tradeoffs?) Show that your function is behaving correctly by printing out the returned game board. End of explanation # Put your code here, using additional cells if necessary. def is_happy(my_list, my_value, my_index): ''' This function assumes that my_list has a value (my_value) popped out of it already, and checkes to see if my_value would be happy in my_list at index my_index. It returns 'True' if happy and 'False' if unhappy under those circumstances. ''' # do some error-checking (is the index within the allowed range?) if my_index < 0 or my_index > len(my_list): print("you've made an indexing error!", my_index) start = my_index-4 # start 4 to the left end = my_index+4 # end 3 to the right b/c we count the value at my_index too # if the starting value is out of bounds, fix it if start < 0: start = 0 # if the ending value is out of bounds, fix it. note that we want to go to # len(list), not len(list)-1, because range() goes to 1 before the end of # the range! 
if end > len(my_list): end = len(my_list) # keep track of the neighbors that are like me neighbors_like_me = 0 # keep track of total neighbors total_neighbors = 0 # loop over the specified range for i in range(start,end): if my_list[i] == my_value: # if this neighbor is like me, keep track of that neighbors_like_me += 1 total_neighbors+=1 # also keep track of total neighbors # happy if at least half are like me, unhappy otherwise # note: it's *at least* half because we're not double-counting our # own value if neighbors_like_me/total_neighbors >= 0.5: return True else: return False my_board = initialize_list() neighborhood_print(my_board) for i in range(len(my_board)): agent = my_board.pop(i) am_i_happy = is_happy(my_board, agent, i) my_board.insert(i, agent) if am_i_happy==True: print("agent {} at position {} is HAPPY! :-)".format(agent,i)) else: print("agent {} at position {} is UNHAPPY! :-(".format(agent,i)) Explanation: Function 2: deciding if an agent is happy Write a function that takes the game board generated by the function you wrote above and determines whether an agent at position i in the game board of a specified type is happy for a game board of any size and a neighborhood of size N (i.e., from position i-N to i+N), and returns that information. Make sure to check that position i is actually inside the game board (i.e., make sure the request makes sense), and ensure that it behaves correctly for agents near the edges of the game board. Show that your function is behaving correctly by giving having it check every position in the game board you generated previously, and decide whether the agent in each spot is happy or not. Verify by eye that it's behaving correctly. (Hint: You're going to use this later, when you're trying to decide where to put an agent. Should you write the function assuming that the agent is already in the board, or that you're testing to see whether or not you've trying to decide whether to put it there?) End of explanation from IPython.display import HTML HTML( <iframe src="https://goo.gl/forms/M7YCyE1OLzyOK7gH3?embedded=true" width="80%" height="1200px" frameborder="0" marginheight="0" marginwidth="0"> Loading... </iframe> ) Explanation: Assignment wrapup Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! End of explanation
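The write-up above asks for a neighborhood of size N, while the sample solution fixes the window at four spots; one way to parameterize that is sketched below. The function name, the n_size default, and the little demo loop are illustrative choices rather than the graded solution, and the demo assumes initialize_list from the cells above.
def is_happy_n(my_list, my_value, my_index, n_size=4):
    '''
    Like is_happy, but checks up to n_size spots on each side of my_index.
    Assumes my_value has already been popped out of my_list.
    '''
    start = max(my_index - n_size, 0)
    end = min(my_index + n_size, len(my_list))
    neighbors = my_list[start:end]
    like_me = sum(1 for v in neighbors if v == my_value)
    # happy if at least half of the checked neighbors match
    return like_me / len(neighbors) >= 0.5

board = initialize_list()
for i in range(5):
    rest = board[:i] + board[i+1:]          # board with position i popped out
    print(i, is_happy_n(rest, board[i], i, n_size=2))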
12,634
Given the following text description, write Python code to implement the functionality described below step by step Description: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small> Challenge Notebook Problem Step1: Unit Test The following unit test is expected to fail until you solve the challenge.
Python Code: class Node(object): def __init__(self, data): # TODO: Implement me pass class Stack(object): def __init__(self, top=None): # TODO: Implement me pass def push(self, data): # TODO: Implement me pass def pop(self): # TODO: Implement me pass def peek(self): # TODO: Implement me pass def is_empty(self): # TODO: Implement me pass Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small> Challenge Notebook Problem: Implement a stack with push, pop, peek, and is_empty methods using a linked list. Constraints Test Cases Algorithm Code Unit Test Pythonic-Code Solution Notebook Constraints None Test Cases Push Push to empty stack Push to non-empty stack Pop Pop on empty stack Pop on single element stack Pop on multiple element stack Peek Peek on empty stack Peek on one or more element stack Is Empty Is empty on empty stack Is empty on one or more element stack Algorithm Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code End of explanation # %load test_stack.py from nose.tools import assert_equal class TestStack(object): # TODO: It would be better if we had unit tests for each # method in addition to the following end-to-end test def test_end_to_end(self): print('Test: Empty stack') stack = Stack() assert_equal(stack.peek(), None) assert_equal(stack.pop(), None) print('Test: One element') top = Node(5) stack = Stack(top) assert_equal(stack.pop(), 5) assert_equal(stack.peek(), None) print('Test: More than one element') stack = Stack() stack.push(1) stack.push(2) stack.push(3) assert_equal(stack.pop(), 3) assert_equal(stack.peek(), 2) assert_equal(stack.pop(), 2) assert_equal(stack.peek(), 1) assert_equal(stack.is_empty(), False) assert_equal(stack.pop(), 1) assert_equal(stack.peek(), None) assert_equal(stack.is_empty(), True) print('Success: test_end_to_end') def main(): test = TestStack() test.test_end_to_end() if __name__ == '__main__': main() Explanation: Unit Test The following unit test is expected to fail until you solve the challenge. End of explanation
12,635
Given the following text description, write Python code to implement the functionality described below step by step Description: String vs. Bytes Text in Python 3 is always Unicode and is represented by the str type, and binary data is represented by the bytes type. They cannot be mixed. Strings can be encoded to bytes, and bytes can be decoded back to strings. Step1: Now encode both strings to bytes. Step2: Decode back to strings. Step3: Big Endian vs Little Endian Step4: struct package This module performs conversions between Python values and C structs represented as Python bytes objects. Step5: struct.pack(fmt, v1, v2, …) Return a bytes object containing the values v1, v2, … packed according to the format string fmt. The arguments must match the values required by the format exactly. "!" means network endianess (big endian) "<" means little endian ">" means big endian "=" means native "h" measn short integer (2 bytes) Step6: struct.unpack(fmt, buffer) Unpack from the buffer buffer (presumably packed by pack(fmt, ...)) according to the format string fmt. The result is a tuple even if it contains exactly one item. The buffer’s size in bytes must match the size required by the format,
Python Code: s = 'Hello world!' print(s) print("length is", len(s)) us = 'Hello 世界!' print(us) print("length is", len(us)) Explanation: String vs. Bytes Text in Python 3 is always Unicode and is represented by the str type, and binary data is represented by the bytes type. They cannot be mixed. Strings can be encoded to bytes, and bytes can be decoded back to strings. End of explanation bs = s.encode('utf-8') print(bs) print("length is", len(bs)) bus = us.encode('utf-8') print(bus) print("length is", len(bus)) Explanation: Now encode both strings to bytes. End of explanation print(bs.decode('utf-8')) print(bus.decode('utf-8')) Explanation: Decode back to strings. End of explanation num = 258 print(num.to_bytes(2, "big")) print(num.to_bytes(2, "little")) print(num.to_bytes(4, "big")) print(num.to_bytes(4, "little")) Explanation: Big Endian vs Little Endian End of explanation import struct Explanation: struct package This module performs conversions between Python values and C structs represented as Python bytes objects. End of explanation x = 256 print("Network endianess") print(struct.pack('!h', x)) print("Little endian") print(struct.pack('<h', x)) print("Big endian") print(struct.pack('>h', x)) print("Native endianess") print(struct.pack('=h', x)) Explanation: struct.pack(fmt, v1, v2, …) Return a bytes object containing the values v1, v2, … packed according to the format string fmt. The arguments must match the values required by the format exactly. "!" means network endianess (big endian) "<" means little endian ">" means big endian "=" means native "h" measn short integer (2 bytes) End of explanation bx = struct.pack('!h', x) print(struct.unpack('!h', bx)) print(struct.unpack('<h', bx)) print(struct.unpack('!h', bx)[0]) print(struct.unpack('<h', bx)[0]) Explanation: struct.unpack(fmt, buffer) Unpack from the buffer buffer (presumably packed by pack(fmt, ...)) according to the format string fmt. The result is a tuple even if it contains exactly one item. The buffer’s size in bytes must match the size required by the format, End of explanation
12,636
Given the following text description, write Python code to implement the functionality described below step by step Description: Capstone 1 Data Wrangling Project Data Acquisition Summary A set of .csv files provided for the Kaggle March Machine Learning Mania contest (hereafter referred to as Kaggle data) were downloaded from the Kaggle website (www.kaggle.com). From a college basketball team ratings website (www.kenpom.com) I downloaded a set of .csv files with the final pre-tournament scores on team-level efficiency metrics (Kenpom data). Using the Python package BeautifulSoup, I wrote a script to scrape over 3,000 tables with player-level data from two separate sports statistics websites, www.espn.com and www.sports-reference.com. Scraping was also used to obtain lists of annual player-level All-American award winners and annual team-level data (strength of schedule, etc.) from www.sports-reference.com. Data Wrangling The variety of data required a sequence of data wrangling processes to produce the final set of features and labels for the machine learning phase. Several of these data wrangling processes are described below. Step1: Fuzzy matching player names Different player-level data were available in the two sets of Roster files, which required linking these files player-to-player prior to computing team features. Step2: After loading the separate sportsreference and espn files, I add some common team identifers to each dataframe (created in a separate process) to make it easier to match the players. Step3: To match the players in the two different files, I first used a merge with an outer join and an indicator to examine the matches and non-matches after the merge. Step4: Here I used the tab function I created earlier to inspect the proportion of names that were perfect matches and linked from this merge. Step5: This merge left around 8% of names unmatched. I needed to separate the nonmatched names, determine why they didn't match, and figure out a way to link them together. To get a snapshot of the causes of the nonmatches, let's look at a snippet of 10 rows. Step6: The same players are present in both files, but they have variations in the name due to punctuation, spelling, or use of nicknames. To match the remaining names, I used the python package fuzzywuzzy. I used the extractOne function (process.extractOne) which compares a given string to a list of strings supplied to the function, and extracts the one "best match" from the list. The function returns a tuple of the best match and a score that reflects the accuracy of the match. Here I test the function using names from the table above. I pass it a string to be matched ('Mo Williams'), and a list of 2 strings, the right match ('Maurice Williams') and a different player ('Deji Ibetayo'). Step7: To match all the names, my first strategy was to use the function out-of-the-box and simply pass it all the nonmatched names, but this didn't work well. Here's an example of why Step8: If the function can find a name that is more syntactically similar, it will extract it and miss the correct match. For my purposes, the function might work better if it had fewer options, perhaps a more restrictive list specific to that player. To solve this problem I wrote a function to identify the player's team and season, and extract only a name from the player's team and season in the espn file. Step9: After running the function, I can inspect the matched names to assess the quality of the matches. 
Here are the names of players with the highest scores, all those below 65. Step10: These are names with minor differences in spelling or punctuation that are all matched well by the function. Here are the names of players with the lowest scores, all those below 65. Step11: Below 50 look like a reasonable cutoff where pairs may not actually be the same players. Inspecting a handful of these revealed several who only existed in one file but not the other due to being on the official roster but not acquiring any playing time. So they mostly appear to be true nonmatches. Step12: The result is a table containing each player's name in the sportsreference source and the espn source, even when different names are used. I can test the effectiveness of the matched names by attempting to merge the two sources of data Step13: Above I can see that 99.4% of the player data is now linked after this merge. Processing Roster data After matching on player names, the roster data file describes each player’s descriptive information (class, position, height) and in-game performance over the course of a season (minutes played, points scored, rebounds, assists, etc.). Step14: Here are a few sample rows of these data, taken from the 2011-2012 Kentucky Wildcats. Step15: These data represent season-long averages for individual players in several game-play statistics tracked in basketball, such as minutes played, points, rebounds, and assists. Because my goal is to use this data to predict outcomes of NCAA tournament games, I need to transform these data into team-level features. Below are some of the obstacles in this task and the solutions I created. Player height is in a string format, and I need to convert to numeric Step16: Now height is numeric and ready for conversion to some team-level feature. In basketball, taller height is usually an advantage, so I'm interested in obtaining a feature that describes the overall height of the team. However, the simple team average is not a good solution, as backups who rarely play would be weighted the same as players who play most of the game, and my goal is to obtain a feature that describes the height of the team during actual game play. To address this problem, I created function that calculates the team's 'total minutes", then get each player's percentage of minutes, and then create a column that represents each player's height weighted by their minute percentage. I then calculate a team-level feature, which is team total in minutes-weighted height. Step17: I'm also interested in looking at how scoring is distributed across different groupings of players, such as by starters vs bench players, or guards vs. forwards. The player data doesn't describe whether a player is a "starter" (in the group of 5 players who starts the game on the floor) or not, but typically the starters are also the 5 players who also play the most minutes. I use team minutes to calculate whether a player is a starter or not. Step18: Now I can use the starter column to compute some interesting team-level features, such as the percentage of points on the team scored by the bench and the starters. Step19: Another valuable piece of information in the player data is "Class" which describes the player's year in school from first-year players ("freshmen") to fourth-year players ("seniors"). Teams comprised of more experienced players may have an advantage. Step20: Similar to the height data, class is encoded in a string format and I need to convert to numeric before using the data for calcuations. 
There's also some inconsistency in the labels used that needs to be cleaned up. Step21: The steps above create a numeric experience column ('exp') that describes each player's number of years of college basketball experience. Step22: Now I can compute some interesting team-level features, such as the average experience level for the starting players Step23: After computing all of the features I'm interested in, I merge them together to create the processed team-level file with features computed from the roster-level data. Step24: Matching team names For the data files obtained from Kaggle, consistent use of a unique numeric identifier for all 364 teams allows seamless joins of Kaggle files by team. Step25: The kaggle team file has 364 teams, each with a unique numeric id code and name. To assist in maching with external data, Kaggle also provides a file with several common team name alternatives. Step26: In many cases the external data I obtained used unique names that did not match these alternative spellings. For example, UC-Berkeley, California, and Cal Golden Bears were different source identifiers for the same team. To resolve this problem, I needed to create a table linking the Kaggle numeric identifier to the unique identifier in each additional data source. The list of names from the sportsreference source was scraped from a website where every team in their database is listed. I pull the team name from the URL text. Below is the code used but commented out; I pull the data from a .csv file I saved after the scrape. Step27: The sportsreference source has 477 team names. The kaggle file is smaller (364) as it only includes teams who played in at least one NCAA tournament game since 1985. I only need to match the 364 Kaggle team names as this data will be used to predict outcomes of NCAA tournament games. First I used regular expressions to apply consistent formatting to the kaggle mixed team names and drop duplicates. Step28: The result is a file I can use to attempt a merge with the sportsreference source team names, as the school names are formatted similarly with all lower-case characters and hyphenated word gaps. Step29: The merge matches nearly all of the teams (361/364). Before continuing I save the matches as a dataframe. Step30: To isolate the kaggle team ids that did not match, I re-merge the team ids with the match file, and then keep the 'left only' rows from the join. I also save the 'right-only' team names for fuzzy string matching later. Step31: To match the remaining names, I use the series of sportsreference names in a fuzzy string matching function. Step32: The only remaining non-match (winston-salem-state) was discovered to be an error as it has never qualified for division 1 basketball. I remove it and keep the best match for the other 2 unique team ids, and add to the teams who matched in the merge. Step33: Computation of team travel distance NCAA tournament games occur at neutral sites, with each team required to travel some distance to the game. The Kaggle data includes two separate files with the locations (longitude & latitude) of each tournament site and the location of each team’s campus, along with a file for each tournament game. Step34: After importing the first objective is to integrate the game location with the game results. Step35: This merge produces a file with the two teams, host city, and venue location for all tournament games. Step36: To compute the distance for each team, I merge in the location for each team. 
I use two separate merges, one for the winning team and one for the losing team. Step37: To compute the distance I need each lattitude and longitude together in a single tuple. Step38: Then I use the great_circle function from the geopy.distance package to compute the distance between game location and team location for winning team and losing team. Step39: Identifying and labeling upsets A perceived “upset” in the NCAA tournament usually refers to the “seed” system used to rank the teams and set the matchups. The 64 teams are divided into four “regions”; in each region teams receive a ranking or “seed” from 1 (best) to 16 (worst). An “upset” occurs when a team with a comparatively higher numerical seed defeats one with a lower seed (12 beats 5, 11 beats 6). The “upset” label is typically reserved for victories by teams with seeds much larger than their opponent. Kaggle provides data on the final scores of NCAA tournament games, and the seed for each team. To identify upsets, I first merged Kaggle seed data with tournament game data to link the teams in each match-up with their respective tournament seed. Step40: Two conditions need to be met for a game to be coded as an upset. Condition 1 is that the game involves opponents with an absolute seed difference greater than 3. Condition 2 is that the team with the higher numeric seed wins the game. To label upsets I created calculated the absolute seed difference and then created two functions to examine these conditions, and used these functions to create indicator columns in the tournament games dataset. Step41: 65% of NCAA tournament games qualified as having "upset potential". Step42: Only 14% of all tournament games resulted in an upset.
Python Code: # importing packages for wrangling tasks import pandas as pd import numpy as np import re from fuzzywuzzy import process from fuzzywuzzy import fuzz from geopy.distance import great_circle # create a function to quickly tabulate a dataframe column def tab(dfcol): t = pd.crosstab(index=dfcol, columns="count") print t print t/t.sum() pd.set_option('display.max_columns', None) Explanation: Capstone 1 Data Wrangling Project Data Acquisition Summary A set of .csv files provided for the Kaggle March Machine Learning Mania contest (hereafter referred to as Kaggle data) were downloaded from the Kaggle website (www.kaggle.com). From a college basketball team ratings website (www.kenpom.com) I downloaded a set of .csv files with the final pre-tournament scores on team-level efficiency metrics (Kenpom data). Using the Python package BeautifulSoup, I wrote a script to scrape over 3,000 tables with player-level data from two separate sports statistics websites, www.espn.com and www.sports-reference.com. Scraping was also used to obtain lists of annual player-level All-American award winners and annual team-level data (strength of schedule, etc.) from www.sports-reference.com. Data Wrangling The variety of data required a sequence of data wrangling processes to produce the final set of features and labels for the machine learning phase. Several of these data wrangling processes are described below. End of explanation # load the individual .csv files as pandas dataframes dpath = "C:\Users\mworley\Dropbox\capstone\data" pstats = pd.read_csv(dpath + r'\external\player_data\player_stats.csv') ros = pd.read_csv(dpath + r'\external\rosters\rosters.csv') espn = pd.read_csv(dpath + r'\external\espn\espn_tabs.csv') # merge the two sportsreference files, player stats (pstats) and positions (ros) sr_players = pd.merge(pstats, ros, how='inner', on=['Player', 'Team', 'Season']) Explanation: Fuzzy matching player names Different player-level data were available in the two sets of Roster files, which required linking these files player-to-player prior to computing team features. End of explanation # load the team identifying data teams_seasons = pd.read_csv(dpath + r'\interim\teams_seasons.csv') team_match = pd.read_csv(dpath + r'\interim\team_match.csv') team_info = pd.merge(teams_seasons, team_match, how='inner', on='team_id') # merge team identifier data with player dataframes sr_players = pd.merge(sr_players, team_info, how='inner', left_on=['Team', 'Season'], right_on=['srname', 'season']) espn_players = pd.merge(espn, team_info, how='inner', left_on=['espn_id', 'Season'], right_on=['espn_id', 'season']) # keep only columns I need to match players sr_players = sr_players.loc[:, ['Player', 'srname', 'Season', 'espn_id']] sr_players.drop_duplicates(inplace=True) espn_players = espn_players.loc[:, ['Player', 'srname', 'Season', 'espn_id']] espn_players.drop_duplicates(inplace=True) # keep only years after 2001 in sportsreference file to match with espn player data sr_players = sr_players[sr_players['Season'] > 2001] Explanation: After loading the separate sportsreference and espn files, I add some common team identifers to each dataframe (created in a separate process) to make it easier to match the players. 
End of explanation mrg_players = pd.merge(sr_players, espn_players, how='outer', on=['Player', 'srname' , 'Season', 'espn_id'], indicator=True) Explanation: To match the players in the two different files, I first used a merge with an outer join and an indicator to examine the matches and non-matches after the merge. End of explanation tab(mrg_players['_merge']) Explanation: Here I used the tab function I created earlier to inspect the proportion of names that were perfect matches and linked from this merge. End of explanation nomatch = mrg_players[mrg_players['_merge'] != "both"].copy() nomatch.sort_values(['srname', 'Season'], inplace=True) nomatch.head(10) Explanation: This merge left around 8% of names unmatched. I needed to separate the nonmatched names, determine why they didn't match, and figure out a way to link them together. To get a snapshot of the causes of the nonmatches, let's look at a snippet of 10 rows. End of explanation process.extractOne('Mo Williams', ['Maurice Williams', 'Deji Ibetayo']) Explanation: The same players are present in both files, but they have variations in the name due to punctuation, spelling, or use of nicknames. To match the remaining names, I used the python package fuzzywuzzy. I used the extractOne function (process.extractOne) which compares a given string to a list of strings supplied to the function, and extracts the one "best match" from the list. The function returns a tuple of the best match and a score that reflects the accuracy of the match. Here I test the function using names from the table above. I pass it a string to be matched ('Mo Williams'), and a list of 2 strings, the right match ('Maurice Williams') and a different player ('Deji Ibetayo'). End of explanation process.extractOne('Mo Williams', ['Maurice Williams', 'John Williams']) Explanation: To match all the names, my first strategy was to use the function out-of-the-box and simply pass it all the nonmatched names, but this didn't work well. Here's an example of why: End of explanation # create dataframe of non-matched player names, separately for each source nomatch_sr = nomatch[nomatch._merge == "left_only"].copy() nomatch_sr.drop('_merge', inplace=True, axis=1) nomatch_espn = nomatch[nomatch._merge == "right_only"].copy() nomatch_espn.drop('_merge', inplace=True, axis=1) # group by team and season, create dictionary of non-matched espn names to use in the function e = nomatch_espn.groupby(['srname','Season'])['Player'] espn_dict = dict(list(e)) # write the function to selectively match using the player's team and season plist = [] def match_name_team(row): try: p = row['Player'] t = row['srname'] s = row['Season'] l = espn_dict.get((t, s)) n, scr = process.extractOne(p, l) list = (p, t, s, n, scr) plist.append(list) except: pass # apply the function to the nonmatched sportsreference player dataframe nomatch_sr.apply(match_name_team, axis=1) df = pd.DataFrame(plist, columns=('Player', 'srname', 'Season', 'Player_espn', 'score')) Explanation: If the function can find a name that is more syntactically similar, it will extract it and miss the correct match. For my purposes, the function might work better if it had fewer options, perhaps a more restrictive list specific to that player. To solve this problem I wrote a function to identify the player's team and season, and extract only a name from the player's team and season in the espn file. 
End of explanation df.sort_values('score', ascending=False).head(10) Explanation: After running the function, I can inspect the matched names to assess the quality of the matches. Here are the names of players with the highest scores, all those below 65. End of explanation # inspect low-scoring matches df[df['score'] < 65].sort_values('score', ascending=False) # everything above 50 looks right, how many names are below 50? len(df[df.score < 50]) Explanation: These are names with minor differences in spelling or punctuation that are all matched well by the function. Here are the names of players with the lowest scores, all those below 65. End of explanation # drop matches scoring below 50 df_c50 = df.loc[df.score > 50].copy() # combine the perfect matches and fuzzy matches into one dataframe df_c50.drop('score', inplace=True, axis=1) match = mrg_players[mrg_players['_merge'] == "both"].copy() match.drop(['espn_id', '_merge'], inplace=True, axis=1) match['Player_espn'] = match.Player player_match = match.append(df_c50, ignore_index=True) Explanation: Below 50 look like a reasonable cutoff where pairs may not actually be the same players. Inspecting a handful of these revealed several who only existed in one file but not the other due to being on the official roster but not acquiring any playing time. So they mostly appear to be true nonmatches. End of explanation # re-create sportsreference and espn player dataframes sr_players = pd.merge(pstats, ros, how='inner', on=['Player', 'Team', 'Season']) espn = pd.read_csv(dpath + r'\external\espn\espn_tabs.csv') # merge team identifier data with player dataframes sr_players = pd.merge(sr_players, team_info, how='inner', left_on=['Team', 'Season'], right_on=['srname', 'season']) espn_players = pd.merge(espn, team_info, how='inner', left_on=['espn_id', 'Season'], right_on=['espn_id', 'season']) # keep only years after 2001 in sportsreference file to match with espn player data sr_players = sr_players[sr_players['Season'] > 2001] # merge sportsreference file with the player name matches mrg1 = pd.merge(sr_players, player_match, how='inner', on=['Player', 'srname', 'Season']) mrg2 = pd.merge(mrg1, espn_players, how='outer', left_on=['Player_espn', 'srname', 'Season'], right_on=['Player', 'srname', 'season'], indicator=True) tab(mrg2._merge) Explanation: The result is a table containing each player's name in the sportsreference source and the espn source, even when different names are used. I can test the effectiveness of the matched names by attempting to merge the two sources of data: End of explanation player_match = pd.read_csv(dpath + r'\interim\player_match.csv') players = pd.merge(pstats, ros, how='inner', on=['Player', 'Team', 'Season']) players = pd.merge(players, player_match, how='outer', left_on=['Player', 'Season', 'Team'], right_on=['Player', 'Season', 'srname']) players.drop('srname', inplace=True, axis=1) ncols = ['Player_espn', 'GPes', 'MINes', 'PPGes', 'RPGes', 'APGes', 'SPGes', 'BPGes', 'TPGes', 'FGPCTes', 'FTPCTes', '3PTPCTes', 'Teames', 'espn_id', 'Season'] espn.columns = ncols players = pd.merge(players, espn, how='outer', left_on=['Player_espn', 'Season'], right_on=['Player_espn', 'Season']) players = players.dropna(subset = ['Team']) players = pd.merge(players, team_info, how='inner', left_on=['Team', 'Season'], right_on=['srname', 'season'], indicator=True) Explanation: Above I can see that 99.4% of the player data is now linked after this merge. 
Processing Roster data After matching on player names, the roster data file describes each player’s descriptive information (class, position, height) and in-game performance over the course of a season (minutes played, points scored, rebounds, assists, etc.). End of explanation mask = (players.team_id == 1246) & (players.Season == 2012) players[mask] Explanation: Here are a few sample rows of these data, taken from the 2011-2012 Kentucky Wildcats. End of explanation players.loc[mask, ['Player', 'Height']] # change series from object data type to string players['Height'] = players['Height'].astype(str) # create a function that converts string height to total inches def inches(height): try: f = int(height.split('-')[0]) i = int(height.split('-')[1]) return f * 12 + i except: return np.nan players['Heightnum'] = players['Height'].apply(inches) players.loc[mask, ['Player', 'Height', 'Heightnum']] Explanation: These data represent season-long averages for individual players in several game-play statistics tracked in basketball, such as minutes played, points, rebounds, and assists. Because my goal is to use this data to predict outcomes of NCAA tournament games, I need to transform these data into team-level features. Below are some of the obstacles in this task and the solutions I created. Player height is in a string format, and I need to convert to numeric End of explanation def team_minutes(group): s = group.name # minutes played data only available after 2001 if s[1] > 2001: group['tmins'] = group['MINes'].sum() return group else: return group players = players.groupby(['team_id', 'Season'], as_index=False).apply(team_minutes) players['pminpct'] = players.MINes / players.tmins players['phgtmins'] = players.pminpct * players.Heightnum players.loc[mask, ['Player', 'Heightnum', 'pminpct', 'phgtmins']] flrmins = players.groupby(['team_id', 'Season'])['phgtmins'].sum().reset_index() flrmins.sort_values(['Season', 'phgtmins'], ascending=False).head(5) Explanation: Now height is numeric and ready for conversion to some team-level feature. In basketball, taller height is usually an advantage, so I'm interested in obtaining a feature that describes the overall height of the team. However, the simple team average is not a good solution, as backups who rarely play would be weighted the same as players who play most of the game, and my goal is to obtain a feature that describes the height of the team during actual game play. To address this problem, I created function that calculates the team's 'total minutes", then get each player's percentage of minutes, and then create a column that represents each player's height weighted by their minute percentage. I then calculate a team-level feature, which is team total in minutes-weighted height. End of explanation def get_starters(group): s = group.name if s[1] > 2001: group.sort_values('MINes', ascending=False, inplace=True) group['starter'] = 'no' i = group.columns.get_loc('starter') group.iloc[0:5, i] = 'yes' return group else: return group players = players.groupby(['team_id', 'Season'], as_index=False).apply(get_starters) mask = (players.team_id == 1246) & (players.Season == 2012) players.loc[mask, ['Player', 'MINes', 'starter', 'PTS']] Explanation: I'm also interested in looking at how scoring is distributed across different groupings of players, such as by starters vs bench players, or guards vs. forwards. 
The player data doesn't describe whether a player is a "starter" (in the group of 5 players who starts the game on the floor) or not, but typically the starters are also the 5 players who also play the most minutes. I use team minutes to calculate whether a player is a starter or not. End of explanation benscr = players.groupby(['team_id', 'Season', 'starter'])['PTS'] benscr = benscr.sum().unstack('starter').reset_index() benscr['ptspct_bn'] = benscr.no / (benscr.no + benscr.yes) benscr['ptspct_st'] = 1 - benscr.ptspct_bn benscr.drop(['no', 'yes'] , inplace=True, axis=1) benscr[(benscr.team_id==1246) & (benscr.Season == 2012)] Explanation: Now I can use the starter column to compute some interesting team-level features, such as the percentage of points on the team scored by the bench and the starters. End of explanation players.loc[mask, ['Player', 'Class']] Explanation: Another valuable piece of information in the player data is "Class" which describes the player's year in school from first-year players ("freshmen") to fourth-year players ("seniors"). Teams comprised of more experienced players may have an advantage. End of explanation tab(players.Class) players.Class.fillna('', inplace=True) players.Class = map(str.upper, players.Class) expdict = {'SR': '3', 'JR': '2', 'GR': '3', 'SO': '1', 'FR': '0', 'MISSING': ""} players["exp"] = players.Class.map(expdict) players["exp"] = pd.to_numeric(players.exp, errors='coerce') Explanation: Similar to the height data, class is encoded in a string format and I need to convert to numeric before using the data for calcuations. There's also some inconsistency in the labels used that needs to be cleaned up. End of explanation players.loc[mask, ['Player', 'Class', 'exp']] Explanation: The steps above create a numeric experience column ('exp') that describes each player's number of years of college basketball experience. End of explanation strtexp = players.groupby(['team_id', 'Season', 'starter']) strtexp = strtexp['exp'].mean().unstack('starter').reset_index() strtexp.drop('no' , inplace=True, axis=1) strtexp.rename(columns={"yes": "strtexp"}, inplace=True) strtexp.head() Explanation: Now I can compute some interesting team-level features, such as the average experience level for the starting players End of explanation roster_feat = pd.merge(benscr, strtexp, how='outer', on=['team_id', 'Season']) roster_feat.head() Explanation: After computing all of the features I'm interested in, I merge them together to create the processed team-level file with features computed from the roster-level data. End of explanation # read in kaggle team id file dpath = r'C:\Users\mworley\Dropbox\capstone\data' teams = pd.read_csv(dpath + r'\raw\Teams.csv') #C:\Users\mworley\Dropbox\capstone\data\raw\Teams.csv print(len(teams)) teams.head(5) Explanation: Matching team names For the data files obtained from Kaggle, consistent use of a unique numeric identifier for all 364 teams allows seamless joins of Kaggle files by team. End of explanation tm_names = pd.read_csv(dpath + r'\raw\TeamSpellings.csv') tm_names[tm_names.team_id == 1453] Explanation: The kaggle team file has 364 teams, each with a unique numeric id code and name. To assist in maching with external data, Kaggle also provides a file with several common team name alternatives. 
End of explanation ''' # get names of teams from sports reference url = "http://www.sports-reference.com/cbb/schools/" req = requests.get(url) soup = BeautifulSoup(req.content, 'html.parser') links = [] for link in soup.find_all('a'): links.append(str(link.get('href'))) tlinks = links[31:508] srteams = map(lambda x: x.split('/')[-2], tlinks) srteams = pd.DataFrame(srteams) srteams.columns = ['srname'] #srteams.to_csv(dpath + r'\interim\srnames.csv', index=False) ''' srteams = pd.read_csv(dpath + r'\interim\srteams.csv') print len(srteams) print srteams.head() Explanation: In many cases the external data I obtained used unique names that did not match these alternative spellings. For example, UC-Berkeley, California, and Cal Golden Bears were different source identifiers for the same team. To resolve this problem, I needed to create a table linking the Kaggle numeric identifier to the unique identifier in each additional data source. The list of names from the sportsreference source was scraped from a website where every team in their database is listed. I pull the team name from the URL text. Below is the code used but commented out; I pull the data from a .csv file I saved after the scrape. End of explanation # adjust kaggle mixed team names to optimally match sports reference tables srnames = tm_names.copy() filldash = lambda x: re.sub(r' ', '-', x) srnames['name_spelling'] = srnames['name_spelling'].apply(filldash) srnames.rename(columns={"name_spelling": "srname"}, inplace=True) srnames.drop_duplicates(inplace=True) srnames.head(6) Explanation: The sportsreference source has 477 team names. The kaggle file is smaller (364) as it only includes teams who played in at least one NCAA tournament game since 1985. I only need to match the 364 Kaggle team names as this data will be used to predict outcomes of NCAA tournament games. First I used regular expressions to apply consistent formatting to the kaggle mixed team names and drop duplicates. End of explanation merge_sr = pd.merge(srnames, srteams, how='outer', on='srname', indicator=True) tab(merge_sr._merge) print float(len(merge_sr[merge_sr._merge=='both'])) / 364 Explanation: The result is a file I can use to attempt a merge with the sportsreference source team names, as the school names are formatted similarly with all lower-case characters and hyphenated word gaps. End of explanation match = merge_sr[merge_sr._merge == 'both'].copy() match.drop('_merge', axis=1, inplace=True) Explanation: The merge matches nearly all of the teams (361/364). Before continuing I save the matches as a dataframe. End of explanation # get a dataframe of the mixed names limited to non-matched teams nomatch = pd.merge(srnames, match, how='outer', on=['team_id'], indicator=True) nomatch = nomatch[nomatch._merge=='left_only'] teams = merge_sr.loc[merge_sr._merge == 'right_only', 'srname'] nomatch.head(len(nomatch)) Explanation: To isolate the kaggle team ids that did not match, I re-merge the team ids with the match file, and then keep the 'left only' rows from the join. I also save the 'right-only' team names for fuzzy string matching later. 
End of explanation # create a function to fuzzy match the nonmatched names def match_srname(name): new_name, score = process.extractOne(name, teams) return new_name, score # run function on kaggle srnames names, scores = zip(*nomatch['srname_x'].apply(match_srname)) nomatch['name'], nomatch['score'] = names, scores nomatch.sort_values(['team_id', 'score'], ascending=False, inplace=True) nomatch.head(len(nomatch)) Explanation: To match the remaining names, I use the series of sportsreference names in a fuzzy string matching function. End of explanation nomatch.drop_duplicates(['team_id'], inplace=True) nomatch = nomatch[nomatch.team_id != 1445] nomatch.drop(['srname_x', 'srname_y', '_merge', 'score'], axis=1, inplace=True) nomatch.rename(columns={'name': 'srname'}, inplace=True) team_match = pd.concat([match, nomatch]) len(team_match) Explanation: The only remaining non-match (winston-salem-state) was discovered to be an error as it has never qualified for division 1 basketball. I remove it and keep the best match for the other 2 unique team ids, and add to the teams who matched in the merge. End of explanation # import the 3 data files tgames = pd.read_csv(dpath + r'\interim\tourney_games.csv') gameloc = pd.read_csv(dpath + r'\raw\TourneyGeog.csv') teamloc = pd.read_csv(dpath + r'\raw\TeamGeog.csv') Explanation: Computation of team travel distance NCAA tournament games occur at neutral sites, with each team required to travel some distance to the game. The Kaggle data includes two separate files with the locations (longitude & latitude) of each tournament site and the location of each team’s campus, along with a file for each tournament game. End of explanation # some operations on the dataframes to enable the merge tgames.columns = map(str.lower, tgames.columns) gameloc.drop('daynum', axis=1, inplace=True) # replace baton rouge longitude which was discovered to be an error gameloc.loc[gameloc.host == 'baton_rouge', ['lng']] = -91.19 gameloc.rename(columns={'wteam': 'w_team_id', 'lteam': 'l_team_id'}, inplace=True) tgames = pd.merge(tgames, gameloc, how='inner', on=['season', 'w_team_id', 'l_team_id']) Explanation: After importing the first objective is to integrate the game location with the game results. End of explanation tgames[['w_team_id', 'l_team_id', 'host', 'lat', 'lng']].head(5) Explanation: This merge produces a file with the two teams, host city, and venue location for all tournament games. End of explanation tgames.rename(columns={'lat': 'glat', 'lng': 'glng'}, inplace=True) tgames = pd.merge(tgames, teamloc, how='inner', left_on='w_team_id', right_on='team_id') tgames.rename(columns={'lat': 'wlat', 'lng': 'wlng'}, inplace=True) tgames = pd.merge(tgames, teamloc, how='inner', left_on='l_team_id', right_on='team_id') tgames.rename(columns={'lat': 'llat', 'lng': 'llng'}, inplace=True) tgames.iloc[0:5, -9:] Explanation: To compute the distance for each team, I merge in the location for each team. I use two separate merges, one for the winning team and one for the losing team. End of explanation tgames['gloc'] = list(zip(tgames.glat, tgames.glng)) tgames['wloc'] = list(zip(tgames.wlat, tgames.wlng)) tgames['lloc'] = list(zip(tgames.llat, tgames.llng)) tgames.iloc[0:5, -3:] Explanation: To compute the distance I need each lattitude and longitude together in a single tuple. 
End of explanation xl = [] yl = [] for i in range(len(tgames)): x = int(great_circle(tgames['gloc'][i], tgames['wloc'][i]).miles) y = int(great_circle(tgames['gloc'][i], tgames['lloc'][i]).miles) xl.append(x) yl.append(y) tgames['w_dist'] = pd.Series(xl).values tgames['l_dist'] = pd.Series(yl).values tgames.ix[0:5, ['season', 'w_team_id', 'l_team_id', 'w_dist', 'l_dist']] Explanation: Then I use the great_circle function from the geopy.distance package to compute the distance between game location and team location for winning team and losing team. End of explanation # read in data files dpath = "C:\Users\mworley\Dropbox\capstone\data" tgames = pd.read_csv(dpath + r'\raw\TourneyCompactResults.csv') seeds = pd.read_csv(dpath + r'\raw\TourneySeeds.csv') # add team seeds to tourney games data frame seeds['Season/Team'] = [(seas, team) for seas,team in zip(seeds.Season, seeds.Team)] seeds = seeds.set_index('Season/Team').drop(['Season', 'Team'],axis=1).squeeze().to_dict() tgames['Wteam_seed'] = [seeds[(year,team)] for year,team in zip(tgames.Season,tgames.Wteam)] tgames['Lteam_seed'] = [seeds[(year,team)] for year,team in zip(tgames.Season,tgames.Lteam)] tgames['Wteam_seed'] = tgames['Wteam_seed'].str.replace(r'\D+', '').astype('int') tgames['Lteam_seed'] = tgames['Lteam_seed'].str.replace(r'\D+', '').astype('int') tgames.columns = map(str.lower, tgames.columns) Explanation: Identifying and labeling upsets A perceived “upset” in the NCAA tournament usually refers to the “seed” system used to rank the teams and set the matchups. The 64 teams are divided into four “regions”; in each region teams receive a ranking or “seed” from 1 (best) to 16 (worst). An “upset” occurs when a team with a comparatively higher numerical seed defeats one with a lower seed (12 beats 5, 11 beats 6). The “upset” label is typically reserved for victories by teams with seeds much larger than their opponent. Kaggle provides data on the final scores of NCAA tournament games, and the seed for each team. To identify upsets, I first merged Kaggle seed data with tournament game data to link the teams in each match-up with their respective tournament seed. End of explanation tgames['seedif'] = abs(tgames.wteam_seed - tgames.lteam_seed) # label each matchup as a potential upset (1) or not (0) def upset_pot(data): if data.seedif > 3: return 1 else: return 0 # label each matchup as an upset (1) or not (0) def upset_label(data): x = data.seedif if (data.wteam_seed > data.lteam_seed) & (x > 3): return 1 else: return 0 # identify potential upsets # defined as games with seed difference greater than 3 tgames['upsetpot'] = tgames.apply(upset_pot, axis=1) tgames['upset'] = tgames.apply(upset_label, axis=1) tgames[['wteam_seed', 'lteam_seed', 'seedif', 'upsetpot', 'upset']].head() tab(tgames.upsetpot) Explanation: Two conditions need to be met for a game to be coded as an upset. Condition 1 is that the game involves opponents with an absolute seed difference greater than 3. Condition 2 is that the team with the higher numeric seed wins the game. To label upsets I created calculated the absolute seed difference and then created two functions to examine these conditions, and used these functions to create indicator columns in the tournament games dataset. End of explanation tab(tgames.upset) Explanation: 65% of NCAA tournament games qualified as having "upset potential". End of explanation tab(tgames[tgames.upsetpot==1].upset) Explanation: Only 14% of all tournament games resulted in an upset. End of explanation
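As a footnote to the upset labelling above: the row-wise apply functions are easy to read but can be slow on larger tables. A vectorized sketch that produces the same labels from the columns already created (the _vec column names are illustrative):

# Vectorized equivalents of upset_pot / upset_label.
tgames['upsetpot_vec'] = (tgames['seedif'] > 3).astype(int)
tgames['upset_vec'] = ((tgames['wteam_seed'] > tgames['lteam_seed']) &
                       (tgames['seedif'] > 3)).astype(int)

# Sanity check: the vectorized columns should match the apply-based ones.
assert (tgames['upsetpot_vec'] == tgames['upsetpot']).all()
assert (tgames['upset_vec'] == tgames['upset']).all()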
12,637
Given the following text description, write Python code to implement the functionality described below step by step Description: Demonstrate Seq2Seq Wrapper with CMUDict dataset Step1: Create an instance of the Wrapper Step2: Create data generators Read data_utils.py for more information Step3: The computational graph was built when the model was instantiated. Now all we need to do is train the model on the processed CMUdict dataset via the data generators. Internally, a loop runs for epochs iterations during training, and evaluation is done periodically. Train Step4: Restore the last saved session from disk Step5: Predict Step6: Let us decode and see the words
Python Code: import tensorflow as tf import numpy as np # preprocessed data from datasets.cmudict import data import data_utils # load data from pickle and npy files data_ctl, idx_words, idx_phonemes = data.load_data(PATH='datasets/cmudict/') (trainX, trainY), (testX, testY), (validX, validY) = data_utils.split_dataset(idx_phonemes, idx_words) # parameters xseq_len = trainX.shape[-1] yseq_len = trainY.shape[-1] batch_size = 128 xvocab_size = len(data_ctl['idx2pho'].keys()) yvocab_size = len(data_ctl['idx2alpha'].keys()) emb_dim = 128 Explanation: Demonstrate Seq2Seq Wrapper with CMUDict dataset End of explanation import seq2seq_wrapper import importlib importlib.reload(seq2seq_wrapper) model = seq2seq_wrapper.Seq2Seq(xseq_len=xseq_len, yseq_len=yseq_len, xvocab_size=xvocab_size, yvocab_size=yvocab_size, ckpt_path='ckpt/cmudict/', emb_dim=emb_dim, num_layers=3 ) Explanation: Create an instance of the Wrapper End of explanation val_batch_gen = data_utils.rand_batch_gen(validX, validY, 16) train_batch_gen = data_utils.rand_batch_gen(trainX, trainY, 128) Explanation: Create data generators Read data_utils.py for more information End of explanation sess = model.train(train_batch_gen, val_batch_gen, sess=sess1) Explanation: Computational graph was built when the model was instantiated Now all we need to do is train the model using processed CMUdict dataset, via data generators Internally a loop is run for epochs times for training Evaluation is done periodically. Train End of explanation sess = model.restore_last_session() Explanation: Restore last saved session from disk End of explanation output = model.predict(sess, val_batch_gen.__next__()[0]) print(output.shape) output Explanation: Predict End of explanation for oi in output: print(data_utils.decode(sequence=oi, lookup=data_ctl['idx2alpha'], separator='')) Explanation: Let us decode and see the words End of explanation
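The helpers data_utils.rand_batch_gen and data_utils.decode are used above but their implementations are not shown. A minimal sketch of what such helpers could look like, given as an assumption for illustration rather than the actual data_utils code, presuming zero-padded integer sequences and an index-to-character lookup dict:

import numpy as np

def rand_batch_gen_sketch(x, y, batch_size):
    # Yield random (input, target) batches forever from two aligned arrays.
    while True:
        idx = np.random.choice(len(x), batch_size, replace=False)
        yield x[idx], y[idx]

def decode_sketch(sequence, lookup, separator=''):
    # Map integer ids back to symbols, skipping the padding id 0.
    return separator.join(lookup[i] for i in sequence if i != 0)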
12,638
Given the following text description, write Python code to implement the functionality described below step by step Description: Why Objects? Provide modularity and reuse through hierarchical structures Object oriented programming is a different way of thinking. Programming With Objects Step1: Initial concepts An object is a container of data (attributes) and code (methods) A class is a template for creating objects Reuse is provided by Step2: Attributes Step3: Attributes are data associated with an object (instance) or class. Object attributes (and methods) are specified by using "self". Instance attributes and methods are accessed using the dot "." operator. Step4: EXERCISE Step7: A class diagram provides a more compact representation of a class. There are three sections. - Class name - Attributes - Methods Instance methods - functions associated with the objects constructed for a class - provide a way to transform data in objects - use instance attributes (references to variables beginning with "self.") Step9: EXERCISE Step10: Exercise Step11: Subclasses can have their own methods. Exercise Step12: The diamond arrow is a "has-a" relationship. For example, the Controller has-a ATMInput. This means that a Controller object has an instance variable for an ATMInput object. Interaction Diagram for the ATM System An interaction diagram specifies how components interact to achieve a use case. Interactions are from one object to another object, indicating that the first object calls a method in the second object. Rules for drawing lines in an interaction diagram Step13: Look at Objects/ATMDiagrams.pdf for a solution. What Else in Design? Other diagrams
Python Code: from IPython.display import Image Image(filename='Classes_vs_Objects.png') Explanation: Why Objects? Provide modularity and reuse through hierarchical structures Object oriented programming is a different way of thinking. Programming With Objects End of explanation # Definiting a Car class class Car(object): pass Explanation: Initial concepts An object is a container of data (attributes) and code (methods) A class is a template for creating objects Reuse is provided by: reusing the same class to create many objects "inheriting" data and code from other classes End of explanation from IPython.display import Image Image(filename='ClassAttributes.png') Explanation: Attributes End of explanation class Car(object): # The following method is called when the class # is created or "constructed". The variables "self.x" refers # to the variable "x" in a created object. def __init__(self, color, car_type, speed): self.color = color self.car_type = car_type self.speed = speed class Car(object): # The following method is called when the class # is created or "constructed". The variables "self.x" refers # to the variable "x" in a created object. def __init__(self, color, car_type, speed): self.color = color self.car_type = car_type self.speed = speed # Creating an object for a class with arguments in the __init__ method car = Car("Blue", "HatchBack", 100) car.color # Creating an object for a class with arguments in the __init__ method joe_car = Car("Blue", "Sedan", 100) dave_car = Car("Red", "Sports", 150) print ("Type of joe_car is %s. Type of dave_car is %s"% (type(joe_car), type(dave_car))) # Accessed instance attributes joe_car = Car("Blue", "Sedan", 100) print ("Type of joe_car has (color, type, speed)=%s." % str((joe_car.color, joe_car.car_type, joe_car.speed))) Explanation: Attributes are data associated with an object (instance) or class. Object attributes (and methods) are specified by using "self". Instance attributes and methods are accessed using the dot "." operator. End of explanation from IPython.display import Image Image(filename='InstanceMethods.png') #Class diagram from IPython.display import Image Image(filename='SingleClassDiagram.png', width=200, height=200) Explanation: EXERCISE: Change the constructor for Car to include the attribute "doors". Instance Methods End of explanation class Car(object): def __init__(self, color, car_type, speed): :param str color: :param str car_type: :param int speed: self.color = color self.car_type = car_type self.speed = speed def start(self): print ("%s %s started!" % (self.color, self.car_type)) def stop(self): pass def turn(self, direction): :parm str direction: left or right pass car = Car("Blue", "Sedan", 100) car.start() Explanation: A class diagram provides a more compact representation of a class. There are three sections. 
- Class name - Attributes - Methods Instance methods - functions associated with the objects constructed for a class - provide a way to transform data in objects - use instance attributes (references to variables beginning with "self.") End of explanation from IPython.display import Image Image(filename='SimpleClassHierarchy.png', width=400, height=400) # Code for inheritance class Sedan(Car): # Sedan inherits from car def __init__(self, color, speed): :param str color: :param int speed: super().__init__(color, "Sedan", speed) def play_cd(self): print ("Playing cd in %s sedan" % self.color) sedan = Sedan("Yellow", 1e6) sedan.play_cd() sedan.car_type joe_car = Sedan("Blue", 100) print ("Type of joe_car has (color, type, speed)=%s." % str((joe_car.color, joe_car.car_type, joe_car.speed))) Explanation: EXERCISE: Implement the stop and turn methods. Run the methods. Inheritance Inheritance is a common way that classes reuse data and code from other classes. A child class or derived class gets attributes and methods from its parent class. Programmatically: - Specify inheritance in the class statement - Constructor for derived class (class that inherits) have access to the constructor of its parent. Inheritance is represented in diagrams as an arror from the child class to its parent class. End of explanation from IPython.display import Image Image(filename='ClassInheritance.png', width=400, height=400) Explanation: Exercise: Implement SportsCar and create dave_car from SportsCar. Print attributes of dave_car. End of explanation from IPython.display import Image Image(filename='ATMClassDiagram.png', width=400, height=400) Explanation: Subclasses can have their own methods. Exercise: Add the play_cd() to Sedan and play_bluetooth() method to SportsCar. Construct a test to run these methods. What Else? Class attributes Class methods Object Oriented Design A design methodology must specify: - Components: What they do and how to build them - Interactions: How the components interact to implement use cases Object oriented designed - Components are specified by class diagrams. - Interactions are specified by interaction diagrams. Class diagram for the ATM system End of explanation from IPython.display import Image Image(filename='ATMAuthentication.png', width=800, height=800) Explanation: The diamond arrow is a "has-a" relationship. For example, the Controller has-a ATMInput. This means that a Controller object has an instance variable for an ATMInput object. Interaction Diagram for the ATM System An interaction diagram specifies how components interact to achieve a use case. Interactions are from one object to another object, indicating that the first object calls a method in the second object. Rules for drawing lines in an interaction diagram: - The calling object must know about the called object. - The called object must have the method invoked by the calling object. End of explanation from IPython.display import Image Image(filename='SciSheetsCoreClasses.png', width=300, height=30) Explanation: Look at Objects/ATMDiagrams.pdf for a solution. What Else in Design? Other diagrams: state diagrams, package diagrams, ... Object oriented design patterns Complex Example of Class Hierarchy End of explanation
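Returning to the SportsCar exercise from earlier, one possible solution sketch (the attribute values are arbitrary examples):

class SportsCar(Car):
    # SportsCar inherits from Car, mirroring the Sedan example above.
    def __init__(self, color, speed):
        super().__init__(color, "Sports", speed)

    def play_bluetooth(self):
        print("Playing bluetooth audio in %s sports car" % self.color)

dave_car = SportsCar("Red", 150)
print(dave_car.color, dave_car.car_type, dave_car.speed)
dave_car.play_bluetooth()
dave_car.start()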
12,639
Given the following text description, write Python code to implement the functionality described below step by step Description: Getting Started with halomod In this tutorial, you'll get a basic familiarity with the layout of halomod and some of its features. This is in no way meant to be exhaustive! The first thing to note is that halomod is based heavily on hmf, and there are a bunch of docs for that code that may help you with halomod. Most of the functionality of halomod is wrapped up in a few framework classes. Probably the one you'll use most is the TracerHaloModel, which as the name suggests implements halo models for tracer populations (like galaxies). There's a similar framework for pure Dark Matter (DMHaloModel). Let's import that (and a few other things we'll need) Step1: Using the TracerHaloModel As with all frameworks in halomod (and hmf) all defaults are provided for you, so you can simply create the object Step2: Just like that, you have a wide range of quantities available for computation. Note that all the quantities look and feel like they're attributes (i.e. they look like they're just variables pointing to data) but they are properties that lazily compute when they're needed (and are then cached). Let's have a look at the halo mass function Step3: Most often, what is desired is the power spectrum (or auto-correlation function) of the galaxies Step4: You can check all the quantities that are available with Step5: Thus we could estimate the total fraction of galaxies in the sample that are satellites Step6: Or get the effective galaxy bias Step7: Furthermore, some of the properties of the framework are themselves what we call Components. These are entire objects with their own methods for calculating various quantities (some of which have been exposed to the framework interface if they are commonly used). For example, the halo_profile object contains methods for evaluating halo-based properties Step8: Input Parameters There are many options for the TracerHaloModel. One of the motivations for halomod is to make it as feature-complete as possible, especially in terms of the input models (and their flexibility). The documentation for the TracerHaloModel itself does not contain all the possible parameters (as many of them are passed through to super-classes). You can see a full list of available parameters with Step9: Anything listed here can be set at instantiation time. A few common options might be Step10: So then we can compare the correlation functions for each of our defined models Step11: Notice that the first time we accessed hm.corr_auto_tracer it took a few moments to return, because it was computing. Now, however, it will return instantly Step12: Some of the parameters passed into TracerHaloModel are more complex than simply setting a redshift. Many of the parameters themselves define whole Components. Every one of these has two associated parameters Step13: thus, passing hmf_params = {'A' Step14: To ensure it has been properly updated, let's create a new instance Step15: And plot the halo profiles Step16: halomod inherits the caching system of hmf, which means that any updated parameter will automatically invalidate the cache for all dependent quantities, updating them on the next time they are accessed. Whirlwind Tour of Components and Models There are many different kinds of Components that offer several different models each. Let's take a look at some that you could choose from Step17: This also lets you easily define your own models. 
For example, say we had a crazy idea and thought that a constant concentration (with mass) was a good idea. We could create such a model Step18: Notice that we inherited from CMRelation, which provides a basic set of methods that we don't need to define ourselves, and also provides an interface that we must adhere to. In particular, any parameters that should be changeable by the user should be specified (with defaults) in the _defaults dictionary. Also, a cm method must be implemented which returns the concentration as a function of mass, for a particular redshift. The user-changeable parameters are available as self.params. We can now instantly use this new definition Step19: And we can see what effect this would have on the power spectrum
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from halomod import TracerHaloModel import halomod import hmf print("halomod version: ", halomod.__version__) print("hmf version:", hmf.__version__) Explanation: Getting Started with halomod In this tutorial, you'll get a basic familiarity with the layout of halomod and some of its features. This is in no way meant to be exhaustive! The first thing to note is that halomod is based heavily on hmf, and there are a bunch of docs for that code that may help you with halomod. Most of the functionality of halomod is wrapped up in a few framework classes. Probably the one you'll use most is the TracerHaloModel, which as the name suggests implements halo models for tracer populations (like galaxies). There's a similar framework for pure Dark Matter (DMHaloModel). Let's import that (and a few other things we'll need): End of explanation hm = TracerHaloModel() Explanation: Using the TracerHaloModel As with all frameworks in halomod (and hmf) all defaults are provided for you, so you can simply create the object: End of explanation plt.plot(hm.m, hm.dndm) plt.xscale('log') plt.yscale('log') plt.xlabel("Halo Mass [$h^{-1} M_\odot$]") plt.ylabel(r"dn/dm [$h^2 M_\odot^{-1} {\rm Mpc}^{-3}$]"); Explanation: Just like that, you have a wide range of quantities available for computation. Note that all the quantities look and feel like they're attributes (i.e. they look like they're just variables pointing to data) but they are properties that lazily compute when they're needed (and are then cached). Let's have a look at the halo mass function: End of explanation plt.plot(hm.k_hm, hm.power_auto_tracer, label='Galaxy-Galaxy Power') plt.plot(hm.k_hm, hm.power_1h_auto_tracer, ls='--', label='1-halo term') plt.plot(hm.k_hm, hm.power_2h_auto_tracer, ls='--', label='2-halo term') plt.xscale('log') plt.yscale('log') plt.ylim(1e-5,1e6) plt.legend() plt.xlabel("Wavenumber [h/Mpc]") plt.ylabel(r"Galaxy Power Spectrum [${\rm Mpc^3} h^{-3}$]"); Explanation: Most often, what is desired is the power spectrum (or auto-correlation function) of the galaxies: End of explanation hm.quantities_available() Explanation: You can check all the quantities that are available with End of explanation hm.satellite_fraction Explanation: Thus we could estimate the total fraction of galaxies in the sample that are satellites: End of explanation hm.bias_effective_tracer Explanation: Or get the effective galaxy bias: End of explanation r = np.logspace(-3, 1, 20) for m in [1e10, 1e12, 1e16]: plt.plot(r, hm.halo_profile.rho(r=r, m=m), label=f'm={m:1.2e}') plt.legend() plt.yscale('log') plt.xscale('log') plt.xlabel("Distance from Centre [Mpc/h]") plt.ylabel(r"Halo Density [$h^2 M_\odot {\rm Mpc}^{-3}$]"); Explanation: Furthermore, some of the properties of the framework are themselves what we call Components. These are entire objects with their own methods for calculating various quantities (some of which have been exposed to the framework interface if they are commonly used). For example, the halo_profile object contains methods for evaluating halo-based properties: End of explanation TracerHaloModel.parameter_info() Explanation: Input Parameters There are many options for the TracerHaloModel. One of the motivations for halomod is to make it as feature-complete as possible, especially in terms of the input models (and their flexibility). 
The documentation for the TracerHaloModel itself does not contain all the possible parameters (as many of them are passed through to super-classes). You can see a full list of available parameters with: End of explanation hm_smt3 = TracerHaloModel( z = 3.0, # Redshift hmf_model = 'SMT', # Sheth-Tormen mass function cosmo_params = { 'Om0': 0.3, 'H0': 70.0 } ) Explanation: Anything listed here can be set at instantiation time. A few common options might be: End of explanation plt.plot(hm.r, hm.corr_auto_tracer, label='Tinker at z=0') plt.plot(hm_smt3.r, hm_smt3.corr_auto_tracer, label='SMT at z=3') plt.xscale('log') plt.yscale('log') plt.xlabel("r [Mpc/h]") plt.ylabel("Correlation Function"); Explanation: So then we can compare the correlation functions for each of our defined models: End of explanation %timeit hm.corr_auto_tracer Explanation: Notice that the first time we accessed hm.corr_auto_tracer it took a few moments to return, because it was computing. Now, however, it will return instantly: End of explanation hm_smt3.hmf.params Explanation: Some of the parameters passed into TracerHaloModel are more complex than simply setting a redshift. Many of the parameters themselves define whole Components. Every one of these has two associated parameters: component_model and component_params. You've already seen one of these -- the hmf_model. There is an associated hmf_params which sets arbitrary model-specific parameters, and should be passed as a dictionary. In fact, you saw one of those too: cosmo_params. Once you've created the object, the actual model instance is available simply as component (so for example, hm.hmf is a full class instance containing methods for calculating $f(\sigma)$). You can check out what parameters are available for a specific model (and their current values) by printing the .params variable of the Component. For example: End of explanation hm.halo_profile_model = 'Hernquist' Explanation: thus, passing hmf_params = {'A':0.3} would set up a component with different parameters, making it easy to explore parameter space (or constrain those parameters via a fitting/MCMC routine!). Updating parameters in-place Once you have a framework created, you can update parameters in-place fully consistently. So, if we wanted to update our halo profile to be a Hernquist model: End of explanation hm_orig = TracerHaloModel() Explanation: To ensure it has been properly updated, let's create a new instance: End of explanation plt.plot(hm._r_table, hm.halo_profile_rho[:,-1], label='Hernquist') plt.plot(hm._r_table, hm_orig.halo_profile_rho[:,-1], label='NFW') plt.xscale('log') plt.yscale('log') plt.legend(loc='lower left') plt.text(3, 1e-2, f"Halo Mass = {hm.m[-1]:1.2e}", fontsize=13) plt.xlabel("Distance from Centre [Mpc/h]") plt.ylabel(r"Halo Density [$h^2 M_\odot {\rm Mpc}^{-3}$]"); Explanation: And plot the halo profiles: End of explanation from halomod.concentration import Bullock01 hm.halo_concentration_model = Bullock01 hm.mdef_model = 'SOCritical' plt.plot(hm.m, hm.cmz_relation) plt.xscale('log') plt.yscale('log') plt.xlabel("Mass [$M_\odot/h$]") plt.ylabel("Halo Concentration"); Explanation: halomod inherits the caching system of hmf, which means that any updated parameter will automatically invalidate the cache for all dependent quantities, updating them on the next time they are accessed. Whirlwind Tour of Components and Models There are many different kinds of Components that offer several different models each. 
Let's take a look at some that you could choose from: Cosmology: All FLRW cosmologies are supported via astropy. Transfer Functions: several commonly-used forms of the transfer function are provided, including: BBKS, BondEfs, CAMB, EH, FromFile. Growth Factor: default is to solve the standard integral in a flat-LCDM cosmology, though one can also use the output from CAMB which supports arbitrary non-flat FLRW cosmologies. Several other approximations are also implemented (eg. GenMFGrowth and Carroll1992). Filters: filters (or window-functions) are convolved with the density field to define "regions" of space associated with overdensities. The standard filter is the TopHat (in real-space), but you may also choose other filters such as the Gaussian, SharpK or SharpKEllipsoid. Mass Definitions: we provide several standard halo mass definitions (i.e. the definition of what makes a halo a halo). These include FoF, SOMean, SOCritical and SOVirial. Explicitly defining the mass definitions allows conversions to be made between definitions. Fitting Functions: we provide many mass function fits reported in the literature, including favourites such as SMT, PS, Jenkins01 and Tinker08. Halo Bias: Used to bias haloes with respect to the background clustering. Options include standards such as SMT01 and Tinker10. Also provided is a generic interface to use bias functions from the COLOSSUS package. Halo Profiles: halomod implements an extensive system of subclasses for halo density profiles. These will compute the density profile itself, the cumulative mass distribution, the virial radius, the normalized fourier transform of the density profile, and its self-convolution. They all have a consistent API. Models include NFW, Moore, Hernquist and Einasto. Concentration-Mass Relations: To fully specify a halo profile, one must have a model for the halo concentration. We provide several such models, including Bullock01, Duffy08 and Ludlow16. We again provide an interface to use concentration relations from the COLOSSUS package. HOD Models: To link galaxies to the DM haloes, we require a halo occupation distribution. A full-featured system of such models is included, and specific models from certain papers are also included, such as those from Zheng05 and Zehavi05. HOD models are not limited to point-tracers like galaxies -- they are generic enough that smooth occupation distributions can be modelled, for example the occupation of neutral hydrogen. Halo Exclusion: to increase fidelity of the auto-power spectra on transition scales between the 1- and 2-halo terms, various forms of "halo exclusion" have been proposed. We implement simple models such as Sphere exclusion, as well as more complex schemes such as DblSphere, DblEllipsoid and NgMatched (from Tinker+2005). The API Documentation has an exhaustive listing of your options for these components and their models. The key point is that halomod is built to be a system in which these various components can be mixed and matched consistently. Along with these components, there are many ways to use halomod. We've seen the TracerHaloModel, but you may also be interested in the ProjectedCF (projected correlation function), which performs integrals over the line-of-sight, or the AngularCF which produces the angular correlation function. Furthermore, a set of extensions to Warm Dark Matter models is also provided. Defining Your Own Models We've seen that using a new model for a particular Component is as simple as passing its string name. 
However, you can also pass a class directly. For example, to switch to the Bullock01 concentration-mass relation: End of explanation from halomod.concentration import CMRelation from hmf.halos.mass_definitions import SOCritical class ConstantConcentration(CMRelation): native_mdefs = (SOCritical(),) _defaults = {"amplitude": 3} def cm(self, m, z=0): return self.params['amplitude'] * np.ones_like(m) Explanation: This also lets you easily define your own models. For example, say we had a crazy idea and thought that a constant concentration (with mass) was a good idea. We could create such a model: End of explanation hm.halo_concentration_model = ConstantConcentration plt.plot(hm.m, hm.cmz_relation) plt.xscale('log') plt.yscale('log') plt.xlabel("Mass [$M_\odot/h$]") plt.ylabel("Halo Concentration"); Explanation: Notice that we inherited from CMRelation, which provides a basic set of methods that we don't need to define ourselves, and also provides an interface that we must adhere to. In particular, any parameters that should be changeable by the user should be specified (with defaults) in the _defaults dictionary. Also, a cm method must be implemented which returns the concentration as a function of mass, for a particular redshift. The user-changeable parameters are available as self.params. We can now instantly use this new definition: End of explanation plt.plot(hm.k_hm, hm.power_auto_matter, label='Constant Concentration') plt.plot(hm_orig.k_hm, hm_orig.power_auto_matter, label='Duffy08 Concentration') plt.xscale('log') plt.yscale('log') plt.xlim(3e-3, 100) plt.ylim(1e-1, 1e5) plt.legend() plt.xlabel("Wavenumber [h/Mpc]") plt.ylabel(r"Galaxy Power Spectrum [${\rm Mpc^3} h^{-3}$]"); Explanation: And we can see what effect this would have on the power spectrum: End of explanation
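The same subclassing pattern extends to other toy relations. For instance, a purely illustrative power-law concentration-mass model; the class name, parameter names, and default values below are assumptions for demonstration, not a published relation:

class PowerLawConcentration(CMRelation):
    # c(m) = c0 * (m / mstar)**slope  -- an illustrative toy relation.
    native_mdefs = (SOCritical(),)
    _defaults = {"c0": 9.0, "mstar": 1e13, "slope": -0.1}

    def cm(self, m, z=0):
        return self.params["c0"] * (m / self.params["mstar"]) ** self.params["slope"]

hm.halo_concentration_model = PowerLawConcentration
plt.plot(hm.m, hm.cmz_relation)
plt.xscale('log')
plt.yscale('log')
plt.xlabel("Mass [$M_\odot/h$]")
plt.ylabel("Halo Concentration");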
12,640
Given the following text description, write Python code to implement the functionality described below step by step Description: Stochastic Variational Optimization with SVIGP PyTorch adaptation of test_svi Notebook by Mark van der Wilk, 2016, edits by James Hensman, 2016 PyTorch version by Thomas Viehmann Step1: Stochastic estimation of ELBO The minibatch estimate should be an unbiased estimator of the ground_truth. Here we show a histogram of the values from different evaluations, together with its mean and the ground truth. The small difference between the mean of the minibatch estimates and the ground truth shows that the minibatch estimator is working as expected. Step2: Minibatches speed up computation The benefit of using minibatches is that it decreases the time needed to make an optimisation step, since estimating the objective is cheaper. Here we plot the change in time required with the size of the minibatch. We see that smaller minibatches result in a cheaper estimate of the objective. Step3: Running stochastic optimization
Python Code: import sys, os import numpy import time sys.path.append(os.path.join(os.getcwd(),'..')) import candlegp from matplotlib import pyplot import torch from torch.autograd import Variable %matplotlib inline pyplot.style.use('ggplot') import IPython M = 50 def func(x): return torch.sin(x * 3*3.14) + 0.3*torch.cos(x * 9*3.14) + 0.5 * torch.sin(x * 7*3.14) X = torch.rand(10000, 1).double() * 2 - 1 Y = func(X) + torch.randn(10000, 1).double() * 0.2 pyplot.plot(X.numpy(), Y.numpy(), 'x') D = X.size(1) Xt = torch.linspace(-1.1, 1.1, 100).double().unsqueeze(1) Yt = func(Xt) k = candlegp.kernels.RBF(D,variance=torch.DoubleTensor([1.0])).double() Z = X[:M].clone() m = candlegp.models.SVGP(Variable(X), Variable(Y.unsqueeze(1)), likelihood=candlegp.likelihoods.Gaussian(ttype=torch.DoubleTensor), kern=k, Z=Z) m Explanation: Stochastic Variational Optimization with SVIGP Pytorch adaptation of test_svi Notebook by Mark van der Wilk, 2016, edits by James Hensman, 2016 Pytorch version by Thomas Viehmann End of explanation # ground_truth = m.compute_log_likelihood() # seems to take too long evals = [] for i in range(100): if i % 10 == 9: print ('.', end='') idxes = torch.randperm(X.size(0))[:100] evals.append(m.compute_log_likelihood(Variable(X[idxes]), Variable(Y[idxes])).data[0]) pyplot.hist(evals) #pyplot.axvline(ground_truth) Explanation: Stochastical estimation of ELBO The minibatch estimate should be an unbiased estimator of the ground_truth. Here we show a histogram of the value from different evaluations, together with its mean and the ground truth. The small difference between the mean of the minibatch estimations and the ground truth shows that the minibatch estimator is working as expected. End of explanation mbps = numpy.logspace(-2, -0.8, 7) times = [] objs = [] for mbp in mbps: minibatch_size = int(len(X) * mbp) print (minibatch_size) start_time = time.time() evals = [] for i in range(20): idxes = torch.randperm(X.size(0))[:minibatch_size] evals.append(m.compute_log_likelihood(Variable(X[idxes]), Variable(Y[idxes])).data[0]) objs.append(evals) # plt.hist(objs, bins = 100) # plt.axvline(ground_truth, color='r') times.append(time.time() - start_time) f, (ax1, ax2) = pyplot.subplots(1, 2, figsize=(16, 6)) ax1.plot(mbps, times, 'x-') ax1.set_xlabel("Minibatch proportion") ax1.set_ylabel("Time taken") ax2.plot(mbps, numpy.array(objs), 'kx') ax2.set_xlabel("Minibatch proportion") ax2.set_ylabel("ELBO estimates") Explanation: Minibatches speed up computation The use of using minibatches is that it decreases the time needed to make an optimisation step, since estmating the objective is cheaper. Here we plot the change in time required with the size of the minibatch. We see that smaller minibatches result in a cheaper estimate of the objective. 
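To complement the scatter of ELBO estimates in the right-hand panel, the spread of the estimator can also be summarised numerically; a small sketch using the objs list collected in the loop above:

# Standard deviation of the minibatch ELBO estimates for each minibatch proportion.
objs_arr = numpy.array(objs)
for mbp, sd in zip(mbps, objs_arr.std(axis=1)):
    print("minibatch proportion %.3f -> ELBO std %.1f" % (mbp, sd))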
End of explanation pX = Variable(torch.linspace(-1, 1, 100).unsqueeze(1).double()) pY, pYv = m.predict_y(pX) pyplot.plot(X.numpy(), Y.numpy(), 'x') line, = pyplot.plot(pX.data.numpy(), pY.data.numpy(), lw=1.5) col = line.get_color() pyplot.plot(pX.data.numpy(), (pY+2*pYv**0.5).data.numpy(), col, lw=1.5) pyplot.plot(pX.data.numpy(), (pY-2*pYv**0.5).data.numpy(), col, lw=1.5) pyplot.plot(m.Z.get().data.numpy(), numpy.zeros(m.Z.shape), 'k|', mew=2) pyplot.title("Predictions before training") logt = [] logf = [] st = time.time() minibatch_size = 100 m.Z.requires_grad = True opt = torch.optim.Adam(m.parameters(), lr=0.01) m.Z.requires_grad = False for i in range(2000): if i % 50 == 49: print (i) idxes = torch.randperm(X.size(0))[:minibatch_size] opt.zero_grad() obj = m(Variable(X[idxes]), Variable(Y[idxes])) logf.append(obj.data[0]) obj.backward() opt.step() logt.append(time.time() - st) if i%50 == 49: IPython.display.clear_output(True) pyplot.plot(-numpy.array(logf)) pyplot.xlabel('iteration') pyplot.ylabel('ELBO') pyplot.show() pX = Variable(torch.linspace(-1, 1, 100).unsqueeze(1).double()) pY, pYv = m.predict_y(pX) pyplot.plot(X.numpy(), Y.numpy(), 'x') line, = pyplot.plot(pX.data.numpy(), pY.data.numpy(), lw=1.5) col = line.get_color() pyplot.plot(pX.data.numpy(), (pY+2*pYv**0.5).data.numpy(), col, lw=1.5) pyplot.plot(pX.data.numpy(), (pY-2*pYv**0.5).data.numpy(), col, lw=1.5) pyplot.plot(m.Z.get().data.numpy(), numpy.zeros(m.Z.shape), 'k|', mew=2) pyplot.title("Predictions after training") Explanation: Running stochastic optimization End of explanation
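The raw minibatch objective logged in logf during the training loop above is noisy; a simple exponentially weighted moving average makes the curve easier to read. A sketch, where the smoothing factor 0.05 is an arbitrary choice:

def ema(values, alpha=0.05):
    # Exponentially weighted moving average of a list of floats.
    out, acc = [], values[0]
    for v in values:
        acc = alpha * v + (1 - alpha) * acc
        out.append(acc)
    return out

pyplot.plot(-numpy.array(logf), alpha=0.3, label='raw minibatch objective')
pyplot.plot(-numpy.array(ema(logf)), label='smoothed')
pyplot.xlabel('iteration')
pyplot.ylabel('ELBO')
pyplot.legend()
pyplot.show()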
12,641
Given the following text description, write Python code to implement the functionality described below step by step Description: This is for the Thorlabs PDA36A at the 20dB setting. Step1: Note that we need at least 10mV to even resolve a signal on the scope. So the NEP is only part of the story Step2: This setting (20dB) leaves plenty of bandwidth but can't resolve our signal (1.3uW). It's limited to double our signal. Now for the 30dB setting Step3: On 30dB we start to hit the bandwidth limit but can resolve down to 0.8 uW (half our signal). Even so, we'd only get 10mV out of it. Now the APD120A Step4: This can resolve down to 8nW so our 1.3uW would be ~1000x larger. Final question
Python Code: # Enter the specs of the detector nep = 2.34e-12 # in Watts per root hz BW = 10e6 # Bandwidth in Hz gain = 0.75e4 # gain in V/A responsivity = 0.5 # Amps per Watt (assume 800 nm) pmin = nep * np.sqrt(BW) volts_min = pmin * responsivity * gain print("voltage generated by p_min:",volts_min) Explanation: This is for the Thorlabs PDA36A at the 20dB setting. End of explanation scope_floor_factor = 0.010/volts_min # the power has to be scope_floor_factor times larger in order to generate 10mV: pmin * scope_floor_factor Explanation: Note that we need at least 10mV to even resolve a signal on the scope. So the NEP is only part of the story: End of explanation nep = 1.21e-12 BW = 260e3 gain = 2.38e4 responsivity = 0.5 pmin = nep * np.sqrt(BW) volts_min = pmin * responsivity * gain print(volts_min) scope_floor_factor = 0.010/volts_min # resolvable power: pmin * scope_floor_factor Explanation: This setting (20dB) leaves plenty of bandwidth but can't resolve our signal (1.3uW). It's limited to double our signal. Now for the 30dB setting: End of explanation nep = 0.2e-12 BW = 50e6 gain = 50000 responsivity = 25 pmin = nep * np.sqrt(BW) volts_min = pmin * responsivity * gain print(volts_min) scope_floor_factor = 0.010/volts_min # resolvable power: pmin * scope_floor_factor Explanation: On 30dB we start to hit the bandwidth limit but can resolve down to 0.8 uW (half our signal). Even so, we'd only get 10mV out of it. Now the APD120A: End of explanation 1.3e-6 * 50000 * 25 # watts times volts/amp times amps/watt gives volts: Explanation: This can resolve down to 8nW so our 1.3uW would be ~1000x larger. Final question: what is the voltage generated by hitting it with a 1.3uW pulse? End of explanation
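A note on the snippets above: they call np.sqrt, so they assume NumPy has been imported (import numpy as np). The repeated calculation can also be wrapped in a small helper; a sketch using the same quantities, where the function name and the 10 mV scope-floor argument are illustrative choices:

import numpy as np

def min_resolvable_power(nep, bw, responsivity, gain, scope_floor=0.010):
    # NEP-limited minimum power, the voltage it produces, and the power
    # needed to reach the scope floor.
    p_min = nep * np.sqrt(bw)
    v_min = p_min * responsivity * gain
    p_scope = p_min * (scope_floor / v_min)
    return p_min, v_min, p_scope

# PDA36A at the 20 dB setting, using the numbers from the first cell above.
print(min_resolvable_power(2.34e-12, 10e6, 0.5, 0.75e4))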
12,642
Given the following text description, write Python code to implement the functionality described below step by step Description: This Jupyter notebook provides the code for classifying signals using the Continuous Wavelet Transform and Convolutional Neural Networks. To get some more background information, please have a look at the accompanying blog-post Step1: 1. Loading the UCI HAR dataset into a numpy ndarray Download dataset from https Step2: 2. Applying a CWT to UCI HAR signals and saving the resulting scaleogram into a numpy ndarray Step3: 3. Training a Convolutional Neural Network
Python Code: from __future__ import print_function, division from pprint import pprint import os import pbxplore as pbx Explanation: Writing PB in file The API allows to write all the files the command line tools can. This includes the outputs of PBassign. The functions to handle several file formats are available in the :mod:pbxplore.io module. End of explanation names = [] pb_sequences = [] pdb_path = os.path.join(pbx.DEMO_DATA_PATH, '2LFU.pdb') for chain_name, chain in pbx.chains_from_files([pdb_path]): dihedrals = chain.get_phi_psi_angles() pb_seq = pbx.assign(dihedrals) names.append(chain_name) pb_sequences.append(pb_seq) pprint(names) pprint(pb_sequences) with open('output.fasta', 'w') as outfile: pbx.io.write_fasta(outfile, pb_sequences, names) !cat output.fasta !rm output.fasta Explanation: Fasta files The most common way to save PB sequences is to write them in a fasta file. PBxplore allows two ways to write fasta files. The sequences can be written either all at once or one at a time. To write a batch of sequences at once, we need a list of sequences and a list of the corresponding sequence names. The writing function here is :func:pbxplore.io.write_fasta. End of explanation pdb_path = os.path.join(pbx.DEMO_DATA_PATH, '2LFU.pdb') with open('output.fasta', 'w') as outfile: for chain_name, chain in pbx.chains_from_files([pdb_path]): dihedrals = chain.get_phi_psi_angles() pb_seq = pbx.assign(dihedrals) pbx.io.write_fasta_entry(outfile, pb_seq, chain_name) !cat output.fasta !rm output.fasta Explanation: Sequences can be written once at a time using the :func:pbxplore.io.write_fasta_entry function. End of explanation print(pb_sequences[0]) with open('output.fasta', 'w') as outfile: for width in (60, 70, 80): pbx.io.write_fasta_entry(outfile, pb_sequences[0], 'width={} blocks'.format(width), width=width) !cat output.fasta !rm output.fasta Explanation: By default, the lines in fasta files are wrapped at 60 caracters as defined in :const:pbxplore.io.fasta.FASTA_WIDTH. Both :func:pbxplore.io.write_fasta and :func:pbxplore.io.write_fasta_entry have a width optionnal argument that allows to control the wrapping. End of explanation pdb_path = os.path.join(pbx.DEMO_DATA_PATH, '2LFU.pdb') with open('output.phipsi', 'w') as outfile: for chain_name, chain in pbx.chains_from_files([pdb_path]): dihedral = chain.get_phi_psi_angles() for res in sorted(dihedral): phi = "{:8.2f}".format(dihedral[res]["phi"]) if dihedral[res]["phi"] else " None" psi = "{:8.2f}".format(dihedral[res]["psi"]) if dihedral[res]["psi"] else " None" print("{} {:6d} {} {} ".format(chain_name, res, phi, psi), file=outfile) Explanation: Dihedral angles One needs the phi and psi dihedral angles to assign protein block sequences. Having these angles, it is sometime convenient to store them in a file. This can be done easily. End of explanation !head output.phipsi !tail output.phipsi !rm output.phipsi Explanation: Note it's better to write the dihedral for each PDB/frame due to the high memory cost to store all of them in a list. The output is formated with one line per residue. The first columns repeat the name given for the chain, then is the residue id followed by the phi and the psi angle. If an angle is not defined, 'None' is written instead. 
End of explanation def pdb_to_fasta_pb(pdb_path, fasta_path): Write a fasta file with all the PB sequences from a PDB with open(fasta_path, 'w') as outfile: for chain_name, chain in pbx.chains_from_files([pdb_path]): dihedrals = chain.get_phi_psi_angles() pb_seq = pbx.assign(dihedrals) pbx.io.write_fasta_entry(outfile, pb_seq, chain_name) # Write a fasta file pdb_path = os.path.join(pbx.DEMO_DATA_PATH, '2LFU.pdb') pdb_to_fasta_pb(pdb_path, 'output.fasta') # Read a list of headers and a list of sequences from a fasta file names, sequences = pbx.io.read_fasta('output.fasta') print('names:') pprint(names) print('sequences:') pprint(sequences) !rm output.fasta Explanation: Read fasta files We want to read sequences that we wrote in files. PBxplore provides a function to read fasta files: the :func:pbxplore.io.read_fasta function. End of explanation # Write several fasta files pdb_to_fasta_pb(os.path.join(pbx.DEMO_DATA_PATH, '1BTA.pdb'), '1BTA.fasta') pdb_to_fasta_pb(os.path.join(pbx.DEMO_DATA_PATH, '2LFU.pdb'), '2FLU.fasta') # Read the fasta files names, sequences = pbx.io.read_several_fasta(['1BTA.fasta', '2FLU.fasta']) # Print the first entries print('names:') pprint(names[:5]) print('sequences:') pprint(sequences[:5]) !rm 1BTA.fasta 2FLU.fasta Explanation: If the sequences we want to read are spread amongst several fasta files, then we can use the :func:pbxplore.io.read_several_fasta function that takes a list of fasta file path as argument instead of a single path. End of explanation
12,643
Given the following text description, write Python code to implement the functionality described below step by step Description: This jupyter notebooks provides the code for classifying signals using the Continuous Wavelet Transform and Convolutional Neural Networks. To get some more background information, please have a look at the accompanying blog-post Step1: 1. Loading the UCI HAR dataset into an numpy ndarray Download dataset from https Step2: 2. Applying a CWT to UCI HAR signals and saving the resulting scaleogram into an numpy ndarray Step3: 3. Training a Convolutional Neural Network
Python Code: import pywt import numpy as np import matplotlib.pyplot as plt from collections import defaultdict, Counter import keras from keras.layers import Dense, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.models import Sequential from keras.callbacks import History history = History() Explanation: This jupyter notebooks provides the code for classifying signals using the Continuous Wavelet Transform and Convolutional Neural Networks. To get some more background information, please have a look at the accompanying blog-post: http://ataspinar.com/2018/12/21/a-guide-for-using-the-wavelet-transform-in-machine-learning/ End of explanation activities_description = { 1: 'walking', 2: 'walking upstairs', 3: 'walking downstairs', 4: 'sitting', 5: 'standing', 6: 'laying' } def read_signals(filename): with open(filename, 'r') as fp: data = fp.read().splitlines() data = map(lambda x: x.rstrip().lstrip().split(), data) data = [list(map(float, line)) for line in data] return data def read_labels(filename): with open(filename, 'r') as fp: activities = fp.read().splitlines() activities = list(map(lambda x: int(x)-1, activities)) return activities def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation, :, :] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels INPUT_FOLDER_TRAIN = './data/train/InertialSignals/' INPUT_FOLDER_TEST = './data/test/InertialSignals/' INPUT_FILES_TRAIN = ['body_acc_x_train.txt', 'body_acc_y_train.txt', 'body_acc_z_train.txt', 'body_gyro_x_train.txt', 'body_gyro_y_train.txt', 'body_gyro_z_train.txt', 'total_acc_x_train.txt', 'total_acc_y_train.txt', 'total_acc_z_train.txt'] INPUT_FILES_TEST = ['body_acc_x_test.txt', 'body_acc_y_test.txt', 'body_acc_z_test.txt', 'body_gyro_x_test.txt', 'body_gyro_y_test.txt', 'body_gyro_z_test.txt', 'total_acc_x_test.txt', 'total_acc_y_test.txt', 'total_acc_z_test.txt'] LABELFILE_TRAIN = './data/train/y_train.txt' LABELFILE_TEST = './data/test/y_test.txt' train_signals, test_signals = [], [] for input_file in INPUT_FILES_TRAIN: signal = read_signals(INPUT_FOLDER_TRAIN + input_file) train_signals.append(signal) train_signals = np.transpose(np.array(train_signals), (1, 2, 0)) for input_file in INPUT_FILES_TEST: signal = read_signals(INPUT_FOLDER_TEST + input_file) test_signals.append(signal) test_signals = np.transpose(np.array(test_signals), (1, 2, 0)) train_labels = read_labels(LABELFILE_TRAIN) test_labels = read_labels(LABELFILE_TEST) [no_signals_train, no_steps_train, no_components_train] = np.shape(train_signals) [no_signals_test, no_steps_test, no_components_test] = np.shape(test_signals) no_labels = len(np.unique(train_labels[:])) print("The train dataset contains {} signals, each one of length {} and {} components ".format(no_signals_train, no_steps_train, no_components_train)) print("The test dataset contains {} signals, each one of length {} and {} components ".format(no_signals_test, no_steps_test, no_components_test)) print("The train dataset contains {} labels, with the following distribution:\n {}".format(np.shape(train_labels)[0], Counter(train_labels[:]))) print("The test dataset contains {} labels, with the following distribution:\n {}".format(np.shape(test_labels)[0], Counter(test_labels[:]))) uci_har_signals_train, uci_har_labels_train = randomize(train_signals, np.array(train_labels)) uci_har_signals_test, uci_har_labels_test = randomize(test_signals, np.array(test_labels)) Explanation: 1. 
Loading the UCI HAR dataset into an numpy ndarray Download dataset from https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones End of explanation scales = range(1,128) waveletname = 'morl' train_size = 5000 train_data_cwt = np.ndarray(shape=(train_size, 127, 127, 9)) for ii in range(0,train_size): if ii % 1000 == 0: print(ii) for jj in range(0,9): signal = uci_har_signals_train[ii, :, jj] coeff, freq = pywt.cwt(signal, scales, waveletname, 1) coeff_ = coeff[:,:127] train_data_cwt[ii, :, :, jj] = coeff_ test_size = 500 test_data_cwt = np.ndarray(shape=(test_size, 127, 127, 9)) for ii in range(0,test_size): if ii % 100 == 0: print(ii) for jj in range(0,9): signal = uci_har_signals_test[ii, :, jj] coeff, freq = pywt.cwt(signal, scales, waveletname, 1) coeff_ = coeff[:,:127] test_data_cwt[ii, :, :, jj] = coeff_ Explanation: 2. Applying a CWT to UCI HAR signals and saving the resulting scaleogram into an numpy ndarray End of explanation x_train = train_data_cwt y_train = list(uci_har_labels_train[:train_size]) x_test = test_data_cwt y_test = list(uci_har_labels_test[:test_size]) img_x = 127 img_y = 127 img_z = 9 num_classes = 6 batch_size = 16 epochs = 10 # reshape the data into a 4D tensor - (sample_number, x_img_size, y_img_size, num_channels) # because the MNIST is greyscale, we only have a single channel - RGB colour images would have 3 input_shape = (img_x, img_y, img_z) # convert the data to the right type #x_train = x_train.reshape(x_train.shape[0], img_x, img_y, img_z) #x_test = x_test.reshape(x_test.shape[0], img_x, img_y, img_z) x_train = x_train.astype('float32') x_test = x_test.astype('float32') print('x_train shape:', x_train.shape) print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') # convert class vectors to binary class matrices - this is for use in the # categorical_crossentropy loss below y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) model = Sequential() model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1), activation='relu', input_shape=input_shape)) model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) model.add(Conv2D(64, (5, 5), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(1000, activation='relu')) model.add(Dense(num_classes, activation='softmax')) model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy']) model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test), callbacks=[history]) train_score = model.evaluate(x_train, y_train, verbose=0) print('Train loss: {}, Train accuracy: {}'.format(train_score[0], train_score[1])) test_score = model.evaluate(x_test, y_test, verbose=0) print('Test loss: {}, Test accuracy: {}'.format(test_score[0], test_score[1])) fig, axarr = plt.subplots(figsize=(12,6), ncols=2) axarr[0].plot(range(1, 11), history.history['acc'], label='train score') axarr[0].plot(range(1, 11), history.history['val_acc'], label='test score') axarr[0].set_xlabel('Number of Epochs', fontsize=18) axarr[0].set_ylabel('Accuracy', fontsize=18) axarr[0].set_ylim([0,1]) axarr[1].plot(range(1, 11), history.history['acc'], label='train score') axarr[1].plot(range(1, 11), history.history['val_acc'], label='test score') axarr[1].set_xlabel('Number of Epochs', fontsize=18) axarr[1].set_ylabel('Accuracy', fontsize=18) axarr[1].set_ylim([0.9,1]) plt.legend() plt.show() Explanation: 3. 
Training a Convolutional Neural Network End of explanation
12,644
Given the following text description, write Python code to implement the functionality described below step by step Description: Plots of passing and failing students Step1: Preparation for determining the correlation between grade average and distance to the university (UF) Step2: Determine the correlation between grade average and distance to the university (UF) Covariance -> (Σ[(xi-xmed)*(yi-ymed)])/(n-1) Variance of each variable (X,Y) -> Σ(xi-xmed)² and Σ(yi-ymed)² Correlation -> Cov(X,Y)/[√Var(X) * √Var(Y)] Step3: Occurrence of the data Step4: Table of occurrences Step5: Computation of Xmed and Ymed (mean value of all the X and Y points) Step6: Computation of (Xi - Xmed) and (Yi - Ymed), where i = index of the X and Y values Step7: Compute the covariance Step8: Compute the variance of X and Y Step9: Run step 3: compute the correlation
Python Code: import numpy as np import scipy.special from bokeh.layouts import gridplot from bokeh.plotting import figure, show, output_file def cria_graficos_barras_apro_repro(array, disciplina, titulo_grafico): dados = [] anos= [2014, 2015, 2016] periodos = [1, 2] for ano in anos: for periodo in periodos: for index in range(len(array)): if (array[index]['ANO'] == ano and array[index]['PERIODO'] == periodo) and array[index]['DISCIPLINA'] == disciplina: dados.append(array[index]['PORCENTAGEM']) data = { 'semestres': ['2014.1', '2014.2', '2015.1', '2015.2', '2016.1', '2016.2'], 'porcentage': dados } # table-like data results in reconfiguration of the chart with no data manipulation bar2 = Bar(data, values='porcentage', label=['semestres'], title=titulo_grafico, plot_width=400) output_file("stacked_bar.html") show(row(bar2)) cria_graficos_barras_apro_repro(aprovados, 0, 'Aprovados disciplina 0') cria_graficos_barras_apro_repro(aprovados, 1, 'Aprovados disciplina 1') cria_graficos_barras_apro_repro(aprovados, 2, 'Aprovados disciplina 2') cria_graficos_barras_apro_repro(aprovados, 3, 'Aprovados disciplina 3') cria_graficos_barras_apro_repro(aprovados, 5, 'Aprovados disciplina 5') cria_graficos_barras_apro_repro(aprovados, 6, 'Aprovados disciplina 6') cria_graficos_barras_apro_repro(reprovados, 0, 'Reprovados disciplina 0') cria_graficos_barras_apro_repro(reprovados, 1, 'Reprovados disciplina 1') cria_graficos_barras_apro_repro(reprovados, 2, 'Reprovados disciplina 2') cria_graficos_barras_apro_repro(reprovados, 3, 'Reprovados disciplina 3') cria_graficos_barras_apro_repro(reprovados, 5, 'Reprovados disciplina 5') cria_graficos_barras_apro_repro(reprovados, 6, 'Reprovados disciplina 6') Explanation: Graficos de aprovados e reprovados End of explanation # Import pandas import pandas as pd import geocoder as gc from tqdm import tqdm from geopy.geocoders import Nominatim from geopy.distance import vincenty # Import BoxPlot, output_notebook, and show from bokeh.charts from bokeh.charts import BoxPlot, Donut, Bar, Histogram, output_notebook, show from bokeh.charts.attributes import cat, color from bokeh.charts.operations import blend from bokeh.layouts import gridplot, row from bokeh.models import HoverTool from bokeh.models.widgets import Panel, Tabs from bokeh.plotting import ColumnDataSource #Verificar distancias distancias = df[df["status"] == "ATIVO"].copy() distancias["LAT"], distancias["LON"], distancias["KM"] = [0,0,0] distancias = distancias.reset_index() uf = gc.google("59064741").latlng for i in tqdm(range(len(distancias))): ## trocar o range por range(len(distancias)) st = distancias.loc[i,'CEP'] g = gc.google(st) if g.lat == None: distancias.loc[i, "LAT"] = 0 elif g.lng == None: distancias.loc[i, "LON"] = 0 else: distancias.loc[i, "LON"] = g.lng distancias.loc[i, "LAT"] = g.lat print("Completo") distancias.to_csv('LatLong_Alunos3.csv', encoding="utf-8") ##Salva tabela criada #UFRN 59064-741 distancias.head() distancias.reset_index() distancias = pd.read_csv('LatLong_Alunos3.csv', encoding="utf-8", index_col=0) for atual in tqdm(range(len(distancias))): ## trocar o range por range(len(distancias)) lt = distancias.loc[atual, "LAT"] ln = distancias.loc[atual, "LON"] if lt != 0. 
and ln != 0.: compare = (lt, ln) #print(vincenty(uf, compare).km) distancias.loc[atual, "KM"] = vincenty(uf, compare).km distancias.to_csv('LatLong_Alunos.csv', encoding="utf-8") distancias.head() Explanation: Preparação para determinar Correlação entre média de notas e distância até a UF End of explanation #Considera apenas as aprovações -> Validação 1 distancias = distancias[distancias["status.disciplina"] == 'Aprovado'] #Removendo distâncias desnecessárias -> Validação 2 e 3 distancias = distancias[distancias['KM'] != 0] distancias = distancias[distancias['CEP'] != 0] #Validação 4 distancias = distancias[distancias['KM'] < 30] distancias #Lista com ID dos alunos validados alunos_validos = distancias.a_ID.unique() alunos_validos Explanation: Determinar Correlação entre média de notas e distância até a UF Covariância -> (Σ[(xi-xmed)*(yi-ymed)])/n-1 Variância de cada variável (X,Y) -> Σ(xi-xmed)² e Σ(yi-ymed)² Correlação -> Cov(X,Y)/[√Va(X) * √Va(Y)] End of explanation #Valores 'chave' da análise valor_x_list = [] valor_y_list = [] for i in range(len(alunos_validos)): #Seleciona todas as ocorrências do aluno com aquele ID aluno = distancias[distancias["a_ID"] == alunos_validos[i]] #Calcula média das notas para aquele aluno media = aluno['nota'].mean() distancia_UF = aluno['KM'].mean() #Adiciona resultado à lista de distribuição X if( media <= 6.0 ): valor_x_list.append(0) elif( media > 6.0 and media <= 7.0 ): valor_x_list.append(1) elif( media > 7.0 and media <= 8.0 ): valor_x_list.append(2) else: valor_x_list.append(3) #Adiciona resultado à lista de distribuição Y if( distancia_UF <= 1.5 ): valor_y_list.append(0) elif( distancia_UF > 1.5 and distancia_UF <= 4.0 ): valor_y_list.append(1) elif( distancia_UF > 4.0 and distancia_UF <= 8.0 ): valor_y_list.append(2) else: valor_y_list.append(3) Explanation: Ocorrência dos dados End of explanation #Tabela de Ocorrências distribuicao = pd.DataFrame(columns=('Valor X_Nota', 'Valor Y_Distancia', 'Xi - Xmed', 'Yi - Ymed' , 'Prod', '(Xi - Xmed)^2', '(Yi - Ymed)^2' ) ) distribuicao["Valor X_Nota"] = valor_x_list distribuicao["Valor Y_Distancia"] = valor_y_list distribuicao #distancias["LAT"], distancias["LON"], distancias["KM"] = [0,0,0] Explanation: Tabela de Ocorrências End of explanation xmed = distribuicao['Valor X_Nota'].mean() ymed = distribuicao['Valor Y_Distancia'].mean() Explanation: Cálculo do Xmed e Ymed (Valor médio de todos os pontos de X e Y) End of explanation dif_X = [] dif_Y = [] prod = [] for i in range(len(alunos_validos)): #Calcula diferença para cada valor de X e salva numa lista difX = valor_x_list[i] - xmed dif_X.append(difX) #Calcula diferença para cada valor de Y e salva numa lista difY = valor_y_list[i] - ymed dif_Y.append(difY) #Calcula produto entre valores prod_Difs = difX*difY prod.append(prod_Difs) #Adiciona na tabela distribuicao["Xi - Xmed"] = dif_X distribuicao["Yi - Ymed"] = dif_Y distribuicao["Prod"] = prod distribuicao Explanation: Cálculo (Xi - Xmed) e (Yi - Ymed) i = índice dos valores de X e Y End of explanation passo1 = distribuicao['Prod'].sum() covXY = passo1/(len(alunos_validos)-1) covXY dif_X_quadrada = [] dif_Y_quadrada = [] for i in range(len(alunos_validos)): #Eleva cada diferença ao quadrado difX_quad = dif_X[i]*dif_X[i] dif_X_quadrada.append(difX_quad) #Eleva cada diferença ao quadrado difY_quad = dif_Y[i]*dif_Y[i] dif_Y_quadrada.append(difY_quad) #Adiciona na tabela distribuicao["(Xi - Xmed)^2"] = dif_X_quadrada distribuicao["(Yi - Ymed)^2"] = dif_Y_quadrada distribuicao Explanation: Calcula a 
covariância End of explanation passo21 = distribuicao['(Xi - Xmed)^2'].sum() passo22 = distribuicao['(Yi - Ymed)^2'].sum() #Variância de X varX = passo21/(len(alunos_validos)-1) #Variância de Y varY = passo22/(len(alunos_validos)-1) varX varY Explanation: Calcula Variância de X e Y End of explanation from math import sqrt desvio_padraoX = sqrt(varX) desvio_padraoY = sqrt(varY) #Calculo da correlação corrXY = covXY/(desvio_padraoX*desvio_padraoY) corrXY Explanation: Executar passo 3 Calcular Correlação End of explanation
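The step-by-step table above reproduces Pearson's correlation by hand; as a minimal cross-check (an editorial sketch, assuming the valor_x_list and valor_y_list built earlier in this notebook), numpy's built-in routines should give the same corrXY:

import numpy as np
x = np.array(valor_x_list, dtype=float)   # binned grade averages from above
y = np.array(valor_y_list, dtype=float)   # binned distances from above
cov_xy = np.cov(x, y, ddof=1)[0, 1]       # same normalisation as passo1/(n-1)
r = cov_xy / (np.sqrt(np.var(x, ddof=1)) * np.sqrt(np.var(y, ddof=1)))
print(r, np.corrcoef(x, y)[0, 1])         # both values should equal corrXY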
12,645
Given the following text description, write Python code to implement the functionality described below step by step Description: Numpy Exercise 4 Imports Step1: Complete graph Laplacian In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules. A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node. Here is $K_5$ Step3: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple. The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy. Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy. Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
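As a concrete instance of the definitions above (an editorial illustration, not part of the original exercise), the three matrices for $K_3$ are easy to write down before reading the notebook code that follows:

import numpy as np
D = np.diag([2, 2, 2])                                   # degree matrix of K_3: n-1 = 2 on the diagonal
A = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)    # adjacency: zeros on the diagonal, ones elsewhere
L = D - A                                                # [[ 2 -1 -1], [-1  2 -1], [-1 -1  2]]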
Python Code: import numpy as np %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns Explanation: Numpy Exercise 4 Imports End of explanation import networkx as nx K_5=nx.complete_graph(5) nx.draw(K_5) Explanation: Complete graph Laplacian In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules. A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node. Here is $K_5$: End of explanation def complete_deg(n): Return the integer valued degree matrix D for the complete graph K_n. return np.diag([n-1]*n) D = complete_deg(5) assert D.shape==(5,5) assert D.dtype==np.dtype(int) assert np.all(D.diagonal()==4*np.ones(5)) assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int)) Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple. The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy. End of explanation def complete_adj(n): Return the integer valued adjacency matrix A for the complete graph K_n. # YOUR CODE HERE # raise NotImplementedError() return np.ones((n,n), dtype=int)-np.diag([1]*n) A = complete_adj(5) assert A.shape==(5,5) assert A.dtype==np.dtype(int) assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int)) Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy. End of explanation # YOUR CODE HERE # raise NotImplementedError() np.linalg.eigvals(complete_deg(1)-complete_adj(1)) np.linalg.eigvals(complete_deg(2)-complete_adj(2)) Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. End of explanation
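The closing exercise asks for a conjecture about the Laplace spectrum of $K_n$. A possible way to probe it numerically (an editorial sketch; the closed form $L = nI - J$, with $J$ the all-ones matrix, is a standard result and not taken from the notebook) is to reuse the two helper functions defined above:

for n in (3, 5, 10, 20):
    L = complete_deg(n) - complete_adj(n)
    eig = np.sort(np.linalg.eigvals(L).real)
    print(n, np.round(eig, 6))   # expected pattern: eigenvalue 0 once, eigenvalue n with multiplicity n-1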
12,646
Given the following text description, write Python code to implement the functionality described below step by step Description: Let's change gears and talk about Game of thrones or shall I say Network of Thrones. It is suprising right? What is the relationship between a fatansy TV show/novel and network science or python(it's not related to a dragon). If you haven't heard of Game of Thrones, then you must be really good at hiding. Game of Thrones is the hugely popular television series by HBO based on the (also) hugely popular book series A Song of Ice and Fire by George R.R. Martin. In this notebook, we will analyze the co-occurrence network of the characters in the Game of Thrones books. Here, two characters are considered to co-occur if their names appear in the vicinity of 15 words from one another in the books. Andrew J. Beveridge, an associate professor of mathematics at Macalester College, and Jie Shan, an undergraduate created a network from the book A Storm of Swords by extracting relationships between characters to find out the most important characters in the book(or GoT). The dataset is publicly avaiable for the 5 books at https Step1: Let's load in the datasets Step2: The resulting DataFrame book1 has 5 columns Step3: Once we have the data loaded as a pandas DataFrame, it's time to create a network. We create a graph for each book. It's possible to create one MultiGraph instead of 5 graphs, but it is easier to play with different graphs. Step4: Let's populate the graph with edges from the pandas DataFrame. Step5: Let's have a look at these edges. Step6: Finding the most important node i.e character in these networks. Is it Jon Snow, Tyrion, Daenerys, or someone else? Let's see! Network Science offers us many different metrics to measure the importance of a node in a network as we saw in the first part of the tutorial. Note that there is no "correct" way of calculating the most important node in a network, every metric has a different meaning. First, let's measure the importance of a node in a network by looking at the number of neighbors it has, that is, the number of nodes it is connected to. For example, an influential account on Twitter, where the follower-followee relationship forms the network, is an account which has a high number of followers. This measure of importance is called degree centrality. Using this measure, let's extract the top ten important characters from the first book (book[0]) and the fifth book (book[4]). Step7: Exercise Create a new centrality measure, weighted_degree(Graph, weight) which takes in Graph and the weight attribute and returns a weighted degree dictionary. Weighted degree is calculated by summing the weight of the all edges of a node and find the top five characters according to this measure. [5 mins] Step8: Let's do this for Betweeness centrality and check if this makes any difference Haha, evil laugh Step9: PageRank The billion dollar algorithm, PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. Step10: Is there a correlation between these techniques? Exercise Find the correlation between these four techniques. 
pagerank betweenness_centrality weighted_degree degree centrality Step11: Evolution of importance of characters over the books According to degree centrality the most important character in the first book is Eddard Stark but he is not even in the top 10 of the fifth book. The importance changes over the course of five books, because you know stuff happens ;) Let's look at the evolution of degree centrality of a couple of characters like Eddard Stark, Jon Snow, Tyrion which showed up in the top 10 of degree centrality in first book. We create a dataframe with character columns and index as books where every entry is the degree centrality of the character in that particular book and plot the evolution of degree centrality Eddard Stark, Jon Snow and Tyrion. We can see that the importance of Eddard Stark in the network dies off and with Jon Snow there is a drop in the fourth book but a sudden rise in the fifth book Step12: Exercise Plot the evolution of weighted degree centrality of the above mentioned characters over the 5 books, and repeat the same exercise for betweenness centrality. Step13: So what's up with Stannis Baratheon? Step14: Community detection in Networks A network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. We will use louvain community detection algorithm to find the modules in our graph. Step15: Exercise Find the most important node in the partitions according to degree centrality of the nodes. Step16: A bit about power law in networks
Python Code: import pandas as pd import networkx as nx import matplotlib.pyplot as plt import community import numpy as np import warnings warnings.filterwarnings('ignore') %matplotlib inline Explanation: Let's change gears and talk about Game of thrones or shall I say Network of Thrones. It is suprising right? What is the relationship between a fatansy TV show/novel and network science or python(it's not related to a dragon). If you haven't heard of Game of Thrones, then you must be really good at hiding. Game of Thrones is the hugely popular television series by HBO based on the (also) hugely popular book series A Song of Ice and Fire by George R.R. Martin. In this notebook, we will analyze the co-occurrence network of the characters in the Game of Thrones books. Here, two characters are considered to co-occur if their names appear in the vicinity of 15 words from one another in the books. Andrew J. Beveridge, an associate professor of mathematics at Macalester College, and Jie Shan, an undergraduate created a network from the book A Storm of Swords by extracting relationships between characters to find out the most important characters in the book(or GoT). The dataset is publicly avaiable for the 5 books at https://github.com/mathbeveridge/asoiaf. This is an interaction network and were created by connecting two characters whenever their names (or nicknames) appeared within 15 words of one another in one of the books. The edge weight corresponds to the number of interactions. Credits: Blog: https://networkofthrones.wordpress.com Math Horizons Article: https://www.maa.org/sites/default/files/pdf/Mathhorizons/NetworkofThrones%20%281%29.pdf End of explanation book1 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book1-edges.csv') book2 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book2-edges.csv') book3 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book3-edges.csv') book4 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book4-edges.csv') book5 = pd.read_csv('datasets/game_of_thrones_network/asoiaf-book5-edges.csv') Explanation: Let's load in the datasets End of explanation book1 Explanation: The resulting DataFrame book1 has 5 columns: Source, Target, Type, weight, and book. Source and target are the two nodes that are linked by an edge. A network can have directed or undirected edges and in this network all the edges are undirected. The weight attribute of every edge tells us the number of interactions that the characters have had over the book, and the book column tells us the book number. End of explanation G_book1 = nx.Graph() G_book2 = nx.Graph() G_book3 = nx.Graph() G_book4 = nx.Graph() G_book5 = nx.Graph() Explanation: Once we have the data loaded as a pandas DataFrame, it's time to create a network. We create a graph for each book. It's possible to create one MultiGraph instead of 5 graphs, but it is easier to play with different graphs. 
End of explanation for row in book1.iterrows(): G_book1.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book']) for row in book2.iterrows(): G_book2.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book']) for row in book3.iterrows(): G_book3.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book']) for row in book4.iterrows(): G_book4.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book']) for row in book5.iterrows(): G_book5.add_edge(row[1]['Source'], row[1]['Target'], weight=row[1]['weight'], book=row[1]['book']) books = [G_book1, G_book2, G_book3, G_book4, G_book5] Explanation: Let's populate the graph with edges from the pandas DataFrame. End of explanation list(G_book1.edges(data=True))[16] list(G_book1.edges(data=True))[400] Explanation: Let's have a look at these edges. End of explanation deg_cen_book1 = nx.degree_centrality(books[0]) deg_cen_book5 = nx.degree_centrality(books[4]) sorted(deg_cen_book1.items(), key=lambda x:x[1], reverse=True)[0:10] sorted(deg_cen_book5.items(), key=lambda x:x[1], reverse=True)[0:10] # Plot a histogram of degree centrality plt.hist(list(nx.degree_centrality(G_book4).values())) plt.show() Explanation: Finding the most important node i.e character in these networks. Is it Jon Snow, Tyrion, Daenerys, or someone else? Let's see! Network Science offers us many different metrics to measure the importance of a node in a network as we saw in the first part of the tutorial. Note that there is no "correct" way of calculating the most important node in a network, every metric has a different meaning. First, let's measure the importance of a node in a network by looking at the number of neighbors it has, that is, the number of nodes it is connected to. For example, an influential account on Twitter, where the follower-followee relationship forms the network, is an account which has a high number of followers. This measure of importance is called degree centrality. Using this measure, let's extract the top ten important characters from the first book (book[0]) and the fifth book (book[4]). End of explanation def weighted_degree(G, weight): result = dict() for node in G.nodes(): weight_degree = 0 for n in G.edges([node], data=True): weight_degree += ____________ result[node] = weight_degree return result plt.hist(___________) plt.show() sorted(weighted_degree(G_book1, 'weight').items(), key=lambda x:x[1], reverse=True)[0:10] Explanation: Exercise Create a new centrality measure, weighted_degree(Graph, weight) which takes in Graph and the weight attribute and returns a weighted degree dictionary. Weighted degree is calculated by summing the weight of the all edges of a node and find the top five characters according to this measure. 
[5 mins] End of explanation # First check unweighted, just the structure sorted(nx.betweenness_centrality(G_book1).items(), key=lambda x:x[1], reverse=True)[0:10] # Let's care about interactions now sorted(nx.betweenness_centrality(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10] Explanation: Let's do this for Betweeness centrality and check if this makes any difference Haha, evil laugh End of explanation # by default weight attribute in pagerank is weight, so we use weight=None to find the unweighted results sorted(nx.pagerank_numpy(G_book1, weight=None).items(), key=lambda x:x[1], reverse=True)[0:10] sorted(nx.pagerank_numpy(G_book1, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10] Explanation: PageRank The billion dollar algorithm, PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. End of explanation cor = pd.DataFrame.from_records([______, _______, _______, ______]) cor.T cor.T.______() Explanation: Is there a correlation between these techniques? Exercise Find the correlation between these four techniques. pagerank betweenness_centrality weighted_degree degree centrality End of explanation evol = [nx.degree_centrality(book) for book in books] evol_df = pd.DataFrame.from_records(evol).fillna(0) evol_df[['Eddard-Stark', 'Tyrion-Lannister', 'Jon-Snow']].plot() set_of_char = set() for i in range(5): set_of_char |= set(list(evol_df.T[i].sort_values(ascending=False)[0:5].index)) set_of_char Explanation: Evolution of importance of characters over the books According to degree centrality the most important character in the first book is Eddard Stark but he is not even in the top 10 of the fifth book. The importance changes over the course of five books, because you know stuff happens ;) Let's look at the evolution of degree centrality of a couple of characters like Eddard Stark, Jon Snow, Tyrion which showed up in the top 10 of degree centrality in first book. We create a dataframe with character columns and index as books where every entry is the degree centrality of the character in that particular book and plot the evolution of degree centrality Eddard Stark, Jon Snow and Tyrion. We can see that the importance of Eddard Stark in the network dies off and with Jon Snow there is a drop in the fourth book but a sudden rise in the fifth book End of explanation evol_df[__________].plot(figsize=(29,15)) evol = [____________ for graph in books] evol_df = pd.DataFrame.from_records(evol).fillna(0) set_of_char = set() for i in range(5): set_of_char |= set(list(evol_df.T[i].sort_values(ascending=False)[0:5].index)) evol_df[___________].plot(figsize=(19,10)) Explanation: Exercise Plot the evolution of weighted degree centrality of the above mentioned characters over the 5 books, and repeat the same exercise for betweenness centrality. End of explanation nx.draw(nx.barbell_graph(5, 1), with_labels=True) sorted(nx.degree_centrality(G_book5).items(), key=lambda x:x[1], reverse=True)[:5] sorted(nx.betweenness_centrality(G_book5).items(), key=lambda x:x[1], reverse=True)[:5] Explanation: So what's up with Stannis Baratheon? End of explanation partition = community.best_partition(G_book1) size = float(len(set(partition.values()))) pos = nx.spring_layout(G_book1) count = 0. for com in set(partition.values()) : count = count + 1. 
list_nodes = [nodes for nodes in partition.keys() if partition[nodes] == com] nx.draw_networkx_nodes(G_book1, pos, list_nodes, node_size = 20, node_color = str(count / size)) nx.draw_networkx_edges(G_book1, pos, alpha=0.5) plt.show() d = {} for character, par in partition.items(): if par in d: d[par].append(character) else: d[par] = [character] d nx.draw(nx.subgraph(G_book1, d[3])) nx.draw(nx.subgraph(G_book1, d[1])) nx.density(G_book1) nx.density(nx.subgraph(G_book1, d[4])) nx.density(nx.subgraph(G_book1, d[4]))/nx.density(G_book1) Explanation: Community detection in Networks A network is said to have community structure if the nodes of the network can be easily grouped into (potentially overlapping) sets of nodes such that each set of nodes is densely connected internally. We will use louvain community detection algorithm to find the modules in our graph. End of explanation max_d = {} deg_book1 = nx.degree_centrality(G_book1) for ______ in d: temp = 0 for _______ in d[group]: if deg_book1[_______] > temp: max_d[______] = _______ temp = deg_book1[_______] max_d Explanation: Exercise Find the most important node in the partitions according to degree centrality of the nodes. End of explanation G_random = nx.erdos_renyi_graph(100, 0.1) nx.draw(G_random) G_ba = nx.barabasi_albert_graph(100, 2) nx.draw(G_ba) # Plot a histogram of degree centrality plt.hist(list(nx.degree_centrality(G_random).values())) plt.show() plt.hist(list(nx.degree_centrality(G_ba).values())) plt.show() G_random = nx.erdos_renyi_graph(2000, 0.2) G_ba = nx.barabasi_albert_graph(2000, 20) d = {} for i, j in dict(nx.degree(G_random)).items(): if j in d: d[j] += 1 else: d[j] = 1 x = np.log2(list((d.keys()))) y = np.log2(list(d.values())) plt.scatter(x, y, alpha=0.9) plt.show() d = {} for i, j in dict(nx.degree(G_ba)).items(): if j in d: d[j] += 1 else: d[j] = 1 x = np.log2(list((d.keys()))) y = np.log2(list(d.values())) plt.scatter(x, y, alpha=0.9) plt.show() Explanation: A bit about power law in networks End of explanation
12,647
Given the following text description, write Python code to implement the functionality described below step by step Description: Analysis of local base-steps parameters This tutorial discuss the analyses that can be performed using the dnaMD Python module included in the do_x3dna package. The tutorial is prepared using Jupyter Notebook and this notebook tutorial file could be downloaded from this link. Download the input files that are used in the tutorial from this link. Two following input files are required in this tutorial L-BPS_cdna.dat (do_x3dna output from the trajectory, which contains the DNA bound with the protein) L-BPS_odna.dat (do_x3dna output from the trajectory, which only contains the free DNA) These two file should be present inside tutorial_data of the current/present working directory. The Python APIs should be only used when do_x3dna is executed with -ref option. Detailed documentation is provided here. Importing Python Modules numpy Step1: Initializing DNA object and storing data to it DNA object is initialized by using the total number of base-pairs One base-step is formed by two adjacent base-pairs. Therefore, total number of base-steps is less than one of total number of base-pairs. Six base-pair parameters (shift, slide, rise, tilt, roll and twist) can be read and stored in DNA object from the input file using function set_base_step_parameters(...). To speed up processing and analysis, data can be stored in a HDF5 file by including HDF5 file name as a argument during initialization. Same file can be used to store and retrieve all other parameters. Step2: Local base-step parameter of a base-pair directly from dictionary The DNA.data is a python dictionary which contains all the data as a Python Dictionary. For a base-step, parameter as a function of time can be directly extracted. Step3: Local base-step parameters as a function of time (manually) A specific local base-step parameters for the given base-pairs range can be extracted from the DNA obejct using function dnaMD.DNA.get_parameters(...). The extracted parameters of the given base-step can be plotted as a function of time The extracted parameters (average) for the DNA segment can be plotted as a function of time Following example shows Twist vs Time plots. These example also shows that how to extract the parameters value from the DNA object. Other properties could be extracted and plotted using similar steps. Step4: Local base-step parameters as a function of time (using provided functions) Above examples show the method to extract the values from the DNA object. However, dnaMD.DNA.time_vs_parameter(...) function could be use to get parameter values as a function of time for the given base-pairs/step or segment Step5: Distribution of local base-steps parameters during MD simulations As shown in above plot of Time vs Rise, comparison between bound and free DNA is very difficult. Therefore, to compare the parameters of either different DNAs or same DNAs in different environment or different segment of same DNAs, the distribution of parameters over the MD trajectory are sometime useful. The distribution could be calculated using the function dnaMD.DNA.parameter_distribution(...) as shown in the following examples. The normalized distribution is calculated using numpy.histogram(...). Step6: Local base-step parameters as a function of base-steps What is the average values of a given parameter for either each base-step or a DNA segment? 
To address this question, average values of a given parameter with its error could be calculated for either each base-step or a DNA segment using a function dnaMD.DNA.get_mean_error(...). This average values could be also use to compare two DNA. Standard error could be calculated using block averaging method as derived in this publication. To use this method, g_analyze of GROMACS package should be present in $PATH environment variable. Step7: Deviation in parameters of bound DNA with respect to free DNA As discussed in the above section, average parameters with standard error can be calculated for both bound and free DNA. Additionally, deviation in bound DNA with respect to the free DNA could be calculated using function dnaMD.localDeformationVsBPS(...) as shown in the following example.
Python Code: import numpy as np import matplotlib.pyplot as plt import dnaMD %matplotlib inline Explanation: Analysis of local base-steps parameters This tutorial discuss the analyses that can be performed using the dnaMD Python module included in the do_x3dna package. The tutorial is prepared using Jupyter Notebook and this notebook tutorial file could be downloaded from this link. Download the input files that are used in the tutorial from this link. Two following input files are required in this tutorial L-BPS_cdna.dat (do_x3dna output from the trajectory, which contains the DNA bound with the protein) L-BPS_odna.dat (do_x3dna output from the trajectory, which only contains the free DNA) These two file should be present inside tutorial_data of the current/present working directory. The Python APIs should be only used when do_x3dna is executed with -ref option. Detailed documentation is provided here. Importing Python Modules numpy: Required for the calculations involving large arrays matplotlib: Required to plot the results dnaMD: Python module to analyze DNA/RNA structures from the do_x3dna output files. End of explanation ## Initialization pdna = dnaMD.DNA(60) #Initialization for 60 base-pairs DNA bound with the protein fdna = dnaMD.DNA(60) #Initialization for 60 base-pairs free DNA ## If HDF5 file is used to store/save data use these: # pdna = dnaMD.DNA(60, filename='cdna.h5') #Initialization for 60 base-pairs DNA bound with the protein # fdna = dnaMD.DNA(60, filename='odna.h5') #Initialization for 60 base-pairs free DNA ## Loading data from input files in respective DNA object # Number of base-steps = Number of base-pairs - one # Number of base-steps in a 60 base-pairs DNA = 59 # "bp=[1, 59]" will load local base-pair parameters of 1 to 59 base-steps # " parameters = 'All' " will load all six parameters (shift, slide, rise, tilt, roll and twist) pdna.set_base_step_parameters('tutorial_data/L-BPS_cdna.dat', bp_step=[1, 59], parameters='all', step_range=True) fdna.set_base_step_parameters('tutorial_data/L-BPS_odna.dat', bp_step=[1, 59], parameters='all', step_range=True) Explanation: Initializing DNA object and storing data to it DNA object is initialized by using the total number of base-pairs One base-step is formed by two adjacent base-pairs. Therefore, total number of base-steps is less than one of total number of base-pairs. Six base-pair parameters (shift, slide, rise, tilt, roll and twist) can be read and stored in DNA object from the input file using function set_base_step_parameters(...). To speed up processing and analysis, data can be stored in a HDF5 file by including HDF5 file name as a argument during initialization. Same file can be used to store and retrieve all other parameters. End of explanation # Extracting "Twist" of 22nd bp twist_20bp = pdna.data['bps']['22']['twist'] #Twist vs Time for 22nd bp plt.title('22nd bp') plt.plot(pdna.time, twist_20bp) plt.xlabel('Time (ps)') plt.ylabel('Twist ( $^o$)') plt.show() Explanation: Local base-step parameter of a base-pair directly from dictionary The DNA.data is a python dictionary which contains all the data as a Python Dictionary. For a base-step, parameter as a function of time can be directly extracted. 
End of explanation # Extracting "Twist" of 20 to 30 base-steps twist, bp_idx = pdna.get_parameters('twist',[20,30], bp_range=True) # Twist vs Time for 22nd base-step plt.title('22nd bp') plt.plot(pdna.time, twist[2]) # index is 2 for 22nd base-step: (20 + 2) plt.xlabel('Time (ps)') plt.ylabel('Twist ( $^o$)') plt.show() # Average Twist vs Time for segment 20-30 base-step avg_twist = np.mean(twist, axis=0) # Calculation of mean using mean function of numpy plt.title('20-30 bp segment') plt.plot(pdna.time, avg_twist) plt.xlabel('Time (ps)') plt.ylabel('Twist ( $^o$)') plt.show() # Average Twist vs Time for segment 24-28 base-step # index of 24th base-step is 4 (20 + 4). index of 28th base-step is 8 (20 + 8) avg_twist = np.mean(twist[4:8], axis=0) plt.title('24-28 bp segment') plt.plot(pdna.time, avg_twist) plt.xlabel('Time (ps)') plt.ylabel('Twist ( $^o$)') plt.show() Explanation: Local base-step parameters as a function of time (manually) A specific local base-step parameters for the given base-pairs range can be extracted from the DNA obejct using function dnaMD.DNA.get_parameters(...). The extracted parameters of the given base-step can be plotted as a function of time The extracted parameters (average) for the DNA segment can be plotted as a function of time Following example shows Twist vs Time plots. These example also shows that how to extract the parameters value from the DNA object. Other properties could be extracted and plotted using similar steps. End of explanation # Slide vs Time for 22nd bp plt.title('Slide for 22nd bp') time, value = pdna.time_vs_parameter('slide', [22]) plt.plot(time, value) plt.xlabel('Time (ps)') plt.ylabel('Slide ($\AA$)') plt.show() # Rise vs Time for 25-40 bp segment plt.title('Rise for 25-40 bp segment') # Bound DNA # Rise is the distance between two base-pairs, so for a given segment it is sum over the base-steps time, value = pdna.time_vs_parameter('rise', [25, 40], merge=True, merge_method='sum') plt.plot(time, value, label='bound DNA', c='k') # balck color => bound DNA # Free DNA time, value = fdna.time_vs_parameter('rise', [25, 40], merge=True, merge_method='sum') plt.plot(time, value, label='free DNA', c='r') # red color => free DNA plt.xlabel('Time (ps)') plt.ylabel('Rise ( $\AA$)') plt.legend() plt.show() Explanation: Local base-step parameters as a function of time (using provided functions) Above examples show the method to extract the values from the DNA object. However, dnaMD.DNA.time_vs_parameter(...) 
function could be use to get parameter values as a function of time for the given base-pairs/step or segment End of explanation #### Rise distribution for 20-45 bp segment plt.title('Rise distribution for 20-45 bp segment') ### Bound DNA ### ## calculation of parameter distribution for the segment values, density = pdna.parameter_distribution('rise', [20, 45], bins=20, merge=True, merge_method='sum') ## plot distribution plt.plot(values, density, label='bound DNA', c='k') # balck color => bound DNA ### Free DNA ### ## calculation of parameter distribution for the segment values, density = fdna.parameter_distribution('rise', [20, 45], bins=20, merge=True, merge_method='sum') ## plot distribution plt.plot(values, density, label='free DNA', c='r') # red color => free DNA plt.xlabel('Rise ( $\AA$)') plt.ylabel('Density') plt.legend() plt.show() #### Twist distribution for 25-40 bp segment plt.title('Twist distribution for 25-40 bp segment') ### Bound DNA ### ## calculation of parameter distribution for the segment # Twist is the twist angle between two base-pairs, so for overall twist of a given segment # it is considered here as sum over the base-steps values, density = pdna.parameter_distribution('twist', [25, 40], bins=20, merge=True, merge_method='sum') ## plot distribution plt.plot(values, density, label='bound DNA', c='k') # balck color => bound DNA ### Free DNA ### ## calculation of parameter distribution for the segment values, density = fdna.parameter_distribution('twist', [25, 40], bins=20, merge=True, merge_method='sum') ## plot distribution plt.plot(values, density, label='free DNA', c='r') # red color => free DNA plt.xlabel('Twist ( $^o$)') plt.ylabel('Density') plt.legend() plt.show() Explanation: Distribution of local base-steps parameters during MD simulations As shown in above plot of Time vs Rise, comparison between bound and free DNA is very difficult. Therefore, to compare the parameters of either different DNAs or same DNAs in different environment or different segment of same DNAs, the distribution of parameters over the MD trajectory are sometime useful. The distribution could be calculated using the function dnaMD.DNA.parameter_distribution(...) as shown in the following examples. The normalized distribution is calculated using numpy.histogram(...). 
End of explanation ######## Average Rise values as a function of base-steps ######## plt.title('Average Rise for each base-pairs') ### Calculating Average Rise values for 5 to 56 base-steps DNA bound with protein bp, rise, error = pdna.get_mean_error([5, 56], 'rise', err_type='block', bp_range=True) # plot these values plt.errorbar(bp, rise, yerr=error, ecolor='k', elinewidth=1, color='k', lw=0, marker='o', mfc='k', mew=1, ms=4, label='bound DNA' ) ### Calculating Average Rise values for 5 to 56 base-steps DNA bp, rise, error = fdna.get_mean_error([5, 56], 'rise', err_type='block', bp_range=True) # plot these values plt.errorbar(bp, rise, yerr=error, ecolor='r', elinewidth=1, color='r', lw=0, marker='x', mfc='r', mew=1, ms=4, label='free DNA' ) plt.ylabel('Rise ($\AA$)') plt.xlabel('base-step number') plt.xlim(0,61) plt.legend() plt.show() ######## Average Rise values as a function of DNA segments ######## plt.title('Average Rise for DNA segments') ### Calculating Average Rise values for 5 to 56 base-steps DNA bound with protein ### DNA segments are assumed to made up of 4 base-steps (merge_bp=4) bp, rise, error = pdna.get_mean_error([5,56], 'rise', err_type='block', bp_range=True, merge_bp=4, merge_method='sum') # plot these values plt.errorbar(bp, rise,yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4, label='bound DNA' ) ### Calculating Average Rise values for 5 to 56 base-steps DNA ### DNA segments are assumed to made up of 5 base-steps (merge_bp=4) bp, rise, error = fdna.get_mean_error([5,56], 'rise', err_type='block', bp_range=True, merge_bp=4, merge_method='sum') # plot these values plt.errorbar(bp, rise, yerr=error, ecolor='r', elinewidth=1, color='r', lw=1, marker='x', mfc='r', mew=1, ms=4, label='free DNA' ) plt.ylabel('Rise ( $\AA$)') plt.xlabel('base-step number') plt.xlim(0,61) plt.ylim(13.0, 14.0) plt.legend() plt.show() Explanation: Local base-step parameters as a function of base-steps What is the average values of a given parameter for either each base-step or a DNA segment? To address this question, average values of a given parameter with its error could be calculated for either each base-step or a DNA segment using a function dnaMD.DNA.get_mean_error(...). This average values could be also use to compare two DNA. Standard error could be calculated using block averaging method as derived in this publication. To use this method, g_analyze of GROMACS package should be present in $PATH environment variable. 
End of explanation #### Deviation in shift, slide, rise, tilt, roll and twist #### Deviation = Bound DNA(parameter) - Free DNA(parameter) ### Deviation in Shift fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56], 'shift', err_type='block', bp_range=True, merge_bp=4, merge_method='sum') # plot these values plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4) # plot line at zero plt.plot([0,61], [0.0, 0.0], '--k') plt.ylabel('Deviation in Shift ($\AA$)') plt.xlabel('base-step number') plt.xlim(0,61) plt.show() ### Deviation in Slide fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56], 'slide', err_type='block', bp_range=True, merge_bp=4, merge_method='sum') # plot these values plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4) # plot line at zero plt.plot([0,61], [0.0, 0.0], '--k') plt.ylabel('Deviation in Slide ($\AA$)') plt.xlabel('base-step number') plt.xlim(0,61) plt.show() ### Deviation in Rise fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56], 'rise', err_type='block', bp_range=True, merge_bp=4, merge_method='sum') # plot these values plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4) # plot line at zero plt.plot([0,61], [0.0, 0.0], '--k') plt.ylabel('Deviation in Rise ($\AA$)') plt.xlabel('base-step number') plt.xlim(0,61) plt.show() ### Deviation in Tilt fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56], 'tilt', err_type='block', bp_range=True, merge_bp=4, merge_method='sum') # plot these values plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4) # plot line at zero plt.plot([0,61], [0.0, 0.0], '--k') plt.ylabel('Deviation in Tilt ( $^o$)') plt.xlabel('base-step number') plt.xlim(0,61) plt.show() ### Deviation in Roll fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56], 'roll', err_type='block', bp_range=True, merge_bp=4, merge_method='sum') # plot these values plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4) # plot line at zero plt.plot([0,61], [0.0, 0.0], '--k') plt.ylabel('Deviation in Roll ( $^o$)') plt.xlabel('base-pair number') plt.xlim(0,61) plt.show() ### Deviation in Twist fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56], 'twist', err_type='block', bp_range=True, merge_bp=4, merge_method='sum') # plot these values plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4) # plot line at zero plt.plot([0,61], [0.0, 0.0], '--k') plt.ylabel('Deviation in Twist ( $^o$)') plt.xlabel('base-step number') plt.xlim(0,61) plt.show() Explanation: Deviation in parameters of bound DNA with respect to free DNA As discussed in the above section, average parameters with standard error can be calculated for both bound and free DNA. Additionally, deviation in bound DNA with respect to the free DNA could be calculated using function dnaMD.localDeformationVsBPS(...) as shown in the following example. End of explanation
12,648
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Chemistry Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 1.8. Coupling With Chemical Reactivity Is Required Step12: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step13: 2.2. Code Version Is Required Step14: 2.3. Code Languages Is Required Step15: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required Step16: 3.2. Split Operator Advection Timestep Is Required Step17: 3.3. Split Operator Physical Timestep Is Required Step18: 3.4. Split Operator Chemistry Timestep Is Required Step19: 3.5. Split Operator Alternate Order Is Required Step20: 3.6. Integrated Timestep Is Required Step21: 3.7. Integrated Scheme Type Is Required Step22: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required Step23: 4.2. Convection Is Required Step24: 4.3. Precipitation Is Required Step25: 4.4. Emissions Is Required Step26: 4.5. Deposition Is Required Step27: 4.6. Gas Phase Chemistry Is Required Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required Step30: 4.9. Photo Chemistry Is Required Step31: 4.10. Aerosols Is Required Step32: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required Step33: 5.2. Global Mean Metrics Used Is Required Step34: 5.3. Regional Metrics Used Is Required Step35: 5.4. Trend Metrics Used Is Required Step36: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required Step37: 6.2. Matches Atmosphere Grid Is Required Step38: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required Step39: 7.2. Canonical Horizontal Resolution Is Required Step40: 7.3. Number Of Horizontal Gridpoints Is Required Step41: 7.4. Number Of Vertical Levels Is Required Step42: 7.5. Is Adaptive Grid Is Required Step43: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required Step44: 8.2. Use Atmospheric Transport Is Required Step45: 8.3. Transport Details Is Required Step46: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required Step47: 10. 
Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required Step48: 10.2. Method Is Required Step49: 10.3. Prescribed Climatology Emitted Species Is Required Step50: 10.4. Prescribed Spatially Uniform Emitted Species Is Required Step51: 10.5. Interactive Emitted Species Is Required Step52: 10.6. Other Emitted Species Is Required Step53: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required Step54: 11.2. Method Is Required Step55: 11.3. Prescribed Climatology Emitted Species Is Required Step56: 11.4. Prescribed Spatially Uniform Emitted Species Is Required Step57: 11.5. Interactive Emitted Species Is Required Step58: 11.6. Other Emitted Species Is Required Step59: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required Step60: 12.2. Prescribed Upper Boundary Is Required Step61: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required Step62: 13.2. Species Is Required Step63: 13.3. Number Of Bimolecular Reactions Is Required Step64: 13.4. Number Of Termolecular Reactions Is Required Step65: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required Step66: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required Step67: 13.7. Number Of Advected Species Is Required Step68: 13.8. Number Of Steady State Species Is Required Step69: 13.9. Interactive Dry Deposition Is Required Step70: 13.10. Wet Deposition Is Required Step71: 13.11. Wet Oxidation Is Required Step72: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required Step73: 14.2. Gas Phase Species Is Required Step74: 14.3. Aerosol Species Is Required Step75: 14.4. Number Of Steady State Species Is Required Step76: 14.5. Sedimentation Is Required Step77: 14.6. Coagulation Is Required Step78: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required Step79: 15.2. Gas Phase Species Is Required Step80: 15.3. Aerosol Species Is Required Step81: 15.4. Number Of Steady State Species Is Required Step82: 15.5. Interactive Dry Deposition Is Required Step83: 15.6. Coagulation Is Required Step84: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required Step85: 16.2. Number Of Reactions Is Required Step86: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required Step87: 17.2. Environmental Conditions Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-1', 'atmoschem') Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: CSIR-CSIRO Source ID: SANDBOX-1 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:54 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. 
Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. 
Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation
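The fill-in cells above all follow one pattern: DOC.set_id() selects a CMIP6 property, then DOC.set_value() records the answer, with the accepted type (STRING, INTEGER, BOOLEAN, ENUM) and cardinality given in the surrounding comments. As a purely illustrative sketch, here is how a few of those cells might look once completed for a hypothetical model. The values are invented, DOC is assumed to be the document object created in the notebook's setup cells (not shown in this excerpt), and the repeated set_value() calls for the 0.N property follow the template's "VALUE(S)" wording, assumed to mean one call per entry.

# Hypothetical answers, for illustration only.
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
DOC.set_value("Operator splitting")        # ENUM: one of the listed valid choices

DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
DOC.set_value(900)                         # INTEGER: chemistry timestep in seconds

DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
DOC.set_value(True)                        # BOOLEAN

DOC.set_id('cmip6.atmoschem.grid.overview')
DOC.set_value("Chemistry is computed on the host atmosphere grid.")   # STRING

DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
DOC.set_value("Anthropogenic")             # 0.N ENUM: assumed one call per entry
DOC.set_value("Biomass burning")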
12,649
Given the following text description, write Python code to implement the functionality described below step by step Description: ionize Tutorial ionize is a Python module for calculating the properties of ions in aqueous solution. To load the library, simply import ionize. Step1: Ion The basic building block of an ionize simulation is an ionic species, modeled by the Ion class. Call ionize.Ion(name, z, pKa, absolute_mobility). name is the name of the ion, typically as a string. z is a list containing the charge states of the ion. pKa is a list of the pKas of the charge states, with the same order as the list z. absolute_mobility is a list containing the absolute, infinite dilution mobilities of each charge state, ordered the same as the other two lists, in units of m<sup>2</sup>V<sup>-1</sup>s<sup>-1</sup>. Step2: Once an ion species is initialized, you can call the properties of the ion, typically as a function of pH, ionic strength, and temperature, in that order. Step3: Note the difference between ionic_strength parameters here. If ionic_strength is 0, the numerical value of 0 is used in each calculation. However, it is impossible to have a solution of pH 0 with ionic_strength of 0. When the default value of None is used for ionic_strength, ionize uses the minimum ionic strength at the selected pH. Using the ionize database Individually initializing ions is error-prone and time-consuming. To simplify the process, load ions from the database by initializing the database, and accessing the database like a dictionary. Step4: search_ion() You can also search for ions in the database by name using Database().search(). Call it by specifying a search_string. search() will print the names of all ions that contain the search_string. search will not return a list of strings, so load the ion when you find what you want. Step5: Other db functions You can get the database data as a dictionary using the data method. Step6: Solution Getting the properties of a single ionic species in solution is useful, but the real challenge of dealing with aqueous solutions of ions is finding properties based on the equilibrium state of multiple ionic species. ionize can perform those calculations using the Solution class. Solution objects are initialized using ionize.Solution(ions, concentrations), where ions is a list of Ion objects and concentration is a list concentrations of the ions, with concentrations in molar. Step7: Solutions can be initialized with ion names instead of ions. If so, the Solution calls load_ion to determine the ion identities. Step8: We can iterate through solutions to quickly calculate the pH of a titration between two ions Step9: A Solution can also be initialized without ions, e.g. as water. Step10: A Solution can also be added and multiplied through operator overloading. This can be useful when calculating the results of diltuions, as below. Step11: Solutions can be titrated to a specified pH. To do so, make a solution, and then specify a titrant, a property, and a target. Step12: Temperature Effects Both Ion objects and Solution objects take T as an optional argument for temperature. Temperature should be specified in degrees C. Ion objects adjust their absolute mobility and pKa attributes based on temperature. They also make adjustments to their ionic strength correction algorithms based on temperature. The type of temperature adjustment data depends on the specific ion. For small ions, emperical data from literature is included. 
For organic molecules, &Delta;H and &Delta;Cp values may be provided. All ions also correct their mobilities for viscosity. Step13: Solution objects send their temperature correction parameters to the object that they contain. In addition, they use the temperature input to correct their ionic strength correction parameters. Step14: Conservation Functions Conservation functions are spatially invariant quantities that remain constant as a solution undergoes electrophoresis. They are useful in calculating ion concentrations in zones formed during electrophoresis. The Kohlrausch Regulating Function (KRF) The most basic conservation function is the KRF. This function is only valid for strongly ionized species, when water dissociation doesn't play a strong role. Solutions can calculate their own KRF values. They throw a warning if they contain species that are not strongly ionized. Step15: The Alberty Conservation Function The Alberty conservation function is useful for weakly ionized monovalent species, when water dissociation doesn't play a strong role. Step16: The Jovin Conservation Function The Jovin conservation function is applicable under the same conditions that the Alberty conservation function is. It is often used as a complement. Step17: The Gas Conservation Functions Step18: Serialization, Saving, and Loading You can also save and load ions and solutions in JSON format.
Python Code: from __future__ import print_function, absolute_import, division import ionize # We'll also import numpy to set up some of our inputs. # And pprint to prettily print some lists. import numpy import pprint # And set up inline plotting. from matplotlib.pyplot import * %matplotlib inline # Prettify numpy printing numpy.set_printoptions(precision=3) Explanation: ionize Tutorial ionize is a Python module for calculating the properties of ions in aqueous solution. To load the library, simply import ionize. End of explanation # Initialize an ion and print it. acid = ionize.Ion('myAcid', [-1], [5], [-25e-9]) base = ionize.Ion('myBase', [1], [8], [20e-9]) print(acid) # The string includes only the class and name. print(repr(base)) # The representation contains enough information to reconstruct the ion. Explanation: Ion The basic building block of an ionize simulation is an ionic species, modeled by the Ion class. Call ionize.Ion(name, z, pKa, absolute_mobility). name is the name of the ion, typically as a string. z is a list containing the charge states of the ion. pKa is a list of the pKas of the charge states, with the same order as the list z. absolute_mobility is a list containing the absolute, infinite dilution mobilities of each charge state, ordered the same as the other two lists, in units of m<sup>2</sup>V<sup>-1</sup>s<sup>-1</sup>. End of explanation print('myAcid Ka at (I=0 M) =', acid.acidity()) print('myAcid Ka at (I=0.5 M) =', acid.acidity(ionic_strength=0.5)) pH = numpy.linspace(0,14) for I in [None, 0., 0.001, 0.01, 0.1]: mu = [base.mobility(p, I) for p in pH] if I is not None: label = 'I={} M'.format(I) else: label = 'I=None' plot(pH, mu, label=label) xlabel('pH'); xlim(0, 14) ylabel('effective mobility (m^2/v/s)'); ylim(-.1e-8, 2.1e-8) legend() show() Explanation: Once an ion species is initialized, you can call the properties of the ion, typically as a function of pH, ionic strength, and temperature, in that order. End of explanation db = ionize.Database() histidine = db['histidine'] print(repr(histidine)) for ionic_strength in (None, 0): mu_histidine = [histidine.mobility(p, ionic_strength=ionic_strength) for p in pH] plot(pH, mu_histidine, label="I={}".format(ionic_strength)) xlabel('pH'); xlim([0, 14]) ylabel('effective mobility (m^2/v/s)') legend() show() Explanation: Note the difference between ionic_strength parameters here. If ionic_strength is 0, the numerical value of 0 is used in each calculation. However, it is impossible to have a solution of pH 0 with ionic_strength of 0. When the default value of None is used for ionic_strength, ionize uses the minimum ionic strength at the selected pH. Using the ionize database Individually initializing ions is error-prone and time-consuming. To simplify the process, load ions from the database by initializing the database, and accessing the database like a dictionary. End of explanation print("Search results for 'amino'\n--------------------------") pprint.pprint(db.search('amino')) print("\nSearch results for 'chloric'\n----------------------------") pprint.pprint(db.search('chloric')) print("\nSearch results for 'per'\n------------------------") pprint.pprint(db.search('per')) print('\nOh, copper is what I was looking for.') print(db.load('copper')) Explanation: search_ion() You can also search for ions in the database by name using Database().search(). Call it by specifying a search_string. search() will print the names of all ions that contain the search_string. 
search will not return a list of strings, so load the ion when you find what you want. End of explanation print(len(db.data), 'ions in database.') Explanation: Other db functions You can get the database data as a dictionary using the data method. End of explanation hcl=database.load('hydrochloric acid') tris=database.load('tris') buffer=ionize.Solution([tris, hcl], [0.1, 0.085]) print 'pH =', buffer.pH print 'I =', buffer.ionic_strength, 'M' print 'conductivity =', buffer.conductivity(), 'S/m' print 'buffering capacity =', buffer.buffering_capacity(), 'M' print 'debye length =', buffer.debye(), 'm' Explanation: Solution Getting the properties of a single ionic species in solution is useful, but the real challenge of dealing with aqueous solutions of ions is finding properties based on the equilibrium state of multiple ionic species. ionize can perform those calculations using the Solution class. Solution objects are initialized using ionize.Solution(ions, concentrations), where ions is a list of Ion objects and concentration is a list concentrations of the ions, with concentrations in molar. End of explanation print [ion.name for ion in ionize.Solution(['bis-tris', 'acetic acid'], [0.1, 0.03]).ions] print ionize.Solution(['bis-tris', 'acetic acid'], [0.1, 0.03]).concentration(database.load('acetic acid')) Explanation: Solutions can be initialized with ion names instead of ions. If so, the Solution calls load_ion to determine the ion identities. End of explanation c_tris = 0.1 c_hcl = numpy.linspace(0.0, 0.2, 50) t_pH = [ionize.Solution(['tris', 'hydrochloric acid'], [c_tris, c_h], temperature=25).pH for c_h in c_hcl] plot(c_hcl/c_tris, t_pH) xlabel('[HCl]/[Tris]') ylabel('pH') show() Explanation: We can iterate through solutions to quickly calculate the pH of a titration between two ions End of explanation water = ionize.Solution() print 'I =', water.ionic_strength, 'M' print 'pH =', water.pH print 'conductivity =', water.conductivity(), 'S/m' Explanation: A Solution can also be initialized without ions, e.g. as water. End of explanation print 'Stock:', buffer dilution = 0.5 * buffer + 0.5 * water print 'Dilution:', dilution Explanation: A Solution can also be added and multiplied through operator overloading. This can be useful when calculating the results of diltuions, as below. End of explanation buff = ionize.Solution([tris], 0.1) print buff.titrate('hydrochloric acid', 8.2) print buff.titrate('hydrochloric acid', 3) print buff.conductivity() print repr(buff.titrate('hydrochloric acid', 3, titration_property = 'conductivity')) print repr(buff.titrate('hydrochloric acid', 8)) Explanation: Solutions can be titrated to a specified pH. To do so, make a solution, and then specify a titrant, a property, and a target. End of explanation silver = database.load('silver') tris = database.load('tris') T = numpy.linspace(20.0, 80.0) mu_silver = [silver.absolute_mobility(Tp) for Tp in T] mu_tris = [tris.absolute_mobility(Tp) for Tp in T] pKa_silver = [silver.pKa(0, Tp) for Tp in T] pKa_tris = [tris.pKa(0, Tp) for Tp in T] figure() plot(T, mu_silver, label = 'Silver') plot(T, mu_tris, label = 'Tris') legend(loc = 'upper left') xlabel('Temperature ($^{\circ}$C)'); ylabel('Absolute mobility ($m^2V^{-1}s^{-1}$)') show() figure() plot(T, pKa_silver, label = 'Silver') plot(T, pKa_tris, label = 'Tris') legend(loc = 'lower left') xlabel('Temperature ($^{\circ}$C)'); ylabel('pKa') show() Explanation: Temperature Effects Both Ion objects and Solution objects take T as an optional argument for temperature. 
Temperature should be specified in degrees C. Ion objects adjust their absolute mobility and pKa attributes based on temperature. They also make adjustments to their ionic strength correction algorithms based on temperature. The type of temperature adjustment data depends on the specific ion. For small ions, emperical data from literature is included. For organic molecules, &Delta;H and &Delta;Cp values may be provided. All ions also correct their mobilities for viscosity. End of explanation buffer_ref = ionize.Solution(['tris', 'hydrochloric acid'], [.200, .100], temperature=25.) mu_ref = buffer_ref.ions[1].mobility() mup = [] pH = [] I = [] mu=[] cond = [] for Tp in T: buffer = ionize.Solution([tris, hcl], [.200, .100], temperature=Tp) mu.append(buffer.ions[1].mobility()) mup.append(buffer.ions[1].mobility()/mu_ref) pH.append(buffer.pH) I.append(buffer.ionic_strength) cond.append(buffer.conductivity()) # mup.append(hcl.nightingale_function(Tp)) cond_norm = [c / cond[0] for c in cond] figure() plot(T, pH); xlabel('Temperature ($^{\circ}$C)'); ylabel('pH') show() figure() plot(T, mup, label='chloride'); xlabel('Temperature ($^{\circ}$C)'); ylabel('$\mu$(T)/$\mu$(T$_o$)'); legend(loc='upper left') show() Explanation: Solution objects send their temperature correction parameters to the object that they contain. In addition, they use the temperature input to correct their ionic strength correction parameters. End of explanation saltwater = ionize.Solution(['sodium', 'hydrochloric acid'], [0.1, 0.1]) print saltwater.kohlrausch() print buffer_ref.ions print buffer_ref.kohlrausch() Explanation: Conservation Functions Conservation functions are spatially invariant quantities that remain constant as a solution undergoes electrophoresis. They are useful in calculating ion concentrations in zones formed during electrophoresis. The Kohlrausch Regulating Function (KRF) The most basic conservation function is the KRF. This function is only valid for strongly ionized species, when water dissociation doesn't play a strong role. Solutions can calculate their own KRF values. They throw a warning if they contain species that are not strongly ionized. End of explanation tcap = ionize.Solution(['tris', 'caproic acid'], [0.1, 0.05]) print tcap.alberty() tcit = ionize.Solution(['tris', 'citric acid'], [0.1, 0.05]) print tcit.alberty() Explanation: The Alberty Conservation Function The Alberty conservation function is useful for weakly ionized monovalent species, when water dissocation doesn't play a strong role. End of explanation print tcap.jovin() print tcit.jovin() Explanation: The Jovin Conservation Function The Jovin conservation function is applicable under the same conditions that the Alberty conservation function is. It is often used as a compliment. 
End of explanation print tcap.gas() print tcit.gas() Explanation: The Gas Conservation Functions End of explanation # %load_ext snakeviz # %%snakeviz # database = ionize.Database() # pH = np.linspace(0, 14) # for ion in database: # for p in pH: # ion.mobility(p) database import itertools concentrations = np.linspace(0, 0.14) ref_mob = 50.e-9 z = [1, 2] for zp, zm in itertools.product(z, repeat=2): positive_ion = ionize.Ion('positive', [zp], [14], [ref_mob]) negative_ion = ionize.Ion('negative', [-zm], [0], [-ref_mob]) mob = [] i = [] for c in concentrations: sol = ionize.Solution([positive_ion, negative_ion], [c/zp, c/zm]) mob.append(sol.ions[0].actual_mobility() / ref_mob ) i.append(sol.ionic_strength) plot(i, mob, label='-{}:{}'.format(zm, zp)) ylim(0, 1) # xlim(0, .14) legend(loc='lower left') xlabel('Concentration (M)') ylabel('$\mu$/$\mu_o$') show() Explanation: Serialization, Saving, and Loading You can also save and load ions and solutions in JSON format. End of explanation
12,650
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction Maps allow us to transform data in a DataFrame or Series one value at a time for an entire column. However, often we want to group our data, and then do something specific to the group the data is in. As you'll learn, we do this with the groupby() operation. We'll also cover some additional topics, such as more complex ways to index your DataFrames, along with how to sort your data. To start the exercise for this topic, please click here. Groupwise analysis One function we've been using heavily thus far is the value_counts() function. We can replicate what value_counts() does by doing the following Step1: groupby() created a group of reviews which allotted the same point values to the given wines. Then, for each of these groups, we grabbed the points() column and counted how many times it appeared. value_counts() is just a shortcut to this groupby() operation. We can use any of the summary functions we've used before with this data. For example, to get the cheapest wine in each point value category, we can do the following Step2: You can think of each group we generate as being a slice of our DataFrame containing only data with values that match. This DataFrame is accessible to us directly using the apply() method, and we can then manipulate the data in any way we see fit. For example, here's one way of selecting the name of the first wine reviewed from each winery in the dataset Step3: For even more fine-grained control, you can also group by more than one column. For an example, here's how we would pick out the best wine by country and province Step4: Another groupby() method worth mentioning is agg(), which lets you run a bunch of different functions on your DataFrame simultaneously. For example, we can generate a simple statistical summary of the dataset as follows Step5: Effective use of groupby() will allow you to do lots of really powerful things with your dataset. Multi-indexes In all of the examples we've seen thus far we've been working with DataFrame or Series objects with a single-label index. groupby() is slightly different in the fact that, depending on the operation we run, it will sometimes result in what is called a multi-index. A multi-index differs from a regular index in that it has multiple levels. For example Step6: Multi-indices have several methods for dealing with their tiered structure which are absent for single-level indices. They also require two levels of labels to retrieve a value. Dealing with multi-index output is a common "gotcha" for users new to pandas. The use cases for a multi-index are detailed alongside instructions on using them in the MultiIndex / Advanced Selection section of the pandas documentation. However, in general the multi-index method you will use most often is the one for converting back to a regular index, the reset_index() method Step7: Sorting Looking again at countries_reviewed we can see that grouping returns data in index order, not in value order. That is to say, when outputting the result of a groupby, the order of the rows is dependent on the values in the index, not in the data. To get data in the order want it in we can sort it ourselves. The sort_values() method is handy for this. Step8: sort_values() defaults to an ascending sort, where the lowest values go first. However, most of the time we want a descending sort, where the higher numbers go first. 
That goes thusly Step9: To sort by index values, use the companion method sort_index(). This method has the same arguments and default order Step10: Finally, know that you can sort by more than one column at a time
Python Code: #$HIDE_INPUT$ import pandas as pd reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0) pd.set_option("display.max_rows", 5) reviews.groupby('points').points.count() Explanation: Introduction Maps allow us to transform data in a DataFrame or Series one value at a time for an entire column. However, often we want to group our data, and then do something specific to the group the data is in. As you'll learn, we do this with the groupby() operation. We'll also cover some additional topics, such as more complex ways to index your DataFrames, along with how to sort your data. To start the exercise for this topic, please click here. Groupwise analysis One function we've been using heavily thus far is the value_counts() function. We can replicate what value_counts() does by doing the following: End of explanation reviews.groupby('points').price.min() Explanation: groupby() created a group of reviews which allotted the same point values to the given wines. Then, for each of these groups, we grabbed the points() column and counted how many times it appeared. value_counts() is just a shortcut to this groupby() operation. We can use any of the summary functions we've used before with this data. For example, to get the cheapest wine in each point value category, we can do the following: End of explanation reviews.groupby('winery').apply(lambda df: df.title.iloc[0]) Explanation: You can think of each group we generate as being a slice of our DataFrame containing only data with values that match. This DataFrame is accessible to us directly using the apply() method, and we can then manipulate the data in any way we see fit. For example, here's one way of selecting the name of the first wine reviewed from each winery in the dataset: End of explanation reviews.groupby(['country', 'province']).apply(lambda df: df.loc[df.points.idxmax()]) Explanation: For even more fine-grained control, you can also group by more than one column. For an example, here's how we would pick out the best wine by country and province: End of explanation reviews.groupby(['country']).price.agg([len, min, max]) Explanation: Another groupby() method worth mentioning is agg(), which lets you run a bunch of different functions on your DataFrame simultaneously. For example, we can generate a simple statistical summary of the dataset as follows: End of explanation countries_reviewed = reviews.groupby(['country', 'province']).description.agg([len]) countries_reviewed mi = countries_reviewed.index type(mi) Explanation: Effective use of groupby() will allow you to do lots of really powerful things with your dataset. Multi-indexes In all of the examples we've seen thus far we've been working with DataFrame or Series objects with a single-label index. groupby() is slightly different in the fact that, depending on the operation we run, it will sometimes result in what is called a multi-index. A multi-index differs from a regular index in that it has multiple levels. For example: End of explanation countries_reviewed.reset_index() Explanation: Multi-indices have several methods for dealing with their tiered structure which are absent for single-level indices. They also require two levels of labels to retrieve a value. Dealing with multi-index output is a common "gotcha" for users new to pandas. The use cases for a multi-index are detailed alongside instructions on using them in the MultiIndex / Advanced Selection section of the pandas documentation. 
However, in general the multi-index method you will use most often is the one for converting back to a regular index, the reset_index() method: End of explanation countries_reviewed = countries_reviewed.reset_index() countries_reviewed.sort_values(by='len') Explanation: Sorting Looking again at countries_reviewed we can see that grouping returns data in index order, not in value order. That is to say, when outputting the result of a groupby, the order of the rows is dependent on the values in the index, not in the data. To get data in the order want it in we can sort it ourselves. The sort_values() method is handy for this. End of explanation countries_reviewed.sort_values(by='len', ascending=False) Explanation: sort_values() defaults to an ascending sort, where the lowest values go first. However, most of the time we want a descending sort, where the higher numbers go first. That goes thusly: End of explanation countries_reviewed.sort_index() Explanation: To sort by index values, use the companion method sort_index(). This method has the same arguments and default order: End of explanation countries_reviewed.sort_values(by=['country', 'len']) Explanation: Finally, know that you can sort by more than one column at a time: End of explanation
12,651
Given the following text description, write Python code to implement the functionality described below step by step Description: Flux Variability Anlysis (FVA) Load a few packages and functions. Step1: First we load a model from the BiGG database (and make a copy of it). Step2: Run flux variablity analysis Calculate all flux ranges of all reactions in the model. Step3: Inspect the result. Step4: Get an overview of a few key statistics of the resulting flux ranges. Step5: Visualize the flux ranges. Step6: Visualize the flux ranges on a pathway map of E. coli's central carbon metabolism. Step7: Those reactions showing up in red are futile cyles. Step8: Run flux variability analysis for optimally growing E. coli (Optimal) Flux Balance Analysis solutions are not necessariliy unique. Flux Variablity Analysis is a good tool for estimating the space of alternative optimal solutions. Step9: This is actually such a common task that flux_variability_analysis provides an option for fixing the objective's flux at a certain percentage. Step10: Turns out that in this small core metabolic model, the optimal solution is actually unique! Step11: Exercises Exercise 1 Explore how relaxing the constraint on the growth rate affects the solution space Step12: Solution 2 Step13: Solution 3
Python Code: import pandas pandas.options.display.max_rows = 12 import escher from cameo import models, flux_variability_analysis, fba Explanation: Flux Variability Anlysis (FVA) Load a few packages and functions. End of explanation model = models.bigg.e_coli_core.copy() Explanation: First we load a model from the BiGG database (and make a copy of it). End of explanation result = flux_variability_analysis(model) Explanation: Run flux variablity analysis Calculate all flux ranges of all reactions in the model. End of explanation result.data_frame Explanation: Inspect the result. End of explanation result.data_frame.describe() Explanation: Get an overview of a few key statistics of the resulting flux ranges. End of explanation result.plot(index=result.data_frame.index, height=1200) Explanation: Visualize the flux ranges. End of explanation abs_flux_ranges = abs(result.data_frame.lower_bound - result.data_frame.upper_bound).to_dict() escher.Builder('e_coli_core.Core metabolism', reaction_data=abs_flux_ranges).display_in_notebook() Explanation: Visualize the flux ranges on a pathway map of E. coli's central carbon metabolism. End of explanation result.data_frame[result.data_frame.upper_bound > 500] result_no_cyles = flux_variability_analysis(model, remove_cycles=True) abs_flux_ranges = abs(result_no_cyles.data_frame.lower_bound - result_no_cyles.data_frame.upper_bound).to_dict() escher.Builder('e_coli_core.Core metabolism', reaction_data=abs_flux_ranges).display_in_notebook() Explanation: Those reactions showing up in red are futile cyles. End of explanation fba(model) model_optimal = model.copy() model_optimal.reactions.BIOMASS_Ecoli_core_w_GAM.lower_bound = 0.8739215069684299 result_max_obj = flux_variability_analysis(model_optimal, remove_cycles=True) result_max_obj.plot(index=result_max_obj.data_frame.index, height=1200) Explanation: Run flux variability analysis for optimally growing E. coli (Optimal) Flux Balance Analysis solutions are not necessariliy unique. Flux Variablity Analysis is a good tool for estimating the space of alternative optimal solutions. End of explanation result_max_obj = flux_variability_analysis(model, fraction_of_optimum=1., remove_cycles=True) result_max_obj.plot(index=result_max_obj.data_frame.index, height=1200) Explanation: This is actually such a common task that flux_variability_analysis provides an option for fixing the objective's flux at a certain percentage. End of explanation sum(abs(result_max_obj.data_frame.lower_bound - result_max_obj.data_frame.upper_bound)) Explanation: Turns out that in this small core metabolic model, the optimal solution is actually unique! End of explanation percentage = (0.7 / model.solve().f) * 100 percentage result_80perc_max_obj = flux_variability_analysis(model, fraction_of_optimum=percentage/100, remove_cycles=True) result_80perc_max_obj.plot(index=result_80perc_max_obj.data_frame.index, height=1200) Explanation: Exercises Exercise 1 Explore how relaxing the constraint on the growth rate affects the solution space: 1. Modify the code to explore flux ranges for $\mu \gt 0.7 \ h^{-1}$ 1. Plot the sum of flux ranges over a range of percentages. Exercise 2 Using FVA, determine all blocked reactions ($v = 0$) in the model. 
Solutions Solution 1 End of explanation flux_sums = [] optimum_percentages = range(50, 105, 5) for i in optimum_percentages: df = flux_variability_analysis(model, fraction_of_optimum=i/100, remove_cycles=True).data_frame flux_sum = sum(abs(df.lower_bound - df.upper_bound)) print("{}%: ".format(i), flux_sum) flux_sums.append(flux_sum) import matplotlib.pyplot as plt plt.plot(optimum_percentages, flux_sums) plt.xlabel('Optimum (%)') plt.ylabel('Flux sum [mmol gDW^-1 h^-1]') plt.show() Explanation: Solution 2 End of explanation result = flux_variability_analysis(model, remove_cycles=True) result.data_frame[(result.data_frame.lower_bound == 0) & (result.data_frame.upper_bound == 0)] Explanation: Solution 3 End of explanation
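A condensed version of the Solution 2 sweep, written as a reusable helper (a sketch only; it repackages the cameo calls already shown, and the helper name is mine):

from cameo import models, flux_variability_analysis

def flux_range_sum(model, fraction):
    # Total |upper - lower| flux range across all reactions at a given fraction of the optimum
    df = flux_variability_analysis(model, fraction_of_optimum=fraction,
                                   remove_cycles=True).data_frame
    return sum(abs(df.lower_bound - df.upper_bound))

model = models.bigg.e_coli_core.copy()
sums = {p: flux_range_sum(model, p / 100) for p in range(50, 110, 10)}
print(sums)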
12,652
Given the following text description, write Python code to implement the functionality described below step by step Description: Implementing the EffTox Dose-Finding Design in the Matchpoint Trials This tutorial complements the manuscript Implementing the EffTox Dose-Finding Design in the Matchpoint Trial (Brock et al.,in submission). Please consult the paper for the clinical background, the methodology details, and full explanation of the terminology. Dose Ambivalence In this notebook, we illustrate the phenomenon of dose ambivalence using the EffTox design in the seamless phase I/II dose-finding clinical trial, Matchpoint. Step1: The above parameters are explained in the manuscript. Step2: The EffTox class is an object-oriented implementation of the trial design by Thall & Cook (Thall, P. F., & Cook, J. D. (2004). Dose-Finding Based on Efficacy-Toxicity Trade-Offs. Biometrics, 60(3), 684–693.) Dose ambivalence after 3NTE Outcomes for a patient are represented by a three item tuple, where Step3: So, using seed 123, dose-level 3 is recommended to be given to the next patient after oberving 3NTE in the first cohort of patients. Fair enough. Step4: Wait...using seed 321, that advice is now dose-level 4. I need a single answer. What should I do? Let's define a simple function to calculate next dose based on some outcomes Step5: And then run that a number of times. For indication, 100 iterations will suffice (it takes a wee while...). In practice, you might use more iterations.
Python Code: import numpy as np from scipy.stats import norm from clintrials.dosefinding.efftox import EffTox, LpNormCurve real_doses = [7.5, 15, 30, 45] trial_size = 30 cohort_size = 3 first_dose = 3 prior_tox_probs = (0.025, 0.05, 0.1, 0.25) prior_eff_probs = (0.2, 0.3, 0.5, 0.6) tox_cutoff = 0.40 eff_cutoff = 0.45 tox_certainty = 0.05 eff_certainty = 0.03 mu_t_mean, mu_t_sd = -5.4317, 2.7643 beta_t_mean, beta_t_sd = 3.1761, 2.7703 mu_e_mean, mu_e_sd = -0.8442, 1.9786 beta_e_1_mean, beta_e_1_sd = 1.9857, 1.9820 beta_e_2_mean, beta_e_2_sd = 0, 0.2 psi_mean, psi_sd = 0, 1 efftox_priors = [ norm(loc=mu_t_mean, scale=mu_t_sd), norm(loc=beta_t_mean, scale=beta_t_sd), norm(loc=mu_e_mean, scale=mu_e_sd), norm(loc=beta_e_1_mean, scale=beta_e_1_sd), norm(loc=beta_e_2_mean, scale=beta_e_2_sd), norm(loc=psi_mean, scale=psi_sd), ] Explanation: Implementing the EffTox Dose-Finding Design in the Matchpoint Trials This tutorial complements the manuscript Implementing the EffTox Dose-Finding Design in the Matchpoint Trial (Brock et al.,in submission). Please consult the paper for the clinical background, the methodology details, and full explanation of the terminology. Dose Ambivalence In this notebook, we illustrate the phenomenon of dose ambivalence using the EffTox design in the seamless phase I/II dose-finding clinical trial, Matchpoint. End of explanation hinge_points = [(0.4, 0), (1, 0.7), (0.5, 0.4)] metric = LpNormCurve(hinge_points[0][0], hinge_points[1][1], hinge_points[2][0], hinge_points[2][1]) et = EffTox(real_doses, efftox_priors, tox_cutoff, eff_cutoff, tox_certainty, eff_certainty, metric, trial_size, first_dose) Explanation: The above parameters are explained in the manuscript. End of explanation outcomes = [(3, 0, 0), (3, 1, 0), (3, 0, 1)] et.reset() np.random.seed(123) et.update(outcomes) Explanation: The EffTox class is an object-oriented implementation of the trial design by Thall & Cook (Thall, P. F., & Cook, J. D. (2004). Dose-Finding Based on Efficacy-Toxicity Trade-Offs. Biometrics, 60(3), 684–693.) Dose ambivalence after 3NTE Outcomes for a patient are represented by a three item tuple, where: first item is 1-based dose-index give (i.e. 3 is dose-level 3); second item is 1 if toxicity happened, else 0; third item is 1 if efficacy happened, else 0. Outcomes for several patients are represented as lists: End of explanation et.reset() np.random.seed(321) et.update(outcomes) Explanation: So, using seed 123, dose-level 3 is recommended to be given to the next patient after oberving 3NTE in the first cohort of patients. Fair enough. End of explanation def get_next_dose(trial, outcomes, **kwargs): trial.reset() next_dose = trial.update(outcomes, **kwargs) return next_dose Explanation: Wait...using seed 321, that advice is now dose-level 4. I need a single answer. What should I do? Let's define a simple function to calculate next dose based on some outcomes: End of explanation np.random.seed(123) replicates = [get_next_dose(et, outcomes, n=10**5) for i in range(100)] doses, freq = np.unique(replicates, return_counts=True) list(zip(doses, 1.0 * freq / len(replicates))) Explanation: And then run that a number of times. For indication, 100 iterations will suffice (it takes a wee while...). In practice, you might use more iterations. End of explanation
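The repeated-update idea above can be wrapped into one small helper so the ambivalence check is easy to rerun (a sketch; it only repackages the reset/update/np.unique calls already used, and the function name is mine):

import numpy as np

def recommendation_frequencies(trial, outcomes, n_runs=100, seed=123, **kwargs):
    # Re-fit the model n_runs times and tally how often each dose is recommended
    np.random.seed(seed)
    recommendations = []
    for _ in range(n_runs):
        trial.reset()
        recommendations.append(trial.update(outcomes, **kwargs))
    doses, counts = np.unique(recommendations, return_counts=True)
    return dict(zip(doses, counts / len(recommendations)))

# e.g. recommendation_frequencies(et, [(3, 0, 0), (3, 1, 0), (3, 0, 1)], n=10**5)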
12,653
Given the following text description, write Python code to implement the functionality described below step by step Description: Test datasets http Step1: General guides to Bayesian regression http
Python Code: import pandas as pd import statsmodels.api as sm # Normal response variable stackloss_conversion = sm.datasets.get_rdataset("stackloss", "datasets") #print (stackloss_conversion.__doc__) # Lognormal response variable engel_food = sm.datasets.engel.load_pandas() #print (engel_food.data) # Binary response variable titanic_survival = sm.datasets.get_rdataset("Titanic", "datasets") #print (titanic_survival.__doc__) # Continuous 0-1 response variable duncan_prestige = sm.datasets.get_rdataset("Duncan", "car") #print (duncan_prestige.__doc__) # Categorical response variable iris_flowers = sm.datasets.get_rdataset("iris") #print (iris_flowers.__doc__) Explanation: Test datasets http://statsmodels.sourceforge.net/0.6.0/datasets/index.html End of explanation # Showing plots inline, rather than in a new window %matplotlib inline # Modules from pymc3 import * import numpy as np from ggplot import * # Generating data size = 200 true_intercept = 1 true_slope = 2 x = np.linspace(0, 1, size) # y = a + b*x true_regression_line = true_intercept + true_slope * x # add noise y = true_regression_line + np.random.normal(scale=.5, size=size) # Plotting data sim_data = pd.DataFrame({"x" : x, "y" : y}) sim_plot = ggplot(sim_data, aes(x="x", y="y")) + geom_point() +\ geom_abline(intercept=true_intercept, slope=true_slope) print(sim_plot) with Model() as model: # specify glm and pass in data. The resulting linear model, its likelihood and # and all its parameters are automatically added to our model. glm.glm('y ~ x', data) step = NUTS() # Instantiate MCMC sampling algorithm trace = sample(2000, step, progressbar=False) # draw 2000 posterior samples using NUTS sampling plt.figure(figsize=(7, 7)) traceplot(trace) plt.tight_layout(); plt.figure(figsize=(7, 7)) plt.plot(x, y, 'x', label='data') glm.plot_posterior_predictive(trace, samples=100, label='posterior predictive regression lines') plt.plot(x, true_regression_line, label='true regression line', lw=3., c='y') plt.title('Posterior predictive regression lines') plt.legend(loc=0) plt.xlabel('x') plt.ylabel('y'); Explanation: General guides to Bayesian regression http://twiecki.github.io/blog/2015/11/10/mcmc-sampling/ PyMC https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers http://conference.scipy.org/scipy2014/schedule/presentation/1662/ End of explanation
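As a quick frequentist cross-check on the simulated data above (not a replacement for the Bayesian fit; just a sketch using the statsmodels import already loaded), ordinary least squares should recover an intercept near 1 and a slope near 2:

import numpy as np
import statsmodels.api as sm

size = 200
x = np.linspace(0, 1, size)
y = 1 + 2 * x + np.random.normal(scale=0.5, size=size)

X = sm.add_constant(x)          # adds the intercept column
ols_fit = sm.OLS(y, X).fit()
print(ols_fit.params)           # approximately [1, 2]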
12,654
Given the following text description, write Python code to implement the functionality described below step by step Description: Coroutines for IO-bound tasks In this notebook, we'll weave together our new (Tweet Parser)[https Step1: We can define a few constants here that will be used throughout our example. Step2: This function is a little helper for programatically generating valid queries for terms with the Gnip api. Step3: Lets say you want to get a collection of tweets matching some criteria - this is an extremely common task. The process might look something like this Step4: Easy peasy. What if you have a bunch of queries to match (this is a bit contrived, but serves a purpose). You might define all your queries as such and run a for loop to query all of them. Step5: Works great, but notice that there seems to be linear scaling for the time it takes to run this. Given that this is a trivial amount of computation and a task that is almost entirely taken up by system calls / IO, it's a perfect opportunity to add parallism to the mix and speed it up. IO-bound parallism is commonly handled with a technique called asyncronous programming, in which the semantics coroutine, event loop, user-level thread, task, future, etc. are introduced. In modern python (>3.5), the language has builtins for using coroutines, exposed via the asyncio module and the keywords async and await. Several libraries have been introduced that make use of coroutines internally, such as aiohttp, which is mostly a coroutine verison of requests. Let's look at what the basic coroutine version of our above simple example would look like in aiohttp Step6: It's a lot more code that our simple requests example and doesn't work any more quickly, though this is expected since the time is really response time to and from Gnip. Let's try again with our longer set of queries, redefining the methods to handle this more naturally.
Python Code: from IPython.display import HTML HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/dD9NgzLhbBM" frameborder="0" allowfullscreen></iframe>') %load_ext autoreload %autoreload 2 %matplotlib inline import itertools as it from functools import partial import seaborn as sns import pandas as pd import requests from tweet_parser.tweet import Tweet import sec # you will not have this python file; I use it to keep `secrets` like passwords hidden Explanation: Coroutines for IO-bound tasks In this notebook, we'll weave together our new (Tweet Parser)[https://github.com/tw-ddis/tweet_parser] and some python asyncio magic. Let's set up the environment and demonstrate a motivating example. End of explanation username = "[email protected]" AUTH = requests.auth.HTTPBasicAuth(username, sec.GNIP_API_PW) GNIP_BASE_URL = "https://gnip-api.twitter.com/search/30day/accounts/shendrickson/peabody.json?" Explanation: We can define a few constants here that will be used throughout our example. End of explanation def gen_query_url(url, terms, max_results=100): if isinstance(terms, str): terms = terms.split() return ''.join([url, "query=", "%20".join(terms), "&maxResults={}".format(max_results)]) Explanation: This function is a little helper for programatically generating valid queries for terms with the Gnip api. End of explanation query = gen_query_url(GNIP_BASE_URL, ["just", "bought", "a", "house"]) print(query) import requests def sync_tweets(query): return requests.get(url=query, auth=AUTH).json()['results'] %%time tweets = [Tweet(i) for i in sync_tweets(query)] print(tweets[0].text) Explanation: Lets say you want to get a collection of tweets matching some criteria - this is an extremely common task. The process might look something like this: End of explanation formed_query = partial(gen_query_url, url=GNIP_BASE_URL, max_results=100) queries = [formed_query(terms=[i]) for i in ["eclipse", "nuclear", "korea", "cats", "ai", "memes", "googlebro"]] queries %%time tweets = [Tweet(i) for i in it.chain.from_iterable([sync_tweets(query) for query in queries])] Explanation: Easy peasy. What if you have a bunch of queries to match (this is a bit contrived, but serves a purpose). You might define all your queries as such and run a for loop to query all of them. End of explanation import asyncio import aiohttp import async_timeout async def fetch_tweets_coroutine(url): async with aiohttp.ClientSession() as session: async with session.get(url, auth=aiohttp.BasicAuth(AUTH.username, AUTH.password)) as response: return await response.json() %%time loop = asyncio.get_event_loop() tweets = [Tweet(i) for i in loop.run_until_complete(fetch_tweets_coroutine(query))['results']] print(tweets[0].user_id, tweets[0].text) Explanation: Works great, but notice that there seems to be linear scaling for the time it takes to run this. Given that this is a trivial amount of computation and a task that is almost entirely taken up by system calls / IO, it's a perfect opportunity to add parallism to the mix and speed it up. IO-bound parallism is commonly handled with a technique called asyncronous programming, in which the semantics coroutine, event loop, user-level thread, task, future, etc. are introduced. In modern python (>3.5), the language has builtins for using coroutines, exposed via the asyncio module and the keywords async and await. Several libraries have been introduced that make use of coroutines internally, such as aiohttp, which is mostly a coroutine verison of requests. 
Let's look at what the basic coroutine version of our above simple example would look like in aiohttp: End of explanation async def fetch_tweets_fancy(session, url): async with session.get(url, auth=aiohttp.BasicAuth(AUTH.username, AUTH.password)) as response: # print("collecting query: {}".format(url)) _json = await response.json() return [Tweet(t) for t in _json["results"]] async def collect_queries(queries): tasks = [] async with aiohttp.ClientSession() as session: for query in queries: task = asyncio.ensure_future(fetch_tweets_fancy(session, query)) tasks.append(task) responses = await asyncio.gather(*tasks) return responses formed_query = partial(gen_query_url, url=GNIP_BASE_URL, max_results=100) queries = [formed_query(terms=[i]) for i in ["eclipse", "nuclear", "korea", "cats", "ai", "memes"]] %%time loop = asyncio.get_event_loop() future = asyncio.ensure_future(collect_queries(queries)) res = list(it.chain.from_iterable(loop.run_until_complete(future))) print(res[0].text) print(len(res)) Explanation: It's a lot more code that our simple requests example and doesn't work any more quickly, though this is expected since the time is really response time to and from Gnip. Let's try again with our longer set of queries, redefining the methods to handle this more naturally. End of explanation
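Stripped of the Gnip-specific pieces, the concurrency pattern used above reduces to this small skeleton (a sketch; authentication and tweet parsing are left to the caller):

import asyncio
import aiohttp

async def fetch_json(session, url):
    # One GET request; add auth=aiohttp.BasicAuth(...) as needed
    async with session.get(url) as response:
        return await response.json()

async def fetch_all(urls):
    # One shared session, all requests scheduled concurrently
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch_json(session, u) for u in urls))

# results = asyncio.get_event_loop().run_until_complete(fetch_all(queries))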
12,655
Given the following text description, write Python code to implement the functionality described below step by step Description: Ex2 Step1: This example is a lot more tricky to fit, because the responses contain a few "bumps" and noise from the measurement. In such a case, finding a good number of initial poles can take a few iterations. Load the Network from a Touchstone file and create the Vector Fitting instance Step2: First attempt Step3: The function plot_convergence() can be helpful to examine the convergence and see if something was going wrong. Step4: Checking the results by comparing the model responses to the original sampled data indicates a successful fit, which is also indicated by a small rms error (less than 0.05) Step5: It is a good idea to also check the model response well outside the original frequency range. Step6: Second attempt Step7: This fit took more iterations, but it converged nevertheless and it matches the network data very well inside the fitting band. Again, a small rms error is achieved Step8: This looks good, so let's export the model as a SPICE subcircuit. For example Step9: Even though the pole relocation process oscillated between two (or more?) solutions and did not converge, the fit was still successful, because the solutions themselves did converge
Python Code: import skrf import numpy as np import matplotlib.pyplot as mplt Explanation: Ex2: Measured 190 GHz Active 2-Port The Vector Fitting feature is demonstrated using a 2-port S-matrix of an active circuit measured from 140 GHz to 220 GHz. Additional explanations and background information can be found in the Vector Fitting tutorial. End of explanation nw = skrf.network.Network('./190ghz_tx_measured.S2P') vf = skrf.VectorFitting(nw) Explanation: This example is a lot more tricky to fit, because the responses contain a few "bumps" and noise from the measurement. In such a case, finding a good number of initial poles can take a few iterations. Load the Network from a Touchstone file and create the Vector Fitting instance: End of explanation vf.vector_fit(n_poles_real=4, n_poles_cmplx=4) Explanation: First attempt: Perform the fit using 4 real poles and 4 complex-conjugate poles: (Note: In a previous version of this example, the order of the two attempts was reversed. Also see the comment at the end.) End of explanation vf.plot_convergence() Explanation: The function plot_convergence() can be helpful to examine the convergence and see if something was going wrong. End of explanation vf.get_rms_error() # plot frequency responses fig, ax = mplt.subplots(2, 2) fig.set_size_inches(12, 8) vf.plot_s_mag(0, 0, ax=ax[0][0]) # s11 vf.plot_s_mag(0, 1, ax=ax[0][1]) # s12 vf.plot_s_mag(1, 0, ax=ax[1][0]) # s21 vf.plot_s_mag(1, 1, ax=ax[1][1]) # s22 fig.tight_layout() mplt.show() Explanation: Checking the results by comparing the model responses to the original sampled data indicates a successful fit, which is also indicated by a small rms error (less than 0.05): End of explanation freqs = np.linspace(0, 500e9, 501) # plot model response from dc to 500 GHz fig, ax = mplt.subplots(2, 2) fig.set_size_inches(12, 8) vf.plot_s_mag(0, 0, freqs=freqs, ax=ax[0][0]) # s11 vf.plot_s_mag(0, 1, freqs=freqs, ax=ax[0][1]) # s12 vf.plot_s_mag(1, 0, freqs=freqs, ax=ax[1][0]) # s21 vf.plot_s_mag(1, 1, freqs=freqs, ax=ax[1][1]) # s22 fig.tight_layout() mplt.show() Explanation: It is a good idea to also check the model response well outside the original frequency range. End of explanation vf.vector_fit(n_poles_real=3, n_poles_cmplx=4) vf.plot_convergence() Explanation: Second attempt: Maybe an even better fit without that large dc "spike" can be achieved, so let's try again. Unwanted spikes at frequencies outside the fitting band are often caused by unnecessary or badly configured poles. Predictions about the fitting quality outside the fitting band are somewhat speculative and are not exactly controllable without additional samples at those frequencies. Still, let's try to decrease the number of real starting poles to 3 and see if the dc spike is removed: (Note: One could also reduce the real poles and/or increase the complex-conjugate poles. Also see the comment at the end.) End of explanation vf.get_rms_error() fig, ax = mplt.subplots(2, 2) fig.set_size_inches(12, 8) vf.plot_s_mag(0, 0, freqs=freqs, ax=ax[0][0]) # s11 vf.plot_s_mag(0, 1, freqs=freqs, ax=ax[0][1]) # s12 vf.plot_s_mag(1, 0, freqs=freqs, ax=ax[1][0]) # s21 vf.plot_s_mag(1, 1, freqs=freqs, ax=ax[1][1]) # s22 fig.tight_layout() mplt.show() Explanation: This fit took more iterations, but it converged nevertheless and it matches the network data very well inside the fitting band. 
Again, a small rms error is achieved: End of explanation vf.vector_fit(n_poles_real=0, n_poles_cmplx=5) vf.plot_convergence() Explanation: This looks good, so let's export the model as a SPICE subcircuit. For example: vf.write_spice_subcircuit_s('/home/vinc/Desktop/190ghz_tx.sp') The subcircuit can then be simulated in SPICE with the same AC simulation setup as in the ring slot example: <img src="./ngspice_190ghz_tx_sp_mag.svg" /> <img src="./ngspice_190ghz_tx_sp_smith.svg" /> <div id="comment"></div> Comment on starting poles: During the pole relocation process (first step in the fitting process), the starting poles are sucessively moved to frequencies where they can best match all target responses. Additionally, the type of poles can change from real to complex-conjugate: two real poles can become one complex-conjugate pole (and vise versa). As a result, there are multiple combinations of starting poles which can produce the same final set of poles. However, certain setups will converge faster than others, which also depends on the initial pole spacing. In extreme cases, the algorithm can even be "undecided" if two real poles behave exactly like one complex-conjugate pole and it gets "stuck" jumping back and forth without converging to a final solution. Equivalent setups for the first attempt with n_poles_real=3, n_poles_cmplx=4 (i.e. 3+4): 1+5 3+4 5+3 7+2 9+1 11+0 Equivalent setups for the second attempt with n_poles_real=4, n_poles_cmplx=4 (i.e. 4+4): 0+6 2+5 4+4 6+3 8+2 10+1 12+0 Examples for problematic setups that do not converge properly due to an oscillation between two (equally good) solutions: 0+5 <--> 2+4 <--> ... 0+7 <--> 2+5 <--> ... End of explanation vf.get_rms_error() fig, ax = mplt.subplots(2, 2) fig.set_size_inches(12, 8) vf.plot_s_mag(0, 0, freqs=freqs, ax=ax[0][0]) # s11 vf.plot_s_mag(0, 1, freqs=freqs, ax=ax[0][1]) # s12 vf.plot_s_mag(1, 0, freqs=freqs, ax=ax[1][0]) # s21 vf.plot_s_mag(1, 1, freqs=freqs, ax=ax[1][1]) # s22 fig.tight_layout() mplt.show() Explanation: Even though the pole relocation process oscillated between two (or more?) solutions and did not converge, the fit was still successful, because the solutions themselves did converge: End of explanation
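Because the choice of starting poles is trial-and-error, a small loop over candidate setups can help compare them by rms error (a sketch reusing the vf instance created above; the candidate list is arbitrary, and badly chosen setups may converge slowly or oscillate as discussed):

candidates = [(3, 4), (4, 4), (2, 5), (0, 6)]   # (n_poles_real, n_poles_cmplx) guesses
rms_errors = {}
for n_real, n_cmplx in candidates:
    vf.vector_fit(n_poles_real=n_real, n_poles_cmplx=n_cmplx)
    rms_errors[(n_real, n_cmplx)] = vf.get_rms_error()
print(rms_errors)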
12,656
Given the following text description, write Python code to implement the functionality described below step by step Description: DI Her Step1: As always, let's do imports and initialize a logger and a new bundle. Step2: System Parameters We'll adopt and set parameters from the following sources Step3: Datasets Let's compute an LC and RV dataset sampled at 200 points in phase (with some aliasing). Step4: Compute Step5: Plotting
Python Code: #!pip install -I "phoebe>=2.3,<2.4" Explanation: DI Her: Misaligned Binary In this example, we'll reproduce Figure 8 in the misalignment release paper (Horvat et al. 2018). <img src="horvat+18_fig8.png" alt="Figure 8" width="400px"/> Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger('error') b = phoebe.default_binary() Explanation: As always, let's do imports and initialize a logger and a new bundle. End of explanation Nt = 2000 b.set_value('t0_supconj@orbit', 2442233.3481) b.set_value('vgamma@system', 9.1) # [km/s] (Albrecht et al. 2009) b.set_value('ntriangles@primary', Nt) b.set_value('ntriangles@secondary', Nt) mass1 = 5.1 # [M_sun] (Albrecht et al. 2009) mass2 = 4.4 # [M_sun] (Albrecht et al. 2009) P = 10.550164 # [d] (Albrecht et al. 2009) mu_sun = 1.32712440018e20 # = G M_sun [m3 s^-2], Wiki Standard_gravitational_parameter R_sun = 695700000 # [m] Wiki Sun sma = (mu_sun*(mass1 + mass2)*(P*86400/(2*np.pi))**2)**(1./3)/R_sun # Kepler equation incl = 89.3 # deg (Albrecht et al. 2009) vp_sini = 109 # [km/s] (Albrecht et al. 2009) vs_sini = 117 # [km/s] (Albrecht et al. 2009) Rp = 2.68 # [R_sun] (Albrecht et al. 2009) Rs = 2.48 # [R_sun] (Albrecht et al. 2009) sini = np.sin(np.pi*incl/180) vp = vp_sini*86400/sini # [km/s] vs = vs_sini*86400/sini # [km/s] Pp = 2*np.pi*Rp*R_sun/1000/vp Ps = 2*np.pi*Rs*R_sun/1000/vs Fp = P/Pp Fs = P/Ps b.set_value('q', mass2/mass1) b.set_value('incl@binary', incl) # (Albrecht et al. 2009) b.set_value('sma@binary', sma) # calculated b.set_value('ecc@binary', 0.489) # (Albrecht et al. 2009) b.set_value('per0@binary', 330.2) # (Albrecht et al. 2009) b.set_value('period@binary', P) # calculated b.set_value('syncpar@primary', Fp) # calculated b.set_value('syncpar@secondary', Fs) # calculated b.set_value('requiv@primary', Rp) # !!! requiv (Albrecht et al. 2009) b.set_value('requiv@secondary', Rs) # !!! requiv (Albrecht et al. 2009) b.set_value('teff@primary', 17300) # Wiki DI_Herculis b.set_value('teff@secondary', 15400) # Wiki DI_Herculis b.set_value('gravb_bol@primary', 1.) b.set_value('gravb_bol@secondary', 1.) # beta = 72 deg (Albrecht et al. 2009) dOmega_p = 72 di_p = 62 - incl b.set_value('pitch@primary', di_p) # di b.set_value('yaw@primary', dOmega_p) # dOmega # beta = - 84 deg (Albrecht et al. 2009) dOmega_s = -84 di_s = 100 - incl b.set_value('pitch@secondary', di_s) # di b.set_value('yaw@secondary', dOmega_s) # dOmega b.set_value_all('atm','extern_planckint') b.set_value_all('irrad_method', 'none') Explanation: System Parameters We'll adopt and set parameters from the following sources: * Albrecht et al. (2009), Nature: https://arxiv.org/pdf/0909.2861 * https://en.wikipedia.org/wiki/DI_Herculis * Claret et al (2010) https://arxiv.org/pdf/1002.2949.pdf End of explanation n = 200 times = b.to_time(np.linspace(-0.05, 1.05, n)) b.add_dataset('lc', times=times, dataset='lc01', ld_mode='manual', ld_func='logarithmic', ld_coeffs = [0.5,0.5]) b.add_dataset('rv', times=times, dataset='rv01', ld_mode='manual', ld_func='logarithmic', ld_coeffs = [0.5,0.5]) Explanation: Datasets Let's compute an LC and RV dataset sampled at 200 points in phase (with some aliasing). 
End of explanation b.run_compute(ltte=False) Explanation: Compute End of explanation afig, mplfig = b.plot(kind='lc', show=True) afig, mplfig = b.plot(kind='rv', show=True) Explanation: Plotting End of explanation
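The Kepler's-third-law step used above to derive the semi-major axis is worth keeping as a small standalone function (a sketch using the same constants as in the parameter block):

import numpy as np

def semi_major_axis_rsun(m1_msun, m2_msun, period_days):
    mu_sun = 1.32712440018e20    # G * M_sun [m^3 s^-2]
    r_sun = 695700000.0          # solar radius [m]
    p_sec = period_days * 86400.0
    a_m = (mu_sun * (m1_msun + m2_msun) * (p_sec / (2 * np.pi))**2) ** (1.0 / 3.0)
    return a_m / r_sun           # semi-major axis in solar radii

print(semi_major_axis_rsun(5.1, 4.4, 10.550164))   # should match the sma set on the bundle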
12,657
Given the following text description, write Python code to implement the functionality described below step by step Description: Examples and Exercises from Think Stats, 2nd Edition http Step1: I'll start with the data from the BRFSS again. Step2: Here are the mean and standard deviation of female height in cm. Step3: NormalPdf returns a Pdf object that represents the normal distribution with the given parameters. Density returns a probability density, which doesn't mean much by itself. Step4: thinkplot provides Pdf, which plots the probability density with a smooth curve. Step5: Pdf provides MakePmf, which returns a Pmf object that approximates the Pdf. Step6: If you have a Pmf, you can also plot it using Pdf, if you have reason to think it should be represented as a smooth curve. Step7: Using a sample from the actual distribution, we can estimate the PDF using Kernel Density Estimation (KDE). If you run this a few times, you'll see how much variation there is in the estimate. Step8: Moments Raw moments are just sums of powers. Step9: The first raw moment is the mean. The other raw moments don't mean much. Step10: The central moments are powers of distances from the mean. Step11: The first central moment is approximately 0. The second central moment is the variance. Step12: The standardized moments are ratios of central moments, with powers chosen to make the dimensions cancel. Step13: The third standardized moment is skewness. Step14: Normally a negative skewness indicates that the distribution has a longer tail on the left. In that case, the mean is usually less than the median. Step15: But in this case the mean is greater than the median, which indicates skew to the right. Step16: Because the skewness is based on the third moment, it is not robust; that is, it depends strongly on a few outliers. Pearson's median skewness is more robust. Step17: Pearson's skewness is positive, indicating that the distribution of female heights is slightly skewed to the right. Step18: Birth weights Let's look at the distribution of birth weights again. Step19: Based on KDE, it looks like the distribution is skewed to the left. Step20: The mean is less than the median, which is consistent with left skew. Step21: And both ways of computing skew are negative, which is consistent with left skew. Step22: Adult weights Now let's look at adult weights from the BRFSS. The distribution looks skewed to the right. Step23: The mean is greater than the median, which is consistent with skew to the right. Step24: And both ways of computing skewness are positive. Step26: Exercises The distribution of income is famously skewed to the right. In this exercise, we’ll measure how strong that skew is. The Current Population Survey (CPS) is a joint effort of the Bureau of Labor Statistics and the Census Bureau to study income and related variables. Data collected in 2013 is available from http Step27: Compute the median, mean, skewness and Pearson’s skewness of the resulting sample. What fraction of households report a taxable income below the mean? How do the results depend on the assumed upper bound?
Python Code: from __future__ import print_function, division %matplotlib inline import numpy as np import brfss import thinkstats2 import thinkplot Explanation: Examples and Exercises from Think Stats, 2nd Edition http://thinkstats2.com Copyright 2016 Allen B. Downey MIT License: https://opensource.org/licenses/MIT End of explanation df = brfss.ReadBrfss(nrows=None) Explanation: I'll start with the data from the BRFSS again. End of explanation female = df[df.sex==2] female_heights = female.htm3.dropna() mean, std = female_heights.mean(), female_heights.std() mean, std Explanation: Here are the mean and standard deviation of female height in cm. End of explanation pdf = thinkstats2.NormalPdf(mean, std) pdf.Density(mean + std) Explanation: NormalPdf returns a Pdf object that represents the normal distribution with the given parameters. Density returns a probability density, which doesn't mean much by itself. End of explanation thinkplot.Pdf(pdf, label='normal') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186]) Explanation: thinkplot provides Pdf, which plots the probability density with a smooth curve. End of explanation pmf = pdf.MakePmf() thinkplot.Pmf(pmf, label='normal') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186]) Explanation: Pdf provides MakePmf, which returns a Pmf object that approximates the Pdf. End of explanation thinkplot.Pdf(pmf, label='normal') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186]) Explanation: If you have a Pmf, you can also plot it using Pdf, if you have reason to think it should be represented as a smooth curve. End of explanation thinkplot.Pdf(pdf, label='normal') sample = np.random.normal(mean, std, 500) sample_pdf = thinkstats2.EstimatedPdf(sample, label='sample') thinkplot.Pdf(sample_pdf, label='sample KDE') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186]) Explanation: Using a sample from the actual distribution, we can estimate the PDF using Kernel Density Estimation (KDE). If you run this a few times, you'll see how much variation there is in the estimate. End of explanation def RawMoment(xs, k): return sum(x**k for x in xs) / len(xs) Explanation: Moments Raw moments are just sums of powers. End of explanation RawMoment(female_heights, 1), RawMoment(female_heights, 2), RawMoment(female_heights, 3) def Mean(xs): return RawMoment(xs, 1) Mean(female_heights) Explanation: The first raw moment is the mean. The other raw moments don't mean much. End of explanation def CentralMoment(xs, k): mean = RawMoment(xs, 1) return sum((x - mean)**k for x in xs) / len(xs) Explanation: The central moments are powers of distances from the mean. End of explanation CentralMoment(female_heights, 1), CentralMoment(female_heights, 2), CentralMoment(female_heights, 3) def Var(xs): return CentralMoment(xs, 2) Var(female_heights) Explanation: The first central moment is approximately 0. The second central moment is the variance. End of explanation def StandardizedMoment(xs, k): var = CentralMoment(xs, 2) std = np.sqrt(var) return CentralMoment(xs, k) / std**k Explanation: The standardized moments are ratios of central moments, with powers chosen to make the dimensions cancel. End of explanation StandardizedMoment(female_heights, 1), StandardizedMoment(female_heights, 2), StandardizedMoment(female_heights, 3) def Skewness(xs): return StandardizedMoment(xs, 3) Skewness(female_heights) Explanation: The third standardized moment is skewness. 
End of explanation def Median(xs): cdf = thinkstats2.Cdf(xs) return cdf.Value(0.5) Explanation: Normally a negative skewness indicates that the distribution has a longer tail on the left. In that case, the mean is usually less than the median. End of explanation Mean(female_heights), Median(female_heights) Explanation: But in this case the mean is greater than the median, which indicates skew to the right. End of explanation def PearsonMedianSkewness(xs): median = Median(xs) mean = RawMoment(xs, 1) var = CentralMoment(xs, 2) std = np.sqrt(var) gp = 3 * (mean - median) / std return gp Explanation: Because the skewness is based on the third moment, it is not robust; that is, it depends strongly on a few outliers. Pearson's median skewness is more robust. End of explanation PearsonMedianSkewness(female_heights) Explanation: Pearson's skewness is positive, indicating that the distribution of female heights is slightly skewed to the right. End of explanation import first live, firsts, others = first.MakeFrames() Explanation: Birth weights Let's look at the distribution of birth weights again. End of explanation birth_weights = live.totalwgt_lb.dropna() pdf = thinkstats2.EstimatedPdf(birth_weights) thinkplot.Pdf(pdf, label='birth weight') thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='PDF') Explanation: Based on KDE, it looks like the distribution is skewed to the left. End of explanation Mean(birth_weights), Median(birth_weights) Explanation: The mean is less than the median, which is consistent with left skew. End of explanation Skewness(birth_weights), PearsonMedianSkewness(birth_weights) Explanation: And both ways of computing skew are negative, which is consistent with left skew. End of explanation adult_weights = df.wtkg2.dropna() pdf = thinkstats2.EstimatedPdf(adult_weights) thinkplot.Pdf(pdf, label='Adult weight') thinkplot.Config(xlabel='Adult weight (kg)', ylabel='PDF') Explanation: Adult weights Now let's look at adult weights from the BRFSS. The distribution looks skewed to the right. End of explanation Mean(adult_weights), Median(adult_weights) Explanation: The mean is greater than the median, which is consistent with skew to the right. End of explanation Skewness(adult_weights), PearsonMedianSkewness(adult_weights) Explanation: And both ways of computing skewness are positive. End of explanation def InterpolateSample(df, log_upper=6.0): Makes a sample of log10 household income. Assumes that log10 income is uniform in each range. 
df: DataFrame with columns income and freq log_upper: log10 of the assumed upper bound for the highest range returns: NumPy array of log10 household income # compute the log10 of the upper bound for each range df['log_upper'] = np.log10(df.income) # get the lower bounds by shifting the upper bound and filling in # the first element df['log_lower'] = df.log_upper.shift(1) df.loc[0, 'log_lower'] = 3.0 # plug in a value for the unknown upper bound of the highest range df.loc[41, 'log_upper'] = log_upper # use the freq column to generate the right number of values in # each range arrays = [] for _, row in df.iterrows(): vals = np.linspace(row.log_lower, row.log_upper, row.freq) arrays.append(vals) # collect the arrays into a single sample log_sample = np.concatenate(arrays) return log_sample import hinc income_df = hinc.ReadData() log_sample = InterpolateSample(income_df, log_upper=6.0) log_cdf = thinkstats2.Cdf(log_sample) thinkplot.Cdf(log_cdf) thinkplot.Config(xlabel='Household income (log $)', ylabel='CDF') sample = np.power(10, log_sample) cdf = thinkstats2.Cdf(sample) thinkplot.Cdf(cdf) thinkplot.Config(xlabel='Household income ($)', ylabel='CDF') Explanation: Exercises The distribution of income is famously skewed to the right. In this exercise, we’ll measure how strong that skew is. The Current Population Survey (CPS) is a joint effort of the Bureau of Labor Statistics and the Census Bureau to study income and related variables. Data collected in 2013 is available from http://www.census.gov/hhes/www/cpstables/032013/hhinc/toc.htm. I downloaded hinc06.xls, which is an Excel spreadsheet with information about household income, and converted it to hinc06.csv, a CSV file you will find in the repository for this book. You will also find hinc2.py, which reads this file and transforms the data. The dataset is in the form of a series of income ranges and the number of respondents who fell in each range. The lowest range includes respondents who reported annual household income “Under \$5000.” The highest range includes respondents who made “\$250,000 or more.” To estimate mean and other statistics from these data, we have to make some assumptions about the lower and upper bounds, and how the values are distributed in each range. hinc2.py provides InterpolateSample, which shows one way to model this data. It takes a DataFrame with a column, income, that contains the upper bound of each range, and freq, which contains the number of respondents in each frame. It also takes log_upper, which is an assumed upper bound on the highest range, expressed in log10 dollars. The default value, log_upper=6.0 represents the assumption that the largest income among the respondents is $10^6$, or one million dollars. InterpolateSample generates a pseudo-sample; that is, a sample of household incomes that yields the same number of respondents in each range as the actual data. It assumes that incomes in each range are equally spaced on a log10 scale. End of explanation # Solution goes here # Solution goes here # Solution goes here Explanation: Compute the median, mean, skewness and Pearson’s skewness of the resulting sample. What fraction of households report a taxable income below the mean? How do the results depend on the assumed upper bound? End of explanation
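For the income exercise, the requested statistics can be computed directly with the helpers defined above (a sketch; it reuses Mean, Median, Skewness, PearsonMedianSkewness and thinkstats2.Cdf on the pseudo-sample, with Cdf.Prob giving the fraction of values below the mean):

mean = Mean(sample)
median = Median(sample)
print(mean, median)
print(Skewness(sample), PearsonMedianSkewness(sample))

# Fraction of households reporting taxable income below the mean,
# read off the empirical CDF of the pseudo-sample
cdf = thinkstats2.Cdf(sample)
print(cdf.Prob(mean))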
12,658
Given the following text description, write Python code to implement the functionality described below step by step Description: Imports Step1: Pyplot is the Matplotlib plotting backend and the inline magic to see the graph directly in the notebook Step2: Or you can use pylab, which simplifies all the calling to matplotlib and numpy a little Step3: We can define a default size for all plots that will be generated by matplotlib Step4: Introduction to plotting with matplotlib 2D plotting library which produces high quality figures Full integration in jupyter Can generate plots, histograms, power spectra, bar charts, errorcharts, scatterplots, ... with just a few lines of code For the power user, you have full control of line styles, font properties, axes properties, ... See many examples of plots in pyplot gallery The documentation of pyplot is extensive but a little hard to understand Before we start Many named colors are available (keyword Step5: the stylesheet can also be defined by default Step6: Let's use ggplot style (R style) for this notebook Step7: Line plot Plot lines and/or markers to the Axes Requires 2 lists of coordinates for the x and the y axis (OR only 1 list for the Y axis and X will be automatically created) Step8: Scatter plot Make a scatter plot of x vs y, where x and y are sequence-like objects of the same length. Requires 2 lists of coordinates for the x and the y axis Step9: Bar plot Make a bar plot with rectangles Required a list of coordinates for the left side of the bars, a list of height, and the width of the bars Now plot the data as a bar plot Step10: Histogram Compute and draw the histogram of x Requires a list of values and a number of bins to split the data into possible types of histogram to draw (histtype) Step11: Customize the plotting area The plotting area can be customized easily as shown below Step12: The figure area can also be divided to plot several graphs side by side with the subplot command
Python Code: # Panda will be usefull for quick data parsing import pandas as pd import numpy as np # Small trick to get a larger display from IPython.core.display import display, HTML display(HTML("<style>.container { width:90% !important; }</style>")) Explanation: Imports End of explanation import matplotlib.pyplot as pl %matplotlib inline Explanation: Pyplot is the Matplotlib plotting backend and the inline magic to see the graph directly in the notebook End of explanation import pylab as pl %pylab inline Explanation: Or you can use pylab, which simplifies all the calling to matplotlib and numpy a little End of explanation pylab.rcParams['figure.figsize'] = (20,7) Explanation: We can define a default size for all plots that will be generated by matplotlib End of explanation pl.rcParams['figure.figsize'] = 20, 7 pl.rcParams['font.family'] = 'sans-serif' pl.rcParams['font.sans-serif'] = ['DejaVu Sans'] Explanation: Introduction to plotting with matplotlib 2D plotting library which produces high quality figures Full integration in jupyter Can generate plots, histograms, power spectra, bar charts, errorcharts, scatterplots, ... with just a few lines of code For the power user, you have full control of line styles, font properties, axes properties, ... See many examples of plots in pyplot gallery The documentation of pyplot is extensive but a little hard to understand Before we start Many named colors are available (keyword: color) As well as color palettes (keyword: colormap) There are 4 different styles for lines (keyword: linestyle) And many different marker types for plot points (keyword: marker) pyplot also provides stylesheet to yield high quality rendering effortlessly In Jupyter, we can change the default parameter with pl.rcparam End of explanation pl.style.available Explanation: the stylesheet can also be defined by default End of explanation pl.style.use('ggplot') Explanation: Let's use ggplot style (R style) for this notebook End of explanation # Create random datasets with numpy random module x = np.arange(50) y = np.random.rand(50) #Plot y using default line style and color x is automatically inferred pl.plot(y) # Plot x and y without line and purple diamon markers pl.plot(x, y+1, marker ='d', linewidth=0, color="purple") # Plot x and y using dotted line and pl.plot(x, y+2, color = 'dodgerblue', linestyle='--') # Plot x and y using blue circle markers pl.plot(x, y+3, color='green', linewidth=2, marker='>', linestyle="-.") # Plot x and y using blue circle markers pl.plot(x, y+4, color='green', linewidth=4, marker='o', linestyle="-") Explanation: Line plot Plot lines and/or markers to the Axes Requires 2 lists of coordinates for the x and the y axis (OR only 1 list for the Y axis and X will be automatically created) End of explanation pl.scatter (np.random.randn(200),np.random.randn(200), color="coral") pl.scatter (np.random.randn(100)+2,np.random.randn(100)+3, color="lightgreen") pl.scatter (np.random.randn(100)-2,np.random.randn(100)*4, color="dodgerblue") Explanation: Scatter plot Make a scatter plot of x vs y, where x and y are sequence-like objects of the same length. 
Requires 2 lists of coordinates for the x and the y axis End of explanation # Create random datasets with numpy random module x = np.arange(10) # If the x coordinates are similar the bar are merged at the same position h1 = np.random.rand(10) pl.bar(left=x, height=h1, width=0.2, color="dodgerblue") # To create a stacked graph, the bottom position of the series need to correspond to the previous series h2 = np.random.rand(10) pl.bar(left=x, height=h2, bottom= h1, width=0.2, color="lightblue") # Offset the x coordinate to add a new series and customize color and aspect h3 = np.random.rand(10) pl.bar(left=x+0.2, height=h3, width=0.2, color ='salmon', linewidth=2, edgecolor="red") # Add yerr bars h4 = np.random.rand(10) pl.bar(left=x+0.4, height=h4, width=0.2, color ='green', yerr=np.random.randn(10)/10, ecolor="black") Explanation: Bar plot Make a bar plot with rectangles Required a list of coordinates for the left side of the bars, a list of height, and the width of the bars Now plot the data as a bar plot End of explanation # Generate a list of 2* 1000 values following a normal distibution n, bins, patches = pl.hist(x=x, bins=30, histtype='bar') print (n) print (bins) # Generate a list of 2* 1000 values following a normal distibution # Contrary to the first plot, this time, series are stacked x = np.random.randn(1000, 2) n, bins, patches = pl.hist(x=x, bins=30, histtype='barstacked') # Generate a list of 1000 values following a normal distibution # The plot is cummulative and step style x = np.random.randn(1000) n, bins, patches = pl.hist(x=x, bins=30, histtype='step', cumulative=True) # Generate a list of 2* 1000 values following a normal distibution # The plot is rotated to horizontal orientation and represented in stepfilled style x = np.random.randn(1000) n, bins, patches = pl.hist(x=x, bins=30, histtype='stepfilled', orientation="horizontal") Explanation: Histogram Compute and draw the histogram of x Requires a list of values and a number of bins to split the data into possible types of histogram to draw (histtype): bar : a traditional bar-type histogram. If multiple data are given the bars are aranged side by side. barstacked : a bar-type histogram where multiple data are stacked on top of each other. step : a lineplot that is by default unfilled. stepfilled : a lineplot that is by default filled. 
The return value is a tuple containing the following: * n = The values of the histogram bins after eventual normalisation * bins = The edges of the bins * patches = List of individual patches used to create the histogram End of explanation # Size of the ploting area pl.figure(figsize=(15,10)) # Customize X and Y limits pl.xlim(-1,10) pl.ylim(-0.5,1.5) # Add X label, y label and a title pl.xlabel("this is my x label", fontsize=15) pl.ylabel("this is my Y label", fontsize=15) pl.title("this is my title", fontsize=20) # Add a grid pl.grid(True, color="grey", linewidth=0.5, linestyle="--") # finally plot the graphs pl.plot(np.arange(10), np.random.rand(10), color="coral", marker=">", label = "series1") pl.plot(np.arange(10), np.random.rand(10), color="dodgerblue", marker="<", label = "series2") #Add the legend outside of the plotting area pl.legend(bbox_to_anchor=(1, 1), loc=2, frameon=False, fontsize=15) Explanation: Customize the plotting area The plotting area can be customized easily as shown below End of explanation pl.figure() # First plot in the left half pl.subplot(121) pl.plot(np.arange(10), np.random.rand(10), label="1") pl.plot(np.arange(10), np.random.rand(10), label="2") pl.title("Series1") pl.legend() # First plot in the right half pl.subplot(122) pl.plot(np.arange(10), np.random.rand(10), label="3") pl.plot(np.arange(10), np.random.rand(10), label="4") pl.title("Series2") pl.legend() pl.figure(figsize=(15,15)) # First plot in the top left corner pl.subplot(221) pl.plot(np.arange(10), np.random.rand(10)) # First plot in the top right corner #pl.subplot(222) #pl.plot(np.arange(10), np.random.rand(10)) # First plot in the bottom left corner plt.subplot(223) pl.plot(np.arange(10), np.random.rand(10)) # First plot in the bottom right corner plt.subplot(224) pl.plot(np.arange(10), np.random.rand(10)) Explanation: The figure area can also be divided to plot several graphs side by side with the subplot command End of explanation
12,659
Given the following text description, write Python code to implement the functionality described below step by step Description: Problem There's a fundamental problem with what I'm trying to do. It's that stupid connect1!!! What I should do is grep for all occurences of mesh_id = int(ftype[-1]) - 1 because any time that code occurs, it's going to disrupt what I'm trying to do. For now, I'm going to try this one more substituion in cartesian_coordinates.py and then it's time to call it a day. Step1: Ok, here is some stuff going down Step2: And here's a regular function Step3: Psych! A function with a regular return returns as soon it hits return the first time.
Python Code: %debug Explanation: Problem There's a fundamental problem with what I'm trying to do. It's that stupid connect1!!! What I should do is grep for all occurences of mesh_id = int(ftype[-1]) - 1 because any time that code occurs, it's going to disrupt what I'm trying to do. For now, I'm going to try this one more substituion in cartesian_coordinates.py and then it's time to call it a day. End of explanation def createGenerator(): mylist = range(3) for i in mylist: yield i*i my_generator_function = createGenerator() print(my_generator_function) Explanation: Ok, here is some stuff going down: We start off by creating a YTSlice object. We call YTSlice.get_data() since apparently YTSlice inherits from YTSelectionContainer The first thing the get_data() routine does is call: self.index._identify_base_chunk(self) Here's the hierarchy of indexes: ExodusIIUnstructuredIndex(/frontends/exodusii/data_structures) -> UnstructuredIndex(/geometry/unstructured_mesh_handler) -> Index(/geometry/geometry_handler) The _identify_base_chunk routine is implemented in the UnstructuredIndex class In that method, the first thing done is: dobj._chunk_info = self.meshes. Recall that dobj refers to the YTSlice object and self currently is the UnstructuredIndex object The next important piece of code executed is: dobj._current_chunk = list(self._chunk_all(dobj))[0] This calls the _chunk_all method implemented in the UnstructuredIndex class The _chunk_all method creates a generator. In this case list(self._chunk_all(dobj)) is a list of length 1, presumably because we're using an "all" method. _chunk_all creates a a YTDataChunk instance through yield YTDataChunk(dobj, "all", oobjs, dobj.size, cache) Note that oobjs were created like this: oobjs = getattr(dobj._current_chunk, "objs", dobj._chunk_info). So essentially oobjs = dobj._chunk_info = self.meshes The YTDataChunk __init__ routine looks like this: def __init__(self, dobj, chunk_type, objs, data_size = None, field_type = None, cache = False, fast_index = None): self.dobj = dobj self.chunk_type = chunk_type self.objs = objs self.data_size = data_size self._field_type = field_type self._cache = cache self._fast_index = fast_index So translating: dobj is the YTSlice object, chunk_type = "all" and the objs are equal to the list of UnstructuredMeshes To summarize, the YTSlice _current_chunk attribute is equal to a YTDataChunk instance with attributes of self.dobj equal to the YTSlice instance and self.objs equal to the list of UnstructuredMeshes. However, what we actually pass to the the all (or maybe) important io._read_fluid_selection routine is chunks created from the _chunk_io(dobj) routine. The _chunk_io function also creates a generator; this time a generator that if it was converted into a list would be of length equal to the number of meshes. In more detail it (UnstructuredIndex._chunk_io) passes (in list form): chunks = [YTDataChunk_0 with dobj = YTSlice and objs = [UnstructuredMesh0], YTDataChunk_1 with dobj = YTSlice and objs = [UnstructuredMesh1], ..., YTDataChunk_N-1 with dobj = YTSlice and objs = [UnstructuredMeshN-1] While UnstructuredIndex._chunk_all passes (in list form): chunks = [YTDataChunk with dobj = YTSlice and objs = [UnstructuredMesh0, UnstructuredMesh1, ..., UnstructuredMeshN-1]] Exploring Generators Just when you think you're starting to make serious progress, you run into another new fundamental concept! 
Here's a generator function I think: End of explanation def createFunction(): mylist = range(3) for i in mylist: return i*i my_regular_function = createFunction() print(my_regular_function) Explanation: And here's a regular function: End of explanation for i in my_generator_function: print(i) list_from_gen = list(createGenerator()) print(list_from_gen) print(len(list_from_gen)) slice.show() print(array1) print(array2) slice = yt.SlicePlot(ds2, 'z', [('all','diffused')]) array1 = ad2[('connect1','diffused')] array2 = ad2[('connect2','diffused')] print(array1.shape) print(array2.shape) my_io_handler = yt.frontends.exodus_ii.IOHandlerExodusII(ds2) variables = my_io_handler.handler.variables print(variables) ci = variables['connect1'][:] - 1 ci2 = variables['connect2'][:] - 1 print(ci) print(ci.shape) print(ci2.shape) import numpy as np all_array = np.concatenate((ci, ci2)) print(all_array.shape) print(variables['vals_nod_var1'][-1][all_array]) print(all_array) newarray = ci + ci2 print(newarray.shape) type(ci) print(variables['vals_nod_var1'].shape) print(variables['vals_nod_var1'][1]) print(variables['vals_nod_var1'][-1][:]) print(variables['vals_nod_var1'][-1][:].shape) print(variables['vals_nod_var1'][-1]) print(variables['vals_nod_var1'][-1].shape) print(variables['vals_nod_var1'][-1][ci]) print(variables['vals_nod_var1'][-1][ci].shape) print(variables['vals_nod_var1'][ci]) print(variables['vals_nod_var1'][ci].shape) print(my_io_handler.ds.step) Explanation: Psych! A function with a regular return returns as soon it hits return the first time. End of explanation
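One extra property of generators that matters for the list(...) trick above (a small sketch): a generator can be consumed only once, while the list built from it can be reused.

def squares(n):
    for i in range(n):
        yield i * i

gen = squares(3)
print(list(gen))        # [0, 1, 4]
print(list(gen))        # [] -- the generator is already exhausted

# This is why code like list(self._chunk_all(dobj))[0] materialises the
# generator into a list first: the resulting list can be indexed and reused.
squares_list = list(squares(3))
print(squares_list[0], squares_list)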
12,660
Given the following text description, write Python code to implement the functionality described below step by step Description: Notebook-3 Step1: Multiplication and Division Step2: A challenge for you! Do you think the results of these two operations will be identical? If not, why? Decide what you think the answer is before running the code! Step3: The results are different due to the order in which Python runs the operations. Anyway, remember the importance of parentheses in determining 'precedence' – which operations get done first. You'll see that the code above encompases each formula in print() command on each line; this simply allows us to print out the results of both calculations when the code runs. We'll come back to this later. Step4: is not the same as Step5: Exponents Powers If you use two asterisks instead of one then you'll be performing an exponentiation Step6: That's the same as Step7: So 2**4 is how we get 2-to-the-power-of-4 (the same as multiplying 2 together 4 times). Hopefully this all kind of rings a bell even if you've not studied any maths in a while. Roots So 2**4 is 2-to-the-power-of-4, but how do we get the square root (or fourth root) of a number? Right now, with what we've taught you so far, the best way is to remember that a square root is the same as an exponent of one-half. The fourth root is an exponent of one-quarter. Etc. Step8: Floating Point numbers Warning Step9: Many programming languages (Python used to be one of them) treat whole numbers (1, 8) differently from floating point numbers (1.0, 8.000000000001). In those cases, we get the answer 0 to the first equation because when we use whole numbers in our calculations we got whole numbers back. So 1/8 is obviously not 0, but that's the nearest whole number to 0.125! This is what we mean about computers doing exactly what you tell them Step10: Just to make this clear Step11: A challenge for you! What do you think will be the result of this operation? Work out your answer before running the code. Step12: The answer is 36 as Python would do the multiplication inside the parenthesis first. Now what do you think this will give you? Again, work out your answer before running the code. Step13: In the code below replace the question marks ??? with the appropriate code to produce a remainder of 6. You may need a calculator to help you. Run the code to check your answer (you can change your answer and run the code multiple times to get the right answer). Step14: Operator Precedence The operators +, -, *, **, /, %, etc., and the parentheses are all evaluated (i.e. calculated) according to a set of rules that establish precedence, which is just a fancy way of saying Step15: A challenge for you! Replace the questions marks ??? in the following exercise with the appropriate code to purposufully cause a ZeroDivisionError exception (again, feel free to use a calculator and you can run the code multiple times). Step16: Note Step17: Notice that last ticky bit Step18: The reason for this is that Python is implicitly converting (sometimes called casting) the numbers between the two different data types. This conversion doesn't happen with more complex bits of Python code, but for simple numbers Python tries to be helpful and save you the time and effort. Note Step19: Now let's work out the length of the hypotenuse of a right-angle triangle with the same side lengths as above Step20: Note Step21: Notice the helpful error message after running the code above? 
We'll come back to how to read errors like these in more detail in the next notebook, but again Python is giving us hints about where we might have made a mistake. Escape Club What do you do if your string contains double-quotes and single-quotes? That's where the 'backslash' (\, a kind of 'reverse division' symbol) comes in handy. The backslash is how we 'escape' characters so that we can do 'special' things with them when they should normally do something else. In the example below, the backslash in front of the apostrophe in "I'm" says "Don't treat this as 'normal' single-quote that means the end of the string, treat it as a literal single-quote in the middle of a longer string marked out by single-quotes. Step22: Let's look at this a little more closely Step23: If you run the code above, you'll see another error! Also notice that in the two lines of code, in the first the whole line is in one colour (meaning the computer can see that it's all one string), but in the broken example right before this the text changes colour once we get to "to escape the error..." (meaning that the computer doesn't see it all as one string). The escape symbol crops up in a lot of other circumstances. For example, what if we wanted to tell Python that a string contains a newline (i.e. that the string is split across one or more lines, like our Shakespeare quote above should be)? Remember that programmers are always lazy when given half a chance and so they figured out that the easiest way to mark a newline was \n. They used 'n' because it is fairly easy to remember that that means 'newline', and the backslash 'escapes' us from the simple world where an 'n' is the letter 'n' into the computer's world where 'n' is 'n', but \n is a newline Step24: See how that wraps the text on the \n? Also note that the computer is printing exactly what we told it to do Step25: As you can see from running the code above, it's a bit annoying that they look the same when we print them. But if you run the next lines of code (after thinking about what they might do), you'll see how Python tries to be helpful with its errors Step27: See how the first line of code prints 2016, but the second line of code (which tries to add together a string "2015" and the number 1) gives you an error that lazily tells you about a problem with str (i.e. string) and int (i.e. integer) 'concatentation'. More on concatenation in a minute. Advanced laziness Obviously, having a lot of \n markers would be hard to read and a potential problem if you wanted to copy and paste the text into a different application. If you have a long block of text then you can avoid the whole issue by putting your text inside triple-quotes Step28: Adding with strings (concatenation) As with numbers, there are many things that you can do with strings. The simplest, however, is like addition (which is why it uses a +) Step29: So just like you would do to add two numbers together, we can add "String1" and "String2" together to get "String1String2". But notice that the + operator doesn't insert whitespace (i.e. a ' ' character) or anything else. It just sticks the two strings together exactly as they are. And just like we can add together a whole set of numbers, we can add together a whole set of strings as in the second line beginning "Hey, looks like..." A challenge for you! Replace the questions marks "???" 
in the following exercise with the appropriate code to make it work Step30: Multiplication If you use the multiplication operator (*) on a string then you will multiply the string by the value of the multiplier. Step31: A challenge for you! What do you think will be the output of this code? (Work out your answer before running the code) Step32: Now, why do you think the next example below doesn't work? (Read the error output if you're not sure what's going on.) Step33: What is a variable? So far, everything we've done was about performing some kind of calculation on an integer, float, or string, and then showing the result. Given that a lot of programming doesn't involve solving everything in one easy line, how do you save an answer so that you can re-use it later? Let's start with the first true programming concept Step34: Hmmmm, nothing printed out this time... That's because this time we gave Python a bax with the label "result" in which to put the result of multiplying -2 and 10. Step35: Can you check the data type of your variable result and switch it to float? Step36: Check it out! We assigned the outcome of -2 * 10 to a variable called result; then we did something else (printed out a string); and then we printed out the value of the variable and the computer remembered! This video may help you to further understand the concept of a 'variable' Step37: Cool, both variables have the same value! We assigned the result of 1 * 5 to a variable named myFirstVariable and then we assigned this value to a second variable called mySecondVariable. But why is this called assignment (or, in plain English, copying)? Well what happens when I change the first variable myFirstVariable? Will the second change as well? Step38: Whoa! mySecondVariable didn't change and still remembers what we assigned to in the first time. Basically, we took the myFirstVariable label and attached it to a different box. As the Python Programming Wikibook explains, when you assign a variable you are just pointing this variable to an object (a value) which is stored somewhere in the memory. Python variables are a kind of 'label' (as the YouTube video above illustrates – watch it!). So when assigning new values to already declared variables (i.e. variables that already exist in your code) you are not overwriting the old values but simply "moving" the label from one value to another. That's why in Python variables have a name, a data-type and a value. | Name | Data Type | Value | | Step39: Naming variables How do you choose a variable name (i.e. label) in Python? Here's a short list of the conventions Step41: But this block of code will not Step42: Notice how the notebook has coloured the text so that the '1' in what we wanted to make the variable name stands out? Again, that's Python trying to help us figure out what is going wrong, but it requires that you look closely at the output of the error message. A final warning Remember that we said the string " Some text" and the string "Some text" are different because the space at the start of the string changes everything? The same sort of strict checking is true for variables Step43: As for many issues related to Python's style, it is good practice to always refer to the offical PEP 8 -- Style Guide for Python Code For more examples of Python variables check out also OpenTechSchool's intro to Python Code (general excercises) Now that we've had a taste of the fantastic Python programming world, let's solidify our newly acquired skills with a final round of excercises. 
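Before moving on to the exercises, here is a short illustration (our own variable names, not the notebook's) of the case-sensitivity and "label" behaviour described above:
price = 10
Price = 20             # a different label, because Python is case-sensitive
copy_of_price = price  # points the new label at the same value
price = 30             # moving the label "price" does not affect copy_of_price
print(price, Price, copy_of_price)   # 30 20 10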
Code from scratch Exercise 1 Look at the following example (and its output) Step44: Similar to the example above, in the code cell below Step45: Supported code Replace the question marks ??? in the following exercise with the appropriate code to make it work Exercise 2 Landsat is a bit generic; the correct name is Landsat 8. How would you put together these two different Data Types? Remember what we've seen about casting? Edit the code below to make it work. Step46: Exercise 3 According to its Wikipedia page, Sputnik 1 was a 58 cm diameter polished metal sphere. If a cm = 0.393700787 inches, what was its diameter in inches? Edit the code below to make it work. Step47: Exercise 4 What was its volume (in cubic cm)? # NOTE Step48: Broken code Hmm... something's broken in the following line of code; can you spot the error(s)? Hint Step49: Code (Applied Geo-example) In this exercise you'll dip a toe in the wonderful world of web maps! We are going to create a geographic marker (a pin on the map!) on top of OpenStreetMap (OSM) to visualise King's College's location. To do so we'll have to create a string representing a web URL (that's the address you type in your browser when you surf the web) pointing to the OSM website. Now, as you can see, there are two variables containing King's College's Longitude/Latitude coordinate position. You will need to use them within the variable KCL_position. Unfortunately they are in the wrong data type! Also, there might be something missing in the code. HINT
Python Code: 3 - 2 + 10 Explanation: Notebook-3: The Basics In this first proper programming lesson we are going to use the Python interpreter to perform simple operations, like numeric calculations that you would normally do on a calculator and slightly more advanced operations on words. The interpreter is what reads your code and converts that into the binary instructions that the computer can understand – think of it as translator between you and the low-level components (operating system, file system, network, display, etc.) of your machine. In these notebooks the interpreter is in the background and runs when you click the 'Run' button in a code cell (in future we'll see how we can use the Python interpreter with scripts of many lines of code). As we progress through the excercises we'll encounter a range of new programming concepts. At this stage don't worry too much about memorizing all of them; it is through repeated used and practice that they will eventually come naturally. Instead, here we want you to try to develop a sense of what is going on and keep the concepts in mind when reading the examples... but the really important thing is to try to apply them while doing the excercises. To help you get started, we have prepared an introductory video to aid your learning about the basic concepts in Python. Please note that by the time you have finished watching this video, you might need to re-launch binder. So if you have modified any part of your notebook, make sure to download the current version to your browser (the introductory video talks about how to do this). How the exercises work Lastly, a few words on the excercises; in every notebook we've tried to develop a mix of exercises and examples to show you how Python works. There are four 'levels' of exercise: Code from Scratch: using examples already encountered in the lesson as a starting point, you will write some very simple code (1-3 lines at most) into a blank code cell. Supported Code: using examples and exercises already encountered in the notebook, you will fill in somes 'gaps' in the lines of code where we've replaced some critical bits with (???) in order to make it work successfully. Broken Code: here we have deliberately broken something (sometimes, more than one thing) in the code and you will need to fix these before it will run successfully. Being able to debug code is an integral part of being a programmer, and this will also require you to have an eye for detail because tiny typos and simple sloppiness are enough to break a computer program. Applied Geo-Example: this will be a much more advanced bit of code that will (we hope) run successfully right from the start. The point of these examples is to demonstrate a real-world application of a concept covered in the lesson and we are asking you to spend some time puzzling over the code and adding comments about what you think is going on. Being able to read someone else's code and to make sense of what is going on is another crucial skill for a programmer. We're not expecting you to master the fourth level at this stage; in fact, some of the examples might be challenging even after a full year of classes. However, we think it's important to try to show you where we're trying to go as well as the steps involved in how we get there. It's like trying to follow directions from someone who only tells you 'turn left', 'turn right', 'go straight' – it's a lot easier to fill in the gaps and to understand what's going on if they say "We're headed to London" first! 
Simple Operations Numeric Operations You already saw a really simple example of calculating the mean in Notebook 1, but let's recall that you can easily use Python to like a calculator. Run the code already present in the code cells and make sure you understand the output. Basic Arithmetic Addition and Subtraction End of explanation 2 * 5 10 / 5 Explanation: Multiplication and Division End of explanation print(4 * (2 - 8) + 2) print(4 * 2 - 8 + 2) Explanation: A challenge for you! Do you think the results of these two operations will be identical? If not, why? Decide what you think the answer is before running the code! End of explanation (3 * 2) - 10 Explanation: The results are different due to the order in which Python runs the operations. Anyway, remember the importance of parentheses in determining 'precedence' – which operations get done first. You'll see that the code above encompases each formula in print() command on each line; this simply allows us to print out the results of both calculations when the code runs. We'll come back to this later. End of explanation 3 * (2 - 10) Explanation: is not the same as: End of explanation 2 ** 4 Explanation: Exponents Powers If you use two asterisks instead of one then you'll be performing an exponentiation: End of explanation 2 * 2 * 2 * 2 Explanation: That's the same as: End of explanation print(2**8) # 2-to-the-8 print(256**(1.0/8.0)) # 256-to-the-one-eighth Explanation: So 2**4 is how we get 2-to-the-power-of-4 (the same as multiplying 2 together 4 times). Hopefully this all kind of rings a bell even if you've not studied any maths in a while. Roots So 2**4 is 2-to-the-power-of-4, but how do we get the square root (or fourth root) of a number? Right now, with what we've taught you so far, the best way is to remember that a square root is the same as an exponent of one-half. The fourth root is an exponent of one-quarter. Etc. End of explanation print(1/8) print(1.0/8.0) Explanation: Floating Point numbers Warning: the following two equations are not always the same! End of explanation print(9/2) print(9%2) Explanation: Many programming languages (Python used to be one of them) treat whole numbers (1, 8) differently from floating point numbers (1.0, 8.000000000001). In those cases, we get the answer 0 to the first equation because when we use whole numbers in our calculations we got whole numbers back. So 1/8 is obviously not 0, but that's the nearest whole number to 0.125! This is what we mean about computers doing exactly what you tell them: sometimes you want 1/8 to equal 0, other times 0.125, and the computer doesn't know which answer you need unless you are very clear about what you want! Modulo While we're on the topic of division, how do you get the 'remainder' of 9/2 – the bit left over from the division? To get this there is a different symbol called the modulo operator which is a marked by a % sign. According in fact to the Python Documentation The % (modulo) operator yields the remainder from the division of the first argument by the second. Using the modulo operator will thus return the remainder: End of explanation 8%2 Explanation: Just to make this clear: 2 goes into 9 exactly 4 times and then you have 1 left over. So: (4 * 2) + 1 = 9. For division which yields no remainder the operation will return a value of 0. End of explanation print( (2*3) ** 2 ) Explanation: A challenge for you! What do you think will be the result of this operation? Work out your answer before running the code. 
End of explanation print( (2*3) ** 2 / (12 % 5) ) Explanation: The answer is 36 as Python would do the multiplication inside the parenthesis first. Now what do you think this will give you? Again, work out your answer before running the code. End of explanation (2+4) ** 2 % (120 / ???) Explanation: In the code below replace the question marks ??? with the appropriate code to produce a remainder of 6. You may need a calculator to help you. Run the code to check your answer (you can change your answer and run the code multiple times to get the right answer). End of explanation (30 + 2 ) / 0 Explanation: Operator Precedence The operators +, -, *, **, /, %, etc., and the parentheses are all evaluated (i.e. calculated) according to a set of rules that establish precedence, which is just a fancy way of saying: which calculations do we do first? There's a full list here but since that also lists a lot of operators we've not yet encountered it's easiest to summarise this in a table as follows: \begin{array}{cl} \hline Operator & Description \ \hline (\ldots) & Parentheses \ * & Exponentiation \ +x, -x & Positive, Negative \ , /, \% & Multiplication, Division, Remainder \ +, - & Addition, Subtraction \ \hline \end{array} So parentheses trump everything else, then exponents (so 2**5.0/2 is not the same as 2**(5.0/2)), then positive/negative, and then multiplication/division before we finally get to addition and subtraction. Division by Zero Also pay attention when dividing by zero. Python won't be able to compute any value and will return an error (which is sometimes also called an exception): End of explanation (1345 - 20 ) / ((- 3) ** 2 - ???) Explanation: A challenge for you! Replace the questions marks ??? in the following exercise with the appropriate code to purposufully cause a ZeroDivisionError exception (again, feel free to use a calculator and you can run the code multiple times). End of explanation print(7 * 4) print(7 * 4.0) print(20 / 5) print(20.0 / 5) print(22 / 7) print(22.0 / 7) print( int(22.0/7) ) Explanation: Note: the error message is Python's way of telling you what went wrong in the operation. We'll get to these in more detail in a later lesson, but you'll notice that Python always tries to tell you what it thinks went wrong and this is the starting point for all debugging. When something goes wrong in a program this error is like the first clue that puts you on the trail of the wrongdoer: sometimes one clue will lead to another, and another, and another... sometimes one clue is all you need. But regardless, if you ignore these clues and just put up your hand and say "It doesn't work!" then we're not going to be very impressed. We expect you to be able to explain the problem to us before we will help you with it. More on this later. More about Ints and Floats So we've seen some examples above of maths with integers (i.e. "whole" numbers) and maths with floats (i.e. "decimal" numbers). Both can be positive and negative (e.g. -1 or -254.32672). Programmers, being lazy, often call integers ints because it's faster and requires less typing. Any operation involving a mix of floats and integers will always yeld a float. For example, compare the output for the code below, but note how the resulting data type varies with the operation. End of explanation print(6 + 3) print(6.0 + 3) Explanation: Notice that last ticky bit: we'll get to what int(...) means later, but if you remember that programmers are lazy then you might realise that it must be short for integer. 
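As a small illustrative aside (not part of the original notebook), int() simply chops off the decimal part rather than rounding, and float() converts in the other direction:
print(int(22.0 / 7))    # 3   -- the fractional part is discarded, not rounded
print(round(22.0 / 7))  # 3   -- rounding happens to give 3 here as well
print(float(3))         # 3.0 -- casting a whole number up to a float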
End of explanation (10 * 20)/2 Explanation: The reason for this is that Python is implicitly converting (sometimes called casting) the numbers between the two different data types. This conversion doesn't happen with more complex bits of Python code, but for simple numbers Python tries to be helpful and save you the time and effort. Note: the integer-to-float conversion might seem a bit of a pedantic distinction, but imagine if you were programming a banking application: you would definitely pay attention to all those decimals! Some final maths questions... Let's start with calculating the area of a triangle. Here's the equation for the area of a triangle: $$A = \frac{l_{base} * l_{height}}{2}$$ So lets work out for a triangle that has sides of length 10 and 20! If you type the maths correctly into the empty block below you should get the answer: 100 End of explanation print("I'm a string of text!") print('Me too! *&($£(£_@~{}|?<>$)') Explanation: Now let's work out the length of the hypotenuse of a right-angle triangle with the same side lengths as above: $$l = \sqrt{x^2 + y^2}$$ You might remember this as the Pythagorean Theorem, and you should get an answer of about 22.4. Let's move on to two last harder ones. Write a line of code to work out the area of a circle of radius 6. Here's the formula: $$A = \pi r^2$$ and you should get something around 113.1 as the area. Use 3.1412 for the constant pi. Now work out the approximate radius of a sphere whose volume is 85. To do this, you'll need to work backwards from the formula to calculate a volume... this might seem a little rough at first, but remembering how to rearrange a formula is really helpful for computational thinking! So what do you have to do to this formula: $$V = \frac{4}{3} \pi r^3$$ Here's a hint to get you started: $$ r^3 = \frac{3}{4 \pi} V $$ Also: remember that you're going to need to work with decimal numbers, not whole numbers and write your code accordingly! You should get a final answer of about 2.7. Note: there are a lot of different ways that you could write this formula as code and still get the right answer. Getting the right answer is 50% of the job. The remaining 50% is about doing it in a way that is elegant and easy to read... as get further into the term we'll point out how elegance and legibility (also: commenting) matter. String Operations OK, so that's the basics of numbers. What about text? How long is a piece of string? In most programming languages, text and words are called strings, which is really a fancy word to say a sequence of characters enclosed in single- or double-quotes (' or "). This might seem like stating the bleedin' obvious but this is a really, really important idea... "235op!u\$nlkgfd8 wp8ty fgdoy8p waklj ag9up0" is a string. So is: "If music be the food of love, play on; Give me excess of it, that, surfeiting, The appetite may sicken, and so die." (Twelfth Night, Act 1, Scene 1, 1–3) The thing is that computers can't automatically distinguish between Shakespeare and the work of a monkey hitting keys at random. As long as they are both marked out by single- or double-quotes then as far as the computer is concerned they are strings. So even to ask the computer how many words there in the first 3 lines of Twelfth Night means we have to 'teach' the computer what a word is by giving it rules to recognise the start or end of one, and even how to recognise the start and end of a line so that it can find the first three of them! End of explanation print("I'm contained within double quotes. 
I'll be fine.") print('I'm sourrounded by single-quotes and I contain a single-quote so there may be trouble ahead...') Explanation: Note: As before, we'll be using the print() command on many lines of code; that is just a way to tell the Python interpreter to output somthing for us to see. Single- or Double-Quotes? Although you can technically use either type of quote (' or "), it is generally better to use double-quotes as a way to prevent errors if a single-quote is contained in your string: End of explanation print('I\'m using the backslash.') print('I\'m also using the backslash \'\\\' to escape the error normally caused by having single-quotes.') Explanation: Notice the helpful error message after running the code above? We'll come back to how to read errors like these in more detail in the next notebook, but again Python is giving us hints about where we might have made a mistake. Escape Club What do you do if your string contains double-quotes and single-quotes? That's where the 'backslash' (\, a kind of 'reverse division' symbol) comes in handy. The backslash is how we 'escape' characters so that we can do 'special' things with them when they should normally do something else. In the example below, the backslash in front of the apostrophe in "I'm" says "Don't treat this as 'normal' single-quote that means the end of the string, treat it as a literal single-quote in the middle of a longer string marked out by single-quotes. End of explanation print('I\'m also using the backslash \'\\' to escape the error normally caused by having single-quotes.') Explanation: Let's look at this a little more closely: * The first line is easy: the backslash in front of the single-quote in "I'm" allows us to use single-quotes to mark the start and end of the string. This is a silly example since we could also have written "I'm using the backslash" but you get the point. * The second line gets a little more complicated: you'll see that we've now escaped three single-quotes (there are three occurences of "\'"). But what's happening with "\\"? Well, we need an extra backslash in front of the backslash so that the computer knows to print a literal backslash (we are 'escaping' the normal function of a backslash to escape the character immediately after it) instead of reading it as escaping the one in front of the single-quote. That's pretty weird, but just to show you what's happening here it is without that extra backslash: End of explanation print("If music be the food of love, play on;\n Give me excess of it, that, surfeiting ,\n The appetite may sicken, and so die.") Explanation: If you run the code above, you'll see another error! Also notice that in the two lines of code, in the first the whole line is in one colour (meaning the computer can see that it's all one string), but in the broken example right before this the text changes colour once we get to "to escape the error..." (meaning that the computer doesn't see it all as one string). The escape symbol crops up in a lot of other circumstances. For example, what if we wanted to tell Python that a string contains a newline (i.e. that the string is split across one or more lines, like our Shakespeare quote above should be)? Remember that programmers are always lazy when given half a chance and so they figured out that the easiest way to mark a newline was \n. 
They used 'n' because it is fairly easy to remember that that means 'newline', and the backslash 'escapes' us from the simple world where an 'n' is the letter 'n' into the computer's world where 'n' is 'n', but \n is a newline: End of explanation print("2016") print(2016) Explanation: See how that wraps the text on the \n? Also note that the computer is printing exactly what we told it to do: I kept a space between the \n and the start of the next line. If you squint, then you can see that lines 2 and 3 are indented by the width of one space character. There's also an extra space after 'surfeiting' before the comma. Why details matters We say this a lot later too, but you might as well start learning this fact now: spaces in a string matter. To a computer " A string" and "A string" are not the same. Notice that there is a single space in front of the 'A'. As a human being we tend to just skip over that space (especially if it's hard to see), but to a computer one string starts with 'A' and the other with ' ', so they are completely different. Further, numbers and strings are not the interachangeable: "2016" is not the same as 2016. The first is a string that happens to contain the characters 2, 0, 1, and 6. The second is an integer number one larger than 2015. End of explanation 2015 + 1 "2015" + 1 Explanation: As you can see from running the code above, it's a bit annoying that they look the same when we print them. But if you run the next lines of code (after thinking about what they might do), you'll see how Python tries to be helpful with its errors: End of explanation print(Hi there, this time, I won't need those annoying newline characters. I also don't have problems with "quotes" or 'quotes'! ) Explanation: See how the first line of code prints 2016, but the second line of code (which tries to add together a string "2015" and the number 1) gives you an error that lazily tells you about a problem with str (i.e. string) and int (i.e. integer) 'concatentation'. More on concatenation in a minute. Advanced laziness Obviously, having a lot of \n markers would be hard to read and a potential problem if you wanted to copy and paste the text into a different application. If you have a long block of text then you can avoid the whole issue by putting your text inside triple-quotes: End of explanation print("String1" + "String2") print("Hey, looks like" + " I'm " + "adding "+ "6" +" strings" + " together") Explanation: Adding with strings (concatenation) As with numbers, there are many things that you can do with strings. The simplest, however, is like addition (which is why it uses a +): when you add strings together you get a new, longer string that contains the characters of the original strings. This is usually called concatenation: End of explanation print("This is code " ??? " camp's notebook is " ??? " number " + "2.") Explanation: So just like you would do to add two numbers together, we can add "String1" and "String2" together to get "String1String2". But notice that the + operator doesn't insert whitespace (i.e. a ' ' character) or anything else. It just sticks the two strings together exactly as they are. And just like we can add together a whole set of numbers, we can add together a whole set of strings as in the second line beginning "Hey, looks like..." A challenge for you! Replace the questions marks "???" in the following exercise with the appropriate code to make it work End of explanation print("I like Python a lot" + "!" 
* 3) print("Foo " * 25) Explanation: Multiplication If you use the multiplication operator (*) on a string then you will multiply the string by the value of the multiplier. End of explanation 20 * '5' Explanation: A challenge for you! What do you think will be the output of this code? (Work out your answer before running the code) End of explanation print("5" * "2") Explanation: Now, why do you think the next example below doesn't work? (Read the error output if you're not sure what's going on.) End of explanation result = -2 * 10 Explanation: What is a variable? So far, everything we've done was about performing some kind of calculation on an integer, float, or string, and then showing the result. Given that a lot of programming doesn't involve solving everything in one easy line, how do you save an answer so that you can re-use it later? Let's start with the first true programming concept: the variable. If you have studied other programming languages before then the concept of the variable will be so familiar to you that it's hard to remember even having to learn it! Some people think of a variable simply as "a box" that contains values that we want to store and retrieve in the future. However, we think it might be more useful to think of a variable as the label of a box in which we put something: for programmers, the label is how we remember what we put in the box and where we put it. Let me try to explain: if you watched the introductory videos in the previous notebook, then you'll remember that the computer stores 'information' in lots of places (in memory, on the hard drive, etc.), but it doesn't use an addressing system that you or I would be able to read. Instead, it will use a long, complicated number that tells the computer "Go look in this place for what to do when the mouse is clicked" or "Go look what to do when someone asks you to add together 1 and 5". In the same way that Python translates between what we humans can deal with and what the computer can deal with, it also translates between the 'labels' that we use to refer to different boxes storing different things and how the computer finds what we actually put there. Here's an example: End of explanation result = -2 * 10 print("I'm wasting space...") print(result) Explanation: Hmmmm, nothing printed out this time... That's because this time we gave Python a bax with the label "result" in which to put the result of multiplying -2 and 10. End of explanation # First, we need to check the data type of our variable using a function called "type" type(result) # Rewrite the existing variable or put it as a new variable result = float(result) # or result_float = float(result_float) type(result) Explanation: Can you check the data type of your variable result and switch it to float? End of explanation myFirstVariable = 1 * 5 print(myFirstVariable) mySecondVariable = myFirstVariable print(mySecondVariable) Explanation: Check it out! We assigned the outcome of -2 * 10 to a variable called result; then we did something else (printed out a string); and then we printed out the value of the variable and the computer remembered! 
This video may help you to further understand the concept of a 'variable': <a href="http://www.youtube.com/watch?feature=player_embedded&v=_sVtcPgHAjI" target="_blank"><img src="http://img.youtube.com/vi/_sVtcPgHAjI/0.jpg" alt="IMAGE ALT TEXT HERE" width="480" height="360" border="10" /></a> Copying variables And variables can be copied using the = operator in the same way that the result of the maths operation above could be assigned to a new variable called result. End of explanation myFirstVariable = 2 print(myFirstVariable) print(mySecondVariable) Explanation: Cool, both variables have the same value! We assigned the result of 1 * 5 to a variable named myFirstVariable and then we assigned this value to a second variable called mySecondVariable. But why is this called assignment (or, in plain English, copying)? Well what happens when I change the first variable myFirstVariable? Will the second change as well? End of explanation cheers ??? name ??? " is awesome!" print(cheers) cheers = name + " is awesome!" print(cheers) Explanation: Whoa! mySecondVariable didn't change and still remembers what we assigned to in the first time. Basically, we took the myFirstVariable label and attached it to a different box. As the Python Programming Wikibook explains, when you assign a variable you are just pointing this variable to an object (a value) which is stored somewhere in the memory. Python variables are a kind of 'label' (as the YouTube video above illustrates – watch it!). So when assigning new values to already declared variables (i.e. variables that already exist in your code) you are not overwriting the old values but simply "moving" the label from one value to another. That's why in Python variables have a name, a data-type and a value. | Name | Data Type | Value | |:---------------:|:---------:|:-----:| | myFirstVariable | integer | 1 | A challenge for you! In the code cell below: a) define a variable called "name" and assign it a value of your choice, then b) print it with the "print" command For example: name = "Peter" print(name) Replace the questions marks ??? in the following lines of code with the appropriate code to make it work (i.e. not produce an error). End of explanation famous_geographer = "Mercator" print(famous_geographer) Explanation: Naming variables How do you choose a variable name (i.e. label) in Python? Here's a short list of the conventions: * names may contain letters and/or numbers (e.g. myVar2) * names cannot begin with a number (e.g. 2myVar) * names may contain an underscore ("_") (e.g. my_var_2) * names can be of any length (e.g. m2 or mySecondVariableIsReallyLong) * you cannot use Python keywords (e.g. print) So this block of code below will run: End of explanation 1st_geographic_projection = Mercator's most famous Geographic Projection is a cylindrical map projection that retains the ability to ability to represent lines of constant course (loxodromes) print(1st_geographic_projection) Explanation: But this block of code will not: End of explanation this_var = "Mercator" This_var = "Galileo" print(this_var) print(This_var) Explanation: Notice how the notebook has coloured the text so that the '1' in what we wanted to make the variable name stands out? Again, that's Python trying to help us figure out what is going wrong, but it requires that you look closely at the output of the error message. A final warning Remember that we said the string " Some text" and the string "Some text" are different because the space at the start of the string changes everything? 
The same sort of strict checking is true for variables: in short, Python is case-sensitive! This means that this_var and This_var are two different variables and can refer to two different boxes: End of explanation old_satellite = 'Sputnik 1' old_satellite_description = " was the first artificial Earth satellite, launched from the Soviet Union on the 4th of October, 1957." print("Здравствуйте! My name is " + old_satellite) print(old_satellite + old_satellite_description) Explanation: As for many issues related to Python's style, it is good practice to always refer to the offical PEP 8 -- Style Guide for Python Code For more examples of Python variables check out also OpenTechSchool's intro to Python Code (general excercises) Now that we've had a taste of the fantastic Python programming world, let's solidify our newly acquired skills with a final round of excercises. Code from scratch Exercise 1 Look at the following example (and its output): End of explanation new_satellite = 'Landsat' print(new_satellite) print("The new satellite is " + new_satellite + " and the old satellite is " + old_satellite) Explanation: Similar to the example above, in the code cell below: 1. define a variable named new_satellite with value landsat 2. try to print its name 3. then try to concatenate its name with another variable description of your choice, 4. and print them. End of explanation pr???nt("Hello there " + ???(new_satellite) ) print("Hello there " + new_satellite + " 8" ) Explanation: Supported code Replace the questions marks ??? in the following exercise with the appropriate code to make it work Exercise 2 Landsat is a bit generic, the correct name is Landsat 8. How would you put together these two different Data Types? Remember what we've seen about casting? Edit the code below to make it work. End of explanation diameter_cm = 58 cm2inches = 0.393700787 diameter_inches = diameter_cm ??? cm2inches print(diameter_inches) diameter_cm = 58 cm2inches = 0.393700787 diameter_inches = diameter_cm * cm2inches print(diameter_inches) Explanation: Excercise 3 According to its Wikipedia page Sputnik 1 was a 58 cm diameter polished metal sphere. If a cm = 0.393700787 inches what was its diameter in inches? Edit the code below to make it work. End of explanation import math PI = math.pi radius_cm = diameter_cm/2 volume = (4/3) ??? PI ??? (radius_cm ??? 3 ) print(volume) import math PI = math.pi radius_cm = diameter_cm/2 volume = (4/3) * PI * (radius_cm ** 3 ) print(volume) Explanation: Exercise 4 Wat was its volume (in cubic cm)? # NOTE: the first line below we are "importing" the math module and assigning to a variable PI the value of pi (3.14...). Edit the code to make it work. End of explanation print(new_satellite + "has a Near Infrared (NI), \ which band captures light in the wavelength from "+ 770 + " to " + 900 + " nanometers." ) # The error message indicates a type error, as we can only concatenate string # The code should work by including "" to the numbers print(new_satellite + " has a Near Infrared (NI), \ which band captures light in the wavelength from "+ "770" + " to " + "900" + " nanometers." ) Explanation: Broken code mmh..something's broken in the following line of code; can you spot the error(s)? Hint: remember what we said about different data types... End of explanation # King's College coordinates # What format are they in? Does it seem appropriate? longitude = -0.11596798896789551 latitude = 51.51130657591914 #cast the floats to strings ??? = str(longitude) lat = str(???) 
# King's College marker KCL_position = "https://www.openstreetmap.org/?mlat="+lat+"8&mlon="+lon+"#map=15/"+lat+"/"+lon print(KCL_position) # King's College coordinates # What format are they in? Does it seem appropriate? longitude = -0.11596798896789551 latitude = 51.51130657591914 #cast the floats to strings lon = str(longitude) lat = str(latitude) # King's College marker KCL_position = "https://www.openstreetmap.org/?mlat="+lat+"8&mlon="+lon+"#map=15/"+lat+"/"+lon print(KCL_position) Explanation: Code (Applied Geo-example) In this excercise you'll dip a toe in the wonderful world of web maps! We are going to create a geographic marker (a pin on the map!) on top of OpenStreetMap (OSM) to visualise King's College location. To do so we'll have to create a string representing a web URL (that's the address you type in your browser when your surf the web) pointing to OSM website. Now, as you can see there are two variables containing King's College Longitute/Latitude coordinate position. You will need to use them within the variable KCL_position. Unfortunately they are in the wrong data type! Also, there might be something missing in the code. HINT: To convert (cast) a float to a string use the str() function (we haven't talked about functions yet, but see if you can work it out). You'll also need to think about how to concatenate strings? Replace the ??? in the code below to make it work. End of explanation
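A possible variation on the URL-building step above (an illustrative sketch, not the notebook's official answer) is to let str.format() handle the float-to-string conversion instead of calling str() by hand:
longitude = -0.11596798896789551
latitude = 51.51130657591914
# str.format() converts the floats to text for us, so no explicit casting is needed
KCL_position = "https://www.openstreetmap.org/?mlat={0}&mlon={1}#map=15/{0}/{1}".format(latitude, longitude)
print(KCL_position)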
12,661
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-2', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: BNU Source ID: SANDBOX-2 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:41 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
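For readers unfamiliar with the ES-DOC pattern repeated in the cells above, the short sketch below shows how one of these questionnaire properties would actually be completed. The property id is copied from an earlier cell and the chosen value is a placeholder picked from that cell's listed valid choices, not a statement about any real model configuration.
# Illustrative placeholder only -- not a real model description.
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
DOC.set_value("Freshwater flux")  # must be one of the valid choices listed in that cell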
12,662
Given the following text description, write Python code to implement the functionality described below step by step Description: Angelic Search Search using angelic semantics (a hierarchical search), where the agent chooses the implementation of the HLA's. <br> The algorithm's input is Step1: The Angelic search algorithm consists of three parts. - Search using angelic semantics - Decompose - a search in the space of refinements, in a similar way to hierarchical search Searching using angelic semantics Find the reachable set (optimistic and pessimistic) of the sequence of angelic HLAs in initialPlan If the optimistic reachable set doesn't intersect the goal, then there is no solution If the pessimistic reachable set intersects the goal, then we call decompose, in order to find the sequence of actions that leads us to the goal. If the optimistic reachable set intersects the goal, but the pessimistic doesn't, we do some further refinements, in order to see if there is a sequence of actions that achieves the goal. Search in space of refinements Create a search tree that has the action as its root and its refinements as its children Extend the frontier by adding each refinement, so that we keep looping until we find all primitive actions If we achieve that, we return the path of the solution (search tree), else there is no solution and we return None. Step2: Decompose Recursively finds the sequence of states and actions that lead us from the initial state to the goal. For each of the above actions we find their refinements, if they are not primitive, by calling the angelic_search function. If there are no refinements, return None Step3: Example Suppose that somebody wants to get to the airport. The possible ways to do so are either to get a taxi or to drive to the airport. <br> Those two actions have some preconditions and some effects. If you get the taxi, you need to have cash, whereas if you drive you need to have a car. <br> Thus we define the following hierarchy of possible actions. hierarchy Step4: the possible actions are the following Step5: Suppose that (our preconditions are that) we are at Home and we have cash and a car, and our goal is to get to SFO and maintain our cash, and our possible actions are the above. <br> Then our problem is Step6: An agent gives us some approximate information about the plan we will follow Step7: We want to find the optimistic and pessimistic reachable set of initialPlan when applied to the problem Step8: Refinements Step9: Run the angelic search Top level call Step10: Example 2 Step11: Example 3 Sometimes there is no plan that achieves the goal!
Python Code: from planning import * from notebook import psource Explanation: Angelic Search Search using angelic semantics (is a hierarchical search), where the agent chooses the implementation of the HLA's. <br> The algorithms input is: problem, hierarchy and initialPlan - problem is of type Problem - hierarchy is a dictionary consisting of all the actions. - initialPlan is an approximate description(optimistic and pessimistic) of the agents choices for the implementation. <br> initialPlan contains a sequence of HLA's with angelic semantics End of explanation psource(Problem.angelic_search) Explanation: The Angelic search algorithm consists of three parts. - Search using angelic semantics - Decompose - a search in the space of refinements, in a similar way with hierarchical search Searching using angelic semantics Find the reachable set (optimistic and pessimistic) of the sequence of angelic HLA in initialPlan If the optimistic reachable set doesn't intersect the goal, then there is no solution If the pessimistic reachable set intersects the goal, then we call decompose, in order to find the sequence of actions that lead us to the goal. If the optimistic reachable set intersects the goal, but the pessimistic doesn't we do some further refinements, in order to see if there is a sequence of actions that achieves the goal. Search in space of refinements Create a search tree, that has root the action and children it's refinements Extend frontier by adding each refinement, so that we keep looping till we find all primitive actions If we achieve that we return the path of the solution (search tree), else there is no solution and we return None. End of explanation psource(Problem.decompose) Explanation: Decompose Finds recursively the sequence of states and actions that lead us from initial state to goal. For each of the above actions we find their refinements,if they are not primitive, by calling the angelic_search function. If there are not refinements return None End of explanation library = { 'HLA': ['Go(Home,SFO)', 'Go(Home,SFO)', 'Drive(Home, SFOLongTermParking)', 'Shuttle(SFOLongTermParking, SFO)', 'Taxi(Home, SFO)'], 'steps': [['Drive(Home, SFOLongTermParking)', 'Shuttle(SFOLongTermParking, SFO)'], ['Taxi(Home, SFO)'], [], [], []], 'precond': [['At(Home) & Have(Car)'], ['At(Home)'], ['At(Home) & Have(Car)'], ['At(SFOLongTermParking)'], ['At(Home)']], 'effect': [['At(SFO) & ~At(Home)'], ['At(SFO) & ~At(Home) & ~Have(Cash)'], ['At(SFOLongTermParking) & ~At(Home)'], ['At(SFO) & ~At(LongTermParking)'], ['At(SFO) & ~At(Home) & ~Have(Cash)']] } Explanation: Example Suppose that somebody wants to get to the airport. The possible ways to do so is either get a taxi, or drive to the airport. <br> Those two actions have some preconditions and some effects. If you get the taxi, you need to have cash, whereas if you drive you need to have a car. <br> Thus we define the following hierarchy of possible actions. 
hierarchy End of explanation go_SFO = HLA('Go(Home,SFO)', precond='At(Home)', effect='At(SFO) & ~At(Home)') taxi_SFO = HLA('Taxi(Home,SFO)', precond='At(Home)', effect='At(SFO) & ~At(Home) & ~Have(Cash)') drive_SFOLongTermParking = HLA('Drive(Home, SFOLongTermParking)', 'At(Home) & Have(Car)','At(SFOLongTermParking) & ~At(Home)' ) shuttle_SFO = HLA('Shuttle(SFOLongTermParking, SFO)', 'At(SFOLongTermParking)', 'At(SFO) & ~At(LongTermParking)') Explanation: the possible actions are the following: End of explanation prob = Problem('At(Home) & Have(Cash) & Have(Car)', 'At(SFO) & Have(Cash)', [go_SFO, taxi_SFO, drive_SFOLongTermParking,shuttle_SFO]) Explanation: Suppose that (our preconditionds are that) we are Home and we have cash and car and our goal is to get to SFO and maintain our cash, and our possible actions are the above. <br> Then our problem is: End of explanation angelic_opt_description = Angelic_HLA('Go(Home, SFO)', precond = 'At(Home)', effect ='$+At(SFO) & $-At(Home)' ) angelic_pes_description = Angelic_HLA('Go(Home, SFO)', precond = 'At(Home)', effect ='$+At(SFO) & ~At(Home)' ) initialPlan = [Angelic_Node(prob.init, None, [angelic_opt_description], [angelic_pes_description])] Explanation: An agent gives us some approximate information about the plan we will follow: <br> (initialPlan is an Angelic Node, where: - state is the initial state of the problem, - parent is None - action: is a list of actions (Angelic HLA's) with the optimistic estimators of effects and - action_pes: is a list of actions (Angelic HLA's) with the pessimistic approximations of the effects InitialPlan End of explanation opt_reachable_set = Problem.reach_opt(prob.init, initialPlan[0]) pes_reachable_set = Problem.reach_pes(prob.init, initialPlan[0]) print([x for y in opt_reachable_set.keys() for x in opt_reachable_set[y]], '\n') print([x for y in pes_reachable_set.keys() for x in pes_reachable_set[y]]) Explanation: We want to find the optimistic and pessimistic reachable set of initialPlan when applied to the problem: Optimistic/Pessimistic reachable set End of explanation for sequence in Problem.refinements(go_SFO, prob, library): print (sequence) print([x.__dict__ for x in sequence ], '\n') Explanation: Refinements End of explanation plan= Problem.angelic_search(prob, library, initialPlan) print (plan, '\n') print ([x.__dict__ for x in plan]) Explanation: Run the angelic search Top level call End of explanation library_2 = { 'HLA': ['Go(Home,SFO)', 'Go(Home,SFO)', 'Bus(Home, MetroStop)', 'Metro(MetroStop, SFO)' , 'Metro(MetroStop, SFO)', 'Metro1(MetroStop, SFO)', 'Metro2(MetroStop, SFO)' ,'Taxi(Home, SFO)'], 'steps': [['Bus(Home, MetroStop)', 'Metro(MetroStop, SFO)'], ['Taxi(Home, SFO)'], [], ['Metro1(MetroStop, SFO)'], ['Metro2(MetroStop, SFO)'],[],[],[]], 'precond': [['At(Home)'], ['At(Home)'], ['At(Home)'], ['At(MetroStop)'], ['At(MetroStop)'],['At(MetroStop)'], ['At(MetroStop)'] ,['At(Home) & Have(Cash)']], 'effect': [['At(SFO) & ~At(Home)'], ['At(SFO) & ~At(Home) & ~Have(Cash)'], ['At(MetroStop) & ~At(Home)'], ['At(SFO) & ~At(MetroStop)'], ['At(SFO) & ~At(MetroStop)'], ['At(SFO) & ~At(MetroStop)'] , ['At(SFO) & ~At(MetroStop)'] ,['At(SFO) & ~At(Home) & ~Have(Cash)']] } plan_2 = Problem.angelic_search(prob, library_2, initialPlan) print(plan_2, '\n') print([x.__dict__ for x in plan_2]) Explanation: Example 2 End of explanation library_3 = { 'HLA': ['Shuttle(SFOLongTermParking, SFO)', 'Go(Home, SFOLongTermParking)', 'Taxi(Home, SFOLongTermParking)', 'Drive(Home, SFOLongTermParking)', 
'Drive(SFOLongTermParking, Home)', 'Get(Cash)', 'Go(Home, ATM)'], 'steps': [['Get(Cash)', 'Go(Home, SFOLongTermParking)'], ['Taxi(Home, SFOLongTermParking)'], [], [], [], ['Drive(SFOLongTermParking, Home)', 'Go(Home, ATM)'], []], 'precond': [['At(SFOLongTermParking)'], ['At(Home)'], ['At(Home) & Have(Cash)'], ['At(Home)'], ['At(SFOLongTermParking)'], ['At(SFOLongTermParking)'], ['At(Home)']], 'effect': [['At(SFO)'], ['At(SFO)'], ['At(SFOLongTermParking) & ~Have(Cash)'], ['At(SFOLongTermParking)'] ,['At(Home) & ~At(SFOLongTermParking)'], ['At(Home) & Have(Cash)'], ['Have(Cash)'] ] } shuttle_SFO = HLA('Shuttle(SFOLongTermParking, SFO)', 'Have(Cash) & At(SFOLongTermParking)', 'At(SFO)') prob_3 = Problem('At(SFOLongTermParking) & Have(Cash)', 'At(SFO) & Have(Cash)', [shuttle_SFO]) # optimistic/pessimistic descriptions angelic_opt_description = Angelic_HLA('Shuttle(SFOLongTermParking, SFO)', precond = 'At(SFOLongTermParking)', effect ='$+At(SFO) & $-At(SFOLongTermParking)' ) angelic_pes_description = Angelic_HLA('Shuttle(SFOLongTermParking, SFO)', precond = 'At(SFOLongTermParking)', effect ='$+At(SFO) & ~At(SFOLongTermParking)' ) # initial Plan initialPlan_3 = [Angelic_Node(prob.init, None, [angelic_opt_description], [angelic_pes_description])] plan_3 = prob_3.angelic_search(library_3, initialPlan_3) print(plan_3) Explanation: Example 3 Sometimes there is no plan that achieves the goal! End of explanation
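To make the optimistic/pessimistic distinction used throughout the examples above more concrete, here is a small library-free sketch of the reachable-set idea. It is not the aima-python implementation: states are plain frozensets of fluents, the initial state is trimmed down, and the reachable_states helper with its must/may argument split is a simplification introduced only for illustration.
def reachable_states(state, must_add=(), may_add=(), must_del=(), may_del=()):
    # A state is a frozenset of ground fluents; each $+ / $- literal branches it.
    base = (set(state) | set(must_add)) - set(must_del)
    states = {frozenset(base)}
    for lit in may_add:                          # optionally add each $+ literal
        states = states | {s | {lit} for s in states}
    for lit in may_del:                          # optionally remove each $- literal
        states = states | {s - {lit} for s in states}
    return states

home = frozenset({'At(Home)', 'Have(Cash)'})
# Optimistic Go(Home, SFO): $+At(SFO) & $-At(Home)  -> four candidate outcomes
opt = reachable_states(home, may_add={'At(SFO)'}, may_del={'At(Home)'})
# Pessimistic Go(Home, SFO): $+At(SFO) & ~At(Home)  -> At(Home) is surely deleted
pes = reachable_states(home, may_add={'At(SFO)'}, must_del={'At(Home)'})
goal = {'At(SFO)', 'Have(Cash)'}
print(any(goal <= s for s in opt), any(goal <= s for s in pes))
If the goal is contained in some state of the pessimistic set, a concrete plan is guaranteed to exist, which is exactly the condition under which angelic_search proceeds to decompose.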
12,663
Given the following text description, write Python code to implement the functionality described below step by step Description: <u>Word prediction</u> Language Model based on n-gram Probabilistic Model Good Turing Smoothing Used with Backoff Highest Order n-gram used is Quadgram <u>Import corpus</u> Step1: <u>Do preprocessing</u> Step2: Tokenize and load the corpus data Step3: Create a Hash Table for Probable words for Trigram sentences Step4: Create a Hash Table for Probable words for Bigram sentences Step5: Create a Hash Table for Probable words for Unigram Step6: Sort the probable words for the various Probability Dictionaries according to their probability Step7: <u>For Taking input from the User</u> Step8: <u>Test Score ,Perplexity Calculation Step9: For Computing the Perplexity Step10: <u>Regression related stuff Step11: <u>Find the count Nc for quadgrams and trigrams where c > k , k = 5 Step12: <u>For finding the Good Turing Probability Step13: <u>Driver function for doing the prediction</u> Find word Prediction using Backoff Step14: <u>Driver Function for Testing the Language Model</u> Step15: main function Step16: <i><u>For Debugging Purpose Only</u></i> <i>Uncomment the above two cells and ignore running the cells below if not debugging</i> Step17: For Testing the Language Model Calculates % Accuracy and Perplexity<br> NOTE
Python Code: from nltk.util import ngrams from collections import defaultdict from collections import OrderedDict import string import time import gc from math import log10 start_time = time.time() Explanation: <u>Word prediction</u> Language Model based on n-gram Probabilistic Model Good Turing Smoothing Used with Backoff Highest Order n-gram used is Quadgram <u>Import corpus</u> End of explanation #returns: string #arg: string #remove punctuations and make the string lowercase def removePunctuations(sen): #split the string into word tokens temp_l = sen.split() #print(temp_l) i = 0 j = 0 #changes the word to lowercase and removes punctuations from it for word in temp_l : j = 0 #print(len(word)) for l in word : if l in string.punctuation: if l == "'": if j+1<len(word) and word[j+1] == 's': j = j + 1 continue word = word.replace(l," ") #print(j,word[j]) j += 1 temp_l[i] = word.lower() i=i+1 #spliting is being don here beacause in sentences line here---so after punctuation removal it should #become "here so" content = " ".join(temp_l) return content Explanation: <u>Do preprocessing</u>: Remove the punctuations and lowercase the tokens End of explanation #returns : int #arg: string,dict,dict,dict,dict #loads the corpus for the dataset and makes the frequency count of quadgram ,bigram and trigram strings def loadCorpus(file_path, bi_dict, tri_dict, quad_dict, vocab_dict): w1 = '' #for storing the 3rd last word to be used for next token set w2 = '' #for storing the 2nd last word to be used for next token set w3 = '' #for storing the last word to be used for next token set token = [] #total no. of words in the corpus word_len = 0 #open the corpus file and read it line by line with open(file_path,'r') as file: for line in file: #split the string into word tokens temp_l = line.split() i = 0 j = 0 #does the same as the removePunctuations() function,implicit declratation for performance reasons #changes the word to lowercase and removes punctuations from it for word in temp_l : j = 0 #print(len(word)) for l in word : if l in string.punctuation: if l == "'": if j+1<len(word) and word[j+1] == 's': j = j + 1 continue word = word.replace(l," ") #print(j,word[j]) j += 1 temp_l[i] = word.lower() i=i+1 #spliting is being done here beacause in sentences line here---so after punctuation removal it should #become "here so" content = " ".join(temp_l) token = content.split() word_len = word_len + len(token) if not token: continue #add the last word from previous line if w3!= '': token.insert(0,w3) temp0 = list(ngrams(token,2)) #since we are reading line by line some combinations of word might get missed for pairing #for trigram #first add the previous words if w2!= '': token.insert(0,w2) #tokens for trigrams temp1 = list(ngrams(token,3)) #insert the 3rd last word from previous line for quadgram pairing if w1!= '': token.insert(0,w1) #add new unique words to the vocaulary set if available for word in token: if word not in vocab_dict: vocab_dict[word] = 1 else: vocab_dict[word]+= 1 #tokens for quadgrams temp2 = list(ngrams(token,4)) #count the frequency of the bigram sentences for t in temp0: sen = ' '.join(t) bi_dict[sen] += 1 #count the frequency of the trigram sentences for t in temp1: sen = ' '.join(t) tri_dict[sen] += 1 #count the frequency of the quadgram sentences for t in temp2: sen = ' '.join(t) quad_dict[sen] += 1 #then take out the last 3 words n = len(token) #store the last few words for the next sentence pairing w1 = token[n -3] w2 = token[n -2] w3 = token[n -1] return word_len Explanation: Tokenize 
and load the corpus data End of explanation #returns: void #arg: dict,dict,dict,dict,dict,dict,int #creates dict for storing probable words with their probabilities for a trigram sentence def findQuadgramProbGT(vocab_dict, bi_dict, tri_dict, quad_dict, quad_prob_dict, nc_dict, k): i = 0 V = len(vocab_dict) for quad_sen in quad_dict: quad_token = quad_sen.split() #trigram sentence for key tri_sen = ' '.join(quad_token[:3]) #find the probability #Good Turing smoothing has been used quad_count = quad_dict[quad_sen] tri_count = tri_dict[tri_sen] if quad_dict[quad_sen] <= k or (quad_sen not in quad_dict): quad_count = findGoodTuringAdjustCount( quad_dict[quad_sen], k, nc_dict) if tri_dict[tri_sen] <= k or (tri_sen not in tri_dict): tri_count = findGoodTuringAdjustCount( tri_dict[tri_sen], k, nc_dict) prob = quad_count / tri_count #add the trigram to the quadgram probabiltity dict if tri_sen not in quad_prob_dict: quad_prob_dict[tri_sen] = [] quad_prob_dict[tri_sen].append([prob,quad_token[-1]]) else: quad_prob_dict[tri_sen].append([prob,quad_token[-1]]) prob = None quad_token = None tri_sen = None Explanation: Create a Hash Table for Probable words for Trigram sentences End of explanation #returns: void #arg: dict,dict,dict,dict,dict,int #creates dict for storing probable words with their probabilities for a bigram sentence def findTrigramProbGT(vocab_dict, bi_dict, tri_dict, tri_prob_dict, nc_dict, k): #vocabulary length V = len(vocab_dict) #create a dictionary of probable words with their probabilities for #trigram probabilites,key is a bigram and value is a list of prob and word for tri in tri_dict: tri_token = tri.split() #bigram sentence for key bi_sen = ' '.join(tri_token[:2]) #find the probability #Good Turing smoothing has been used tri_count = tri_dict[tri] bi_count = bi_dict[bi_sen] if tri_dict[tri] <= k or (tri not in tri_dict): tri_count = findGoodTuringAdjustCount( tri_dict[tri], k, nc_dict) if bi_dict[bi_sen] <= k or (bi_sen not in bi_dict): bi_count = findGoodTuringAdjustCount( bi_dict[bi_sen], k, nc_dict) prob = tri_count / bi_count #add the bigram sentence to the trigram probability dict #tri_prob_dict is a dict of list if bi_sen not in tri_prob_dict: tri_prob_dict[bi_sen] = [] tri_prob_dict[bi_sen].append([prob,tri_token[-1]]) else: tri_prob_dict[bi_sen].append([prob,tri_token[-1]]) prob = None tri_token = None bi_sen = None Explanation: Create a Hash Table for Probable words for Bigram sentences End of explanation #returns: void #arg: dict,dict,dict,dict,int #creates dict for storing probable words with their probabilities for a unigram def findBigramProbGT(vocab_dict, bi_dict, bi_prob_dict, nc_dict, k): #vocabulary size V = len(vocab_dict) #create a dictionary of probable words with their probabilities for bigram probabilites for bi in bi_dict: bi_token = bi.split() #unigram for key unigram = bi_token[0] #find the probability #Good Turing smoothing has been used bi_count = bi_dict[bi] uni_count = vocab_dict[unigram] if bi_dict[bi] <= k or (bi not in bi_dict): bi_count = findGoodTuringAdjustCount( bi_dict[bi], k, nc_dict) if vocab_dict[unigram] <= k or (unigram not in vocab_dict): uni_count = findGoodTuringAdjustCount( vocab_dict[unigram], k, nc_dict) prob = bi_count / uni_count #add the unigram to the bigram probability dict #bi_prob_dict is a dict of list if unigram not in bi_prob_dict: bi_prob_dict[unigram] = [] bi_prob_dict[unigram].append([prob,bi_token[-1]]) else: bi_prob_dict[unigram].append([prob,bi_token[-1]]) prob = None bi_token = None unigram = None Explanation: 
Create a Hash Table for Probable words for Unigram End of explanation #returns: void #arg: dict #for sorting the probable word acc. to their probabilities def sortProbWordDict(bi_prob_dict, tri_prob_dict, quad_prob_dict): for key in bi_prob_dict: if len(bi_prob_dict[key])>1: bi_prob_dict[key] = sorted(bi_prob_dict[key],reverse = True) for key in tri_prob_dict: if len(tri_prob_dict[key])>1: tri_prob_dict[key] = sorted(tri_prob_dict[key],reverse = True) for key in quad_prob_dict: if len(quad_prob_dict[key])>1: quad_prob_dict[key] = sorted(quad_prob_dict[key],reverse = True)[:2] Explanation: Sort the probable words for the various Probability Dictionaries according to their probability End of explanation #returns: string #arg: void #for taking input from user def takeInput(): cond = False #take input while(cond == False): sen = input('Enter the string\n') sen = removePunctuations(sen) temp = sen.split() if len(temp) < 3: print("Please enter atleast 3 words !") else: cond = True temp = temp[-3:] sen = " ".join(temp) return sen Explanation: <u>For Taking input from the User</u> End of explanation #computes the score for test data def computeTestScore(test_token, bi_dict, tri_dict, quad_dict, quad_prob_dict, tri_prob_dict,bi_prob_dict ): #increment the score value if correct prediction is made else decrement its value score = 0 wrong = 0 total = 0 with open('Test_Scores/Good_Turing_Backoff_Score.txt','w') as w: for sent in test_token: sen_token = sent[:3] sen = " ".join(sen_token) correct_word = sent[3] result = doPredictionBackoffGT(sen, bi_dict, tri_dict, quad_dict, bi_prob_dict, tri_prob_dict, quad_prob_dict) if result: if result[1] == correct_word: score+=1 else: wrong += 1 else: wrong += 1 total += 1 w.write('Total Word Prdictions: '+str(total) + '\n' +'Correct Prdictions: '+str(score) + '\n'+'Wrong Prdictions: '+str(wrong) + '\n'+'ACCURACY: '+str((score/total)*100)+'%' ) #print stats print('Total Word Prdictions: '+str(total) + '\n' +'Correct Prdictions: '+str(score) + '\n'+'Wrong Prdictions: '+str(wrong) + '\n'+'ACCURACY:'+str((score/total)*100)+'%' ) return score Explanation: <u>Test Score ,Perplexity Calculation:</u> For computing the Test Score End of explanation #return:float #arg:list,int,dict,dict,dict,dict #computes the score for test data def computePerplexity(test_quadgrams, bi_dict, tri_dict, quad_dict, vocab_dict,token_len, k, quad_nc_dict, tri_nc_dict, bi_nc_dict, uni_nc_dict): perplexity = float(1.0) n = token_len for key in quad_dict: quad_token = key.split() quad_count = quad_dict[key] tri_count = tri_dict[' '.join(quad_token[0:3])] if quad_dict[key] <= k or (key not in quad_dict): quad_count = findGoodTuringAdjustCount( quad_dict[key], k, quad_nc_dict) if tri_dict[' '.join(quad_token[0:3])] <= k or (' '.join(quad_token[0:3]) not in tri_dict): tri_count = findGoodTuringAdjustCount( tri_dict[' '.join(quad_token[0:3])], k, tri_nc_dict) prob = quad_count / tri_count if prob != 0: perplexity = perplexity * ( prob**(1./n)) with open('Test_Scores/Good_Turing_Backoff_Score.txt','a') as w: w.write('\nPerplexity: '+str(perplexity)) return perplexity Explanation: For Computing the Perplexity End of explanation ## Regression related stuff #calculate best fit line for simple regression from statistics import mean import numpy as np import matplotlib.pyplot as plt from matplotlib import style #finds the slope for the best fit line def findBestFitSlope(x,y): m = (( mean(x)*mean(y) - mean(x*y) ) / ( mean(x)** 2 - mean(x**2))) return m #finds the intercept for the best fit line def 
findBestFitIntercept(x,y,m): c = mean(y) - m*mean(x) return c Explanation: <u>Regression related stuff End of explanation ## Find the count Nc for quadgrams and trigrams where c > 5 #arg: dict, int, int, int, int #returns: dict #token_len : total no. of ngram tokens def findFrequencyOfFrequencyCount(ngram_dict, k, n, V, token_len): #for keeping count of 'c' value i.e Nc nc_dict = {} #we find the value of Nc,c = 0 by V^n - (total n-gram tokens) nc_dict[0] = V**n - token_len #find the count Nc till c = k,we will take k = 5 #find counts for n-gram for key in ngram_dict: if ngram_dict[key] <= k + 1: if ngram_dict[key] not in nc_dict: nc_dict[ ngram_dict[key]] = 1 else: nc_dict[ ngram_dict[key] ] += 1 #check if all the values of Nc are there in the nc_dict or not ,if there then return val_present = True for i in range(1,7): if i not in nc_dict: val_present = False break if val_present == True: return nc_dict #now fill in the values of nc in case it is not there using regression upto c = 6 #we use :[ log(Nc) = blog(c) + a ] as the equation #we first need to find data for regression that is values(Nc,c) we take 5 data points data_pts = {} i = 0 #get first 5 counts value i.e c #for quadgram for key in ngram_dict: if ngram_dict[key] not in data_pts: data_pts[ ngram_dict[key] ] = 1 i += 1 if i >5: break #now get Nc for those c values for key in ngram_dict: if ngram_dict[key] in data_pts: data_pts[ ngram_dict[key] ] += 1 #make x ,y coordinates for regression x_coor = [ np.log(item) for item in data_pts ] y_coor = [ np.log( data_pts[item] ) for item in data_pts ] x = np.array(x_coor, dtype = np.float64) y = np.array(y_coor , dtype = np.float64) #now do regression #find the slope and intercept for the regression equation slope_m = findBestFitSlope(x,y) intercept_c = findBestFitIntercept(x,y,slope_m) #now find the missing Nc terms and give them value using regression for i in range(1,(k+2)): if i not in nc_dict: nc_dict[i] = (slope_m*i) + intercept_c return nc_dict Explanation: <u>Find the count Nc for quadgrams and trigrams where c > k , k = 5 End of explanation #for finding the adjusted count c* in Good Turing Smoothing def findGoodTuringAdjustCount(c, k, nc_dict): adjust_count = ( ( (( c + 1)*( nc_dict[c + 1] / nc_dict[c])) - ( c * (k+1) * nc_dict[k+1] / nc_dict[1]) ) / ( 1 - (( k + 1)*nc_dict[k + 1] / nc_dict[1]) ) ) return adjust_count Explanation: <u>For finding the Good Turing Probability End of explanation #finds the word prediction usinng Backoff def doPredictionBackoffGT(input_sen, bi_dict, tri_dict, quad_dict, bi_prob_dict, tri_prob_dict, quad_prob_dict): #split the input sentence into tokens token = input_sen.split() #if the input sen is found in any ngram then give the most probable word for that ngram #if not then go to the lower order ngram if input_sen in quad_prob_dict and quad_prob_dict[ input_sen ][0][0]>0: pred = quad_prob_dict[input_sen][0] elif ' '.join(token[1:]) in tri_prob_dict and tri_prob_dict[' '.join(token[1:])][0][0]>0: pred = tri_prob_dict[ ' '.join(token[1:]) ][0] elif ' '.join(token[2:]) in bi_prob_dict and bi_prob_dict[ ' '.join(token[2:]) ][0][0]>0: pred = bi_prob_dict[' '.join(token[2:])][0] else: pred = [] return pred Explanation: <u>Driver function for doing the prediction</u> Find word Prediction using Backoff End of explanation #return: void #arg:string,string,dict,dict,dict,dict,dict #Used for testing the Language Model def trainCorpus(train_file,test_file,bi_dict,tri_dict,quad_dict,vocab_dict,prob_dict): test_result = '' score = 0 #load the training corpus for 
the dataset token_len = loadCorpus(train_file, bi_dict, tri_dict, quad_dict, vocab_dict) print("---Processing Time for Corpus Loading: %s seconds ---" % (time.time() - start_time)) start_time1 = time.time() #create the different Nc dictionaries for ngrams #threshold value k = 5 V = len(vocab_dict) quad_nc_dict = findFrequencyOfFrequencyCount(quad_dict, k, 4, V, len(quad_dict)) tri_nc_dict = findFrequencyOfFrequencyCount(tri_dict, k, 3, V, len(tri_dict)) bi_nc_dict = findFrequencyOfFrequencyCount(bi_dict, k, 2, V, len(bi_dict)) uni_nc_dict = findFrequencyOfFrequencyCount(bi_dict, k, 1, V, len(vocab_dict)) #create quadgram probability dictionary findQuadgramProbGT(vocab_dict, bi_dict, tri_dict, quad_dict, quad_prob_dict, quad_nc_dict, k) #create trigram probability dictionary findTrigramProbGT(vocab_dict, bi_dict, tri_dict, tri_prob_dict, tri_nc_dict, k) #create bigram probability dictionary findBigramProbGT(vocab_dict, bi_dict, bi_prob_dict, bi_nc_dict, k) #sort the probability dictionaries of quad,tri and bi grams sortProbWordDict(bi_prob_dict, tri_prob_dict, quad_prob_dict) print("---Processing Time for Creating Probable Word Dict: %s seconds ---" % (time.time() - start_time1)) ### TESTING WITH TEST CORPUS test_data = '' #Now load the test corpus with open('test_corpus.txt','r') as file : test_data = file.read() #remove punctuations from the test data test_data = removePunctuations(test_data) test_token = test_data.split() #split the test data into 4 words list test_token = test_data.split() test_quadgrams = list(ngrams(test_token,4)) #choose most probable words for prediction start_time2 = time.time() score = computeTestScore(test_quadgrams, bi_dict, tri_dict, quad_dict, quad_prob_dict, tri_prob_dict,bi_prob_dict ) print('Score:',score) print("---Processing Time for computing score: %s seconds ---" % (time.time() - start_time2)) start_time3 = time.time() perplexity = computePerplexity(test_quadgrams, bi_dict, tri_dict, quad_dict, vocab_dict,token_len, k, quad_nc_dict, tri_nc_dict, bi_nc_dict, uni_nc_dict) print('Perplexity:',perplexity) print("---Processing Time for computing Perplexity: %s seconds ---" % (time.time() - start_time3)) Explanation: <u>Driver Function for Testing the Language Model</u> End of explanation def main(): #variable declaration vocab_dict = defaultdict(int) #for storing the different words with their frequencies bi_dict = defaultdict(int) #for keeping count of sentences of two words tri_dict = defaultdict(int) #for keeping count of sentences of three words quad_dict = defaultdict(int) #for keeping count of sentences of four words quad_prob_dict = OrderedDict() tri_prob_dict = OrderedDict() bi_prob_dict = OrderedDict() #load the corpus for the dataset train_file = 'corpusfile.txt' #load corpus token_len = loadCorpus(train_file, bi_dict, tri_dict, quad_dict, vocab_dict) #create the different Nc dictionaries for ngrams #threshold value k = 5 V = len(vocab_dict) quad_nc_dict = findFrequencyOfFrequencyCount(quad_dict, k, 4, V, len(quad_dict)) tri_nc_dict = findFrequencyOfFrequencyCount(tri_dict, k, 3, V, len(tri_dict)) bi_nc_dict = findFrequencyOfFrequencyCount(bi_dict, k, 2, V, len(bi_dict)) uni_nc_dict = findFrequencyOfFrequencyCount(bi_dict, k, 1, V, len(vocab_dict)) #create quadgram probability dictionary findQuadgramProbGT(vocab_dict, bi_dict, tri_dict, quad_dict, quad_prob_dict, quad_nc_dict, k) #create trigram probability dictionary findTrigramProbGT(vocab_dict, bi_dict, tri_dict, tri_prob_dict, tri_nc_dict, k) #create bigram probability dictionary 
findBigramProbGT(vocab_dict, bi_dict, bi_prob_dict, bi_nc_dict, k) #sort the probability dictionaries of quad,tri and bi grams sortProbWordDict(bi_prob_dict, tri_prob_dict, quad_prob_dict) ##WORD PREDICTION #take user input input_sen = takeInput() prediction = doPredictionBackoffGT(input_sen, bi_dict, tri_dict, quad_dict, bi_prob_dict, tri_prob_dict, quad_prob_dict) if prediction: print('Word Prediction:',prediction[1]) if __name__ == '__main__': main() Explanation: main function End of explanation #variable declaration vocab_dict = defaultdict(int) #for storing the different words with their frequencies bi_dict = defaultdict(int) #for keeping count of sentences of two words tri_dict = defaultdict(int) #for keeping count of sentences of three words quad_dict = defaultdict(int) #for keeping count of sentences of four words quad_prob_dict = OrderedDict() tri_prob_dict = OrderedDict() bi_prob_dict = OrderedDict() #load the corpus for the dataset #loadCorpus('corpusfile.txt',bi_dict,tri_dict,quad_dict,vocab_dict) print("---Preprocessing Time for Corpus loading: %s seconds ---" % (time.time() - start_time)) Explanation: <i><u>For Debugging Purpose Only</u></i> <i>Uncomment the above two cells and ignore running the cells below if not debugging</i> End of explanation train_file = 'training_corpus.txt' test_file = 'test_corpus.txt' #load the corpus for the dataset token_len = trainCorpus(train_file,test_file,bi_dict,tri_dict,quad_dict,vocab_dict,quad_prob_dict) train_file = 'corpusfile.txt' #load corpus token_len = loadCorpus(train_file, bi_dict, tri_dict, quad_dict, vocab_dict) #create the different Nc dictionaries for ngrams #threshold value k = 5 V = len(vocab_dict) quad_nc_dict = findFrequencyOfFrequencyCount(quad_dict, k, 4, V, len(quad_dict)) tri_nc_dict = findFrequencyOfFrequencyCount(tri_dict, k, 3, V, len(tri_dict)) bi_nc_dict = findFrequencyOfFrequencyCount(bi_dict, k, 2, V, len(bi_dict)) uni_nc_dict = findFrequencyOfFrequencyCount(bi_dict, k, 1, V, len(vocab_dict)) #create quadgram probability dictionary findQuadgramProbGT(vocab_dict, bi_dict, tri_dict, quad_dict, quad_prob_dict, quad_nc_dict, k) #create trigram probability dictionary findTrigramProbGT(vocab_dict, bi_dict, tri_dict, tri_prob_dict, tri_nc_dict, k) #create bigram probability dictionary findBigramProbGT(vocab_dict, bi_dict, bi_prob_dict, bi_nc_dict, k) #sort the probability dictionaries of quad,tri and bi grams sortProbWordDict(bi_prob_dict, tri_prob_dict, quad_prob_dict) #FOR DEBUGGING ONLY writeProbDicts(bi_prob_dict, tri_prob_dict, quad_prob_dict) ##WORD PREDICTION start_time2 = time.time() #take user input input_sen = takeInput() prediction = doPredictionBackoffGT(input_sen, bi_dict, tri_dict, quad_dict, bi_prob_dict, tri_prob_dict, quad_prob_dict) if prediction: print('Word Prediction:',prediction[1]) print("---Time for Prediction Operation: %s seconds ---" % (time.time() - start_time2)) Explanation: For Testing the Language Model Calculates % Accuracy and Perplexity<br> NOTE : If this is run then no need to run the cells following it End of explanation
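As a quick sanity check on the smoothing used above, the snippet below re-evaluates the same Katz/Good-Turing adjusted-count formula as findGoodTuringAdjustCount on a hypothetical frequency-of-frequency table (the N_c values are invented purely for illustration; k = 5 as in the driver code). With this table the adjusted counts come out smaller than the raw counts, which is the discounting effect the backoff model relies on.
def adjusted_count(c, k, nc):
    # Same formula as findGoodTuringAdjustCount above, written standalone.
    return (
        ((c + 1) * nc[c + 1] / nc[c] - c * (k + 1) * nc[k + 1] / nc[1])
        / (1 - (k + 1) * nc[k + 1] / nc[1])
    )

nc = {1: 1000, 2: 420, 3: 210, 4: 120, 5: 80, 6: 55}   # hypothetical N_c table
for c in range(1, 6):
    print(c, round(adjusted_count(c, 5, nc), 3))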
12,664
Given the following text description, write Python code to implement the functionality described below step by step Description: 数据抓取: Beautifulsoup简介 王成军 [email protected] 计算传播网 http Step1: 一般的数据抓取,使用urllib2和beautifulsoup配合就可以了。 尤其是对于翻页时url出现规则变化的网页,只需要处理规则化的url就可以了。 以简单的例子是抓取天涯论坛上关于某一个关键词的帖子。 在天涯论坛,关于雾霾的帖子的第一页是: http Step2: html.parser Beautiful Soup supports the html.parser included in Python’s standard library lxml but it also supports a number of third-party Python parsers. One is the lxml parser lxml. Depending on your setup, you might install lxml with one of these commands Step3: html head title body p (class = 'title', 'story' ) a (class = 'sister') href/id Step4: 数据抓取: 根据URL抓取微信公众号文章内容 王成军 [email protected] 计算传播网 http Step5: 查看源代码 Inspect
Python Code: import urllib2 from bs4 import BeautifulSoup Explanation: 数据抓取: Beautifulsoup简介 王成军 [email protected] 计算传播网 http://computational-communication.com 需要解决的问题 页面解析 获取Javascript隐藏源数据 自动翻页 自动登录 连接API接口 End of explanation url = 'file:///Users/chengjun/GitHub/cjc2016/data/test.html' content = urllib2.urlopen(url).read() soup = BeautifulSoup(content, 'html.parser') soup Explanation: 一般的数据抓取,使用urllib2和beautifulsoup配合就可以了。 尤其是对于翻页时url出现规则变化的网页,只需要处理规则化的url就可以了。 以简单的例子是抓取天涯论坛上关于某一个关键词的帖子。 在天涯论坛,关于雾霾的帖子的第一页是: http://bbs.tianya.cn/list.jsp?item=free&nextid=0&order=8&k=雾霾 第二页是: http://bbs.tianya.cn/list.jsp?item=free&nextid=1&order=8&k=雾霾 Beautiful Soup Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping. Three features make it powerful: Beautiful Soup provides a few simple methods. It doesn't take much code to write an application Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. Then you just have to specify the original encoding. Beautiful Soup sits on top of popular Python parsers like lxml and html5lib. Install beautifulsoup4 open your terminal/cmd $ pip install beautifulsoup4 第一个爬虫 Beautifulsoup Quick Start http://www.crummy.com/software/BeautifulSoup/bs4/doc/ End of explanation print(soup.prettify()) Explanation: html.parser Beautiful Soup supports the html.parser included in Python’s standard library lxml but it also supports a number of third-party Python parsers. One is the lxml parser lxml. Depending on your setup, you might install lxml with one of these commands: $ apt-get install python-lxml $ easy_install lxml $ pip install lxml html5lib Another alternative is the pure-Python html5lib parser html5lib, which parses HTML the way a web browser does. 
Depending on your setup, you might install html5lib with one of these commands: $ apt-get install python-html5lib $ easy_install html5lib $ pip install html5lib End of explanation for tag in soup.find_all(True): print(tag.name) soup('head') # or soup.head soup('body') # or soup.body soup('title') # or soup.title soup('p') soup.p soup.title.name soup.title.string soup.title.text soup.title.parent.name soup.p soup.p['class'] soup.find_all('p', {'class', 'title'}) soup.find_all('p', class_= 'title') soup.find_all('p', {'class', 'story'}) soup.find_all('p', {'class', 'story'})[0].find_all('a') soup.a soup('a') soup.find(id="link3") soup.find_all('a') soup.find_all('a', {'class', 'sister'}) # compare with soup.find_all('a') soup.find_all('a', {'class', 'sister'})[0] soup.find_all('a', {'class', 'sister'})[0].text soup.find_all('a', {'class', 'sister'})[0]['href'] soup.find_all('a', {'class', 'sister'})[0]['id'] soup.find_all(["a", "b"]) print(soup.get_text()) Explanation: html head title body p (class = 'title', 'story' ) a (class = 'sister') href/id End of explanation from IPython.display import display_html, HTML HTML('<iframe src=http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\ mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd\ width=500 height=500></iframe>') # the webpage we would like to crawl Explanation: 数据抓取: 根据URL抓取微信公众号文章内容 王成军 [email protected] 计算传播网 http://computational-communication.com End of explanation url = "http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\ mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd" content = urllib2.urlopen(url).read() #获取网页的html文本 soup = BeautifulSoup(content, 'html.parser') title = soup.title.text rmml = soup.find('div', {'class', 'rich_media_meta_list'}) date = rmml.find(id = 'post-date').text rmc = soup.find('div', {'class', 'rich_media_content'}) content = rmc.get_text() print title print date print content Explanation: 查看源代码 Inspect End of explanation
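The introduction above points out that the Tianya listing pages only differ in their nextid parameter. Below is a minimal sketch (not from the original notebook) of how one might walk those pages with the same urllib2 + BeautifulSoup tools; the number of pages is arbitrary, and picking out the actual post links would require knowledge of Tianya's markup that is not covered here, so the snippet only counts the anchors it finds.
import urllib
import urllib2
from bs4 import BeautifulSoup

keyword = urllib.quote('雾霾')   # URL-encode the Chinese keyword for the query string
url_template = 'http://bbs.tianya.cn/list.jsp?item=free&nextid={}&order=8&k={}'

for page in range(3):            # first three result pages; page count is arbitrary
    url = url_template.format(page, keyword)
    soup = BeautifulSoup(urllib2.urlopen(url).read(), 'html.parser')
    links = [a.get('href') for a in soup.find_all('a') if a.get('href')]
    print(page, len(links))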
12,665
Given the following text description, write Python code to implement the functionality described below step by step Description: Tutorial 0 - First JOOMMF notebook The goal of this tutorial is for all participants to familiarise themselves with running JOOMMF simulations in a Jupyter notebook. The only thing you need to know for this tutorial is how to execute individual cells Step1: We create a system object and provide Step2: Our Hamiltonian should only contain exchange, demagnetisation, and Zeeman energy terms. We will apply the external magnetic field in the $x$ direction for the purpose of this demonstration Step3: The dynamics of the system is governed by the LLG equation containing precession and damping terms Step4: We initialise the system in the positive $y$ direction, i.e. (0, 1, 0), which is different from the equilibrium state we expect for the external Zeeman field applied in the $x$ direction Step5: We can check the characteristics of the system we defined by asking objects to represent themselves Step6: We can also visualise the current magnetisation field Step7: After the system object is created, we can minimise its energy (relax it) using the Minimisation Driver (MinDriver). Step8: The system is now relaxed, and we can plot its slice and compute its average magnetisation.
Python Code: import oommfc as oc import discretisedfield as df %matplotlib inline Explanation: Tutorial 0 - First JOOMMF notebook The goal of this tutorial is for all participants to familiarise themselves with running JOOMMF simulations in Jupyter notebook. The only thing you need to know for this tutorial is how to execute individual cells: this is done by pressing Shift + Return. Simple JOOMMF simulation End of explanation system = oc.System(name="first_notebook") Explanation: We create a system object and provide: Hamiltonian, dynamics, and magnetisation configuration. End of explanation A = 1e-12 # exchange energy constant (J/m) H = (5e6, 0, 0) # external magnetic field in x-direction (A/m) system.hamiltonian = oc.Exchange(A=A) + oc.Demag() + oc.Zeeman(H=H) Explanation: Our Hamiltonian should only contain exchange, demagnetisation, and Zeeman energy terms. We will apply the external magnetic field in the $x$ direction for the purpose of this demonstration: End of explanation gamma = 2.211e5 # gamma parameter (m/As) alpha = 0.2 # Gilbert damping system.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha) Explanation: The dynamics of the system is governed by the LLG equation containing precession and damping terms: End of explanation L = 100e-9 # cubic sample edge length (m) d = 5e-9 # discretisation cell size (m) mesh = oc.Mesh(p1=(0, 0, 0), p2=(L, L, L), cell=(d, d, d)) Ms = 8e6 # saturation magnetisation (A/m) system.m = df.Field(mesh, value=(0, 1, 0), norm=Ms) Explanation: We initialise the system in positive $y$ direction, i.e. (0, 1, 0), which is different from the equlibrium state we expect for the external Zeeman field applied in $x$ direction: End of explanation mesh system.hamiltonian system.dynamics Explanation: We can check the characteristics of the system we defined by asking objects to represent themselves: End of explanation system.m.plot_slice("z", 50e-9, xsize=6); Explanation: We can also visualise the current magnetisation field: End of explanation md = oc.MinDriver() md.drive(system) Explanation: After the system object is created, we can minimise its energy (relax it) using the Minimisation Driver (MinDriver). End of explanation system.m.plot_slice("z", 50e-9, xsize=6); system.m.average Explanation: The system is now relaxed, and we can plot its slice and compute its average magnetisation. End of explanation
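As a small follow-on exercise (not part of the original tutorial), one could repeat the relaxation for a few external-field strengths and record the average magnetisation each time. The sketch below reuses only calls that appear above and assumes the system, A and md objects defined earlier are still in scope; rebuilding the Hamiltonian inside the loop is an assumption about how such a sweep would be set up, not necessarily the recommended oommfc workflow.
H_values = [1e6, 2e6, 5e6]      # external field strengths to try (A/m)
averages = []
for H_x in H_values:
    system.hamiltonian = oc.Exchange(A=A) + oc.Demag() + oc.Zeeman(H=(H_x, 0, 0))
    md.drive(system)            # relax the system for this field
    averages.append(system.m.average)

for H_x, m_avg in zip(H_values, averages):
    print(H_x, m_avg)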
12,666
Given the following text description, write Python code to implement the functionality described below step by step Description: Additional forces REBOUND is a gravitational N-body integrator. But you can also use it to integrate systems with additional, non-gravitational forces. This tutorial gives you a very quick overview of how that works. Implementing additional forces in Python as below will typically be a factor of a few slower than a C implementation. For a library that has C implementations for several commonly used additional effects (with everything callable from Python), see REBOUNDx. Stark problem We'll start by adding two particles, the Sun and an Earth-like planet, to REBOUND. Step1: We could integrate this system and the planet would go around the star on a fixed orbit with $a=1$ forever. Let's add an additional constant force that acts on the planet and points in one direction, $F_x = m\cdot c$, where $m$ is the planet's mass and $c$ a constant. This is called the Stark problem. In Python we can describe this with the following function Step2: Next, we need to tell REBOUND about this function. Step3: Now we can just integrate as usual. Let's keep track of the eccentricity as we integrate, as it will change due to the additional force. Step4: And let's plot the result. Step5: You can see that the eccentricity is oscillating between 0 and almost 1. Non-conservative forces The previous example assumed a conservative force, i.e. we could describe it as a potential as it is velocity independent. Now, let's assume we have a velocity-dependent force. This could be a migration force in a protoplanetary disk or PR drag. We'll start from scratch and add the same two particles as before. Step6: But we change the additional force to be Step7: We need to let REBOUND know that our force is velocity dependent. Otherwise, REBOUND will not update the velocities of the particles. Step8: Now, we integrate as before. But this time we keep track of the semi-major axis instead of the eccentricity.
Python Code: import rebound sim = rebound.Simulation() sim.integrator = "whfast" sim.add(m=1.) sim.add(m=1e-6,a=1.) sim.move_to_com() # Moves to the center of momentum frame Explanation: Additional forces REBOUND is a gravitational N-body integrator. But you can also use it to integrate systems with additional, non-gravitational forces. This tutorial gives you a very quick overview of how that works. Implementing additional forces in python as below will typically be a factor of a few slower than a C implementation. For a library that has C implementations for several commonly used additional effects (with everything callable from Python), see REBOUNDx. Stark problem We'll start be adding two particles, the Sun and an Earth-like planet to REBOUND. End of explanation ps = sim.particles c = 0.01 def starkForce(reb_sim): ps[1].ax += c Explanation: We could integrate this system and the planet would go around the star at a fixed orbit with $a=1$ forever. Let's add an additional constant force that acting on the planet and is pointing in one direction $F_x = m\cdot c$, where $m$ is the planet's mass and $c$ a constant. This is called the Stark problem. In python we can describe this with the following function End of explanation sim.additional_forces = starkForce Explanation: Next, we need to tell REBOUND about this function. End of explanation import numpy as np Nout = 1000 es = np.zeros(Nout) times = np.linspace(0.,100.*2.*np.pi,Nout) for i, time in enumerate(times): sim.integrate(time, exact_finish_time=0) # integrate to the nearest timestep so WHFast's timestep stays constant es[i] = sim.particles[1].e Explanation: Now we can just integrate as usual. Let's keep track of the eccentricity as we integrate as it will change due to the additional force. End of explanation %matplotlib inline import matplotlib.pyplot as plt fig = plt.figure(figsize=(15,5)) ax = plt.subplot(111) plt.plot(times, es); Explanation: And let's plot the result. End of explanation sim = rebound.Simulation() sim.integrator = "ias15" sim.add(m=1.) sim.add(m=1e-6,a=1.) sim.move_to_com() # Moves to the center of momentum frame Explanation: You can see that the eccentricity is oscillating between 0 and almost 1. Non-conservative forces The previous example assumed a conservative force, i.e. we could describe it as a potential as it is velocity independent. Now, let's assume we have a velocity dependent force. This could be a migration force in a protoplanetary disk or PR drag. We'll start from scratch and add the same two particles as before. End of explanation ps = sim.particles tau = 1000. def migrationForce(reb_sim): ps[1].ax -= ps[1].vx/tau ps[1].ay -= ps[1].vy/tau ps[1].az -= ps[1].vz/tau Explanation: But we change the additional force to be End of explanation sim.additional_forces = migrationForce sim.force_is_velocity_dependent = 1 Explanation: We need to let REBOUND know that our force is velocity dependent. Otherwise, REBOUND will not update the velocities of the particles. End of explanation Nout = 1000 a_s = np.zeros(Nout) times = np.linspace(0.,100.*2.*np.pi,Nout) for i, time in enumerate(times): sim.integrate(time) a_s[i] = sim.particles[1].a fig = plt.figure(figsize=(15,5)) ax = plt.subplot(111) ax.set_xlabel("time") ax.set_ylabel("semi-major axis") plt.plot(times, a_s); Explanation: Now, we integrate as before. But this time we keep track of the semi-major axis instead of the eccentricity. End of explanation
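A quick consistency check one could append here (it is not in the original tutorial): for the simple drag acceleration -v/tau used above, a near-circular orbit is expected to shrink roughly as a(t) ≈ a0·exp(-2t/tau). Treat the factor of two as a back-of-the-envelope estimate rather than something REBOUND guarantees; the snippet reuses the times, a_s, tau, np and plt names defined above.
a_analytic = a_s[0] * np.exp(-2. * times / tau)   # rough analytic expectation

fig = plt.figure(figsize=(15,5))
ax = plt.subplot(111)
ax.set_xlabel("time")
ax.set_ylabel("semi-major axis")
plt.plot(times, a_s, label="simulation")
plt.plot(times, a_analytic, "--", label="exp(-2t/tau) estimate")
plt.legend();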
12,667
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Chemistry Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 1.8. Coupling With Chemical Reactivity Is Required Step12: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step13: 2.2. Code Version Is Required Step14: 2.3. Code Languages Is Required Step15: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required Step16: 3.2. Split Operator Advection Timestep Is Required Step17: 3.3. Split Operator Physical Timestep Is Required Step18: 3.4. Split Operator Chemistry Timestep Is Required Step19: 3.5. Split Operator Alternate Order Is Required Step20: 3.6. Integrated Timestep Is Required Step21: 3.7. Integrated Scheme Type Is Required Step22: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required Step23: 4.2. Convection Is Required Step24: 4.3. Precipitation Is Required Step25: 4.4. Emissions Is Required Step26: 4.5. Deposition Is Required Step27: 4.6. Gas Phase Chemistry Is Required Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required Step30: 4.9. Photo Chemistry Is Required Step31: 4.10. Aerosols Is Required Step32: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required Step33: 5.2. Global Mean Metrics Used Is Required Step34: 5.3. Regional Metrics Used Is Required Step35: 5.4. Trend Metrics Used Is Required Step36: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required Step37: 6.2. Matches Atmosphere Grid Is Required Step38: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required Step39: 7.2. Canonical Horizontal Resolution Is Required Step40: 7.3. Number Of Horizontal Gridpoints Is Required Step41: 7.4. Number Of Vertical Levels Is Required Step42: 7.5. Is Adaptive Grid Is Required Step43: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required Step44: 8.2. Use Atmospheric Transport Is Required Step45: 8.3. Transport Details Is Required Step46: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required Step47: 10. 
Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required Step48: 10.2. Method Is Required Step49: 10.3. Prescribed Climatology Emitted Species Is Required Step50: 10.4. Prescribed Spatially Uniform Emitted Species Is Required Step51: 10.5. Interactive Emitted Species Is Required Step52: 10.6. Other Emitted Species Is Required Step53: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required Step54: 11.2. Method Is Required Step55: 11.3. Prescribed Climatology Emitted Species Is Required Step56: 11.4. Prescribed Spatially Uniform Emitted Species Is Required Step57: 11.5. Interactive Emitted Species Is Required Step58: 11.6. Other Emitted Species Is Required Step59: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required Step60: 12.2. Prescribed Upper Boundary Is Required Step61: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required Step62: 13.2. Species Is Required Step63: 13.3. Number Of Bimolecular Reactions Is Required Step64: 13.4. Number Of Termolecular Reactions Is Required Step65: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required Step66: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required Step67: 13.7. Number Of Advected Species Is Required Step68: 13.8. Number Of Steady State Species Is Required Step69: 13.9. Interactive Dry Deposition Is Required Step70: 13.10. Wet Deposition Is Required Step71: 13.11. Wet Oxidation Is Required Step72: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required Step73: 14.2. Gas Phase Species Is Required Step74: 14.3. Aerosol Species Is Required Step75: 14.4. Number Of Steady State Species Is Required Step76: 14.5. Sedimentation Is Required Step77: 14.6. Coagulation Is Required Step78: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required Step79: 15.2. Gas Phase Species Is Required Step80: 15.3. Aerosol Species Is Required Step81: 15.4. Number Of Steady State Species Is Required Step82: 15.5. Interactive Dry Deposition Is Required Step83: 15.6. Coagulation Is Required Step84: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required Step85: 16.2. Number Of Reactions Is Required Step86: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required Step87: 17.2. Environmental Conditions Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-1', 'atmoschem') Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: NASA-GISS Source ID: SANDBOX-1 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:20 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. 
Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. 
Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation
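Every property cell in the record above leaves DOC.set_value as a TODO. Purely as an illustration (the property ids are taken from the cells above, but the values below are hypothetical placeholders rather than the real SANDBOX-1 description, and it is assumed that set_value may be called once per value for a multi-valued 1.N property, as the VALUE(S) comments suggest), a completed pair of cells could look like this:

# Hypothetical completed cells (placeholder values, not the actual model description)
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
DOC.set_value("Example atmospheric chemistry scheme")

DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
DOC.set_value("troposhere")      # enum value spelled as in the CMIP6 controlled vocabulary
DOC.set_value("stratosphere")    # one set_value call per selected value (cardinality 1.N)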
12,668
Given the following text description, write Python code to implement the functionality described below step by step Description: Model25 Step1: KMeans Step2: B. Modeling Step3: Original === Bench with ElasticNetCV
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt from utils import load_buzz, select, write_result from features import featurize, get_pos from containers import Questions, Users, Categories from nlp import extract_entities Explanation: Model25: using category accuracy per users End of explanation import pickle questions = pickle.load(open('questions01.pkl', 'rb')) users = pickle.load(open('users01.pkl', 'rb')) categories = pickle.load(open('categories01.pkl', 'rb')) set(users[0].keys()) - set(['cat_uid']) from sklearn.preprocessing import normalize wanted_user_items = list(set(users[0].keys()) - set(['cat_uid'])) X_pos_uid = users.select(wanted_user_items) X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid', 'ne_nor_mean', 'ne_mean', 'ne_median']) X_pos_uid = normalize(X_pos_uid, norm='l1') X_pos_qid = normalize(X_pos_qid, norm='l1') print(X_pos_qid[0]) print(X_pos_uid[0]) from sklearn.cluster import KMeans # Question category n_components = 27 est = KMeans(n_clusters=n_components) est.fit(X_pos_qid) pred_cat_qid = est.predict(X_pos_qid) plt.hist(pred_cat_qid, bins=50, facecolor='g', alpha=0.75) plt.xlabel("Category number") plt.ylabel("Count") plt.title("Question Category: " + str(n_components) + " categories") plt.grid(True) plt.show() # User category n_components = 27 est = KMeans(n_clusters=n_components) est.fit(X_pos_uid) pred_cat_uid = est.predict(X_pos_uid) plt.hist(pred_cat_uid, bins=50, facecolor='g', alpha=0.75) plt.xlabel("Category number") plt.ylabel("Count") plt.title("User Category: " + str(n_components) + " categories") plt.grid(True) plt.show() from collections import Counter users.sub_append('cat_uid', {key: str(pred_cat_uid[i]) for i, key in enumerate(users.keys())}) questions.sub_append('cat_qid', {key: str(pred_cat_qid[i]) for i, key in enumerate(questions.keys())}) # to get most frequent cat for some test data which do not have ids in train set most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0] most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0] print(most_pred_cat_uid) print(most_pred_cat_qid) Explanation: KMeans End of explanation def add_features(X): for item in X: # category for key in categories[item['category']].keys(): item[key] = categories[item['category']][key] uid = int(item['uid']) qid = int(item['qid']) # uid if int(uid) in users: item.update(users[uid]) else: acc = users.select(['acc_ratio_uid']) item['acc_ratio_uid'] = sum(acc) / float(len(acc)) item['cat_uid'] = most_pred_cat_uid # qid if int(qid) in questions: item.update(questions[qid]) import pickle questions = pickle.load(open('questions01.pkl', 'rb')) users = pickle.load(open('users01.pkl', 'rb')) categories = pickle.load(open('categories01.pkl', 'rb')) from utils import load_buzz, select, write_result from features import featurize, get_pos from containers import Questions, Users, Categories from nlp import extract_entities import math from collections import Counter from numpy import abs, sqrt from sklearn.linear_model import ElasticNetCV from sklearn.cross_validation import ShuffleSplit, cross_val_score from sklearn.feature_extraction import DictVectorizer from sklearn.preprocessing import normalize from sklearn.svm import LinearSVC from sklearn.cluster import KMeans wanted_user_items = list(set(users[0].keys()) - set(['cat_uid'])) X_pos_uid = users.select(wanted_user_items) X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid', 'ne_nor_mean', 'ne_mean', 'ne_median']) X_pos_uid = normalize(X_pos_uid, norm='l1') X_pos_qid = 
normalize(X_pos_qid, norm='l1') tu = ('l1', 'n_uid_clust', 'n_qid_clust', 'rmse') print ('=== Bench with ElasticNetCV: {0}, {1}, {2}, {3}'.format(*tu)) for ii in [27]: n_uid_clu = ii n_qid_clu = ii # clustering for uid uid_est = KMeans(n_clusters=n_uid_clu) uid_est.fit(X_pos_uid) pred_cat_uid = uid_est.predict(X_pos_uid) # clustering for qid qid_est = KMeans(n_clusters=n_qid_clu) qid_est.fit(X_pos_qid) pred_cat_qid = qid_est.predict(X_pos_qid) users.sub_append('cat_uid', {key: str(pred_cat_uid[i]) for i, key in enumerate(users.keys())}) questions.sub_append('cat_qid', {key: str(pred_cat_qid[i]) for i, key in enumerate(questions.keys())}) # to get most frequent cat for some test data which do not have ids in train set most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0] most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0] X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['sign_val', 'avg_pos']) add_features(X_train) unwanted_features = ['ne_tags', 'pos_token', 'question', 'sign_val', 'group'] wanted_features = list(set(X_train[1].keys()) - set(unwanted_features)) X_train = select(X_train, wanted_features) vec = DictVectorizer() X_train_dict_vec = vec.fit_transform(X_train) X_new = X_train_dict_vec #X_new = LinearSVC(C=0.01, penalty="l1", dual=False, random_state=50).fit_transform(X_train_dict_vec, y_train) n_samples = X_new.shape[0] cv = ShuffleSplit(n_samples, n_iter=5, test_size=0.2, random_state=50) print("L1-based feature selection:", X_train_dict_vec.shape, X_new.shape) for l1 in [0.7]: scores = cross_val_score(ElasticNetCV(n_jobs=3, normalize=True, l1_ratio = l1), X_new, y_train, cv=cv, scoring='mean_squared_error') rmse = sqrt(abs(scores)).mean() print ('{0}, {1}, {2}, {3}'.format(l1, n_uid_clu, n_qid_clu, rmse)) Explanation: B. Modeling End of explanation X_test = featurize(load_buzz(), group='test', sign_val=None, extra=['avg_pos']) add_features(X_test) X_test = select(X_test, wanted_features) unwanted_features = ['ne_tags', 'pos_token', 'question', 'sign_val', 'group'] wanted_features = list(set(X_train[1].keys()) - set(unwanted_features)) X_train = select(X_train, wanted_features) X_train[0] users[131] categories['astronomy'] X_test[1] vec = DictVectorizer() vec.fit(X_train + X_test) X_train = vec.transform(X_train) X_test = vec.transform(X_test) for l1_ratio in [0.7]: print('=== l1_ratio:', l1_ratio) regressor = ElasticNetCV(n_jobs=3, normalize=True, l1_ratio=l1_ratio, random_state=50) regressor.fit(X_train, y_train) print(regressor.coef_) print(regressor.alpha_) predictions = regressor.predict(X_test) write_result(load_buzz()['test'], predictions, file_name=str(l1_ratio)+'guess_adj.csv', adj=True) Explanation: Original === Bench with ElasticNetCV: l1, n_uid_clust, n_qid_clust, rmse L1-based feature selection: (28494, 1112) (28494, 1112) 0.7, 27, 27, 74.88480204218828 Without users features for regression === Bench with ElasticNetCV: l1, n_uid_clust, n_qid_clust, rmse L1-based feature selection: (28494, 1112) (28494, 1112) 0.7, 27, 27, 74.94733641570902 Training and testing model End of explanation
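The Model25 record above mixes its feature engineering with the clustering and regression steps. The underlying pattern, KMeans cluster ids exposed as a string-valued categorical feature, one-hot encoded by DictVectorizer and fed to ElasticNetCV, can be sketched on synthetic data as follows (the array shapes, feature names and values here are illustrative stand-ins, not the notebook's real features):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import ElasticNetCV

rng = np.random.RandomState(0)
X_raw = rng.rand(200, 4)          # stand-in for the per-user accuracy features
y = 100 * rng.rand(200)           # stand-in for the notebook's regression target

# cluster the rows, then use each row's cluster id as a categorical feature
cat_uid = KMeans(n_clusters=5, random_state=0).fit_predict(X_raw)
rows = [{'cat_uid': str(c), 'acc_ratio_uid': float(x[0])}
        for c, x in zip(cat_uid, X_raw)]

vec = DictVectorizer()            # one-hot encodes the string cluster ids
X = vec.fit_transform(rows)
model = ElasticNetCV(l1_ratio=0.7).fit(X, y)
print(model.alpha_, model.coef_[:3])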
12,669
Given the following text description, write Python code to implement the functionality described below step by step Description: Controlling LEDs with the Raspberry Pi GPIO pins In this notebook we will use the General Purpose Input/Output (GPIO) pins on the Raspberry Pi to make an LED light up. The GPIO pins are the 40 (numbered) pins opposite the HDMI connector, to which jumper wires can be connected. First, however, we have to agree with the Raspberry Pi on a way to tell the individual pins apart, and we do that with the setmode function as follows. IPython Instructions Step1: BCM is the numbering engraved on the Raspberry Pi case we are using, and it can also be found on the printed pinout diagram. Caution Step2: Now that all the GPIO settings are in place, we can put pin 18 to work. For this we first load the time library, which makes time-related functionality available Step3: Note
Python Code: #load the GPIO library import RPi.GPIO as GPIO #set BCM (Broadcom) mode for numbering the pins GPIO.setmode(GPIO.BCM) Explanation: Controlling LEDs with the Raspberry Pi GPIO pins In this notebook we will use the General Purpose Input/Output (GPIO) pins on the Raspberry Pi to make an LED light up. The GPIO pins are the 40 (numbered) pins opposite the HDMI connector, to which jumper wires can be connected. First, however, we have to agree with the Raspberry Pi on a way to tell the individual pins apart, and we do that with the setmode function as follows. IPython Instructions: Place your cursor in the cell below and press Shift+Enter, or click the Play button in the menu bar at the top, to run the code in the cell Shift + Enter: Run the cell and jump to the next cell Ctrl + Enter: Run the cell, but stay on the current cell Alt + Enter: Run the cell and create a new cell As long as a [*] is shown to the left of the cell, the code is still running. As soon as the code has finished, a sequence number appears and any output is printed below the cell End of explanation # If we create a variable named PIN with the value 18, we can use it everywhere and # still change it easily if the LED is moved to another pin. PIN = 18 # configure the pin as output GPIO.setup(PIN, GPIO.OUT) Explanation: BCM is the numbering engraved on the Raspberry Pi case we are using, and it can also be found on the printed pinout diagram. Caution: an LED is a diode, so it is important to send the current through it in the right direction. The difference between the short and the long leg of the LED therefore matters. We connect the long leg to GPIO18 on the Pi. But to avoid the LED (and consequently perhaps the Pi as well) having to handle too much current, we also need to place a resistor in series with the LED. Follow the illustration - and make sure you use a low-value resistor, such as 220-360 Ohm -: <img src="LED01.png" height="300"/> Then we only need to tell the Raspberry Pi that we would like to use pin GPIO18 as an output, so that we can change the voltage on that pin. End of explanation import time # loop forever (repeat) while True: # switch pin 18 off GPIO.output(PIN, 0) # wait half a second time.sleep(.5) # switch pin 18 back on GPIO.output(PIN, 1) # wait half a second time.sleep(.5) #... and again ... Explanation: Now that all the GPIO settings are in place, we can put pin 18 to work. For this we first load the time library, which makes time-related functionality available: time.sleep(x) is a function that tells the computer to wait x seconds before it moves on to the next instruction. End of explanation #reset the GPIO PIN = 18 GPIO.cleanup() GPIO.setmode(GPIO.BCM) GPIO.setup(PIN, GPIO.OUT) # set the PWM frequency in Hz (cycles per second) led = GPIO.PWM(PIN, 60) # start PWM led.start(0) try: while True: # increase the duty cycle by 1% each time for dc in range(0, 101, 1): led.ChangeDutyCycle(dc) time.sleep(0.05) # and back down again ...
for dc in range(100, -1, -1): led.ChangeDutyCycle(dc) time.sleep(0.05) except KeyboardInterrupt: pass led.stop() GPIO.cleanup() Explanation: Note: To interrupt the code you can: press the stop button in the menu bar choose Kernel > Interrupt in the menu at the top use a keyboard shortcut by typing i twice (only while the cell is in command mode (grey border)). Pulse Width Modulation Now we can send binary commands ("on" and "off"), but the GPIO library also lets us send a simulated analog signal, known as PWM. It consists of rapidly alternating between on and off signals and using the ratio between the total "on" time and the total "off" time to simulate an analog level between 0 and 1. e.g. 75% of the time "on" == a duty cycle of 75% == an analog signal at 75% of the maximum amplitude For an LED this shows up as the LED shining brighter or dimmer; for a motor it can be interpreted as a target speed or shaft position. End of explanation
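A small worked example of the duty-cycle arithmetic described above; the 60 Hz period matches the GPIO.PWM(PIN, 60) call in the code, and no GPIO hardware is needed to run it:

period_ms = 1000.0 / 60      # one PWM cycle at 60 Hz lasts about 16.7 ms
duty_cycle = 75              # percent of each cycle spent "on"
on_ms = period_ms * duty_cycle / 100
off_ms = period_ms - on_ms
print(on_ms, off_ms)         # roughly 12.5 ms on and 4.2 ms off per cycle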
12,670
Given the following text description, write Python code to implement the functionality described below step by step Description: Python Python is widely used general-purpose high-level programming language. Its design philosophy emphasizes code readability. It is very popular in science. Jupyter The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. * Evolved from IPython notebook * In addition to Python it supports many other programming languages (Julija, R, Haskell, etc..) * http Step1: Variables, lists and dictionaries Step2: Strings Step3: Conditionals Step4: Loops Step5: List comprehensions Step6: File operations Step8: Functions Step9: Python libraries Library is a collection of resources. These include pre-written code, subroutines, classes, etc. Step10: Plotting
Python Code: print('This is cell with code') Explanation: Python Python is widely used general-purpose high-level programming language. Its design philosophy emphasizes code readability. It is very popular in science. Jupyter The Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. * Evolved from IPython notebook * In addition to Python it supports many other programming languages (Julija, R, Haskell, etc..) * http://jupyter.org/ Getting started Anaconda/Conda (need to install) https://www.continuum.io/downloads I recommend PYTHON 2.7 Web hosted (only need a web browser) http://tmpnb.org The notebook Cell types - markdown and code This is Markdown cell End of explanation var1 = 1 my_string = "This is a string" var1 print(my_string) my_list = [1, 2, 3, 'x', 'y'] my_list my_list[0] my_list[1:3] salaries = {'Mike':2000, 'Ann':3000} salaries['Mike'] salaries['Jake'] = 2500 salaries Explanation: Variables, lists and dictionaries End of explanation long_string = 'This is a string \n Second line of the string' print(long_string) long_string.split(" ") long_string.split("\n") long_string.count('s') # case sensitive! long_string.upper() Explanation: Strings End of explanation if long_string.startswith('X'): print('Yes') elif long_string.startswith('T'): print('It has T') else: print('No') Explanation: Conditionals End of explanation for line in long_string.split('\n'): print line c = 0 while c < 10: c += 2 print c Explanation: Loops End of explanation some_numbers = [1,2,3,4] [x**2 for x in some_numbers] Explanation: List comprehensions End of explanation with open('../README.md', 'r') as f: content = f.read() print(content) Explanation: File operations End of explanation def average(numbers): return float(sum(numbers)/len(numbers)) average([1,2,2,2.5,3,]) map(average, [[1,2,2,2.5,3,],[3,2.3,4.2,2.5,5,]]) # %load cool_events.py #!/usr/bin/env python from IPython.display import HTML class HUB: HUB event class def __init__(self, version): self.full_name = "Heidelberg Unseminars in Bioinformatics" self.info = HTML("<p>Heidelberg Unseminars in Bioinformatics are participant-" "driven meetings where people with an interest in bioinformatics " "come together to discuss hot topics and exchange ideas and then go " "for a drink and a snack afterwards.</p>") self.version = version def __repr__(self): return self.full_name this_event = HUB(21) this_event this_event.full_name this_event.version Explanation: Functions End of explanation from math import exp exp(2) #shift tab to access documentation import math math.exp(10) import numpy as np # Numpy - package for scientifc computing #import pandas as pd # Pandas - package for working with data frames (tables) #import Bio # BioPython - package for bioinformatics #import sklearn # scikit-learn - package for machine larning #from rdkit import Chem # RDKit - Chemoinformatics library Explanation: Python libraries Library is a collection of resources. These include pre-written code, subroutines, classes, etc. End of explanation %matplotlib inline import matplotlib.pyplot as plt x_values = np.arange(0, 20, 0.1) y_values = [math.sin(x) for x in x_values] plt.plot(x_values, y_values) plt.scatter(x_values, y_values) plt.boxplot(y_values) Explanation: Plotting End of explanation
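The notebook above recommends Python 2.7, and its loop cells use the Python 2 print statement (print line, print c), which is a syntax error under Python 3. A minimal Python 3 version of those two cells, reusing the long_string defined earlier in the record, would be:

for line in long_string.split('\n'):
    print(line)

c = 0
while c < 10:
    c += 2
    print(c)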
12,671
Given the following text description, write Python code to implement the functionality described below step by step Description: The Counted (project by The Guardian to count the people killed by police in the US) Why is this necessary? From The Guardian's http Step1: # Open your dataset up using pandas in a Jupyter notebook Step2: Do a .head() to get a feel for your data Step3: Write down 12 questions to ask your data, or 12 things to hunt for in the data 1) How many people were killed by police in 2015? Step4: 2)Who was/were the oldest person killed? Step5: 3)Who was/were the youngest person killed? Step6: 4)What was the age average of people killed? Step7: 5)What was the state with more killings by police in 2015? Step8: 6)What was the city with more killings by police in 2015? Step9: 7)List all the incidents in Los Angeles Step10: 8) What was the month with more police killings in 2015? Step11: 9) What was the day with more police killings in July? Step12: 10) How are these killings distributed by race? Step13: 11) And by gender? Step14: 12) How many of the people killed where carrying a firearm? Step15: 13) Which was the law enforcement agency with more killings? Step16: 13) How many people were killed in custody? Step17: 14) List the people killed who where male, Hispanic/Latino and armed with a knife?
Python Code: !pip install pandas !pip install matplotlib import pandas as pd import matplotlib.pyplot as plt %matplotlib inline Explanation: The Counted (project by The Guardian to count the people killed by police in the US) Why is this necessary? From The Guardian's http://www.theguardian.com/us-news/ng-interactive/2015/jun/01/the-counted-police-killings-us-database "The US government has no comprehensive record of the number of people killed by law enforcement. This lack of basic data has been glaring amid the protests, riots and worldwide debate set in motion by the fatal police shooting of Michael Brown, an unarmed 18-year-old, in Ferguson, Missouri, in August 2014." End of explanation df = pd.read_csv('the-counted-2015.csv', encoding = "ISO-8859-1") Explanation: # Open your dataset up using pandas in a Jupyter notebook End of explanation df.head() Explanation: Do a .head() to get a feel for your data End of explanation df.tail(1) #there is one line per incident, so tail will give us the last incident. Explanation: Write down 12 questions to ask your data, or 12 things to hunt for in the data 1) How many people were killed by police in 2015? End of explanation df.sort_values('age', ascending=False).head(10) Explanation: 2)Who was/were the oldest person killed? End of explanation df.sort_values('age', ascending=True).head(5) Explanation: 3)Who was/were the youngest person killed? End of explanation df['age'].describe() #I could not get the average :s! #I thought using describe I would get something like this: #count 18617.000000 #mean 53.314841 #std 10.679143 #min 25.000000 #25% 45.400000 #50% 53.000000 #75% 60.500000 #max 98.100000 #Name: age, dtype: float64 Explanation: 4)What was the age average of people killed? End of explanation df['state'].value_counts() Explanation: 5)What was the state with more killings by police in 2015? End of explanation df['city'].value_counts() Explanation: 6)What was the city with more killings by police in 2015? End of explanation los_angeles = df['city'] == 'Los Angeles' #df['complete_date'] = df['day'], df['year'] df[los_angeles] #I wanted to add a new column to order the incidents chronologically (month, day, year) but I got an error saying, #ValueError: Length of values does not match length of index Explanation: 7)List all the incidents in Los Angeles End of explanation df['month'].value_counts() Explanation: 8) What was the month with more police killings in 2015? End of explanation df['July'] = df['month'] == 'July' df['July'].value_counts() july_count = df.groupby('month')['day'].value_counts() pd.DataFrame(july_count) #I tried to do df.groupby('July')['day'].value_counts() but it did not work. #Here I am getting the results but I am not able to see July. Explanation: 9) What was the day with more police killings in July? End of explanation df['raceethnicity'].value_counts() #this results do not align with those in the Guardian's website. ??? Explanation: 10) How are these killings distributed by race? End of explanation df['gender'].value_counts() Explanation: 11) And by gender? End of explanation df['armed'].value_counts() df.head(20) Explanation: 12) How many of the people killed where carrying a firearm? End of explanation df['lawenforcementagency'].value_counts() Explanation: 13) Which was the law enforcement agency with more killings? End of explanation df['classification'].value_counts() Explanation: 13) How many people were killed in custody? 
End of explanation male = df['gender'] == 'Male' latino = df['raceethnicity'] == 'Hispanic/Latino' knife = df['armed'] == 'Knife' df[male&latino&knife] #How can I count them now? It tells me it is not possible? df['gender'].value_counts() #doesn't look right! plt.style.use('ggplot') df['age'].value_counts().hist() #we had to create a new list since histograms need floats and we had strings (unknown) age2 = [] for point in df['age']: if point != 'Unknown': age2.append(float(point)) else: age2.append(0) df['age_2'] = age2 df['age_2'].hist() df['age'].sort_values() #we still have unknown values plotted no_unknowns = df.drop(df.index[[147, 1072, 1066, 88]]) no_unknowns['age'].sort_values() age2 = [] for point in no_unknowns['age']: age2.append(float(point)) no_unknowns['age2'] = age2 no_unknowns['age2'].hist() df['state'].value_counts().hist() df['state'].value_counts() Explanation: 14) List the people killed who where male, Hispanic/Latino and armed with a knife? End of explanation
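A possible follow-up to the counting question raised in the comments above, assuming the same df loaded from the-counted-2015.csv and the same three boolean masks; len() on the filtered frame gives the count, and coercing age to numeric gives the average that describe() could not:
# Count the incidents matched by all three masks (sketch, same masks as above)
male = df['gender'] == 'Male'
latino = df['raceethnicity'] == 'Hispanic/Latino'
knife = df['armed'] == 'Knife'
print(len(df[male & latino & knife]))
# Average age: turn the 'Unknown' strings into NaN first, then take the mean
ages = pd.to_numeric(df['age'], errors='coerce')
print(ages.mean())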
12,672
Given the following text description, write Python code to implement the functionality described below step by step Description: Finite Sequences Step1: Infinite Sequences Step2: Branching Sequences Sometimes we want to do multiple things with an infinite sequence. This is where the Python iterator abstraction starts to feel awkward. Step3: Also want Handle multiple incoming streams with joins Perform time-window operations like "group by 50 ms" or "slow down input stream, I'm swamped" ... Streamz Same applications, just a different way of thinking about controlling data. Step4: Easy to add on new components
Python Code: import json data = ['{"name": "Alice", "value": 1}', '{"name": "Bob", "value": 2}', '{"name": "Alice", "value": 3}', '{"name": "Alice", "value": 4}', '{"name": "Charlie", "value": 5}', '{"name": "Bob", "value": 6}', '{"name": "Alice", "value": 7}'] seq = list(map(json.loads, data)) seq import toolz seq = list(toolz.pluck('value', seq)) seq sum(seq) Explanation: Finite Sequences End of explanation def infinite_data(): for x in data: yield x # Here we stop, but we could keep going forever... raise StopIteration from operator import add seq = infinite_data() seq = map(json.loads, seq) seq = toolz.pluck('value', seq) seq = toolz.accumulate(add, seq) for item in seq: print(item) Explanation: Infinite Sequences End of explanation import itertools import logging from collections import deque seq = infinite_data() seq = map(json.loads, data) seq1, seq2 = itertools.tee(seq, 2) seq1 = toolz.pluck('value', seq1) # what we did before seq1 = toolz.accumulate(add, seq1) last_three = deque(maxlen=3) seq2 = map(last_three.append, seq2) while True: try: item = next(seq1) print(item) next(seq2) print(last_three) except StopIteration: break Explanation: Branching Sequences Sometimes we want to do multiple things with an infinite sequence. This is where the Python iterator abstraction starts to feel awkward. End of explanation from streamz import Stream L = [] # Simple linear stream source = Stream() stream = (source.map(json.loads) .map(lambda x: x['value']) .scan(add)) # Two actions whenever a value comes through stream.sink(print) stream.sink(L.append) for line in data: source.emit(line) L source.emit('{"name": "Charlie", "value": 100}'); L Explanation: Also want Handle multiple incoming streams with joins Perform time-window operations like "group by 50 ms" or "slow down input stream, I'm swamped" ... Streamz Same applications, just a different way of thinking about controlling data. End of explanation stream.sliding_window(2).sink(print) for line in data: source.emit(line) Explanation: Easy to add on new components End of explanation
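The wish list above mentions handling multiple incoming streams with joins; a minimal sketch of that with the same streamz API (zip waits for one element from each input before emitting the pair):
from streamz import Stream
a = Stream()
b = Stream()
# Pair up items from the two streams, then combine them
joined = a.zip(b).map(lambda pair: pair[0] + pair[1])
joined.sink(print)
for i in range(3):
    a.emit(i)
    b.emit(10 * i)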
12,673
Given the following text description, write Python code to implement the functionality described below step by step Description: Download the list of occultation periods from the MOC at Berkeley. Note that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning. Step1: Download the NuSTAR TLE archive. This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later. The times, line1, and line2 elements are now the TLE elements for each epoch. Step2: Here is where we define the observing window that we want to use. Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error. Step3: We want to know how to orient NuSTAR for the Sun. We can more or less pick any angle that we want. But this angle has to be specified a little in advance so that the NuSTAR SOC can plan the "slew in" maneuvers. Below puts DET0 in the top left corner (north-east with respect to RA/Dec coordinates). This is what you tell the SOC you want the "Sky PA angle" to be. Step4: Set up the offset you want to use here Step5: Loop over each orbit and correct the pointing for the same heliocentric pointing position. Note that you may want to update the pointing for solar rotation. That's up to the user to decide and is not done here. Step6: This is where you actually make the Mosaic
Python Code: fname = io.download_occultation_times(outdir='../data/') print(fname) Explanation: Download the list of occultation periods from the MOC at Berkeley. Note that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning. End of explanation tlefile = io.download_tle(outdir='../data') print(tlefile) times, line1, line2 = io.read_tle_file(tlefile) Explanation: Download the NuSTAR TLE archive. This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later. The times, line1, and line2 elements are now the TLE elements for each epoch. End of explanation tstart = '2018-09-27T12:00:00' tend = '2018-09-29T12:10:00' orbits = planning.sunlight_periods(fname, tstart, tend) Explanation: Here is where we define the observing window that we want to use. Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error. End of explanation pa = planning.get_nustar_roll(tstart, 0) print("NuSTAR Roll angle for Det0 in NE quadrant: {}".format(pa)) Explanation: We want to know how to orient NuSTAR for the Sun. We can more or less pick any angle that we want. But this angle has to be specified a little in advance so that the NuSTAR SOC can plan the "slew in" maneuvers. Below puts DET0 in the top left corner (north-east with respect to RA/Dec coordinates). This is what you tell the SOC you want the "Sky PA angle" to be. End of explanation offset = [0., 0.]*u.arcsec Explanation: Set up the offset you want to use here: The first element is the direction +WEST of the center of the Sun, the second is the offset +NORTH of the center of the Sun. If you want multiple pointing locations you can either specify an array of offsets or do this "by hand" below. End of explanation for ind, orbit in enumerate(orbits): midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0]) sky_pos = planning.get_skyfield_position(midTime, offset, parallax_correction=True) print("Orbit: {}".format(ind)) print("Orbit start: {} Orbit end: {}".format(orbit[0].isoformat(), orbit[1].isoformat())) print('Aim time: {} RA (deg): {} Dec (deg): {}'.format(midTime.isoformat(), sky_pos[0], sky_pos[1])) print("") Explanation: Loop over each orbit and correct the pointing for the same heliocentric pointing position. Note that you may want to update the pointing for solar rotation. That's up to the user to decide and is not done here. End of explanation # Just use the first orbit...or choose one. This may download a ton of deltat.preds, which is a known # bug to be fixed. orbit = orbits[20] planning.make_mosaic(orbit, write_output=True, make_regions=True) Explanation: This is where you actually make the Mosaic End of explanation
12,674
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: WASM demo - brainfuck Brainfuck is an esoteric language that consists of only eight simple commands Step3: Hello world example Step5: Fibonacci example
Python Code: import wasmfun as wf def _commands2instructions(commands): Compile brainfuck commands to WASM instructions (as tuples). instructions = [] while commands: c = commands.pop(0) if c == '>': instructions += [('get_local', 0), ('i32.const', 1), ('i32.add'), ('set_local', 0)] elif c == '<': instructions += [('get_local', 0), ('i32.const', 1), ('i32.sub'), ('set_local', 0)] elif c == '+': instructions += [('get_local', 0), ('get_local', 0), # once for the read, once for the write ('i32.load8_u', 0, 0), ('i32.const', 1), ('i32.add'), ('i32.store8', 0, 0)] elif c == '-': instructions += [('get_local', 0), ('get_local', 0), # once for the read, once for the write ('i32.load8_u', 0, 0), ('i32.const', 1), ('i32.sub'), ('i32.store8', 0, 0)] elif c == '.': instructions += [('get_local', 0), ('i32.load8_u', 0, 0), ('call', 0)] elif c == ',': # We don't support input, just set to zero instructions += [('get_local', 0), ('i32.const', 0), ('i32.store8', 0, 0)] elif c == '[': instructions += [('block', 'emptyblock'), # if current data point == 0 goto end of block ('get_local', 0), ('i32.load8_u', 0, 0), ('i32.const', 0), ('i32.eq'), ('br_if', 0), ('loop', 'emptyblock'), ] + _commands2instructions(commands ) + [ # if current data point > 0 goto start of block ('get_local', 0), ('i32.load8_u', 0, 0), ('i32.const', 0), ('i32.ne'), ('br_if', 0), ('end'), ('end')] elif c == ']': break else: raise ValueError('Unknown Brainfuck command: %r' % c) return instructions Explanation: WASM demo - brainfuck Brainfuck is an esoteric language that consists of only eight simple commands: &gt; &nbsp;&nbsp; increment the data pointer (to point to the next cell to the right). &lt; &nbsp;&nbsp; decrement the data pointer (to point to the next cell to the left). + &nbsp;&nbsp; increment (increase by one) the byte at the data pointer. - &nbsp;&nbsp; decrement (decrease by one) the byte at the data pointer. . &nbsp;&nbsp; output the byte at the data pointer. , &nbsp;&nbsp; accept one byte of input, storing its value in the byte at the data pointer. [ &nbsp;&nbsp; if the byte at the data pointer is zero, then instead of moving the instruction pointer forward to the next command, jump it forward to the command after the matching ] command. ] &nbsp;&nbsp; if the byte at the data pointer is nonzero, then instead of moving the instruction pointer forward to the next command, jump it back to the command after the matching [ command Brainfuck is a simple language, but that does not mean that programming Brainfuck is easy! End of explanation EXAMPLE1 = [This program prints "Hello World!" and a newline to the screen] ++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>. >---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++. instructions = _commands2instructions([c for c in EXAMPLE1 if c in '><+-.,[]']) m = wf.Module( wf.ImportedFuncion('print_charcode', ['i32'], [], 'js', 'print_charcode'), wf.Function('$main', [], [], ['i32'], instructions), wf.MemorySection((1, 1)), wf.DataSection(), ) len(m.to_bytes()) wf.run_wasm_in_notebook(m) Explanation: Hello world example End of explanation EXAMPLE2 = [Generate the fibonacci number sequence, (for numbers under 100). 
Taken from http://esoteric.sange.fi/brainfuck/bf-source/prog/fibonacci.txt ] +++++++++++>+>>>>++++++++++++++++++++++++++++++++++++++++++++> ++++++++++++++++++++++++++++++++<<<<<<[>[>>>>>>+>+<<<<<<<-]>>>>>>> [<<<<<<<+>>>>>>>-]<[>++++++++++[-<-[>>+>+<<<-]>>>[<<<+>>>-]+<[>[-] <[-]]>[<<[>>>+<<<-]>>[-]]<<]>>>[>>+>+<<<-]>>>[<<<+>>>-]+<[>[-]<[-]]> [<<+>>[-]]<<<<<<<]>>>>>[++++++++++++++++++++++++++++++++++++++++++++++++. [-]]++++++++++<[->-<]>++++++++++++++++++++++++++++++++++++++++++++++++.[-] <<<<<<<<<<<<[>>>+>+<<<<-]>>>>[<<<<+>>>>-]<-[>>.>.<<<[-]]<<[>>+>+<<<-]>>> [<<<+>>>-]<<[<+>-]>[<+>-]<<<-] instructions = _commands2instructions([c for c in EXAMPLE2 if c in '><+-.,[]']) m = wf.Module( wf.ImportedFuncion('print_charcode', ['i32'], [], 'js', 'print_charcode'), wf.Function('$main', [], [], ['i32'], instructions), wf.MemorySection((1, 1)), wf.DataSection(), ) wf.run_wasm_in_notebook(m) wf.run_wasm_in_node(m) Explanation: Fibonacci example End of explanation
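The same build steps can be wrapped in a small helper so any Brainfuck program compiles the same way; a sketch that reuses only the calls shown above (bf_to_wasm is a name introduced here, and ImportedFuncion is spelled as in the wasmfun API used in this notebook):
def bf_to_wasm(program):
    # Keep only the eight Brainfuck commands, then compile them to WASM instructions
    instructions = _commands2instructions([c for c in program if c in '><+-.,[]'])
    return wf.Module(
        wf.ImportedFuncion('print_charcode', ['i32'], [], 'js', 'print_charcode'),
        wf.Function('$main', [], [], ['i32'], instructions),
        wf.MemorySection((1, 1)),
        wf.DataSection(),
    )

wf.run_wasm_in_notebook(bf_to_wasm(EXAMPLE1))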
12,675
Given the following text description, write Python code to implement the functionality described below step by step Description: One Dimensional Data Worksheet This worksheet reviews the concepts discussed about 1 dimensional data. The goal for these exercises is getting you to think in terms of vectorized computing. This worksheet should take 20-30 minutes to complete. Step1: Exercise 1 Create a Series object with 100 random integers, then filter out odd integers and reindex the Series. Hint Step2: Exercise 3 The code below contains a lambda function which converts a temperature from Farenheit to Celsius. You are given a Series called temperatures in Farhenheit. Using the .apply() function, convert the data into degrees Celsius. Step3: Exercise 4 You are given a list of numbers called numList. Without using a loop, write a script to count occurances of each value in the list. Step4: Exercise 5 You are given a Series of IP Addresses and the goal is to limit this data to private IP addresses. Python has an ipaddress module which provides the capability to create, manipulate and operate on IPv4 and IPv6 addresses and networks. Complete documentation is available here
Python Code: import pandas as pd import numpy as np Explanation: One Dimensional Data Worksheet This worksheet reviews the concepts discussed about 1 dimensional data. The goal for these exercises is getting you to think in terms of vectorized computing. This worksheet should take 20-30 minutes to complete. End of explanation numbers = ['(342)123-2345', '410-342-3421', '(234 434-2121', '(301)822-3423', '123-234-3423', '(410)555-4443', 'AAAAHHH', '(XXX)XXX-XXXX', '(602)123-4535', '(234)127-4534'] #Your code here... Explanation: Exercise 1 Create a Series object with 100 random integers, then filter out odd integers and reindex the Series. Hint: you can use python np.random.random_integers(1, 100, 100) to create the random numbers. Print out the first 20 numbers. Exercise 2 You will be given a list containing 10 strings. Create a new Series called validPhoneNumbers that only contains data in the format (XXX)XXX-XXXX. Don't forget to reindex the series after you've filtered it. End of explanation #This function converts a number from Farenheit to Celsius toCelsius = lambda x: (float(5)/9)*(x-32) #Creates a series with numbers that represent temperatures in Farenheit tempsInFarenheit = pd.Series( [92,33,-5,17,122,87 ]) #Your code here... Explanation: Exercise 3 The code below contains a lambda function which converts a temperature from Farenheit to Celsius. You are given a Series called temperatures in Farhenheit. Using the .apply() function, convert the data into degrees Celsius. End of explanation numList = [1,1,1,1,1,2,4,5,7,5,4,5,6,4,3,5,5,5,6,9,0,7,6,7,5,4,4,7] #Your code here... Explanation: Exercise 4 You are given a list of numbers called numList. Without using a loop, write a script to count occurances of each value in the list. End of explanation import ipaddress hosts = [ '192.168.1.2', '10.10.10.2', '172.143.23.34', '34.34.35.34', '172.15.0.1', '172.17.0.1'] #Your code here... Explanation: Exercise 5 You are given a Series of IP Addresses and the goal is to limit this data to private IP addresses. Python has an ipaddress module which provides the capability to create, manipulate and operate on IPv4 and IPv6 addresses and networks. Complete documentation is available here: https://docs.python.org/3/library/ipaddress.html. Here are some examples of how you might use this module: ```python import ipaddress myIP = ipaddress.ip_address( '192.168.0.1' ) myNetwork = ipaddress.ip_network( '192.168.0.0/28' ) Check membership in network if myIP in myNetwork: #This works print "Yay!" Loop through CIDR blocks for ip in myNetwork: print( ip ) 192.168.0.0 192.168.0.1 … … 192.168.0.13 192.168.0.14 192.168.0.15 Testing to see if an IP is private if myIP.is_private: print( "This IP is private" ) else: print( "Routable IP" ) ``` First, write a function which takes an IP address and returns true if the IP is private, false if it is public. HINT: use the ipaddress module. Next, use this to create a Series of true/false values in the same sequence as your original Series. Finally, use this to filter out the original Series so that it contains only private IP addresses. End of explanation
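One possible solution sketch for Exercise 5 above, assuming the same hosts list and the ipaddress import from that cell; a boolean mask built with apply keeps only the private addresses:
hosts_series = pd.Series(hosts)
# is_private is a property of the ip_address object
is_private = hosts_series.apply(lambda ip: ipaddress.ip_address(ip).is_private)
private_hosts = hosts_series[is_private].reset_index(drop=True)
print(private_hosts)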
12,676
Given the following text description, write Python code to implement the functionality described below step by step Description: Statements Assessment Test Lets test your knowledge! Use for, split(), and if to create a Statement that will print out words that start with 's' Step1: Use range() to print all the even numbers from 0 to 10. Step2: Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3. Step3: Go through the string below and if the length of a word is even print "even!" Step4: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz". Step5: Use List Comprehension to create a list of the first letters of every word in the string below
Python Code: st = 'Print only the words that start with s in this sentence' #Code here # to note: a for in for a string iterates through letters, not numbers for word in st.split(): letter = word[0].lower() if letter == 's': print word Explanation: Statements Assessment Test Lets test your knowledge! Use for, split(), and if to create a Statement that will print out words that start with 's': End of explanation #Code Here range(0, 11, 2) Explanation: Use range() to print all the even numbers from 0 to 10. End of explanation #Code in this cell [num for num in range(1, 50) if num % 3 == 0] Explanation: Use List comprehension to create a list of all numbers between 1 and 50 that are divisible by 3. End of explanation st = 'Print every word in this sentence that has an even number of letters' #Code in this cell for word in st.split(): if len(word) % 2 == 0: print word Explanation: Go through the string below and if the length of a word is even print "even!" End of explanation #Code in this cell def fizzbuzz(start, end): for i in range(start, end): is_fizzy = i % 3 == 0 is_buzzy = i % 5 == 0 if is_fizzy and not is_buzzy: print "Fizz" elif is_buzzy and not is_fizzy: print "Buzz" elif is_fizzy and is_buzzy: print "FizzBuzz" else: print i fizzbuzz(0, 100) Explanation: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz". End of explanation st = 'Create a list of the first letters of every word in this string' #Code in this cell running_list = [] for word in st.split(): running_list.append(word[0]) print running_list Explanation: Use List Comprehension to create a list of the first letters of every word in the string below: End of explanation
12,677
Given the following text description, write Python code to implement the functionality described below step by step Description: Versicherung on Paper Step1: Gesucht wird eine wiederholungsfreie Liste der Herstellerländer 3 P Step2: Listen Sie alle Fahrzeugtypen und die Anzahl Fahrzeuge dieses Typs, aber nur, wenn mehr als 2 Fahrzeuge des Typs vorhanden sind. Sortieren Sie die Ausgabe nach Fahrzeugtypen. 4 P Step3: Ermittle die Namen und Vornamen der Mitarbeiter incl. Abteilungsname, deren Abteilung ihren Sitz in Dortmund oder Bochum hat. Step4: Gesucht wird für jeden Fahrzeughersteller (Angabe der ID reicht) und jedes Jahr die kleinste und größte Schadenshöhe. Geben Sie falls möglich auch die Differenz zwischen den beiden Werten mit in der jeweiligen Ergebnismenge aus. Ansonsten erzeugen Sie für diese Aufgabe ein eigenes sql-Statement. 5 P Step5: Zeige alle Mitarbeiter und deren Autokennzeichen, die als Dienstwagen einen Opel fahren. 4 P Step6: Welche Fahrzeuge haben Schäden verursacht, deren Schadenssumme höher als die durchschnittliche Schadenshöhe sind. 5 P Step7: Welche Mitarbeiter sind älter als das Durchschnittsalter der Mitarbeiter. 4 P
Python Code: %load_ext sql %sql mysql://steinam:steinam@localhost/versicherung_complete Explanation: Versicherung on Paper End of explanation %%sql -- meine Lösung select distinct(Land) from Fahrzeughersteller; %%sql -- deine Lösung select fahrzeughersteller.Land from fahrzeughersteller group by fahrzeughersteller.Land ; Explanation: Gesucht wird eine wiederholungsfreie Liste der Herstellerländer 3 P End of explanation %%sql -- meine Lösung select fahrzeugtyp.Bezeichnung, count(fahrzeug.iD) as Anzahl from fahrzeugtyp left join fahrzeug on fahrzeugtyp.id = fahrzeug.fahrzeugtyp_id group by fahrzeugtyp.bezeichnung having count(Anzahl) > 2 %%sql select *, (select count(*) from fahrzeug where fahrzeug.fahrzeugtyp_id = fahrzeugtyp.id) as Fahrzeuge from fahrzeugtyp having Fahrzeuge > 2 order by fahrzeugtyp.bezeichnung; Explanation: Listen Sie alle Fahrzeugtypen und die Anzahl Fahrzeuge dieses Typs, aber nur, wenn mehr als 2 Fahrzeuge des Typs vorhanden sind. Sortieren Sie die Ausgabe nach Fahrzeugtypen. 4 P End of explanation %%sql -- meine Lösung -- select ID from Abteilung where Abteilung.Ort = 'Dortmund' or abteilung.Ort = 'Bochum' select Name, vorname, Bezeichnung, Abteilung.ID, Mitarbeiter.Abteilung_ID, Abteilung.Ort from Mitarbeiter inner join Abteilung on Mitarbeiter.Abteilung_ID = Abteilung.ID where Abteilung.Ort in('Dortmund', 'Bochum') order by Name %%sql -- deine Lösung select mitarbeiter.Name, mitarbeiter.Vorname, (select abteilung.bezeichnung from abteilung where abteilung.id = mitarbeiter.abteilung_id) as Abteilung, (select abteilung.ort from abteilung where abteilung.id = mitarbeiter.abteilung_id) as Standort from mitarbeiter having Standort = "Dortmund" or Standort = "Bochum"; Explanation: Ermittle die Namen und Vornamen der Mitarbeiter incl. Abteilungsname, deren Abteilung ihren Sitz in Dortmund oder Bochum hat. End of explanation %%sql -- meine Lösung select fahrzeughersteller.id, year(datum) as Jahr, min(zuordnung_sf_fz.schadenshoehe), max(zuordnung_sf_fz.Schadenshoehe), (max(zuordnung_sf_fz.schadenshoehe) - min(zuordnung_sf_fz.schadenshoehe)) as Differenz from fahrzeughersteller left join fahrzeugtyp on fahrzeughersteller.id = fahrzeugtyp.hersteller_ID inner join fahrzeug on fahrzeugtyp.id = fahrzeug.fahrzeugtyp_id inner join zuordnung_sf_fz on fahrzeug.id = zuordnung_sf_fz.fahrzeug_id inner join schadensfall on schadensfall.id = zuordnung_sf_fz.schadensfall_id group by fahrzeughersteller.id, year(datum) %%sql -- redigierte Version von Wortmann geht select fahrzeughersteller.Name, (select min(zuordnung_sf_fz.schadenshoehe) from zuordnung_sf_fz where zuordnung_sf_fz.fahrzeug_id in( select fahrzeug.id from fahrzeug where fahrzeug.fahrzeugtyp_id in( select fahrzeugtyp.id from fahrzeugtyp where fahrzeugtyp.hersteller_id = fahrzeughersteller.id ) ) ) as Kleinste, (select max(zuordnung_sf_fz.schadenshoehe) from zuordnung_sf_fz where zuordnung_sf_fz.fahrzeug_id in( select fahrzeug.id from fahrzeug where fahrzeug.fahrzeugtyp_id in( select fahrzeugtyp.id from fahrzeugtyp where fahrzeugtyp.hersteller_id = fahrzeughersteller.id ) ) ) as `Groesste` from fahrzeughersteller; Explanation: Gesucht wird für jeden Fahrzeughersteller (Angabe der ID reicht) und jedes Jahr die kleinste und größte Schadenshöhe. Geben Sie falls möglich auch die Differenz zwischen den beiden Werten mit in der jeweiligen Ergebnismenge aus. Ansonsten erzeugen Sie für diese Aufgabe ein eigenes sql-Statement. 
5 P End of explanation %%sql select Mitarbeiter.Name, dienstwagen.Kennzeichen from Mitarbeiter inner join dienstwagen on mitarbeiter.id = dienstwagen.Mitarbeiter_id inner join fahrzeugtyp on dienstwagen.fahrzeugtyp_Id = fahrzeugtyp.id inner join fahrzeughersteller on fahrzeugtyp.hersteller_id = fahrzeughersteller.id where Fahrzeughersteller.NAme = 'Opel' %%sql select * from mitarbeiter where mitarbeiter.id in( select dienstwagen.mitarbeiter_id from dienstwagen where dienstwagen.mitarbeiter_id = mitarbeiter.id and dienstwagen.fahrzeugtyp_id in( select fahrzeugtyp.id from fahrzeugtyp where fahrzeugtyp.hersteller_id in( select fahrzeughersteller.id from fahrzeughersteller where fahrzeughersteller.name = "Opel" ) ) ) Explanation: Zeige alle Mitarbeiter und deren Autokennzeichen, die als Dienstwagen einen Opel fahren. 4 P End of explanation %%sql select fahrzeug.kennzeichen, sum(schadenshoehe) from fahrzeug inner join zuordnung_sf_fz on fahrzeug.id = zuordnung_sf_fz.fahrzeug_id group by fahrzeug.kennzeichen having sum(schadenshoehe) > (select avg(schadenshoehe) from zuordnung_sf_fz) %%sql -- deine Lösung Wortmann /* select * from fahrzeug having fahrzeug.id in( select zuordnung_sf_zf.fahrzeugtyp_id from zuordnung_sf_zf where zuordnung_sf_zf.schadenhoehe > ((select sum(zuordnung_sf_zf.schadenhoehe) from zuordnung_sf_zf)) / (select count(*) from zuordnung_sf_zf)) */ select * from fahrzeug having fahrzeug.id in( select zuordnung_sf_fz.fahrzeug_id from zuordnung_sf_fz where zuordnung_sf_fz.schadenshoehe > ((select sum(zuordnung_sf_fz.schadenshoehe) from zuordnung_sf_fz)) / (select count(*) from zuordnung_sf_fz)) Explanation: Welche Fahrzeuge haben Schäden verursacht, deren Schadenssumme höher als die durchschnittliche Schadenshöhe sind. 5 P End of explanation %%sql select Mitarbeiter.Name, Mitarbeiter.Geburtsdatum from Mitarbeiter where Geburtsdatum < (select avg(Geburtsdatum) from Mitarbeiter ma) order by Mitarbeiter.Name %%sql -- geht auch select ma.Name, ma.Geburtsdatum from Mitarbeiter ma where (now() - ma.Geburtsdatum) < (now() - (select avg(geburtsdatum) from mitarbeiter)) order by ma.Name; %%sql -- deine Lösung Wortmann select * from mitarbeiter having mitarbeiter.geburtsdatum < (select sum(mitarbeiter.geburtsdatum) from mitarbeiter) / (select count(*) from mitarbeiter) Explanation: Welche Mitarbeiter sind älter als das Durchschnittsalter der Mitarbeiter. 4 P End of explanation
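For the age-average comparison, an alternative sketch is to compare ages in whole years rather than raw dates, using MySQL's TIMESTAMPDIFF (table and column names as in the schema above):
%%sql
-- Sketch: employees older than the average age in whole years
select m.Name, m.Vorname, m.Geburtsdatum
from Mitarbeiter m
where timestampdiff(year, m.Geburtsdatum, curdate()) >
      (select avg(timestampdiff(year, Geburtsdatum, curdate())) from Mitarbeiter)
order by m.Name;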
12,678
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction This notebook facilitates the manual curation of sample alignment. Step1: Setup Parameters Step2: Directories Step3: Extract list of files Step4: Import raw data and perform preprocessing Step5: Example alignment The start function initializes a sample for alignment. It displays a plot of the data with three projections (XY, XZ, and ZY) that should give a general sense of sample alignment. Note, that the start function attempts to align the sample using PCA. In some really bad cases, it is better to work on a sample without PCA alignment, in which case we can use the following code to initialize Step6: Process remaining samples The workflow presented in the section above needs to be applied to all samples in klist. It is up to the user to decide which corrections are appropriate for each individual sample. I recommend that this review is also used as an opportunity to exclude samples with issues in the raw data, such as tears in the commissure or limited signal. The reason for rejecting a sample can be recorded in this notebook for future reference. Step7: Wrapping up
Python Code: import deltascope.alignment as ut import numpy as np import pandas as pd import matplotlib.pyplot as plt import h5py import os import re import time import tqdm Explanation: Introduction This notebook facilitates the manual curation of sample alignment. End of explanation # -------------------------------- # -------- User input ------------ # -------------------------------- param = { 'gthresh':0.5, 'scale':[1,1,1], 'microns':[0.16,0.16,0.21], 'mthresh':0.5, 'radius':10, 'comp_order':[0,2,1], 'fit_dim':['x','z'], 'deg':2, # Don't forget to modify this with a specific sample name 'expname':'expname' } Explanation: Setup Parameters End of explanation # -------------------------------- # -------- User input ------------ # -------------------------------- # Specify file paths to directories containing probability files # after processing by ilastik gfap = os.path.abspath('..\expname\GFAP\Prob') at = os.path.abspath('..\expname\AT\Prob') # Specify root directory where output will be saved root = os.path.abspath('..\expname') # Output directory with timestamp outname = 'Output_'+time.strftime("%m-%d-%H-%M", time.localtime()) # Create output directory outdir = os.path.join(root,outname) os.mkdir(outdir) Explanation: Directories End of explanation Dat = {} for f in os.listdir(at): if 'h5' in f: num = re.findall(r'\d+',f.split('.')[0])[-1] Dat[num] = os.path.join(at,f) Dzrf = {} for f in os.listdir(gfap): if 'h5' in f: num = re.findall(r'\d+',f.split('.')[0])[-1] Dzrf[num] = os.path.join(gfap,f) # Extract list of filename keys klist = Dat.keys() # Create dictionaries to contain the deltascope brain object for each sample Dbat = {} Dbzrf = {} Explanation: Extract list of files End of explanation %%time for k in tqdm.tqdm(klist): if k not in list(Dbat.keys()): Dbat[k] = ut.preprocess(Dat[k],param) Dbzrf[k] = ut.preprocess(Dzrf[k],param,pca=Dbat[k].pcamed, mm=Dbat[k].mm,vertex=Dbat[k].vertex) else: print(k,'already processed') Explanation: Import raw data and perform preprocessing End of explanation ''' Define wrapper functions for starting and saving to minimize the number of inputs that the user needs to type for each call of the function.''' def start(k): return(ut.start(k,Dbat,[Dbzrf],im=True)) def save_both(k,dfa,dfb): ut.save_both(k,dfa,dfb,outdir,param.expname) '''Save model parameters for each file to a dataframe that can be exported for later reference.''' model = pd.DataFrame({'a':[],'b':[],'c':[]}) def save_model(k,mm,model): row = pd.Series({'a':mm[0],'b':mm[1],'c':mm[2]},name=k) model = model.append(row) return(model) '''Define a function that can both fit a model and plot it on an existing plot''' def fit_model(axi,df,mm=None): if mm == None: mm = np.polyfit(df.x,df.z,2) p = np.poly1d(mm) xrange = np.arange(np.min(df.x),np.max(df.x)) axi.plot(xrange,p(xrange),c='m') return(mm) '''Take a set of points and transform to a dataframe format for ease of access.''' def pick_pts(x1,z1,vx,vz,x2,z2): pts = pd.DataFrame({'x':[x1,vx,x2],'z':[z1,vz,z2]}) return(pts) Explanation: Example alignment The start function initializes a sample for alignment. It displays a plot of the data with three projections (XY, XZ, and ZY) that should give a general sense of sample alignment. Note, that the start function attempts to align the sample using PCA. 
In some really bad cases, it is better to work on a sample without PCA alignment, in which case we can use the following code to initialize: python k = klist[0] df,Ldf = get_dfs(k) ax = ut.make_graph([df]+Ldf) In most cases, the following approach will be appropriate. python k,df,Ldf,ax = start(klist[0]) k: The dictionary key that identifies this sample. It should be the sample number extracted from the filename. df: The dataframe containing datapoints associated with the primary alignment channel, in this case, AT. Ldf: A list of additional dataframes corresponding to other channels in the collection. In this template, we are assuming only one additional channel, ZRF. ax: The array containing subplots for this sample. There should be three projections for each channel and a row per channel. In this example the dimensions of the array are 2x3. Correction options A: Rotation around the X axis This rotation error can best be identified in the YZ projection where the line of the sample does not fall on either axis. In order to correct this error, we will select two points in YZ that will be used to calculate a line that fits the sample. We can then use this line to calculate the angle of rotation needed to align the sample with the Z axis. To perform this rotation, we will use the ut.check_yz function, which will fit a line in the YZ plane to use for rotation. This function takes only df and Ldf as required inputs. python df1,Ldf1,ax,p = ut.check_yz(df,Ldf) This function returns updated verions of df and Ldf following the rotation. I typically define new variables df1 and Ldf1 to keep track of the changes. It also returns a plotting object ax that will display a before and after comparison of the alignment with the points used for alignment plotted on the before plot for reference. Finally, it returns the np.poly1d object which contains the line that was used for alignment. If the results of this alignment are good, we can proceed with df1 and Ldf1. Alternatively, we can try to manually assign an improved line and pass the resulting np.poly1d object to ut.check_yz as an optional arguement ut.check_yz(df,Ldf,mm=np.poly1d). B: Rotation around the Y axis This error can be seen in the XZ projection where the parabola of the sample is tilted towards one side or the other. In order to correct this error, we will select two points that mark the endpoints of the parabola. The line between these two points will be used to calculate the angle of rotation to correct the parabola. To perform this rotation, we will use the check_pts function, which will perform a rotation either in the XY or XZ plane. It requires three parameters: df, Ldf, and the second dimension of the plane of interest ('y' or 'z'). ```python Attempt rotation based on alignment points calculated from the data df1,Ldf1,pts,ax = ut.check_pts(df,Ldf,'z') ``` In addition to typical outputs, this function returns pts, which is a pandas dataframe specifying two points in the XZ plane that were used for alignment. If we are unhappy with the results of the alignment, we can manually adjust the anchor points and then recalculate the rotation. ```python Assign new values to pts pts.iloc[0].x = 10 pts.iloc[1].z = 50 Replot the updated pts to check choices ax[0,1].scatter(pts.x,pts.z,c='y') ``` If these pts look good, then we can use ut.revise_pts to recalculate the rotation. python df2,Ldf2,ax = ut.revise_pts(df,Ldf,'z',pts=pts) If we are happy with these results, we could use df2 and Ldf2 to save as our final result. 
C: Mismatched Y and Z axes Here the parabola appears in the XY projection when we expect it in the XZ projection. We can correct this by simply switching the Y and Z axes. The ut.zyswitch function facilitates this process. ```python df1,Ldf1 = zyswitch(df,Ldf) Plot data to check correction result ax = ut.make_graph(df1,Ldf1) ``` D: Upside down parabola We expect the parabola to lie in the positive half of the Z dimension. To correct an upside down parabola, we rotate the sample by 180$^\circ$ around X axis. Here, we will use the ut.flip function. ```python df1,Ldf1 = ut.flip(df,Ldf) Plot data to check correction result ax = ut.make_graph(df1,Ldf1) ``` Correct vertex After performing the corrections described in the previous section, the vertex of the parabola may no longer be positioned at the origin. The function ut.ch_vertex attempts to reposition the vertex to the origin. This function also returns the math model mm, which describes the parabola of the data. We will save mm for future reference. For this example, we will assume that we are happy with the alignment in df1 and Ldf1. python df2,Ldf2,mm,ax = ut.ch_vertex(df1,Ldf1) If we disagree with the assignment of the vertex, we can manually pick three points that will be used to recalculate the vertex. These points should mark the two ends of the parabola (x1,z1) and (x2,z2) as well as the approximate vertex (vx,vz). We can specify and check these points before using them to shift the sample. python pts = pick_pts(-78,12,-36,0,0,10) #these are essentially random numbers for example ax[0,1].scatter(pts.x,pts.z,c='m',s=50) Finally if we like these points, we can run ut.ch_vertex again with these specific points. python df3,Ldf3,mm,ax = ut.ch_vertex(df2,Ldf2,pts=pts) In order to complete the alignment process, we need to add the math model mm to dataframe for future reference and save the aligned data. python model = save_model(k,mm,model) save_both(k,df3,Ldf3[0]) Mack's Notes on Minimum one must do to each sample python k,df,Ldf,ax = start(klist[0]) df,Ldf,mm,ax = ut.ch_vertex(df,Ldf) model = save_model(k,mm,model) save_both(k,df,Ldf[0]) Define experiment specific functions End of explanation klist Explanation: Process remaining samples The workflow presented in the section above needs to be applied to all samples in klist. It is up to the user to decide which corrections are appropriate for each individual sample. I recommend that this review is also used as an opportunity to exclude samples with issues in the raw data, such as tears in the commissure or limited signal. The reason for rejecting a sample can be recorded in this notebook for future reference. End of explanation model.to_csv(os.path.join(outdir,'model.csv')) Explanation: Wrapping up: after all samples are processed Once all of the data has been processed, we want to save the model data we collected to a file for future reference. End of explanation
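A sketch of the minimum per-sample pass from the notes above applied to every remaining key, reusing only the helpers already defined (start, ut.ch_vertex, save_model, save_both); samples that need one of the manual corrections can still be handled individually before running this:
for k in klist:
    k, df, Ldf, ax = start(k)
    df, Ldf, mm, ax = ut.ch_vertex(df, Ldf)
    model = save_model(k, mm, model)
    save_both(k, df, Ldf[0])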
12,679
Given the following text description, write Python code to implement the functionality described below step by step Description: Ordinary Differential Equations Exercise 3 Imports Step1: Damped, driven nonlinear pendulum The equations of motion for a simple pendulum of mass $m$, length $l$ are Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$. Step5: Simple pendulum Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy. Integrate the equations of motion. Plot $E/m$ versus time. Plot $\theta(t)$ and $\omega(t)$ versus time. Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant. Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable. Step7: Damped pendulum Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$. Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$. Decrease your atol and rtol even futher and make sure your solutions have converged. Make a parametric plot of $[\theta(t),\omega(t)]$ versus time. Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\theta \in [-10,10]$ Label your axes and customize your plot to make it beautiful and effective. Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral. Step9: Use interact to explore the plot_pendulum function with
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import seaborn as sns from scipy.integrate import odeint from IPython.html.widgets import interact, fixed Explanation: Ordinary Differential Equations Exercise 3 Imports End of explanation g = 9.81 # m/s^2 l = 0.5 # length of pendulum, in meters tmax = 50. # seconds t = np.linspace(0, tmax, int(100*tmax)) Explanation: Damped, driven nonlinear pendulum The equations of motion for a simple pendulum of mass $m$, length $l$ are: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta $$ When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics: $$ \frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t) $$ In this equation: $a$ governs the strength of the damping. $b$ governs the strength of the driving force. $\omega_0$ is the angular frequency of the driving force. When $a=0$ and $b=0$, the energy/mass is conserved: $$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$ Basic setup Here are the basic parameters we are going to use for this exercise: End of explanation def derivs(y, t, a, b, omega0): Compute the derivatives of the damped, driven pendulum. Parameters ---------- y : ndarray The solution vector at the current time t[i]: [theta[i],omega[i]]. t : float The current time t[i]. a, b, omega0: float The parameters in the differential equation. Returns ------- dy : ndarray The vector of derviatives at t[i]: [dtheta[i],domega[i]]. dy0 = y[1] dy1 = -g/l * np.sin(y[0]) - a*dy0 - b*np.sin(omega0*t) return(dy0,dy1) derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0) assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.]) def energy(y): Compute the energy for the state array y. The state array y can have two forms: 1. It could be an ndim=1 array of np.array([theta,omega]) at a single time. 2. It could be an ndim=2 array where each row is the [theta,omega] at single time. Parameters ---------- y : ndarray, list, tuple A solution vector Returns ------- E/m : float (ndim=1) or ndarray (ndim=2) The energy per mass. if np.ndim(y) == 1: y = np.array([y]) z = np.shape(y)[0] Em = g * l * (1 - np.cos([y[i][0] for i in range(z)])) + 1/2 * l**2 * (np.array([y[i][1] for i in range(z)]))**2 return Em assert np.allclose(energy(np.array([np.pi,0])),g) assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1]))) Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$. End of explanation a=0 b=0 omega0=0 ans = odeint(derivs, np.array([np.pi,0]), t, args=(a,b,omega0), atol=10**(-5), rtol=10**(-4)) plt.plot(t, energy(ans)) plt.title("Energy of Simple Pendulum at ( $\pi$, 0 )") plt.xlabel("Time") plt.ylabel("Energy") ax = plt.gca() ax.set_axis_bgcolor("#fcfcfc") plt.plot(t, np.transpose(ans)[0], label="Omega") plt.plot(t, np.transpose(ans)[1], label="Theta") plt.title("Simple Pendulum") plt.xlabel("Time") ax = plt.gca() ax.set_axis_bgcolor("#fcfcfc") plt.legend(loc ='lower right') assert True # leave this to grade the two plots and their tuning of atol, rtol. Explanation: Simple pendulum Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy. Integrate the equations of motion. Plot $E/m$ versus time. 
Plot $\theta(t)$ and $\omega(t)$ versus time. Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant. Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable. End of explanation def plot_pendulum(a=0.0, b=0.0, omega0=0.0): Integrate the damped, driven pendulum and make a phase plot of the solution. ans = np.transpose(odeint(derivs, np.array([-np.pi + 0.1,0]), t, args=(a,b,omega0))) plt.plot(ans[0], ans[1]) plt.title("Damped Driven Pendulum") plt.xlabel("Omega") plt.ylim(-10,10) plt.ylabel("Theta") plt.grid(False) ax = plt.gca() ax.set_axis_bgcolor("white") plt.xticks(np.linspace(-2*np.pi, 2*np.pi, 5), [r'$-2\pi$', r'$-\pi$', r'$0$', r'$\pi$', r'$2\pi$']) Explanation: Damped pendulum Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$. Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$. Decrease your atol and rtol even futher and make sure your solutions have converged. Make a parametric plot of $[\theta(t),\omega(t)]$ versus time. Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\theta \in [-10,10]$ Label your axes and customize your plot to make it beautiful and effective. End of explanation plot_pendulum(0.5, 0.0, 0.0) Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral. End of explanation interact(plot_pendulum, a=(0.0,10.0,0.1), b=(0.0,10.0, 0.1), omega0=(0,10.0,0.1)); Explanation: Use interact to explore the plot_pendulum function with: a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$. b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$. omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$. End of explanation
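A sketch of the tolerance tuning described above, assuming the derivs and energy functions already defined in this notebook; it reports how far E/m drifts for a few (atol, rtol) pairs so the converged setting is easy to spot:
for atol, rtol in [(1e-3, 1e-2), (1e-4, 1e-3), (1e-5, 1e-4), (1e-6, 1e-5)]:
    sol = odeint(derivs, np.array([np.pi, 0.0]), t, args=(0.0, 0.0, 0.0), atol=atol, rtol=rtol)
    # Maximum deviation of E/m from its initial value over the whole run
    drift = np.max(np.abs(energy(sol) - energy(np.array([np.pi, 0.0]))))
    print(atol, rtol, drift)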
12,680
Given the following text description, write Python code to implement the functionality described below step by step Description: Applying Contextual Bandits for Recommendation systems using Tensorflow and Cloud Storage Learning objectives Install and import required libraries. Initialize and configure the MovieLens Environment. Initialize the Agent. Define and link the evaluation metrics. Initialize & configure the Replay Buffer. Setup and Train the model. Inference with trained model & Tensorboard Evaluation. Introduction Multi-Armed Bandit (MAB) is a Machine Learning framework in which an agent has to select actions (arms) in order to maximize its cumulative reward in the long term. In each round, the agent receives some information about the current state (context), then it chooses an action based on this information and the experience gathered in previous rounds. At the end of each round, the agent receives the reward assiociated with the chosen action. https Step1: 2. Initializing and configuring the MovieLens Environment Firstly we need to load the movielens.data csv file stored in cloud storage, load it locally and initilialze the MovielensPyenvironment with it. Refer here for guidance on it. An environment in the TF-Agents Bandits library is a class that provides observations and reports rewards based on observations and actions. We will be using the MovieLens environment. This environment implements the MovieLens 100K dataset, available at Step2: 3. Initializing the Agent Now that we have the environment query we reach the part where we define and initialize our policy and the Agent which will be our utilize that policy to make decisions given an observation. We have several policies Step3: 4. Define and link the evaluation metrics Just like you have metrics like accuracy/recall in supervised learning, in bandits we use the regret metric per episode. To calculate the regret, we need to know what the highest possible expected reward is in every time step. For that, we define the optimal_reward_fn. Another similar metric is the number of times a suboptimal action was chosen. That requires the definition if the optimal_action_fn. Step4: 5. Initialize & configure the Replay Buffer Reinforcement learning algorithms use replay buffers to store trajectories of experience when executing a policy in an environment. During training, replay buffers are queried for a subset of the trajectories (either a sequential subset or a sample) to "replay" the agent's experience. Sampling from the replay buffer facilitate data re-use and breaks harmful co-relation between sequential data in RL, although in contextual bandits this isn't absolutely required but still helpful. The replay buffer exposes several functions which allow you to manipulate the replay buffer in several ways. Read more on them [here] (https Step5: Now we have a Replay buffer but we also need something to fill it with. Often a common practice is to have the agent Interact with and collect experience with the environment, without actually learning from it ( i.e. only forward pass). This loop can be either by you manually as shown here or you can do it using the DynamicStepDriver. The data encountered by the driver at each step is saved in a named tuple called Trajectory and broadcast to a set of observers such as replay buffers and metrics. This Trajectory includes the observation from the environment, the action recommended by the policy, the reward obtained, the type of the current and the next step, etc. 
In order for the driver to fill the replay buffer with data, as well as to compute ongoing metrics, it needs acess to the add_batch, functionality of the buffer, and the metrics ( both step and regular). Refer here for more information aand example code on how initialize a step driver with observers. Step7: 6. Setup and Train the Model Here we provide you a helper function in order to save your agent, the metrics and its lighter policy seperately, while training the model. We make all the aspects into trackable objects and then use checkpoint to save as well warm restart a previous training. For more information on checkpoints and policy savers ( which will be used in the training loop below) refer here Step8: Now we have all the components ready to start training the model. Here is the process for Training the model 1. We first use the DynamicStepdriver instance to collect experience( trajectories) from the environment and fill up the replay buffer. 2. We then extract all the stored experience from the replay buffer by specfiying the batch size and num_steps the same as we initialized the driver with. We extract it as tf.dataset instance. 3. We then iterate on the tf.dataset and the first sample we draw actually has all the data batch_size*num_time_steps 4. the agent then trains on the acquired experience 5. the replay buffer is cleared to make space for new data 6. Log the metrics and store them on disk 7. Save the Agent ( via checkpoints) as well as the policy We recommend doing the training for 15,000 loops with 2 steps per loop, and an agent alpha of 10.0 Step9: Note Step10: One last task before starting the training Step11: <img src='./assets/example_tensorboard.png'> 7. Inferencing with trained model & Tensorboard Evaluation Now that our model is trained, what if we want to determine which action to take given a new "context"
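Before the TF-Agents version below, the round-by-round loop described in the introduction can be illustrated with a toy, library-free sketch (the arm rewards here are made-up values, purely for illustration; the real environment and agent follow):
import numpy as np
true_means = np.array([0.1, 0.5, 0.3])      # hidden reward probability of each arm (toy values)
estimates, counts, epsilon = np.zeros(3), np.zeros(3), 0.1
for round_ in range(1000):
    # Explore with probability epsilon, otherwise exploit the current best estimate
    arm = np.random.randint(3) if np.random.rand() < epsilon else int(np.argmax(estimates))
    reward = np.random.binomial(1, true_means[arm])
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # incremental mean update
print(estimates)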
Python Code: !pip install --quiet --upgrade --force-reinstall tensorflow==2.4 tensorflow_probability==0.12.1 tensorflow-io==0.17.0 --use-feature=2020-resolver !pip install tf_agents==0.7.1 --quiet gast==0.3.3 --upgrade --use-feature=2020-resolver import functools import os from absl import app from absl import flags import tensorflow as tf # pylint: disable=g-explicit-tensorflow-version-import from tf_agents.bandits.agents import dropout_thompson_sampling_agent as dropout_ts_agent from tf_agents.bandits.agents import lin_ucb_agent from tf_agents.bandits.agents import linear_thompson_sampling_agent as lin_ts_agent from tf_agents.bandits.agents import neural_epsilon_greedy_agent as eps_greedy_agent from tf_agents.bandits.agents.examples.v2 import trainer from tf_agents.bandits.environments import environment_utilities #from tf_agents.bandits.environments import movielens_per_arm_py_environment from tf_agents.bandits.environments import movielens_py_environment from tf_agents.metrics import tf_metrics from tf_agents.bandits.metrics import tf_metrics as tf_bandit_metrics from tf_agents.bandits.networks import global_and_arm_feature_network from tf_agents.environments import tf_py_environment from tf_agents.networks import q_network from tf_agents.drivers import dynamic_step_driver from tf_agents.eval import metric_utils from tf_agents.policies import policy_saver from tf_agents.replay_buffers import tf_uniform_replay_buffer from tf_agents.trajectories import time_step as ts # If there are version / incompatibility errors, make sure you restarted the kernel and use !pip freeze in a new cell to check whether the correct TF and tf_agents version had been installed. # Create target Directory if don't exist from datetime import date today = date.today() fdate = date.today().strftime('%d_%m_%Y') root_path = os.getcwd() log_path = "{}/{}".format(root_path, fdate) if not os.path.exists(log_path): os.mkdir(log_path) print("Directory {} Created".format(fdate)) else: print("Directory {} already exists".format(fdate)) print("Full path is {}".format(log_path)) Explanation: Applying Contextual Bandits for Recommendation systems using Tensorflow and Cloud Storage Learning objectives Install and import required libraries. Initialize and configure the MovieLens Environment. Initialize the Agent. Define and link the evaluation metrics. Initialize & configure the Replay Buffer. Setup and Train the model. Inference with trained model & Tensorboard Evaluation. Introduction Multi-Armed Bandit (MAB) is a Machine Learning framework in which an agent has to select actions (arms) in order to maximize its cumulative reward in the long term. In each round, the agent receives some information about the current state (context), then it chooses an action based on this information and the experience gathered in previous rounds. At the end of each round, the agent receives the reward assiociated with the chosen action. https://www.tensorflow.org/agents/tutorials/intro_bandit#multi-armed_bandits_and_reinforcement_learning Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. 1. 
Initial Setup: installing and importing required Libraries End of explanation # initialize the movielens pyenvironment with default parameters NUM_ACTIONS = 20 # take this as 20 RANK_K = 20 # take rank as 20 BATCH_SIZE = 8 # take batch size as 8 data_path = "gs://ta-reinforecement-learning/dataset/movielens.data" # specify the path to the movielens.data OR get it from the GCS bucket env = movielens_py_environment.MovieLensPyEnvironment( data_path, RANK_K, BATCH_SIZE, num_movies=NUM_ACTIONS) environment = tf_py_environment.TFPyEnvironment(env) Explanation: 2. Initializing and configuring the MovieLens Environment Firstly we need to load the movielens.data csv file stored in cloud storage, load it locally and initilialze the MovielensPyenvironment with it. Refer here for guidance on it. An environment in the TF-Agents Bandits library is a class that provides observations and reports rewards based on observations and actions. We will be using the MovieLens environment. This environment implements the MovieLens 100K dataset, available at: https://www.kaggle.com/prajitdatta/movielens-100k-dataset This dataset contains 100K ratings from m=943 users on n=1682 items. The ratings can be organized as a matrix A of size m-by-n. Note that the ratings matrix is a <b>sparse matrix</b> i.e., only a subset of certain (user, movie) pairs is provided, since not all users have seen all movies. In order for the environment to be able to compute a reasonable estimate of the reward, which represents how much a user i would enjoy a movie j, the environment computes a dense approximation to this sparse matrix A. In collaborative filtering, it is common practice to obtain this dense approximation by means of a low-rank matrix factorization of the matrix A. The MovieLens environment uses truncated Singular Value Decomposition (SVD) (but other matrix factorization techniques could be potentially also used). With truncated SVD of rank k, the matrix A is factorized as follows: $A_k = U_k \Sigma_k V_k^T$, where: <li>$U_k$ is a matrix of orthogonal columns of size $m$-by-$k$,<\li> <li>$V_k$ is a matrix of orthogonal columns of size $n$-by-$k$</li> <li> $\Sigma_k$ is a diagonal matrix of size $k$-by-$k$ that holds the $k$ largest singular values of A.</li> By splitting $\Sigma$ into $\sqrt{\Sigma_k} \sqrt{\Sigma_k}$, we can finally approximate the matrix A as a product of two factors $\tilde{U}$ and $\tilde{V}$ i.e., $A ~= \tilde{U} \tilde{V}^T$, where $\tilde{U} = U_k \sqrt{\Sigma_k}$ and $\tilde{V} = V_k \sqrt{\Sigma_k}$ Once the matrix factorization has been computed, the environment caches it and uses it to compute the reward for recommending an movie `j` to a user `i` by retrieving the (`i`, `j`)-entry of matrix $A$. Apart from computing the reward when the agent recommends a certain movie to a user, the environment is also responsible for generating observations that are given as input to the agent in order to make an informed decision. In order to generate a random observation, the environment samples a random row `i` from the matrix $\tilde{U}$. Once the agent selects movie `j` then the environment responds with the (`i`, `j`)-entry of matrix $A$. 
End of explanation
EPSILON = 0.05
LAYERS = (50, 50, 50)
LR = 0.005
DROPOUT_RATE = 0.2

# Initialize the Q-network
network = q_network.QNetwork(
    input_tensor_spec=environment.time_step_spec().observation,
    action_spec=environment.action_spec(),
    fc_layer_params=LAYERS)

# Create a neural epsilon-greedy agent with an optimizer and
# an epsilon exploration value
agent = eps_greedy_agent.NeuralEpsilonGreedyAgent(
    time_step_spec=environment.time_step_spec(),  # spec/format of the environment's time steps
    action_spec=environment.action_spec(),  # spec/format of the environment's actions
    reward_network=network,  # the Q-network defined above
    optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=LR),  # Adam optimizer with learning rate LR (0.005)
    epsilon=EPSILON)  # exploration probability (0.05 here)
Explanation: 3. Initializing the Agent
Now that we have the environment set up, we reach the part where we define and initialize our policy and the agent that will utilize that policy to make decisions given an observation. Several agents are available, as shown here:
NeuralEpsilonGreedyAgent: the neural epsilon-greedy algorithm makes a value estimate for all the arms and then chooses the best arm with probability (1 - epsilon) and a random arm with probability epsilon. This balances the exploration-exploitation tradeoff; epsilon is set to a small value such as 10%.
Example: if we have seven arms, one for each class, and we set epsilon to, say, 10%, then 90% of the time the agent will choose the arm with the highest value estimate (exploiting the one most likely to be the predicted class) and 10% of the time it will choose a random arm from all of the 7 arms (thus exploring the other possibilities). Refer here for more information on the TF-Agents version of the same.
Each agent is initialized with a policy, which is essentially the function approximator (linear or non-linear) for estimating the Q values. The agent trains this policy; the policy adds the exploration-exploitation component on top of it and chooses the action. In this example we will use a Deep Q-Network as our value function, with epsilon-greedy selection on top of it to choose the actions. In this case the action space is 20, for 20 movies, and the contextual state vector is the dense user vector from the matrix decomposition. In applied situations, a dictionary mapping could be made from a user id to its dense representation to make it more convenient for the end user.
Step 1. Initialize the Q-network, which takes in the state and returns the value function for each action. Define the fully connected layer parameters to be (50, 50, 50) from left to right respectively.
Step 2. Create a neural epsilon-greedy agent with an Adam optimizer, an epsilon exploration value of 0.05, learning rate = 0.005, dropout rate = 0.2.
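The action-selection rule itself is easy to picture; the toy sketch below shows the decision the epsilon-greedy policy makes at every step (the agent implements this internally — the function here is only for intuition):

```python
import numpy as np

def epsilon_greedy_choice(q_values, epsilon=0.05):
    # With probability epsilon pick a random arm (explore),
    # otherwise pick the arm with the highest estimated value (exploit).
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))
```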
Feel free to experiment with these later to gauge their impact on the training later Click here for reference on code example of how to create a Q network and DQN Agents End of explanation # Making functions for computing optimal reward/action and attaching the env variable to it using partial functions, so it doesnt need to be passed with every invocation optimal_reward_fn = functools.partial( environment_utilities.compute_optimal_reward_with_movielens_environment, environment=environment) optimal_action_fn = functools.partial( environment_utilities.compute_optimal_action_with_movielens_environment, environment=environment) # Initilializing the regret and suboptimal arms metric using the optimal reward and action functions regret_metric = tf_bandit_metrics.RegretMetric(optimal_reward_fn) suboptimal_arms_metric = tf_bandit_metrics.SuboptimalArmsMetric( optimal_action_fn) step_metric = tf_metrics.EnvironmentSteps() metrics = [tf_metrics.NumberOfEpisodes(), #equivalent to number of steps in bandits problem regret_metric, # measures regret suboptimal_arms_metric, # number of times the suboptimal arms are pulled tf_metrics.AverageReturnMetric(batch_size=environment.batch_size) # the average return ] Explanation: 4. Define and link the evaluation metrics Just like you have metrics like accuracy/recall in supervised learning, in bandits we use the regret metric per episode. To calculate the regret, we need to know what the highest possible expected reward is in every time step. For that, we define the optimal_reward_fn. Another similar metric is the number of times a suboptimal action was chosen. That requires the definition if the optimal_action_fn. End of explanation STEPS_PER_LOOP = 2 buf = tf_uniform_replay_buffer.TFUniformReplayBuffer( data_spec=agent.policy.trajectory_spec, batch_size=BATCH_SIZE, max_length=STEPS_PER_LOOP) Explanation: 5. Initialize & configure the Replay Buffer Reinforcement learning algorithms use replay buffers to store trajectories of experience when executing a policy in an environment. During training, replay buffers are queried for a subset of the trajectories (either a sequential subset or a sample) to "replay" the agent's experience. Sampling from the replay buffer facilitate data re-use and breaks harmful co-relation between sequential data in RL, although in contextual bandits this isn't absolutely required but still helpful. The replay buffer exposes several functions which allow you to manipulate the replay buffer in several ways. Read more on them [here] (https://www.tensorflow.org/agents/tutorials/5_replay_buffers_tutorial) In this demo we would be using the TFUniformReplayBuffer for which we need to initialize the buffer spec with the spec of the trajectory of the agent's policy, a chosen batch size( number of trajectories to store), and the maximum length of the trajectory. ( this is the amount of sequential time steps which will be considered as one data point). so a batch of 3 with 2 time steps each would result in a tensor of shape (3,2). Since unlike regular RL problems, Contextual bandits have only one time step we can keep max_length =1, however since this tutorial is to enable you for RL problems as well, let set it to 2. Do not worry, any contextual bandit agent will internally split the time steps inside each data point such that the effective batch size ends up being (6,1). Create a Tensorflow based UniformReplayBuffer And initialize it with an appropriate values. 
Recommended: Batch size = 8 Max length = 2 ( 2 time steps per item) End of explanation #TOFINISH: setup the replay observer as a list to capture both metrics, step metrics and provide access to the function to load data from the driver into the buffer replay_observer = [buf.add_batch, step_metric] + metrics driver = dynamic_step_driver.DynamicStepDriver( env=environment, policy=agent.collect_policy, num_steps=STEPS_PER_LOOP * environment.batch_size, observers=replay_observer ) Explanation: Now we have a Replay buffer but we also need something to fill it with. Often a common practice is to have the agent Interact with and collect experience with the environment, without actually learning from it ( i.e. only forward pass). This loop can be either by you manually as shown here or you can do it using the DynamicStepDriver. The data encountered by the driver at each step is saved in a named tuple called Trajectory and broadcast to a set of observers such as replay buffers and metrics. This Trajectory includes the observation from the environment, the action recommended by the policy, the reward obtained, the type of the current and the next step, etc. In order for the driver to fill the replay buffer with data, as well as to compute ongoing metrics, it needs acess to the add_batch, functionality of the buffer, and the metrics ( both step and regular). Refer here for more information aand example code on how initialize a step driver with observers. End of explanation AGENT_CHECKPOINT_NAME = 'agent' STEP_CHECKPOINT_NAME = 'step' CHECKPOINT_FILE_PREFIX = 'ckpt' def restore_and_get_checkpoint_manager(root_dir, agent, metrics, step_metric): Restores from `root_dir` and returns a function that writes checkpoints. trackable_objects = {metric.name: metric for metric in metrics} trackable_objects[AGENT_CHECKPOINT_NAME] = agent trackable_objects[STEP_CHECKPOINT_NAME] = step_metric checkpoint = tf.train.Checkpoint(**trackable_objects) checkpoint_manager = tf.train.CheckpointManager(checkpoint=checkpoint, directory=root_dir, max_to_keep=5) latest = checkpoint_manager.latest_checkpoint if latest is not None: print('Restoring checkpoint from %s.', latest) checkpoint.restore(latest) print('Successfully restored to step %s.', step_metric.result()) else: print('Did not find a pre-existing checkpoint. ' 'Starting from scratch.') return checkpoint_manager checkpoint_manager = restore_and_get_checkpoint_manager( log_path, agent, metrics, step_metric) saver = policy_saver.PolicySaver(agent.policy) summary_writer = tf.summary.create_file_writer(log_path) summary_writer.set_as_default() Explanation: 6. Setup and Train the Model Here we provide you a helper function in order to save your agent, the metrics and its lighter policy seperately, while training the model. We make all the aspects into trackable objects and then use checkpoint to save as well warm restart a previous training. For more information on checkpoints and policy savers ( which will be used in the training loop below) refer here End of explanation AGENT_ALPHA = 10.0 TRAINING_LOOPS = 15000 Explanation: Now we have all the components ready to start training the model. Here is the process for Training the model 1. We first use the DynamicStepdriver instance to collect experience( trajectories) from the environment and fill up the replay buffer. 2. We then extract all the stored experience from the replay buffer by specfiying the batch size and num_steps the same as we initialized the driver with. We extract it as tf.dataset instance. 3. 
We then iterate on the tf.dataset and the first sample we draw actually has all the data batch_size*num_time_steps 4. the agent then trains on the acquired experience 5. the replay buffer is cleared to make space for new data 6. Log the metrics and store them on disk 7. Save the Agent ( via checkpoints) as well as the policy We recommend doing the training for 15,000 loops with 2 steps per loop, and an agent alpha of 10.0 End of explanation import warnings warnings.filterwarnings('ignore') for _ in range(TRAINING_LOOPS): driver.run() batch_size = driver.env.batch_size dataset = buf.as_dataset( sample_batch_size = BATCH_SIZE, num_steps=STEPS_PER_LOOP, single_deterministic_pass=True) experience, unused_info = next(iter(dataset)) train_loss = agent.train(experience).loss buf.clear() metric_utils.log_metrics(metrics) # for m in metrics: # print(m.name, ": ", m.result()) for metric in metrics: metric.tf_summaries(train_step=step_metric.result()) checkpoint_manager.save() saver.save(os.path.join(ROOT_DIR, "./", 'policy_%d' % step_metric.result())) Explanation: Note: The training will take around 50 minutes to complete and all the data are stored in the log_path directory. End of explanation print("tensorboard dev upload --logdir {} --name \"(optional) My latest experiment\" --description \"(optional) Agent trained\"".format(log_path)) Explanation: One last task before starting the training: let's upload the tensoboard logs, to get an overview of the performance of our model. We will upload our logs to tensorboard.dev and for that you need to run the print statement below and copy the output of the cell (which is a command) into a terminal, then execute the command from there. It will give you a link from which you need to copy/paste the authentication code, and once that is done, you will receive the url of your model evaluation, hosted on a public tensorboard.dev instance. As soon as you kicked off the training in the subsequent cell, you should see some graphs as in the picture below. End of explanation import numpy as np feature = np.reshape(environment._observe()[0], (1,20)) feature.shape ## Inference step = ts.TimeStep( tf.constant( ts.StepType.FIRST, dtype=tf.int32, shape=[1], name='step_type'), tf.constant(0.0, dtype=tf.float32, shape=[1], name='reward'), tf.constant(1.0, dtype=tf.float32, shape=[1], name='discount'), tf.constant(feature, dtype=tf.float64, shape=[1, 20], name='observation')) agent.policy.action(step).action.numpy() Explanation: <img src='./assets/example_tensorboard.png'> 7. Inferencing with trained model & Tensorboard Evaluation Now that our model is trained, what if we want to determine which action to take given a new "context": for that we will iterate on our dataset to get the next item, make a timestep out of it by wrapping the results using ts.Timestep. It expects step_type, reward, discount, and observation as input: since we are performing prediction you can fill in dummy values for the first 3: only the observation/context is relevant. Read about how it works here, and perform the task below the movielens environment provides us a private observe_method which randomly samples upto 8 user context observations, and we select one of them, and reshape it to (1,20): the shape required for the model to consume. End of explanation
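As a final note, the policies exported above with PolicySaver can later be reloaded as a SavedModel and queried without rebuilding the agent. A minimal sketch is shown below; the directory name is a placeholder — point it at one of the policy_&lt;step&gt; folders written during training:

```python
import tensorflow as tf

# Placeholder path -- use one of the saved policy_<step> directories in log_path.
saved_policy = tf.saved_model.load(os.path.join(log_path, 'policy_30000'))
policy_state = saved_policy.get_initial_state(batch_size=1)
# `step` is a ts.TimeStep built exactly as in the inference cell above.
action_step = saved_policy.action(step, policy_state)
print(action_step.action.numpy())
```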
12,681
Given the following text description, write Python code to implement the functionality described below step by step Description: Let's scrape some death row data Texas executes a lot of criminals, and it has a web page that keeps track of people on its death row. Using what you've learned so far, let's scrape this table into a CSV. Then we're going to write a function to grab a couple of pieces of additional data from the inmates' detail pages. Import our libraries Step1: Fetch and parse the summary page Step2: Loop over the table rows and write to CSV Step4: Let's write a parsing function We need a function that will take a URL of a detail page and do these things Step5: Putting it all together Now that we have our parsing function, we can
Python Code: import csv import time import requests from bs4 import BeautifulSoup Explanation: Let's scrape some death row data Texas executes a lot of criminals, and it has a web page that keeps track of people on its death row. Using what you've learned so far, let's scrape this table into a CSV. Then we're going write a function to grab a couple pieces of additional data from the inmates' detail pages. Import our libraries End of explanation # the URL to request URL = 'https://www.tdcj.state.tx.us/death_row/dr_offenders_on_dr.html' # get that page page = requests.get(URL) # turn the page text into soup soup = BeautifulSoup(page.text, 'html.parser') # find the table of interest table = soup.find('table') Explanation: Fetch and parse the summary page End of explanation # find all table rows (skip the first one) rows = table.find_all('tr')[1:] # open a file to write to with open('death-row.csv', 'w') as outfile: # create a writer object writer = csv.DictWriter(outfile, fieldnames=['id', 'link', 'last', 'first', 'dob', 'sex', 'race', 'date_received', 'county', 'offense_date']) # write header row writer.writeheader() # loop over the rows for row in rows: # extract the cells cells = row.find_all('td') # offense ID off_id = cells[0].string # link to detail page link = 'https://www.tdcj.state.tx.us/death_row/' + cells[1].a['href'] # last name last = cells[2].string # first name first = cells[3].string # dob dob = cells[4].string # sex sex = cells[5].string # race race = cells[6].string # date received date_received = cells[7].string # county county = cells[8].string # offense date offense_date = cells[9].string # write out to file writer.writerow({ 'id': off_id, 'link': link, 'last': last, 'first': first, 'dob': dob, 'sex': sex, 'race': race, 'date_received': date_received, 'county': county, 'offense_date': offense_date }) Explanation: Loop over the table rows and write to CSV End of explanation def fetch_details(url): Fetch details from a death row inmate's page. 
# create a dictionary with some default values # as we go through, we're going to add stuff to it # (if you want to explore further, there is actually # a special kind of dictionary called a "defaultdict" to # handle this use case) => # https://docs.python.org/3/library/collections.html#collections.defaultdict out_dict = { 'Height': None, 'Weight': None, 'Eye Color': None, 'Hair Color': None, 'Native County': None, 'Native State': None, 'mug': None } # partway down the page, the links go to JPEGs instead of HTML pages # we can't parse images, so we'll just return the empty dictionary if not url.endswith('.html'): return out_dict # get the page r = requests.get(url) # soup the HTML soup = BeautifulSoup(r.text, 'html.parser') # find the table of info table = soup.find('table', {'class': 'tabledata_deathrow_table'}) # target the mugshot, if it exists mug = table.find('img', {'class': 'photo_border_black_right'}) # if there is a mug, grab the src and add it to the dictionary if mug: out_dict['mug'] = 'http://www.tdcj.state.tx.us/death_row/dr_info/' + mug['src'] # get a list of the "label" cells # on some pages, they're identified by the class 'tabledata_bold_align_right_deathrow' # on others, they're identified by the class 'tabledata_bold_align_right_unit' # so we pass it a list of possible classes label_cells = table.find_all('td', {'class': ['tabledata_bold_align_right_deathrow', 'tabledata_bold_align_right_unit']}) # gonna do some fanciness here in the interests of DRY => # a list of attributes we're interested in -- should match exactly the text inside the cells of interest attr_list = ['Height', 'Weight', 'Eye Color', 'Hair Color', 'Native County', 'Native State'] # loop over the list of label cells that we targeted earlier for cell in label_cells: clean_label_cell_text = cell.text.strip() # check to see if the cell text is in our list of attributes if clean_label_cell_text in attr_list: # if so, find the value -- go up to the tr and search for the other td -- # and add that attribute to our dictionary value_cell_text = cell.parent.find('td', {'class': 'tabledata_align_left_deathrow'}).text.strip() out_dict[clean_label_cell_text] = value_cell_text # return the dictionary to the script return(out_dict) Explanation: Let's write a parsing function We need a function that will take a URL of a detail page and do these things: Open the detail page URL using requests Parse the contents using BeautifulSoup Isolate the bits of information we're interested in: height, weight, eye color, hair color, native county, native state, link to mugshot Return those bits of information in a dictionary A couple things to keep in mind: Not every inmate will have every piece of data. Also, not every inmate has an HTML detail page to parse -- the older ones are a picture. So we'll need to work around those limitations. We shall call our function fetch_details(). 
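Before wiring fetch_details() into the full loop in the next cell, it can be worth spot-checking it on a single detail URL taken from the summary CSV; the URL below is only a placeholder — substitute a real link from death-row.csv:

```python
# Placeholder URL -- replace with an actual detail-page link from death-row.csv.
sample_url = 'https://www.tdcj.state.tx.us/death_row/dr_info/example.html'
print(fetch_details(sample_url))
```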
End of explanation # open the CSV file to read from and the one to write to with open('death-row.csv', 'r') as infile, open('death-row-details.csv', 'w') as outfile: # create a reader object reader = csv.DictReader(infile) # the output headers are goind to be the headers from the summary file # plus a list of new attributes headers = reader.fieldnames + ['Height', 'Weight', 'Eye Color', 'Hair Color', 'Native County', 'Native State', 'mug'] # create the writer object writer = csv.DictWriter(outfile, fieldnames=headers) # write the header row writer.writeheader() # loop over the rows in the input file for row in reader: # print the inmate's name (so we can keep track of where we're at) # helps with debugging, too print(row['first'], row['last']) # call our function on the URL in the row deets = fetch_details(row['link']) # add the two dicts together by # unpacking them inside a new one # and write out to file writer.writerow({**row, **deets}) time.sleep(2) print('---') print('Done!') Explanation: Putting it all together Now that we have our parsing function, we can: Open and read the CSV files of summary inmate info (the one we just scraped) Open and write a new CSV file of detailed inmate info As we loop over the summary inmate data, we're going to call our new parsing function on the detail URL in each row. Then we'll combine the dictionaries (data from the row of summary data + new detailed data) and write out to the new file. End of explanation
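Once the loop has finished, a quick way to sanity-check the merged output file is shown below (pandas is not used elsewhere in this notebook — it is only a convenience here):

```python
import pandas as pd

details_df = pd.read_csv('death-row-details.csv')
print(details_df.shape)
print(details_df[['first', 'last', 'Height', 'Weight', 'mug']].head())
```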
12,682
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Batch Normalization One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3]. The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated. The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features. It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension. [3] Sergey Ioffe and Christian Szegedy, "Batch Normalization Step2: Batch normalization Step3: Batch Normalization Step4: Batch Normalization Step5: Fully Connected Nets with Batch Normalization Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs2312n/classifiers/fc_net.py. Modify your implementation to add batch normalization. Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation. HINT Step6: Batchnorm for deep networks Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization. Step7: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster. Step8: Batch normalization and initialization We will now run a small experiment to study the interaction of batch normalization and weight initialization. 
The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second cell will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.
Python Code: # As usual, a bit of setup import time import numpy as np import matplotlib.pyplot as plt from skynet.neural_network.classifiers.fc_net import * from skynet.utils.data_utils import get_CIFAR10_data from skynet.utils.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from skynet.solvers.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): returns relative error return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in data.items(): print('%s: ' % k, v.shape) Explanation: Batch Normalization One way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3]. The idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated. The authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features. It is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension. [3] Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015. 
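Before diving into the assignment code, note that the training-time computation described above fits in a few lines of NumPy. The following is only an illustrative sketch, not the reference implementation the assignment asks you to write:

```python
import numpy as np

def batchnorm_forward_sketch(x, gamma, beta, running_mean, running_var,
                             momentum=0.9, eps=1e-5):
    # Per-feature statistics estimated from the minibatch; x has shape (N, D).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # Center and normalize, then apply the learnable scale and shift.
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    # Running averages used in place of minibatch statistics at test time.
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return out, running_mean, running_var
```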
End of explanation # Check the training-time forward pass by checking means and variances # of features both before and after batch normalization # Simulate the forward pass for a two-layer network N, D1, D2, D3 = 200, 50, 60, 3 X = np.random.randn(N, D1) W1 = np.random.randn(D1, D2) W2 = np.random.randn(D2, D3) a = np.maximum(0, X.dot(W1)).dot(W2) print('Before batch normalization:') print(' means: ', a.mean(axis=0)) print(' stds: ', a.std(axis=0)) # Means should be close to zero and stds close to one print('After batch normalization (gamma=1, beta=0)') a_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'}) print(' mean: ', a_norm.mean(axis=0)) print(' std: ', a_norm.std(axis=0)) # Now means should be close to beta and stds close to gamma gamma = np.asarray([1.0, 2.0, 3.0]) beta = np.asarray([11.0, 12.0, 13.0]) a_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'}) print('After batch normalization (nontrivial gamma, beta)') print(' means: ', a_norm.mean(axis=0)) print(' stds: ', a_norm.std(axis=0)) # Check the test-time forward pass by running the training-time # forward pass many times to warm up the running averages, and then # checking the means and variances of activations after a test-time # forward pass. N, D1, D2, D3 = 200, 50, 60, 3 W1 = np.random.randn(D1, D2) W2 = np.random.randn(D2, D3) bn_param = {'mode': 'train'} gamma = np.ones(D3) beta = np.zeros(D3) for t in range(50): X = np.random.randn(N, D1) a = np.maximum(0, X.dot(W1)).dot(W2) batchnorm_forward(a, gamma, beta, bn_param) bn_param['mode'] = 'test' X = np.random.randn(N, D1) a = np.maximum(0, X.dot(W1)).dot(W2) a_norm, _ = batchnorm_forward(a, gamma, beta, bn_param) # Means should be close to zero and stds close to one, but will be # noisier than training-time forward passes. print('After batch normalization (test-time):') print(' means: ', a_norm.mean(axis=0)) print(' stds: ', a_norm.std(axis=0)) Explanation: Batch normalization: Forward In the file neural_network/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation. End of explanation # Gradient check batchnorm backward pass N, D = 4, 5 x = 5 * np.random.randn(N, D) + 12 gamma = np.random.randn(D) beta = np.random.randn(D) dout = np.random.randn(N, D) bn_param = {'mode': 'train'} fx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0] fg = lambda a: batchnorm_forward(x, gamma, beta, bn_param)[0] fb = lambda b: batchnorm_forward(x, gamma, beta, bn_param)[0] dx_num = eval_numerical_gradient_array(fx, x, dout) da_num = eval_numerical_gradient_array(fg, gamma, dout) db_num = eval_numerical_gradient_array(fb, beta, dout) _, cache = batchnorm_forward(x, gamma, beta, bn_param) dx, dgamma, dbeta = batchnorm_backward(dout, cache) print('dx error: ', rel_error(dx_num, dx)) print('dgamma error: ', rel_error(da_num, dgamma)) print('dbeta error: ', rel_error(db_num, dbeta)) Explanation: Batch Normalization: backward Now implement the backward pass for batch normalization in the function batchnorm_backward. To derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass. Once you have finished, run the following to numerically check your backward pass. 
$\frac{\partial l}{\partial x_i} = \frac{\partial l}{\partial y_i}\cdot \gamma$ End of explanation N, D = 100, 500 x = 5 * np.random.randn(N, D) + 12 gamma = np.random.randn(D) beta = np.random.randn(D) dout = np.random.randn(N, D) bn_param = {'mode': 'train'} out, cache = batchnorm_forward(x, gamma, beta, bn_param) t1 = time.time() dx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache) t2 = time.time() dx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache) t3 = time.time() print('dx difference: ', rel_error(dx1, dx2)) print('dgamma difference: ', rel_error(dgamma1, dgamma2)) print('dbeta difference: ', rel_error(dbeta1, dbeta2)) print('speedup: %.2fx' % ((t2 - t1) / (t3 - t2))) Explanation: Batch Normalization: alternative backward In class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper. Surprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster. NOTE: You can still complete the rest of the assignment if you don't figure this part out, so don't worry too much if you can't get it. End of explanation N, D, H1, H2, C = 2, 15, 20, 30, 10 X = np.random.randn(N, D) y = np.random.randint(C, size=(N,)) for reg in [0, 3.14]: print('Running check with reg = ', reg) model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C, reg=reg, weight_scale=5e-2, dtype=np.float64, use_batchnorm=True) loss, grads = model.loss(X, y) print('Initial loss: ', loss) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5) print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))) if reg == 0: print() Explanation: Fully Connected Nets with Batch Normalization Now that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs2312n/classifiers/fc_net.py. Modify your implementation to add batch normalization. Concretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation. HINT: You might find it useful to define an additional helper layer similar to those in the file neural_network/layer_utils.py. If you decide to do so, do it in the file neural_network/classifiers/fc_net.py. 
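Following the hint, one possible shape for such a helper "sandwich" layer is sketched below. It assumes the usual affine_forward/affine_backward and relu_forward/relu_backward helpers exist in this codebase's layers.py (an assumption — adapt the names to your files):

```python
def affine_bn_relu_forward(x, w, b, gamma, beta, bn_param):
    # affine -> batch norm -> ReLU, caching what each backward pass needs.
    a, fc_cache = affine_forward(x, w, b)
    a_bn, bn_cache = batchnorm_forward(a, gamma, beta, bn_param)
    out, relu_cache = relu_forward(a_bn)
    return out, (fc_cache, bn_cache, relu_cache)

def affine_bn_relu_backward(dout, cache):
    fc_cache, bn_cache, relu_cache = cache
    da_bn = relu_backward(dout, relu_cache)
    da, dgamma, dbeta = batchnorm_backward(da_bn, bn_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db, dgamma, dbeta
```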
End of explanation # Try training a very deep net with batchnorm hidden_dims = [100, 100, 100, 100, 100] num_train = 1000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } weight_scale = 2e-2 bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True) model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False) bn_solver = Solver(bn_model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=200) bn_solver.train() solver = Solver(model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=True, print_every=200) solver.train() Explanation: Batchnorm for deep networks Run the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization. End of explanation plt.subplot(3, 1, 1) plt.title('Training loss') plt.xlabel('Iteration') plt.subplot(3, 1, 2) plt.title('Training accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 3) plt.title('Validation accuracy') plt.xlabel('Epoch') plt.subplot(3, 1, 1) plt.plot(solver.loss_history, 'o', label='baseline') plt.plot(bn_solver.loss_history, 'o', label='batchnorm') plt.subplot(3, 1, 2) plt.plot(solver.train_acc_history, '-o', label='baseline') plt.plot(bn_solver.train_acc_history, '-o', label='batchnorm') plt.subplot(3, 1, 3) plt.plot(solver.val_acc_history, '-o', label='baseline') plt.plot(bn_solver.val_acc_history, '-o', label='batchnorm') for i in [1, 2, 3]: plt.subplot(3, 1, i) plt.legend(loc='upper center', ncol=4) plt.gcf().set_size_inches(15, 15) plt.show() Explanation: Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster. 
End of explanation # Try training a very deep net with batchnorm hidden_dims = [50, 50, 50, 50, 50, 50, 50] num_train = 1000 small_data = { 'X_train': data['X_train'][:num_train], 'y_train': data['y_train'][:num_train], 'X_val': data['X_val'], 'y_val': data['y_val'], } bn_solvers = {} solvers = {} weight_scales = np.logspace(-4, 0, num=20) for i, weight_scale in enumerate(weight_scales): print('Running weight scale %d / %d' % (i + 1, len(weight_scales))) bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True) model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False) bn_solver = Solver(bn_model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=False, print_every=200) bn_solver.train() bn_solvers[weight_scale] = bn_solver solver = Solver(model, small_data, num_epochs=10, batch_size=50, update_rule='adam', optim_config={ 'learning_rate': 1e-3, }, verbose=False, print_every=200) solver.train() solvers[weight_scale] = solver # Plot results of weight scale experiment best_train_accs, bn_best_train_accs = [], [] best_val_accs, bn_best_val_accs = [], [] final_train_loss, bn_final_train_loss = [], [] for ws in weight_scales: best_train_accs.append(max(solvers[ws].train_acc_history)) bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history)) best_val_accs.append(max(solvers[ws].val_acc_history)) bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history)) final_train_loss.append(np.mean(solvers[ws].loss_history[-100:])) bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:])) plt.subplot(3, 1, 1) plt.title('Best val accuracy vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Best val accuracy') plt.semilogx(weight_scales, best_val_accs, '-o', label='baseline') plt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm') plt.legend(ncol=2, loc='lower right') plt.subplot(3, 1, 2) plt.title('Best train accuracy vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Best training accuracy') plt.semilogx(weight_scales, best_train_accs, '-o', label='baseline') plt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm') plt.legend() plt.subplot(3, 1, 3) plt.title('Final training loss vs weight initialization scale') plt.xlabel('Weight initialization scale') plt.ylabel('Final training loss') plt.semilogx(weight_scales, final_train_loss, '-o', label='baseline') plt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm') plt.legend() plt.gcf().set_size_inches(10, 15) plt.show() Explanation: Batch normalization and initialization We will now run a small experiment to study the interaction of batch normalization and weight initialization. The first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale. End of explanation
12,683
Given the following text description, write Python code to implement the functionality described below step by step Description: pymofa tutorial last updated Step2: A discrete predetor prey dummy model First we need to create a dummy model. Let's use a discrete version of the famous predator prey model. Step3: Example usage Step5: Applying pymofa
Python Code: # if you work with this notebook interactively, exectue # cd .. # to be at the pymofa root %matplotlib notebook import numpy as np import matplotlib.pyplot as plt Explanation: pymofa tutorial last updated: 2016-09-06 This notebook introduces the basic functionalities of pymofa, the python modeling framework to run and evaluate your models systematically ;) End of explanation def predprey_model(prey_birth_rate, prey_mortality, predator_efficiency, predator_death_rate, initial_prey, initial_predators, time_length): Discrete predetor prey model. A = -1 * np.ones(time_length) B = -1 * np.ones(time_length) A[0] = initial_prey B[0] = initial_predators for t in range(1, time_length): A[t] = A[t-1] + prey_birth_rate * A[t-1] - prey_mortality * B[t-1]*A[t-1] B[t] = B[t-1] + predator_efficiency * B[t-1]*A[t-1] - predator_death_rate * B[t-1] +\ 0.02 * (0.5 - np.random.rand()) return A, B Explanation: A discrete predetor prey dummy model First we need to create a dummy model. Let's use a discrete version of the famous predator prey model. End of explanation preys, predators = predprey_model(0.1, 0.1, 0.1, 0.01, 1.0, 1.0, 1000) plt.plot(preys, label="preys") plt.plot(predators, label="predators") plt.legend() Explanation: Example usage: End of explanation # imports from pymofa.experiment_handling import experiment_handling as eh import itertools as it import pandas as pd # import cPickle # Path where to Store the simulated Data SAVE_PATH_RAW = "./dummy/pymofatutorial" # Definingh the experiment execution function # it gets paramater you want to investigate, plus `filename` as the last parameter def RUN_FUNC(prey_birth_rate, coupling, predator_death_rate, initial_pop, time_length, filename): Insightful docstring. # poss. process prey_mortality = coupling predator_efficiency = coupling initial_prey = initial_pop initial_predators = initial_pop # one could also do more complicated stuff here, e.g. drawing something from a random distribution # running the model preys, predators = predprey_model(prey_birth_rate, prey_mortality, predator_efficiency, predator_death_rate, initial_prey, initial_predators, time_length) # preparing the data res = pd.DataFrame({"preys": np.array(preys), "predators": np.array(predators)}) # Save Result res.to_pickle(filename) # determine exit status (if something went wrong) # if exit status > 0 == run passen # if exit status < 0 == Run Failed exit_status = 42 # RUN_FUNC needs to return exit_status return exit_status # Parameter combinations to investiage prey_birth_rate = [0.09, 0.1, 0.11] coupling = [0.1] predator_death_rate = [0.005, 0.01, 0.05, 0.1] initial_pop = [1.0, 2.0] time_length = [1000] PARAM_COMBS = list(it.product(prey_birth_rate, coupling, predator_death_rate, initial_pop, time_length)) # Sample Size SAMPLE_SIZE = 5 # INDEX INDEX = {i: RUN_FUNC.__code__.co_varnames[i] for i in range(len(RUN_FUNC.__code__.co_varnames)-1)} # initiate handle instance with experiment variables handle = eh(SAMPLE_SIZE, PARAM_COMBS, INDEX, SAVE_PATH_RAW) # Compute experiemnts raw data handle.compute(RUN_FUNC) rm -r dummy/ Explanation: Applying pymofa End of explanation
12,684
Given the following text description, write Python code to implement the functionality described below step by step Description: 关于泰坦尼克号生存率的数据分析 首先通过观察数据,可以了解到每位旅客的详细数据: Survived:是否存活(0代表否,1代表是) Pclass:舱位(一等舱,二等舱,三等舱) Name:船上乘客的名字 Sex:船上乘客的性别 Age:船上乘客的年龄(可能存在 NaN) SibSp:乘客在船上的兄弟姐妹和配偶的数量 Parch:乘客在船上的父母以及小孩的数量 Ticket:乘客船票的编号 Fare:乘客为船票支付的费用 Cabin:乘客所在船舱的编号(可能存在 NaN) Embarked:乘客上船的港口(C 代表从 Cherbourg 登船,Q 代表从 Queenstown 登船,S 代表从 Southampton 登船) 通过对原始数据的初步观察可以发现存活率和社会等级,性别,年龄,在船上的兄弟姐妹和配偶数量,在船上的父母以及小孩的数量有着某种联系。因此根据初步推测可以提出以下几个问题并进行分析: - 乘客的存活率和其社会等级是否有关系?是否社会等级越高存活率就越高? - 乘客的存活率和其性别,年龄又有什么关系? - 乘客的存活率和其在船上的兄弟姐妹和配偶数量,父母以及小孩的数量又有什么联系? Step1: 首先,我们观察一下几个比较重要的数值,初步得出一些结论,比如只有‘Age’这一列存在缺失值,整体的存活率只有0.383838。所以首先应该对年龄的缺失值进行填充。 Step2: 可以看出年龄这一列数据的总数正常了,为891,接下来可以进一步分析生存率了。 Step3: 根据以上不同舱位人数所占比例和关于生存率的直方图可以看出头等舱的生存率最高,经济舱的生存率最低。虽然头等舱的人数占总人数的比例很少,生存率却极高,三等舱的人数超过一半,而生存率确只有20%,间接的说明了一个现实问题:社会地位越高生存机率越高。或者说头等舱的安全措施很高 Step4: 我们可以清晰的看到虽然船上的男性人数显著多于女性人数,但是女性的存活率高达74%,而男性的存活率只有19%。这说明在逃生的时候会有男性保护女性的情况。一般是女先男后。 Step5: 可见0~10岁的儿童成活率是最高的,也说明了在家长陪同下的婴幼儿受到了很好的保护,超过60岁的老年人成活率非常低,由此我们可以推测老年人可能会因为年迈行动不便而导致在灾难中无法及时脱身。在10~60各个年龄阶段的生存率几本相等。
Python Code: import numpy as np import pandas as pd import matplotlib.pyplot as plt import pylab as pl %matplotlib inline filename = './titanic-data.csv' titanic_df = pd.read_csv(filename) titanic_df.describe() Explanation: 关于泰坦尼克号生存率的数据分析 首先通过观察数据,可以了解到每位旅客的详细数据: Survived:是否存活(0代表否,1代表是) Pclass:舱位(一等舱,二等舱,三等舱) Name:船上乘客的名字 Sex:船上乘客的性别 Age:船上乘客的年龄(可能存在 NaN) SibSp:乘客在船上的兄弟姐妹和配偶的数量 Parch:乘客在船上的父母以及小孩的数量 Ticket:乘客船票的编号 Fare:乘客为船票支付的费用 Cabin:乘客所在船舱的编号(可能存在 NaN) Embarked:乘客上船的港口(C 代表从 Cherbourg 登船,Q 代表从 Queenstown 登船,S 代表从 Southampton 登船) 通过对原始数据的初步观察可以发现存活率和社会等级,性别,年龄,在船上的兄弟姐妹和配偶数量,在船上的父母以及小孩的数量有着某种联系。因此根据初步推测可以提出以下几个问题并进行分析: - 乘客的存活率和其社会等级是否有关系?是否社会等级越高存活率就越高? - 乘客的存活率和其性别,年龄又有什么关系? - 乘客的存活率和其在船上的兄弟姐妹和配偶数量,父母以及小孩的数量又有什么联系? End of explanation titanic_df = titanic_df.fillna(method='pad')#用前一个数值填充 titanic_df.describe() Explanation: 首先,我们观察一下几个比较重要的数值,初步得出一些结论,比如只有‘Age’这一列存在缺失值,整体的存活率只有0.383838。所以首先应该对年龄的缺失值进行填充。 End of explanation sort_pclass = titanic_df.groupby('Pclass').count()['PassengerId'] print sort_pclass titanic_df.groupby('Pclass')['PassengerId'].count().plot(kind = 'pie',autopct = '%.0f%%') plt.title('Pclass VS Count') plt.show() Pclass_survived = titanic_df.groupby('Pclass').mean()['Survived'] print Pclass_survived.plot.bar() Explanation: 可以看出年龄这一列数据的总数正常了,为891,接下来可以进一步分析生存率了。 End of explanation sort_sex = titanic_df.groupby('Sex').count()['PassengerId'] print sort_sex Sex_survived = titanic_df.groupby('Sex').mean()['Survived'] print Sex_survived print Sex_survived.plot.bar() Explanation: 根据以上不同舱位人数所占比例和关于生存率的直方图可以看出头等舱的生存率最高,经济舱的生存率最低。虽然头等舱的人数占总人数的比例很少,生存率却极高,三等舱的人数超过一半,而生存率确只有20%,间接的说明了一个现实问题:社会地位越高生存机率越高。或者说头等舱的安全措施很高 End of explanation titanic_df['Age_bins'] = pd.cut(titanic_df['Age'],range(0,80,10)) Age_survived = titanic_df.groupby('Age_bins').mean()['Survived'] Sort_survived = titanic_df.groupby('Age_bins').count()['Survived'] print Age_survived print Sort_survived Age_survived.plot(kind='bar', stacked=True) Explanation: 我们可以清晰的看到虽然船上的男性人数显著多于女性人数,但是女性的存活率高达74%,而男性的存活率只有19%。这说明在逃生的时候会有男性保护女性的情况。一般是女先男后。 End of explanation sort_SibSp = titanic_df.groupby('SibSp').count()['PassengerId'] print sort_SibSp titanic_df.groupby('SibSp')['PassengerId'].count().plot(kind = 'pie',autopct = '%.0f%%') plt.title('SibSp VS Count') plt.show() SibSp_survived = titanic_df.groupby('SibSp').mean()['Survived'] print SibSp_survived SibSp_survived.plot.bar() sort_Parch = titanic_df.groupby('Parch').count()['PassengerId'] print sort_Parch titanic_df.groupby('Parch')['PassengerId'].count().plot(kind = 'pie',autopct = '%.0f%%') plt.title('Parch VS Count') plt.show() Parch_survived = titanic_df.groupby('Parch').mean()['Survived'] print Parch_survived Parch_survived.plot.bar() Explanation: 可见0~10岁的儿童成活率是最高的,也说明了在家长陪同下的婴幼儿受到了很好的保护,超过60岁的老年人成活率非常低,由此我们可以推测老年人可能会因为年迈行动不便而导致在灾难中无法及时脱身。在10~60各个年龄阶段的生存率几本相等。 End of explanation
12,685
Given the following text description, write Python code to implement the functionality described below step by step Description: Keras for Text Classification Learning Objectives 1. Learn how to create a text classification datasets using BigQuery 1. Learn how to tokenize and integerize a corpus of text for training in Keras 1. Learn how to do one-hot-encodings in Keras 1. Learn how to use embedding layers to represent words in Keras 1. Learn about the bag-of-word representation for sentences 1. Learn how to use DNN/CNN/RNN model to classify text in keras Introduction In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset we constructed in the first task of the lab. In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector. Then we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors. The first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs. Step1: Replace the variable values in the cell below Step2: Create a Dataset from BigQuery Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015. Here is a sample of the dataset Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning. Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here. Step8: AutoML for text classification requires that * the dataset be in csv form with * the first column being the texts to classify or a GCS path to the text * the last colum to be the text labels The dataset we pulled from BiqQuery satisfies these requirements. Step9: Let's make sure we have roughly the same number of labels for each of our three labels Step10: Finally we will save our data, which is currently in-memory, to disk. We will create a csv file containing the full dataset and another containing only 1000 articles for development. 
Note Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML). Step12: Let's write the sample dataset to disk. Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located Step14: Loading the dataset Our dataset consists of titles of articles along with the label indicating which source these articles have been taken from (GitHub, TechCrunch, or The New York Times). Step15: Integerize the texts The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that Step16: Let's now implement a function create_sequences that will * take as input our titles as well as the maximum sentence length and * return a list of the integers corresponding to our tokens, padded to the sentence maximum length Keras has the helper function pad_sequences for that on top of the tokenizer methods. Step17: We now need to write a function that * takes a title source and * returns the corresponding one-hot encoded vector Keras to_categorical is handy for that. Step18: Preparing the train/test splits Let's split our data into train and test splits Step19: To be on the safe side, we verify that the train and test splits have roughly the same number of examples per class. Since that is the case, accuracy will be a good metric to use to measure the performance of our models. Step20: Using create_sequences and encode_labels, we can now prepare the training and validation data to feed our models. The features will be padded lists of integers and the labels will be one-hot-encoded 3D vectors. Step21: Building a DNN model The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class. Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words". Step22: Below we train the model for 100 epochs, adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved for the number of epochs specified by PATIENCE. Note that we also give the model.fit method a TensorBoard callback so that we can later compare all the models using TensorBoard. Step23: Building a RNN model The build_dnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence. The first and last layers are the same as for the simple DNN model. Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers. Step24: Let's train the model with early stopping as above. Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 v.s.
~20 epochs) Step25: Build a CNN model The build_dnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence. The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer. Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers. Step26: Let's train the model. Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps.
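As a rough preview of the architecture described in Step25, a minimal Keras sketch is shown below. The hyperparameter values are illustrative and mask_zero is omitted for brevity — this is not the notebook's own build function:

```python
from tensorflow.keras.layers import Conv1D, Dense, Embedding, Flatten
from tensorflow.keras.models import Sequential

def cnn_sketch(vocab_size, embed_dim, max_len, n_classes, filters=64, ksize=3):
    # Embedding -> Conv1D -> Flatten -> Dense softmax, as described above.
    model = Sequential([
        Embedding(vocab_size + 1, embed_dim, input_length=max_len),
        Conv1D(filters, ksize, activation="relu"),
        Flatten(),
        Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```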
Python Code: import os import pandas as pd from google.cloud import bigquery %load_ext google.cloud.bigquery Explanation: Keras for Text Classification Learning Objectives 1. Learn how to create a text classification datasets using BigQuery 1. Learn how to tokenize and integerize a corpus of text for training in Keras 1. Learn how to do one-hot-encodings in Keras 1. Learn how to use embedding layers to represent words in Keras 1. Learn about the bag-of-word representation for sentences 1. Learn how to use DNN/CNN/RNN model to classify text in keras Introduction In this notebook, we will implement text models to recognize the probable source (GitHub, TechCrunch, or The New York Times) of the titles we have in the title dataset we constructed in the first task of the lab. In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded list of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3 dimensional basis vector. Then we will explore a few possible models to do the title classification. All models will be fed padded list of integers, and all models will start with a Keras Embedding layer that transforms the integer representing the words into dense vectors. The first model will be a simple bag-of-word DNN model that averages up the word vectors and feeds the tensor that results to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and in the third model we will keep the information about the word order using a simple RNN and a simple CNN allowing us to achieve the same performance as with the DNN model but in much fewer epochs. End of explanation PROJECT = "qwiklabs-gcp-04-14242c0aa6a7" # Replace with your PROJECT BUCKET = PROJECT # defaults to PROJECT REGION = "us-central1" # Replace with your REGION SEED = 0 Explanation: Replace the variable values in the cell below: End of explanation %%bigquery --project $PROJECT SELECT url, title, score FROM `bigquery-public-data.hacker_news.stories` WHERE LENGTH(title) > 10 AND score > 10 AND LENGTH(url) > 0 LIMIT 10 Explanation: Create a Dataset from BigQuery Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the sites inception in October 2006 until October 2015. Here is a sample of the dataset: End of explanation %%bigquery --project $PROJECT SELECT ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source, COUNT(title) AS num_articles FROM `bigquery-public-data.hacker_news.stories` WHERE REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$') AND LENGTH(title) > 10 GROUP BY source ORDER BY num_articles DESC LIMIT 100 Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. 
For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i> End of explanation regex = ".*://(.[^/]+)/" sub_query = SELECT title, ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source FROM `bigquery-public-data.hacker_news.stories` WHERE REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$') AND LENGTH(title) > 10 .format( regex ) query = SELECT LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title, source FROM ({sub_query}) WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch') .format( sub_query=sub_query ) print(query) Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning. End of explanation bq = bigquery.Client(project=PROJECT) title_dataset = bq.query(query).to_dataframe() title_dataset.head() Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here. End of explanation print(f"The full dataset contains {len(title_dataset)} titles") Explanation: AutoML for text classification requires that * the dataset be in csv form with * the first column being the texts to classify or a GCS path to the text * the last colum to be the text labels The dataset we pulled from BiqQuery satisfies these requirements. End of explanation title_dataset.source.value_counts() Explanation: Let's make sure we have roughly the same number of labels for each of our three labels: End of explanation DATADIR = "./data/" if not os.path.exists(DATADIR): os.makedirs(DATADIR) FULL_DATASET_NAME = "titles_full.csv" FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME) # Let's shuffle the data before writing it to disk. title_dataset = title_dataset.sample(n=len(title_dataset)) title_dataset.to_csv( FULL_DATASET_PATH, header=False, index=False, encoding="utf-8" ) Explanation: Finally we will save our data, which is currently in-memory, to disk. We will create a csv file containing the full dataset and another containing only 1000 articles for development. Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool. End of explanation sample_title_dataset = title_dataset.sample(n=1000) sample_title_dataset.source.value_counts() Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML). 
End of explanation SAMPLE_DATASET_NAME = "titles_sample.csv" SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME) sample_title_dataset.to_csv( SAMPLE_DATASET_PATH, header=False, index=False, encoding="utf-8" ) sample_title_dataset.head() import os import shutil import pandas as pd import tensorflow as tf from tensorflow.keras.callbacks import EarlyStopping, TensorBoard from tensorflow.keras.layers import ( GRU, Conv1D, Dense, Embedding, Flatten, Lambda, ) from tensorflow.keras.models import Sequential from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.utils import to_categorical print(tf.__version__) %matplotlib inline Explanation: Let's write the sample datatset to disk. End of explanation LOGDIR = "./text_models" DATA_DIR = "./data" Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located: End of explanation DATASET_NAME = "titles_full.csv" TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME) COLUMNS = ["title", "source"] titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS) titles_df.head() Explanation: Loading the dataset Our dataset consists of titles of articles along with the label indicating from which source these articles have been taken from (GitHub, TechCrunch, or The New York Times). End of explanation tokenizer = Tokenizer() tokenizer.fit_on_texts(titles_df.title) integerized_titles = tokenizer.texts_to_sequences(titles_df.title) integerized_titles[:3] VOCAB_SIZE = len(tokenizer.index_word) VOCAB_SIZE DATASET_SIZE = tokenizer.document_count DATASET_SIZE MAX_LEN = max(len(sequence) for sequence in integerized_titles) MAX_LEN Explanation: Integerize the texts The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that: End of explanation # TODO 1 def create_sequences(texts, max_len=MAX_LEN): sequences = tokenizer.texts_to_sequences(texts) padded_sequences = pad_sequences(sequences, max_len, padding="post") return padded_sequences sequences = create_sequences(titles_df.title[:3]) sequences titles_df.source[:4] Explanation: Let's now implement a function create_sequence that will * take as input our titles as well as the maximum sentence length and * returns a list of the integers corresponding to our tokens padded to the sentence maximum length Keras has the helper functions pad_sequence for that on the top of the tokenizer methods. End of explanation CLASSES = {"github": 0, "nytimes": 1, "techcrunch": 2} N_CLASSES = len(CLASSES) # TODO 2 def encode_labels(sources): classes = [CLASSES[source] for source in sources] one_hots = to_categorical(classes) return one_hots encode_labels(titles_df.source[:4]) Explanation: We now need to write a function that * takes a title source and * returns the corresponding one-hot encoded vector Keras to_categorical is handy for that. 
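As a quick illustration (a sketch with assumed class ids, separate from the lab code below), to_categorical maps integer class ids to one-hot rows:
from tensorflow.keras.utils import to_categorical
print(to_categorical([0, 2, 1], num_classes=3))
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]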
End of explanation N_TRAIN = int(DATASET_SIZE * 0.80) titles_train, sources_train = ( titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN], ) titles_valid, sources_valid = ( titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:], ) Explanation: Preparing the train/test splits Let's split our data into train and test splits: End of explanation sources_train.value_counts() sources_valid.value_counts() Explanation: To be on the safe side, we verify that the train and test splits have roughly the same number of examples per classes. Since it is the case, accuracy will be a good metric to use to measure the performance of our models. End of explanation X_train, Y_train = create_sequences(titles_train), encode_labels(sources_train) X_valid, Y_valid = create_sequences(titles_valid), encode_labels(sources_valid) X_train[:3] Y_train[:3] Explanation: Using create_sequence and encode_labels, we can now prepare the training and validation data to feed our models. The features will be padded list of integers and the labels will be one-hot-encoded 3D vectors. End of explanation def build_dnn_model(embed_dim): model = Sequential( [ Embedding( VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN] ), # TODO 3 Lambda(lambda x: tf.reduce_mean(x, axis=1)), # TODO 4 Dense(N_CLASSES, activation="softmax"), # TODO 5 ] ) model.compile( optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"] ) return model Explanation: Building a DNN model The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class. Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words". End of explanation %%time tf.random.set_seed(33) MODEL_DIR = os.path.join(LOGDIR, "dnn") shutil.rmtree(MODEL_DIR, ignore_errors=True) BATCH_SIZE = 300 EPOCHS = 100 EMBED_DIM = 10 PATIENCE = 0 dnn_model = build_dnn_model(embed_dim=EMBED_DIM) dnn_history = dnn_model.fit( X_train, Y_train, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_data=(X_valid, Y_valid), callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)], ) pd.DataFrame(dnn_history.history)[["loss", "val_loss"]].plot() pd.DataFrame(dnn_history.history)[["accuracy", "val_accuracy"]].plot() dnn_model.summary() Explanation: Below we train the model on 100 epochs but adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved after a number of steps specified by PATIENCE . Note that we also give the model.fit method a Tensorboard callback so that we can later compare all the models using TensorBoard. 
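If you want to inspect those logs from the notebook (optional, not a required lab step), the standard TensorBoard notebook extension can be pointed at the log directory defined above:
%load_ext tensorboard
%tensorboard --logdir ./text_models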
End of explanation def build_rnn_model(embed_dim, units): model = Sequential( [ Embedding( VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True ), # TODO 3 GRU(units), # TODO 5 Dense(N_CLASSES, activation="softmax"), ] ) model.compile( optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"] ) return model Explanation: Building a RNN model The build_dnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence. The first and last layers are the same as for the simple DNN model. Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers. End of explanation %%time tf.random.set_seed(33) MODEL_DIR = os.path.join(LOGDIR, "rnn") shutil.rmtree(MODEL_DIR, ignore_errors=True) EPOCHS = 100 BATCH_SIZE = 300 EMBED_DIM = 10 UNITS = 16 PATIENCE = 0 rnn_model = build_rnn_model(embed_dim=EMBED_DIM, units=UNITS) history = rnn_model.fit( X_train, Y_train, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_data=(X_valid, Y_valid), callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)], ) pd.DataFrame(history.history)[["loss", "val_loss"]].plot() pd.DataFrame(history.history)[["accuracy", "val_accuracy"]].plot() rnn_model.summary() Explanation: Let's train the model with early stoping as above. Observe that we obtain the same type of accuracy as with the DNN model, but in less epochs (~3 v.s. ~20 epochs): End of explanation def build_cnn_model(embed_dim, filters, ksize, strides): model = Sequential( [ Embedding( VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True ), # TODO 3 Conv1D( # TODO 5 filters=filters, kernel_size=ksize, strides=strides, activation="relu", ), Flatten(), # TODO 5 Dense(N_CLASSES, activation="softmax"), ] ) model.compile( optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"] ) return model Explanation: Build a CNN model The build_dnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence. The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer betwen the convolution and the softmax layer. Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers. End of explanation %%time tf.random.set_seed(33) MODEL_DIR = os.path.join(LOGDIR, "cnn") shutil.rmtree(MODEL_DIR, ignore_errors=True) EPOCHS = 100 BATCH_SIZE = 300 EMBED_DIM = 5 FILTERS = 200 STRIDES = 2 KSIZE = 3 PATIENCE = 0 cnn_model = build_cnn_model( embed_dim=EMBED_DIM, filters=FILTERS, strides=STRIDES, ksize=KSIZE, ) cnn_history = cnn_model.fit( X_train, Y_train, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_data=(X_valid, Y_valid), callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)], ) pd.DataFrame(cnn_history.history)[["loss", "val_loss"]].plot() pd.DataFrame(cnn_history.history)[["accuracy", "val_accuracy"]].plot() cnn_model.summary() Explanation: Let's train the model. Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps. End of explanation
12,686
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualizing BigQuery data in a Jupyter notebook BigQuery is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near real time. Data visualization tools can help you make sense of your BigQuery data and help you analyze the data interactively. You can use visualization tools to help you identify trends, respond to them, and make predictions using your data. In this tutorial, you use the BigQuery Python client library and pandas in a Jupyter notebook to visualize data in the BigQuery natality sample table. Using Jupyter magics to query BigQuery data The BigQuery Python client library provides a magic command that allows you to run queries with minimal code. The BigQuery client library provides a cell magic, %%bigquery. The %%bigquery magic runs a SQL query and returns the results as a pandas DataFrame. The following cell executes a query of the BigQuery natality public dataset and returns the total births by year. Step1: The following command runs the same query, but this time the results are saved to a variable. The variable name, total_births, is given as an argument to the %%bigquery magic. The results can then be used for further analysis and visualization. Step2: The next cell uses the pandas DataFrame.plot method to visualize the query results as a bar chart. See the pandas documentation to learn more about data visualization with pandas. Step3: Run the following query to retrieve the number of births by weekday. Because the wday (weekday) field allows null values, the query excludes records where wday is null. Step4: Visualize the query results using a line chart. Step5: Using Python to query BigQuery data Magic commands allow you to use minimal syntax to interact with BigQuery. Behind the scenes, %%bigquery uses the BigQuery Python client library to run the given query, convert the results to a pandas DataFrame, optionally save the results to a variable, and finally display the results. Using the BigQuery Python client library directly instead of through magic commands gives you more control over your queries and allows for more complex configurations. The library's integrations with pandas enable you to combine the power of declarative SQL with imperative code (Python) to perform interesting data analysis, visualization, and transformation tasks. To use the BigQuery Python client library, start by importing the library and initializing a client. The BigQuery client is used to send and receive messages from the BigQuery API. Step7: Use the Client.query method to run a query. Execute the following cell to run a query to retrieve the annual count of plural births by plurality (2 for twins, 3 for triplets, etc.). Step8: To chart the query results in your DataFrame, run the following cell to pivot the data and create a stacked bar chart of the count of plural births over time. Step10: Run the following query to retrieve the count of births by the number of gestation weeks. Step11: Finally, chart the query results in your DataFrame.
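Before the code section, here is a minimal sketch of the "behind the scenes" equivalence described above (illustrative only; it reuses the first query from this tutorial): the %%bigquery magic is roughly shorthand for running the query through the client library and converting the result to a DataFrame.
from google.cloud import bigquery
client = bigquery.Client()
sql = """
    SELECT source_year AS year, COUNT(is_male) AS birth_count
    FROM `bigquery-public-data.samples.natality`
    GROUP BY year
    ORDER BY year DESC
    LIMIT 15
"""
total_births = client.query(sql).to_dataframe()  # same DataFrame the %%bigquery magic returns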
Python Code: %%bigquery SELECT source_year AS year, COUNT(is_male) AS birth_count FROM `bigquery-public-data.samples.natality` GROUP BY year ORDER BY year DESC LIMIT 15 Explanation: Vizualizing BigQuery data in a Jupyter notebook BigQuery is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near realtime. Data visualization tools can help you make sense of your BigQuery data and help you analyze the data interactively. You can use visualization tools to help you identify trends, respond to them, and make predictions using your data. In this tutorial, you use the BigQuery Python client library and pandas in a Jupyter notebook to visualize data in the BigQuery natality sample table. Using Jupyter magics to query BigQuery data The BigQuery Python client library provides a magic command that allows you to run queries with minimal code. The BigQuery client library provides a cell magic, %%bigquery. The %%bigquery magic runs a SQL query and returns the results as a pandas DataFrame. The following cell executes a query of the BigQuery natality public dataset and returns the total births by year. End of explanation %%bigquery total_births SELECT source_year AS year, COUNT(is_male) AS birth_count FROM `bigquery-public-data.samples.natality` GROUP BY year ORDER BY year DESC LIMIT 15 Explanation: The following command to runs the same query, but this time the results are saved to a variable. The variable name, total_births, is given as an argument to the %%bigquery. The results can then be used for further analysis and visualization. End of explanation total_births.plot(kind="bar", x="year", y="birth_count"); Explanation: The next cell uses the pandas DataFrame.plot method to visualize the query results as a bar chart. See the pandas documentation to learn more about data visualization with pandas. End of explanation %%bigquery births_by_weekday SELECT wday, SUM(CASE WHEN is_male THEN 1 ELSE 0 END) AS male_births, SUM(CASE WHEN is_male THEN 0 ELSE 1 END) AS female_births FROM `bigquery-public-data.samples.natality` WHERE wday IS NOT NULL GROUP BY wday ORDER BY wday ASC Explanation: Run the following query to retrieve the number of births by weekday. Because the wday (weekday) field allows null values, the query excludes records where wday is null. End of explanation births_by_weekday.plot(x="wday"); Explanation: Visualize the query results using a line chart. End of explanation from google.cloud import bigquery client = bigquery.Client() Explanation: Using Python to query BigQuery data Magic commands allow you to use minimal syntax to interact with BigQuery. Behind the scenes, %%bigquery uses the BigQuery Python client library to run the given query, convert the results to a pandas Dataframe, optionally save the results to a variable, and finally display the results. Using the BigQuery Python client library directly instead of through magic commands gives you more control over your queries and allows for more complex configurations. The library's integrations with pandas enable you to combine the power of declarative SQL with imperative code (Python) to perform interesting data analysis, visualization, and transformation tasks. To use the BigQuery Python client library, start by importing the library and initializing a client. The BigQuery client is used to send and receive messages from the BigQuery API. 
End of explanation sql = SELECT plurality, COUNT(1) AS count, year FROM `bigquery-public-data.samples.natality` WHERE NOT IS_NAN(plurality) AND plurality > 1 GROUP BY plurality, year ORDER BY count DESC df = client.query(sql).to_dataframe() df.head() Explanation: Use the Client.query method to run a query. Execute the following cell to run a query to retrieve the annual count of plural births by plurality (2 for twins, 3 for triplets, etc.). End of explanation pivot_table = df.pivot(index="year", columns="plurality", values="count") pivot_table.plot(kind="bar", stacked=True, figsize=(15, 7)); Explanation: To chart the query results in your DataFrame, run the following cell to pivot the data and create a stacked bar chart of the count of plural births over time. End of explanation sql = SELECT gestation_weeks, COUNT(1) AS count FROM `bigquery-public-data.samples.natality` WHERE NOT IS_NAN(gestation_weeks) AND gestation_weeks <> 99 GROUP BY gestation_weeks ORDER BY gestation_weeks df = client.query(sql).to_dataframe() Explanation: Run the following query to retrieve the count of births by the number of gestation weeks. End of explanation ax = df.plot(kind="bar", x="gestation_weeks", y="count", figsize=(15, 7)) ax.set_title("Count of Births by Gestation Weeks") ax.set_xlabel("Gestation Weeks") ax.set_ylabel("Count"); Explanation: Finally, chart the query results in your DataFrame. End of explanation
12,687
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-1', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: CCCR-IITM Source ID: SANDBOX-1 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:48 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
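Every code_prompt cell in the CMIP6 ocean row above repeats the same two-call fill-in pattern, so a single completed example may help make the expected answer concrete. The completion below is only an illustrative sketch: it assumes the DOC notebook object initialised earlier in the same row, and the values are placeholders picked from the valid choices the cells themselves list, not the settings of any particular ocean model.
# Hypothetical completions of two of the cells above, for illustration only.
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
DOC.set_value(True)  # placeholder pick from the documented True/False choices
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
DOC.set_value("Leap-frog + Asselin filter")  # placeholder pick from the listed ENUM choices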
12,688
Given the following text description, write Python code to implement the functionality described below step by step Description: MCMC Demonstration Markov Chain Monte Carlo is a useful technique for fitting models to data and obtaining estimates for the uncertainties of the model parameters. There are a slew of python modules and interfaces to do MCMC including Step1: Emcee has multithreadding support. Set this to the number of cores you would like to use. In this demo we will use the python multiprocessing module support built in to emcee. Emcee can also use MPI if you're working on a cluster and want to distribute the job across nodes. See the documentation for that. Step2: Fitting a Line Generate and Plot Some Random Data Give it both x and y errors. Step3: Least-Squares Fit (ignoring the x-errors) Step4: Maximum likelihood So, we need to define a likelihood function. Step5: What about the Errors? This is where MCMC comes in. But we need to add some priors for the parameters and use those priors Step6: That took about 10 seconds on my desktop (3.4 GHz Core i7). What is the acceptance rate? Lore has it that this should be between $0.3-0.5$. Step7: This acceptance rate is okay. If it is too low, the emcee documentation suggests increasing the number of walkers until the acceptance fraction is at the desired level. Let's visualize the chains. Step8: It looks like the walkers have "burned in" by 50 steps, so keep only those samples after 50 steps. Step9: What does this look like? Let's visualize with the traditional corner plot. I will give it the actual line parameters with the "truths" parameter, so we can see how our results compare to the actual values. Step10: Now let's plot a bunch of sample fits from the MCMC chain, on top of our data and other models. Step12: Astrophysical Example Step13: MCMC This will show how you might use informative priors. Let's make sure it knows that the dust needs to be warmer than the CMB and that the amplitude needs to be positive. Also, "normal" galaxies have dust temperatures of ~25 K, with a dispersion of a few 2K. Let's set the prior on temperature to be a Gaussian centered at 25 K with a sigma of 2.5K. Step14: Because of the larger parameter space and more complex model, this will take longer to run. Step15: Again, look at the distribution of parameter estimates. But here, show the estimated parameters from the maximum likelihood model as the "true" values. Step16: The offsets between the MCMC median values and the maximum likelihood are at least partially a consequence of our chosen prior on the temperature.
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt import emcee import corner Explanation: MCMC Demonstration Markov Chain Monte Carlo is a useful technique for fitting models to data and obtaining estimates for the uncertainties of the model parameters. There are a slew of python modules and interfaces to do MCMC including: emcee PyMC pymultinest emcee is fairly straightforward to use, so this demonstration is written to use that. pymultinest is worth investigating if you have large numbers of parameters (say, > 30) and/or multi-nodal solution spaces. Required Packages For this demo you should have the following packages installed (in addition to standard ones like numpy, scipy, matplotlib, and astropy): emcee corner Optionally, install the dust_emissivity package for the last section, on fitting a blackbody. Demo Overview This demo will proceed very simply, and will follow the emcee tutorial for fititng a line. At the end is a short, astrophysical example, which includes non-flat priors. Preliminaries End of explanation nthreads = 2 Explanation: Emcee has multithreadding support. Set this to the number of cores you would like to use. In this demo we will use the python multiprocessing module support built in to emcee. Emcee can also use MPI if you're working on a cluster and want to distribute the job across nodes. See the documentation for that. End of explanation # define our true relation m_true = 1.7 b_true = 2.7 f_true = 0.3 # generate some data N = 30 x = np.sort(10*np.random.rand(N)) yerr = 0.2+0.6*np.random.rand(N) y = m_true*x+b_true y += np.abs(f_true*y) * np.random.randn(N) y += yerr * np.random.randn(N) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(x, y, yerr=yerr, ls='', marker='.', color='gray', label='Data') ax.plot(x, m_true*x + b_true, color='black', ls='-', label='True Relation') ax.set_ylabel('y', fontsize='x-large') ax.set_xlabel('x', fontsize='x-large') ax.minorticks_on() ax.legend(loc='best') Explanation: Fitting a Line Generate and Plot Some Random Data Give it both x and y errors. 
End of explanation A = np.vstack((np.ones_like(x), x)).T C = np.diag(yerr * yerr) cov = np.linalg.inv(np.dot(A.T, np.linalg.solve(C, A))) b_ls, m_ls = np.dot(cov, np.dot(A.T, np.linalg.solve(C, y))) print('Least squares fitting result:') print('slope: {0:1.2f}'.format(m_ls)) print('y-intercept: {0:1.2f}'.format(b_ls)) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(x, y, yerr=yerr, ls='', marker='.', color='gray', label='Data') ax.plot(x, m_true*x + b_true, color='black', ls='-', label='True Relation') ax.plot(x, m_ls * x + b_ls, color='red', ls=':', label='Least Squares') ax.set_ylabel('y', fontsize='x-large') ax.set_xlabel('x', fontsize='x-large') ax.minorticks_on() ax.legend(loc='best') Explanation: Least-Squares Fit (ignoring the x-errors) End of explanation import scipy.optimize as op def lnlike(theta, x, y, yerr): b, m, lnf = theta model = m * x + b inv_sigma2 = 1.0/(yerr**2 + model**2*np.exp(2*lnf)) return -0.5*(np.sum((y-model)**2*inv_sigma2 - np.log(inv_sigma2))) # let's make some initial guesses for our parameters # remember this is now theta and b_perp p2 = [b_true, m_true, f_true] nll = lambda *args: -lnlike(*args) result = op.minimize(nll, p2, args=(x, y, yerr)) if not(result['success']): print("Max likelihood failed.") print(result['message']) ml_b, ml_m, ml_f = result['x'] print("Maximum likelihood result:") print("slope: {0:1.2f}".format(ml_m)) print("y-intercept: {0:1.2f}".format(ml_b)) print("ln(f): {0:1.2f}".format(ml_f)) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(x, y, yerr=yerr, ls='', marker='.', color='gray', label='Data') ax.plot(x, m_true*x + b_true, color='black', ls='-', label='True Relation') ax.plot(x, m_ls * x + b_ls, color='red', ls=':', label='Least Squares') ax.plot(x, ml_m * x + ml_b, color='blue', ls='--', label='Max likelihood') ax.set_ylabel('y', fontsize='x-large') ax.set_xlabel('x', fontsize='x-large') ax.minorticks_on() ax.legend(loc='best') Explanation: Maximum likelihood So, we need to define a likelihood function. End of explanation def lnprior(theta): b, m, lnf = theta if lnf >= 0.0: return -np.inf return 0.0 def lnprob(theta, x, y, yerr): lp = lnprior(theta) if not np.isfinite(lp): return -np.inf return lp + lnlike(theta, x, y, yerr) # now let's set up the MCMC chains ndim = 3 nwalkers = 500 steps = 500 # initialize the walkers to the vicinity of the parameters derived from # ML pos = [result["x"] + 1e-3*np.random.randn(ndim) for i in range(nwalkers)] # initialze the sampler sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(x, y, yerr), threads=nthreads) # go! go! go! # run the sampler for 500 steps sampler.run_mcmc(pos, steps) samples = sampler.chain Explanation: What about the Errors? This is where MCMC comes in. But we need to add some priors for the parameters and use those priors End of explanation print("Mean acceptance rate is: {0:1.2f}".format(np.mean(sampler.acceptance_fraction))) Explanation: That took about 10 seconds on my desktop (3.4 GHz Core i7). What is the acceptance rate? Lore has it that this should be between $0.3-0.5$. End of explanation fig = plt.figure() dim_name = [r'$b$', r'$m$', r'$\ln f$'] for dim in range(ndim): ax = fig.add_subplot(ndim, 1, dim+1) for i in range(nwalkers): ax.plot(np.arange(steps), samples[i, :, dim], ls='-', color='black', alpha=10./nwalkers) ax.set_ylabel(dim_name[dim], fontsize='large') ax.set_xlabel('step', fontsize='large') Explanation: This acceptance rate is okay. 
If it is too low, the emcee documentation suggests increasing the number of walkers until the acceptance fraction is at the desired level. Let's visualize the chains. End of explanation samples = sampler.chain[:, 50:, :].reshape((-1, ndim)) Explanation: It looks like the walkers have "burned in" by 50 steps, so keep only those samples after 50 steps. End of explanation fig = corner.corner(samples, labels=[r"$b$", r"$m$", r"$\ln\,f$"], quantiles=[0.16, 0.5, 0.84], truths=[b_true, m_true, np.log(f_true)], show_titles=True) Explanation: What does this look like? Let's visualize with the traditional corner plot. I will give it the actual line parameters with the "truths" parameter, so we can see how our results compare to the actual values. End of explanation fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(x, y, yerr=yerr, ls='', marker='.', color='gray', label='Data') ax.plot(x, m_true*x + b_true, color='black', ls='-', label='True Relation') ax.plot(x, m_ls * x + b_ls, color='red', ls=':', label='Least Squares') ax.plot(x, ml_m * x + ml_b, color='blue', ls='--', label='Max likelihood') for b, m, lnf in samples[np.random.randint(len(samples), size=100)]: ax.plot(x, m * x + b, color='green', alpha=0.1) ax.set_ylabel('y', fontsize='x-large') ax.set_xlabel('x', fontsize='x-large') ax.minorticks_on() ax.legend(loc='best') samples[:, 2] = np.exp(samples[:, 2]) b_mcmc, m_mcmc, f_mcmc = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]), zip(*np.percentile(samples, [16, 50, 84], axis=0))) print("MCMC Parameter estimates:") print("slope: {0:1.2f} (+{1:1.2f}, -{2:1.2f})".format(m_mcmc[0], m_mcmc[1], m_mcmc[2])) print("y-intercept: {0:1.2f} (+{1:1.2f}, -{2:1.2f})".format(b_mcmc[0], b_mcmc[1], b_mcmc[2])) print("\nTrue values:") print("slope: {0:1.2f}".format(m_true)) print("y-intercept: {0:1.2f}".format(b_true)) Explanation: Now let's plot a bunch of sample fits from the MCMC chain, on top of our data and other models. End of explanation from dust_emissivity.blackbody import modified_blackbody import astropy.units as u def fit_bb(x, *p): simpler wrapper function to get the units right I don't care about the absolute amplitude, so the 1e-9 factor is just for numerical happiness. return 1.e-9* p[1] * modified_blackbody((x*u.micron).to(u.Hz, equivalencies=u.spectral()), p[0] * u.K, beta=p[2], kappa0=0.48*u.m**2/u.kg, nu0=(250*u.micron).to('Hz', u.spectral())).to('Jy').value FIRm = np.array([(70., 50., 2.6), (100., 55., 2.3), (160., 34., 1.6), (250., 12., 0.8), (350., 4.6, 0.3), (500., 1.3, 0.1)], dtype=[('wave', float), ('flux', float), ('dflux', float)]) plotrange = np.arange(FIRm['wave'][0], FIRm['wave'][-1], 1) def lnlike(theta, x, y, yerr): T, amp, beta, lnf = theta model = fit_bb(x, T, amp, beta) inv_sigma2 = 1.0 / (yerr**2 + model**2*np.exp(2*lnf)) return -0.5 * np.sum((y-model)**2*inv_sigma2 - np.log(inv_sigma2)) # initial guesses. 
25K, arbitrary p0 = [25, 1, 1.8, -1] nll = lambda *args: -lnlike(*args) maxlike = op.minimize(nll, p0, args=(FIRm['wave'], FIRm['flux'], FIRm['dflux'])) Tfit, Ampfit, betafit, lnffit = maxlike["x"] print("Max likelihood:") print("T: {0:1.1f} K".format(Tfit)) print("amp: {0:1.1f}".format(Ampfit)) print("beta: {0:1.2f}".format(betafit)) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(FIRm['wave'], FIRm['flux'], yerr=FIRm['dflux'], ls='', marker='.', color='black', label='Herschel PACS+SPIRE') ax.plot(plotrange, fit_bb(plotrange, Tfit, Ampfit, betafit), color='red', label='Max likelihood') ax.set_ylabel(r'F$_{\nu}$') ax.set_xlabel('$\lambda$ ($\mu m$)') ax.set_xlim([60, 600]) ax.set_yscale('log') ax.set_xscale('log') ax.legend(loc='best') Explanation: Astrophysical Example: FIR SED Let's say we have Herschel PACS+SPIRE photometry and we want to get the dust temperature... End of explanation def lnprior(theta): T, amp, lnf, beta = theta if T >= 2.73 and amp > 0.: return -1 * (T - 25)**2 / (2 * 2.5**2) return -np.inf def lnprob(theta, x, y, yerr): lp = lnprior(theta) if not(np.isfinite(lp)): return -np.inf return lp + lnlike(theta, x, y, yerr) ndim, nwalkers = 4, 300 pos = [maxlike["x"] + 1e-4 * np.random.randn(ndim) for i in range(nwalkers)] Explanation: MCMC This will show how you might use informative priors. Let's make sure it knows that the dust needs to be warmer than the CMB and that the amplitude needs to be positive. Also, "normal" galaxies have dust temperatures of ~25 K, with a dispersion of a few 2K. Let's set the prior on temperature to be a Gaussian centered at 25 K with a sigma of 2.5K. End of explanation sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(FIRm['wave'], FIRm['flux'], FIRm['dflux']), threads=nthreads) sampler.run_mcmc(pos, 1000) samples = sampler.chain[:, 100:, :].reshape((-1, ndim)) Explanation: Because of the larger parameter space and more complex model, this will take longer to run. End of explanation # show best-fit values as the "truth" values fig = corner.corner(samples, labels=["T", "Amp", r"$\beta$", r"$\ln\,f$"], quantiles=[0.16, 0.5, 0.84], show_titles=True, truths=[Tfit, Ampfit, betafit, lnffit]) Explanation: Again, look at the distribution of parameter estimates. But here, show the estimated parameters from the maximum likelihood model as the "true" values. End of explanation fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.errorbar(FIRm['wave'], FIRm['flux'], yerr=FIRm['dflux'], ls='', marker='.', color='black', label='Herschel PACS+SPIRE') ax.plot(plotrange, fit_bb(plotrange, Tfit, Ampfit, betafit), color='red', label='Max likelihood') for T, A, b, lnf in samples[np.random.randint(len(samples), size=100)]: ax.plot(plotrange, fit_bb(plotrange, T, A, b), color='green', alpha=0.05) ax.set_ylabel(r'F$_{\nu}$') ax.set_xlabel('$\lambda$ ($\mu m$)') ax.set_xlim([60, 600]) ax.set_yscale('log') ax.set_xscale('log') ax.legend(loc='best') samples[:, 3] = np.exp(samples[:, 3]) T_mcmc, A_mcmc, beta_mcmc, f_mcmc = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]), zip(*np.percentile(samples, [16, 50, 84], axis=0))) print("MCMC Parameter estimates:") print("T: {0:1.2f} (+{1:1.2f}, -{2:1.2f}) K".format(T_mcmc[0], T_mcmc[1], T_mcmc[2])) print("beta: {0:1.2f} (+{1:1.2f}, -{2:1.2f})".format(beta_mcmc[0], beta_mcmc[1], beta_mcmc[2])) Explanation: The offsets between the MCMC median values and the maximum likelihood are at least partially a consequence of our chosen prior on the temperature. End of explanation
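One step the MCMC row above stops short of is turning the posterior samples into an uncertainty band on the fitted model itself, which is often the quantity of interest. The sketch below is an added illustration, not part of the original notebook: it assumes the line-fit chain has been kept under a hypothetical name line_samples (a copy of the flattened samples array with columns b, m, ln f, taken before that array is reused for the FIR fit), and it relies only on the numpy and matplotlib imports already made in the row.
# Hypothetical post-processing sketch: a 68% credible band for the straight-line model.
x_grid = np.linspace(x.min(), x.max(), 200)
# one model curve per posterior draw: shape (n_draws, len(x_grid))
model_draws = line_samples[:, 1][:, None] * x_grid[None, :] + line_samples[:, 0][:, None]
band_lo, band_med, band_hi = np.percentile(model_draws, [16, 50, 84], axis=0)
fig, ax = plt.subplots()
ax.errorbar(x, y, yerr=yerr, ls='', marker='.', color='gray', label='Data')
ax.fill_between(x_grid, band_lo, band_hi, color='green', alpha=0.3, label='68% credible band')
ax.plot(x_grid, band_med, color='green', label='Posterior median')
ax.legend(loc='best')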
12,689
Given the following text description, write Python code to implement the functionality described below step by step Description: matplotlib font configuration and Korean text Here we explain how to use a font other than the default, and how to use Korean (Hangul) text, in plots drawn with matplotlib on a Linux operating system. Installing fonts To use a particular font in matplotlib, the font must first be installed on the system. Whether a font is installed can be checked with the fc-list command. The datascienceschool/rpython image has the fonts listed below installed; the Korean fonts include the Nanum and Un font families. Step1: Checking fonts in matplotlib By default matplotlib uses the True Type fonts among those installed on the system. To list the fonts matplotlib can use, call the font manager's get_fontconfig_fonts command. Step2: To actually configure a font you need the font name, not the font file name. Step3: Using a font There are broadly two ways to use a font: pass it as an argument so that it applies only to an individual text command, or set it in rcParams so that it applies to every figure from then on. Passing an argument To use a different font for an individual command, use the fontdict argument. Step4: rcParams configuration To apply the same font to every figure, set the font.family entry of the rcParams configuration dictionary.
Python Code: !fc-list !fc-list Explanation: matplotlib font configuration and Korean text Here we explain how to use a font other than the default, and how to use Korean (Hangul) text, in plots drawn with matplotlib on a Linux operating system. Installing fonts To use a particular font in matplotlib, the font must first be installed on the system. Whether a font is installed can be checked with the fc-list command. The datascienceschool/rpython image has the fonts listed below installed; the Korean fonts include the Nanum and Un font families. End of explanation mpl.font_manager.get_fontconfig_fonts() mpl.font_manager.get_fontconfig_fonts() Explanation: Checking fonts in matplotlib By default matplotlib uses the True Type fonts among those installed on the system. To list the fonts matplotlib can use, call the font manager's get_fontconfig_fonts command. End of explanation set(sorted([f.name for f in mpl.font_manager.fontManager.ttflist])) Explanation: To actually configure a font you need the font name, not the font file name. End of explanation font1 = {'family': 'UnPenheulim', 'color': 'black', 'size': 24} font2 = {'family': 'UnGungseo', 'color': 'darkred', 'weight': 'normal', 'size': 18} font3 = {'family': 'NanumGothic', 'color': 'blue', 'weight': 'light', 'size': 12} x = np.linspace(0.0, 5.0, 100) y = np.cos(2*np.pi*x) * np.exp(-x) plt.plot(x, y, 'k') plt.title(u'한글 제목', fontdict=font1) plt.xlabel(u'시간 (s)', fontdict=font2) plt.ylabel(u'전압 (mV)', fontdict=font3) plt.subplots_adjust() plt.show() Explanation: Using a font There are broadly two ways to use a font: pass it as an argument so that it applies only to an individual text command, or set it in rcParams so that it applies to every figure from then on. Passing an argument To use a different font for an individual command, use the fontdict argument. End of explanation mpl.rcParams['font.family'] = 'Nanum Brush Script' x = np.linspace(0.0, 5.0, 100) y = np.cos(2*np.pi*x) * np.exp(-x) plt.plot(x, y, 'k') plt.title(u'한글 제목', size=36) plt.xlabel(u'시간 (s)', size=24) plt.ylabel(u'전압 (mV)', size=18) plt.subplots_adjust() plt.show() Explanation: rcParams configuration To apply the same font to every figure, set the font.family entry of the rcParams configuration dictionary. End of explanation
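A third option, not covered in the font row above, is to point matplotlib at a font file directly with FontProperties(fname=...), which avoids depending on how the family name is registered. The sketch below is an added illustration: the .ttf path is a hypothetical example and should be replaced by a file actually reported by fc-list on the system in use.
import matplotlib.font_manager as fm
# Hypothetical path: substitute a real file from the fc-list output above.
font_path = '/usr/share/fonts/truetype/nanum/NanumGothic.ttf'
font_prop = fm.FontProperties(fname=font_path)
x = np.linspace(0.0, 5.0, 100)
y = np.cos(2*np.pi*x) * np.exp(-x)
plt.plot(x, y, 'k')
plt.title(u'한글 제목', fontproperties=font_prop)
plt.show()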
12,690
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmos MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required Step9: 2.2. Canonical Horizontal Resolution Is Required Step10: 2.3. Range Horizontal Resolution Is Required Step11: 2.4. Number Of Vertical Levels Is Required Step12: 2.5. High Top Is Required Step13: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required Step14: 3.2. Timestep Shortwave Radiative Transfer Is Required Step15: 3.3. Timestep Longwave Radiative Transfer Is Required Step16: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required Step17: 4.2. Changes Is Required Step18: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. 
Overview Is Required Step19: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required Step20: 6.2. Scheme Method Is Required Step21: 6.3. Scheme Order Is Required Step22: 6.4. Horizontal Pole Is Required Step23: 6.5. Grid Type Is Required Step24: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required Step25: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required Step26: 8.2. Name Is Required Step27: 8.3. Timestepping Type Is Required Step28: 8.4. Prognostic Variables Is Required Step29: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required Step30: 9.2. Top Heat Is Required Step31: 9.3. Top Wind Is Required Step32: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required Step33: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required Step34: 11.2. Scheme Method Is Required Step35: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required Step36: 12.2. Scheme Characteristics Is Required Step37: 12.3. Conserved Quantities Is Required Step38: 12.4. Conservation Method Is Required Step39: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required Step40: 13.2. Scheme Characteristics Is Required Step41: 13.3. Scheme Staggering Type Is Required Step42: 13.4. Conserved Quantities Is Required Step43: 13.5. Conservation Method Is Required Step44: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required Step45: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required Step46: 15.2. Name Is Required Step47: 15.3. Spectral Integration Is Required Step48: 15.4. Transport Calculation Is Required Step49: 15.5. Spectral Intervals Is Required Step50: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required Step51: 16.2. ODS Is Required Step52: 16.3. Other Flourinated Gases Is Required Step53: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required Step54: 17.2. Physical Representation Is Required Step55: 17.3. Optical Methods Is Required Step56: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required Step57: 18.2. Physical Representation Is Required Step58: 18.3. Optical Methods Is Required Step59: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required Step60: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required Step61: 20.2. Physical Representation Is Required Step62: 20.3. Optical Methods Is Required Step63: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required Step64: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required Step65: 22.2. Name Is Required Step66: 22.3. Spectral Integration Is Required Step67: 22.4. 
Transport Calculation Is Required Step68: 22.5. Spectral Intervals Is Required Step69: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required Step70: 23.2. ODS Is Required Step71: 23.3. Other Flourinated Gases Is Required Step72: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required Step73: 24.2. Physical Reprenstation Is Required Step74: 24.3. Optical Methods Is Required Step75: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required Step76: 25.2. Physical Representation Is Required Step77: 25.3. Optical Methods Is Required Step78: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required Step79: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required Step80: 27.2. Physical Representation Is Required Step81: 27.3. Optical Methods Is Required Step82: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required Step83: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required Step84: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required Step85: 30.2. Scheme Type Is Required Step86: 30.3. Closure Order Is Required Step87: 30.4. Counter Gradient Is Required Step88: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required Step89: 31.2. Scheme Type Is Required Step90: 31.3. Scheme Method Is Required Step91: 31.4. Processes Is Required Step92: 31.5. Microphysics Is Required Step93: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required Step94: 32.2. Scheme Type Is Required Step95: 32.3. Scheme Method Is Required Step96: 32.4. Processes Is Required Step97: 32.5. Microphysics Is Required Step98: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required Step99: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required Step100: 34.2. Hydrometeors Is Required Step101: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required Step102: 35.2. Processes Is Required Step103: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required Step104: 36.2. Name Is Required Step105: 36.3. Atmos Coupling Is Required Step106: 36.4. Uses Separate Treatment Is Required Step107: 36.5. Processes Is Required Step108: 36.6. Prognostic Scheme Is Required Step109: 36.7. Diagnostic Scheme Is Required Step110: 36.8. Prognostic Variables Is Required Step111: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required Step112: 37.2. Cloud Inhomogeneity Is Required Step113: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required Step114: 38.2. Function Name Is Required Step115: 38.3. Function Order Is Required Step116: 38.4. Convection Coupling Is Required Step117: 39. 
Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required Step118: 39.2. Function Name Is Required Step119: 39.3. Function Order Is Required Step120: 39.4. Convection Coupling Is Required Step121: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required Step122: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required Step123: 41.2. Top Height Direction Is Required Step124: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required Step125: 42.2. Number Of Grid Points Is Required Step126: 42.3. Number Of Sub Columns Is Required Step127: 42.4. Number Of Levels Is Required Step128: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required Step129: 43.2. Type Is Required Step130: 43.3. Gas Absorption Is Required Step131: 43.4. Effective Radius Is Required Step132: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required Step133: 44.2. Overlap Is Required Step134: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required Step135: 45.2. Sponge Layer Is Required Step136: 45.3. Background Is Required Step137: 45.4. Subgrid Scale Orography Is Required Step138: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required Step139: 46.2. Source Mechanisms Is Required Step140: 46.3. Calculation Method Is Required Step141: 46.4. Propagation Scheme Is Required Step142: 46.5. Dissipation Scheme Is Required Step143: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required Step144: 47.2. Source Mechanisms Is Required Step145: 47.3. Calculation Method Is Required Step146: 47.4. Propagation Scheme Is Required Step147: 47.5. Dissipation Scheme Is Required Step148: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required Step149: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required Step150: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required Step151: 50.2. Fixed Value Is Required Step152: 50.3. Transient Characteristics Is Required Step153: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required Step154: 51.2. Fixed Reference Date Is Required Step155: 51.3. Transient Method Is Required Step156: 51.4. Computation Method Is Required Step157: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required Step158: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required Step159: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'pcmdi', 'sandbox-3', 'atmos') Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: PCMDI Source ID: SANDBOX-3 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:36 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. 
Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. 
Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. 
Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.3. 
Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 38.4. 
Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation methodUo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. 
Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-cloumns used to simulate sub-grid variability End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.4. 
Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. 
Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propogation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propogation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation
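A note on actually filling these template cells in: every cell above follows the same pattern, a pre-generated DOC.set_id(...) call that names the property (marked "DO NOT EDIT"), followed by one or more DOC.set_value(...) calls that the author adds by hand. The sketch below is purely illustrative and is not part of the generated template: the property IDs and choice strings are copied from the cells above, the chosen values are placeholders rather than a real model description, and the idea that a 1.N ENUM takes one DOC.set_value call per selected choice is an assumption based on the "PROPERTY VALUE(S)" wording, not something the template states explicitly.

# Illustrative sketch only -- in the generated notebook you would keep each
# cell's DOC.set_id(...) line exactly as generated and only add the
# DOC.set_value(...) lines underneath it.

# ENUM with cardinality 1.1: pick exactly one of the listed valid choices.
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
DOC.set_value("stratospheric aerosols optical thickness")

# ENUM with cardinality 1.N: one DOC.set_value(...) call per selected choice
# (assumed from the "PROPERTY VALUE(S)" comment; check the ES-DOC guidance).
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
DOC.set_value("SW radiation")
DOC.set_value("precipitating energetic particles")

# FLOAT / INTEGER / BOOLEAN properties take a bare, unquoted value.
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
DOC.set_value(1361.0)  # placeholder solar constant in W m-2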
12,691
Given the following text description, write Python code to implement the functionality described below step by step Description: Class 04 ML Models Step1: We will start with a subset of this data to illustrate what we are trying to do here. We use the sample() function to get a small piece of the data (we use the random_state option to make sure we use the same set of data every time, otherwise the data will change). Step2: What we want to do is have the computer learn where the boundary lies between the fast data points and the slow data points. That way we can input in any grade and any bumpiness and the computer will tell us whether to go fast or slow. It looks like there is a region between the two sets of data where we could potentially put our boundary. Step3: How do we decide where in this region to put the boundary? There are a couple of different algorithms that will do the job for us. We're not going to spend time describing how they work - you can look them up if you are interested in the mathematics. Instead, we'll look at how to apply them and look at how well they work. Perceptron The first algorithm is called the Perceptron (information on how it works is found on Wikipedia Step4: Now we import the model and train it, just like we did with the linear regression. Step5: We would like to visualize the decision boundary between the two classes. There are a couple of ways we could do this. For linear models like the perceptron, we can get the coefficients from the model and then plot them as a line. There are a couple of other steps to this, but fortunately, there is code to help us figure it out. Step6: Note that the line isn't very good - remember that we only used a subset of the data to fit the decision boundary. But it still lies in the expected range. There is another way we could plot this Step7: At this point, let's go back to the entire test dataset and fit the decision boundary for it. We'll also look at the out-of-sample performance by plotting the test data instead of the train data. Step8: So, there are a few things to note here. First, the Perceptron has given us a boundary that works fairly well. However, it isn't perfect. There are a few points that are labeled "fast" that will now be classified as "slow". It would be nice to have a way to quantify how well the classifier has performed. We'll look at a new set of tools to do that. Evaluation Metrics First, we review the evaluation metric we've already seen Step9: We can also visualize this as a graphic, showing a shade of color for each of the different values. This is especially useful when we have more than two classes. Because we'll use this again, we define a function that takes the class labels and confusion matrix as inputs and creates the plot. Step10: We can see now that the diagonal entries are what we want- the darker they are, the better we are doing. The off-diagonal terms (the slow-fast and fast-slow terms) are points that have been incorrectly identified. It would be nice if we could distill this matrix down into a single number. Unfortunately, there is no unique way of doing that. There are a couple of different metrics that people use and we can quickly go through them. There is a nice summary here of some of the metrics and how people use them. Class-dependent Metrics The first three metrics depend on what your target is. For example, with the Sensitivity/Recall score, the goal is to either correctly predict when to go slow or to correctly predict when to go fast. 
So there are two outputs from the score, depending on which is more important to you. Of course you could average them if you want and get something in the middle. Class-independent Metrics The last two metrics take all the possibilities into account and wrap them up as a single number. Which metric you use is something of a personal preference. However, it is good practice to use the same metric when comparing different models. Step11: The Perceptron is typically slow and not very flexible. With a large dataset it takes a long time to reach a solution. Altough it is simple to implement, it isn't very good and isn't used much. We'll do one more classifier to compare the two. Naïve Bayes We'll now try the Naïve Bayes classifier. If you are interested in how the classifier works, I suggest either this tutorial or reading the Wikipedia page.. We'll stick to the application and evaluation of the model. One of the advantages of the Naïve Bayes classifier is that it isn't fixed to a linear decision boundary. That means we can account for curved boundaries and maybe do a little bit better than the Perceptron. We use the same set of training/testing features and labels as we used with the Perceptron. That will give us a head-to-head comparison between the two models. Step12: There are a couple of things to note here Step13: Almost across the board, the Naïve Bayes classifier does a little bit better than the Perceptron classifier. It isn't a huge difference, though. On the other hand, the Naïve Bayes classifier is a faster algorithm and handles large datasets better. It also gives us one additional piece of information that can be useful Step14: How confident is the model of that prediction? Let's get the prediction proabilities for that point. Step15: So, we can see that, for this point, the model outputs a 68% chance that the point should be classified as "fast" and, therefore, a 32% chance that it is "slow". We can plot the confidence intervals for these points to show how the model is mapping input values to output values. (Note Step16: So we see that the model has a pretty high probabily of getting the label correct in both corners, but closer to the decision boundary the probability of each label approaches the midpoint of 50%. Logloss Metric We've got one more metric we can use for models that give us access to the prediction probabilities. This metric has the property that if all the points are correctly predicted, it will be 0.0. The closer to zero you are, the better the model is doing at predicting the correct outcomes. It is a class-independent metric and works for models with more than two classes, too. Step17: Assignment You assignment this week is to run through both the Perceptron and the Naïve Bayes classifiers with your classification data. Evaluate both models using each of the metrics we've learned about and compare the performance of the models. If you find that the model fit is taking a long time, you should note that in your assignment as well. How long a model takes to learn is an important parameter. There is a simple way of timing the model performance. We'll run both models again and compare their timing. For the small number of data points we have in this dataset, the timing isn't very different. That may not be the case for your models.
Python Code: import pandas as pd import seaborn as sns sns.set_style("white") #Note the new use of the dtype option here. We can directly tell pandas to use the Speed column as a category in one step. speeddf = pd.read_csv("Class04_speed_data.csv",dtype={'Speed':'category'}) lm = sns.lmplot(x='Grade', y='Bumpiness', data=speeddf, hue='Speed', fit_reg=False) sns.despine(ax=lm.ax, top=False, right=False) Explanation: Class 04 ML Models: Naïve Bayes + Evaluation Metrics We are going to work with classifier models today. We start with a sample dataset from Sebastian Thrun's Udacity Machine Learning course. Here's the scenario: we are building a self-driving car. We have mapped out the course we are taking and created a dataset that indicates, on a scale from 0 to 1, how bumpy the road is and, on the same scale, how steep the road is (measured in "grade"). For each road we need to know whether we should have the car drive "slow" or "fast". For example, we want to slow down for bumpy roads. But we may want to speed up when we are going up steep hills. I've created a sample dataset from fake data that maps this out. We start by loading and plotting the data. End of explanation speedsub = speeddf.sample(16,random_state=55) sns.lmplot(x='Grade', y='Bumpiness', data=speedsub, hue='Speed', fit_reg=False) sns.despine(top=False, right=False) Explanation: We will start with a subset of this data to illustrate what we are trying to do here. We use the sample() function to get a small piece of the data (we use the random_state option to make sure we use the same set of data every time, otherwise the data will change). End of explanation lm = sns.lmplot(x='Grade', y='Bumpiness', data=speedsub, hue='Speed', fit_reg=False) sns.despine(ax=lm.ax, top=False, right=False) from matplotlib.patches import Polygon from matplotlib.collections import PatchCollection patches=[] polygon = Polygon([[.92,0],[1,0],[1,.24],[0,.9],[0,.67]], True) patches.append(polygon) p = PatchCollection(patches, alpha=0.4) lm.ax.add_collection(p) Explanation: What we want to do is have the computer learn where the boundary lies between the fast data points and the slow data points. That way we can input in any grade and any bumpiness and the computer will tell us whether to go fast or slow. It looks like there is a region between the two sets of data where we could potentially put our boundary. End of explanation from sklearn.model_selection import train_test_split trainsub, testsub = train_test_split(speedsub, test_size=0.2, random_state=23) Explanation: How do we decide where in this region to put the boundary? There are a couple of different algorithms that will do the job for us. We're not going to spend time describing how they work - you can look them up if you are interested in the mathematics. Instead, we'll look at how to apply them and look at how well they work. Perceptron The first algorithm is called the Perceptron (information on how it works is found on Wikipedia: https://en.wikipedia.org/wiki/Perceptron#Learning_algorithm). The documentation for the Scikit Learn Perceptron is found here. We'll use a syntax very similar to the pattern we used in Class02. First, we split the data into training and testing sets. 
End of explanation from sklearn.linear_model import Perceptron # Step 1: Create linear regression object model = Perceptron() # Step 2: Train the model using the training sets features = trainsub[['Grade','Bumpiness']].values labels = trainsub['Speed'].values model.fit(features,labels) print("Model Coefficients: {}".format(model.coef_)) print("Model Intercept: {}".format(model.intercept_)) Explanation: Now we import the model and train it, just like we did with the linear regression. End of explanation import matplotlib.pyplot as plt import numpy as np w = model.coef_[0] a = -w[0] / w[1] xx = np.linspace(0,1) yy = a * xx - (model.intercept_[0]) / w[1] # Plot the points lm2 = sns.lmplot(x='Grade', y='Bumpiness', data=speedsub, hue='Speed', fit_reg=False) sns.despine(ax=lm2.ax, top=False, right=False) # Plot our range estimate p2 = PatchCollection(patches, alpha=0.4) lm2.ax.add_collection(p2) # Plot the actual decision boundary plt.plot(xx, yy, 'k-') Explanation: We would like to visualize the decision boundary between the two classes. There are a couple of ways we could do this. For linear models like the perceptron, we can get the coefficients from the model and then plot them as a line. There are a couple of other steps to this, but fortunately, there is code to help us figure it out. End of explanation # Plot the decision boundary. For that, we will assign a color to each # point in the mesh x_min = 0.0; x_max = 1.0 # Mesh x size y_min = 0.0; y_max = 1.0 # Mesh y size h = .01 # step size in the mesh xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Now predict the results at each point and get the categorical values Zpred = model.predict(np.c_[xx.ravel(), yy.ravel()]) Zseries = pd.Series(Zpred, dtype='category') Zvalues = Zseries.cat.codes.values Z = Zvalues.reshape(xx.shape) # First plot our points lm2 = sns.lmplot(x='Grade', y='Bumpiness', data=speedsub, hue='Speed', fit_reg=False) sns.despine(ax=lm2.ax, top=False, right=False) # Now add in the decision boundary plt.pcolormesh(xx, yy, Z, cmap= plt.cm.cool, alpha=0.1) Explanation: Note that the line isn't very good - remember that we only used a subset of the data to fit the decision boundary. But it still lies in the expected range. There is another way we could plot this: we could split our figure into small boxes, then make a prediction for each box. We then plot all the decisions in two different colors, showing the prediction for each box. This gives us a more general tool for plotting not only linear boundaries, but any possible decision boundary. End of explanation train, test = train_test_split(speeddf, test_size=0.2, random_state=23) model2 = Perceptron() features_train = train[['Grade','Bumpiness']].values labels_train = train['Speed'].values features_test = test[['Grade','Bumpiness']].values labels_test = test['Speed'].values model2.fit(features_train,labels_train) Zpred = pd.Series(model2.predict(np.c_[xx.ravel(), yy.ravel()]), dtype='category').cat.codes.values Z = Zpred.reshape(xx.shape) # First plot our points lm = sns.lmplot(x='Grade', y='Bumpiness', data=test, hue='Speed', fit_reg=False) sns.despine(ax=lm.ax, top=False, right=False) plt.pcolormesh(xx, yy, Z, cmap= plt.cm.cool, alpha=0.1) Explanation: At this point, let's go back to the entire test dataset and fit the decision boundary for it. We'll also look at the out-of-sample performance by plotting the test data instead of the train data. 
End of explanation from sklearn.metrics import confusion_matrix class_labels = ["slow", "fast"] y_pred = model2.predict(features_test) cnf_matrix = confusion_matrix(labels_test, y_pred,labels=class_labels) print(cnf_matrix) Explanation: So, there are a few things to note here. First, the Perceptron has given us a boundary that works fairly well. However, it isn't perfect. There are a few points that are labeled "fast" that will now be classified as "slow". It would be nice to have a way to quantify how well the classifier has performed. We'll look at a new set of tools to do that. Evaluation Metrics First, we review the evaluation metric we've already seen: the RMS value for the linear regression. Recall from Class 02 that we calculated this by taking our model prediction, subtracting the actual value, squaring the difference, then averaging over all points in the test set. Finally, we took the square root of this to get the RMS: "[Square]Root [of the] Mean-Squared". A perfect fit would give an RMS of 0.0 and larger RMS values mean that the fit is not performing as well. There are more ways to evaluate the performance of a classifier model. They all start with the confusion matrix, so we'll start there. The Confusion Matrix The first thing we do is recognize that there are, for a binary, or two-state classifier, four possible outcomes when we evaluate each test point: 1. The prediction says "slow" and the actual label says "slow" 2. The prediction says "fast", but the actual label says "slow" 3. The prediction says "slow", but the actual label says "fast" 1. The prediction says "fast" and the actual label says "fast" The first and last possibilies indicate that the prediction did a good job, but the other two mean there were problems. Let's make this into a table: | | | Predicted | Predicted| |:--------: |:-----:|:-----:|:-----:| | | | Slow | Fast | |Actual |Slow | #1 | #2 | |Actual | Fast | #3 | #4 | | Now we need to count how many of each possibility there were using the test data. There is, naturally, a tool to do this for us. End of explanation def show_confusion_matrix(cnf_matrix, class_labels): plt.matshow(cnf_matrix,cmap=plt.cm.YlGn,alpha=0.7) ax = plt.gca() ax.set_xlabel('Predicted Label', fontsize=16) ax.set_xticks(range(0,len(class_labels))) ax.set_xticklabels(class_labels) ax.set_ylabel('Actual Label', fontsize=16, rotation=90) ax.set_yticks(range(0,len(class_labels))) ax.set_yticklabels(class_labels) ax.xaxis.set_label_position('top') ax.xaxis.tick_top() for row in range(len(cnf_matrix)): for col in range(len(cnf_matrix[row])): ax.text(col, row, cnf_matrix[row][col], va='center', ha='center', fontsize=16) show_confusion_matrix(cnf_matrix,class_labels) Explanation: We can also visualize this as a graphic, showing a shade of color for each of the different values. This is especially useful when we have more than two classes. Because we'll use this again, we define a function that takes the class labels and confusion matrix as inputs and creates the plot. 
End of explanation import sklearn.metrics as metrics recall_score = metrics.recall_score(labels_test, y_pred,labels=class_labels,average=None) prec_score = metrics.precision_score(labels_test, y_pred,labels=class_labels,average=None) f1_score = metrics.f1_score(labels_test, y_pred,labels=class_labels,average=None) acc_score = metrics.accuracy_score(labels_test, y_pred) matt_score = metrics.matthews_corrcoef(labels_test, y_pred) print("Class-dependent Metrics") print("Sensitivity/Recall Score: {}".format(recall_score)) print("Precision Score: {}".format(prec_score)) print("F1 Score: {}".format(f1_score)) print("\nClass-independent Metrics") print("Accuracy Score: {}".format(acc_score)) print("Matthews Correlation Coefficient (MCC): {}".format(matt_score)) Explanation: We can see now that the diagonal entries are what we want- the darker they are, the better we are doing. The off-diagonal terms (the slow-fast and fast-slow terms) are points that have been incorrectly identified. It would be nice if we could distill this matrix down into a single number. Unfortunately, there is no unique way of doing that. There are a couple of different metrics that people use and we can quickly go through them. There is a nice summary here of some of the metrics and how people use them. Class-dependent Metrics The first three metrics depend on what your target is. For example, with the Sensitivity/Recall score, the goal is to either correctly predict when to go slow or to correctly predict when to go fast. So there are two outputs from the score, depending on which is more important to you. Of course you could average them if you want and get something in the middle. Class-independent Metrics The last two metrics take all the possibilities into account and wrap them up as a single number. Which metric you use is something of a personal preference. However, it is good practice to use the same metric when comparing different models. End of explanation from sklearn.naive_bayes import GaussianNB nb_model = GaussianNB() nb_model.fit(features_train, labels_train) # Plot the decision boundary Zpred = pd.Series(nb_model.predict(np.c_[xx.ravel(), yy.ravel()]), dtype='category').cat.codes.values Z = Zpred.reshape(xx.shape) lm = sns.lmplot(x='Grade', y='Bumpiness', data=test, hue='Speed', fit_reg=False) sns.despine(ax=lm.ax, top=False, right=False) plt.pcolormesh(xx, yy, Z, cmap= plt.cm.cool, alpha=0.1) # Plot the confusion matrix y_pred_nb = nb_model.predict(features_test) cnf_matrix_nb = confusion_matrix(labels_test, y_pred_nb,labels=class_labels) show_confusion_matrix(cnf_matrix_nb, class_labels) Explanation: The Perceptron is typically slow and not very flexible. With a large dataset it takes a long time to reach a solution. Altough it is simple to implement, it isn't very good and isn't used much. We'll do one more classifier to compare the two. Naïve Bayes We'll now try the Naïve Bayes classifier. If you are interested in how the classifier works, I suggest either this tutorial or reading the Wikipedia page.. We'll stick to the application and evaluation of the model. One of the advantages of the Naïve Bayes classifier is that it isn't fixed to a linear decision boundary. That means we can account for curved boundaries and maybe do a little bit better than the Perceptron. We use the same set of training/testing features and labels as we used with the Perceptron. That will give us a head-to-head comparison between the two models. 
End of explanation recall_score = metrics.recall_score(labels_test, y_pred_nb,labels=class_labels,average=None) prec_score = metrics.precision_score(labels_test, y_pred_nb,labels=class_labels,average=None) f1_score = metrics.f1_score(labels_test, y_pred_nb,labels=class_labels,average=None) acc_score = metrics.accuracy_score(labels_test, y_pred_nb) matt_score = metrics.matthews_corrcoef(labels_test, y_pred_nb) print("Class-dependent Metrics") print("Sensitivity/Recall Score: {}".format(recall_score)) print("Precision Score: {}".format(prec_score)) print("F1 Score: {}".format(f1_score)) print("\nClass-independent Metrics") print("Accuracy Score: {}".format(acc_score)) print("Matthews Correlation Coefficient (MCC): {}".format(matt_score)) Explanation: There are a couple of things to note here: first: the decision boundary is curved! However, it is a fairly simple curve in that it doesn't wiggle very much - it is a smooth arc. This is related to the class Learning Principle of Occam's Razor. A straight-line is the simplest possible decision boundary and, therefore, is valued highly from the perspective of keeping the model as simple as possible. A smooth curve is slightly more complicated, but still fairly simple. The question is: do we gain out-of-sample performance by adding in the complexity of making the decision boundary curve? That brings us to the second point: the confusion matrix now shows us that we have mis-classified 17 points. We compare that to the Perceptron model where we mis-classified 20 points. So we've done a little bit better in terms of out-of-sample performance, which is good. Let's take a look at the other metrics to see how they compare. End of explanation print("Input values: {}".format(features_test[0])) print("Prediction: {}".format(nb_model.predict([features_test[0]]))) Explanation: Almost across the board, the Naïve Bayes classifier does a little bit better than the Perceptron classifier. It isn't a huge difference, though. On the other hand, the Naïve Bayes classifier is a faster algorithm and handles large datasets better. It also gives us one additional piece of information that can be useful: it will tell us the prediction probabilities for each test point. That will give us access to another metric that can be useful. Prediction Probabilities When we make a prediction on one of the test features, the Naïve Bayes classifier will not only tell us its prediction for what the label should be, it will also tell us with what probability it thinks that label is correct. For example, we input in the following values to get the prediction. End of explanation print("Prediction Probabilities: {}".format(nb_model.predict_proba([features_test[0]]))) Explanation: How confident is the model of that prediction? Let's get the prediction proabilities for that point. End of explanation #first, get all the predictions y_proba_nb = nb_model.predict_proba(features_test) test['fastprob'] = y_proba_nb[:,0] cm = plt.cm.get_cmap('YlGn') sc = plt.scatter(x=test['Grade'], y=test['Bumpiness'], c=test['fastprob'] , vmin=0, vmax=1, s=35, cmap=cm) cbr = plt.colorbar(sc) cbr.set_label('Probability of "fast"') plt.xlabel('Grade') plt.xlabel('Bumpiess') Explanation: So, we can see that, for this point, the model outputs a 68% chance that the point should be classified as "fast" and, therefore, a 32% chance that it is "slow". We can plot the confidence intervals for these points to show how the model is mapping input values to output values. (Note: there may be a pandas warning... 
it doesn't appear to affect the outcome, so don't worry about it.)
End of explanation
logloss = metrics.log_loss(labels_test, y_proba_nb)
print("Log loss: {}".format(logloss))
Explanation: So we see that the model has a pretty high probability of getting the label correct in both corners, but closer to the decision boundary the probability of each label approaches the midpoint of 50%.
Logloss Metric
We've got one more metric we can use for models that give us access to the prediction probabilities. This metric has the property that if all the points are correctly predicted, it will be 0.0. The closer to zero you are, the better the model is doing at predicting the correct outcomes. It is a class-independent metric and works for models with more than two classes, too.
End of explanation
import time
# Perceptron Model
start1 = time.time()
model2.fit(features_train,labels_train)
stop1 = time.time()
print("Elapsed time: {} seconds".format(stop1-start1))
# Naïve Bayes model
start2 = time.time()
nb_model.fit(features_train,labels_train)
stop2 = time.time()
print("Elapsed time: {} seconds".format(stop2-start2))
Explanation: Assignment
Your assignment this week is to run through both the Perceptron and the Naïve Bayes classifiers with your classification data. Evaluate both models using each of the metrics we've learned about and compare the performance of the models. If you find that the model fit is taking a long time, you should note that in your assignment as well. How long a model takes to learn is an important consideration. There is a simple way of timing the model performance. We'll run both models again and compare their timing. For the small number of data points we have in this dataset, the timing isn't very different. That may not be the case for your models.
End of explanation
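One small convenience for that assignment (my addition, not part of the original notebook): since every model is judged on the same scores, wrapping them in a reporting helper keeps the comparison consistent. A possible sketch:
import sklearn.metrics as metrics
def report_metrics(y_true, y_pred, class_labels):
    # Class-dependent scores: one value per label, in the order given by class_labels.
    print("Sensitivity/Recall Score: {}".format(metrics.recall_score(y_true, y_pred, labels=class_labels, average=None)))
    print("Precision Score: {}".format(metrics.precision_score(y_true, y_pred, labels=class_labels, average=None)))
    print("F1 Score: {}".format(metrics.f1_score(y_true, y_pred, labels=class_labels, average=None)))
    # Class-independent scores: a single number each.
    print("Accuracy Score: {}".format(metrics.accuracy_score(y_true, y_pred)))
    print("Matthews Correlation Coefficient (MCC): {}".format(metrics.matthews_corrcoef(y_true, y_pred)))
# Quick check with toy labels; with the notebook's variables this would be
# report_metrics(labels_test, y_pred_nb, class_labels).
report_metrics(['slow', 'fast', 'slow', 'fast'], ['slow', 'fast', 'fast', 'fast'], ['fast', 'slow'])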
12,692
Given the following text description, write Python code to implement the functionality described below step by step Description: VSA Syntax and Control Flow This notebook is meant as just a place to express my current thoughts on how to do large-scale VSA development. In particular, I'm interesting in finding a programming syntax that has the right level of abstraction and flexibility, and that syntax also needs to include the ability to control the flow of a VSA system. The vast majority of my current thoughts on this topic come from many discussions over the years at the Univesity of Waterloo Centre for Theoretical Neuroscience with many students, and especially with Jan Gosmann, who was also the primary developer for the nengo-spa package which instantiates many of these ideas. https Step1: Now we can plot the results. The Probe records the vector coming out of the answer variable, so in order to interpret that we can do the dot-product of that vector with the BLUE and CIRCLE vectors in the Vocabulary. Step2: It works! When the query changes from CIRCLE to BLUE at t=5, we get the desired change in output. If you have nengo_gui installed (pip install nengo_gui) you can also draw the resulting network diagram. (Note Step3: Timesteps There is (at least) one surprising thing about the result above Step4: Of course, even if we do specify that we want a one-time-step delay on each connection, then the compiler would still be free to do this sort of optimization; it would just have to also add in some delays to make sure everything works out how the user expects. In Nengo, for historical reasons the way of specifying a one-time-step delay is the unintuitive process of setting synapse=0. Since that is a bit weird, there's also the alternative provided by nengolib (pip install nengolib) of explicitly using the discrete-time operator z and specifying synapse=z**(-1) (or whatever other delay you want). (Note Step5: Now that we have one way of controlling the flow of time in our network, we can use this to start implementing algorithms. For example, let's turn our combined variable into a memory. That is, rather than having the variable go back to zero if the input is removed, let's have it remember its value. An easy way to do this is just to connect it back to itself. But, of course, when we do this we need to specify that there should be a one-time-step delay here (i.e. the input should be the output on the previous time-step). (If you try to make a recurrent loop like this all with synapse=None, the compiler will give you an error since that's impossible). If we get multiple inputs at the same time, I think the default thing that should happen would be to take the sum of the inputs (or, more generally, use whatever bundling operator is defined on your VSA). In this case, this means we can build up a memory by inputing multiple pairs of vectors over time, and then later query any of them. In the following case, we will input a BLUE CIRCLE and then a RED SQUARE, and the memory vector should end up at BLUE*CIRCLE+RED*SQUARE. Step6: Overall, I find this ability to control the flow of time to be extremely important when building up VSA-style algorithms. However, it is unclear to me what the best syntax for this sort of thing might be. The syntax shown above (i.e. setting the synapse parameter) is rather opaque and confusing to new readers. 
However, one other option that makes things a bit more clear would be to use a context manager, so you could do something like this Step7: Another possibility that has been talked about, but not implemented yet, is to try something like this Step8: Non-Linear Mapping Between VSAs (sort of like a cleanup memory but using two different Vocabularies) Step9: More advanced control flow Have a bunch of different actions and choose 1 based on some state similarity measure (i.e. do the action that is the closest match) (or maybe do weighted versions of each action based on the similarity? Many options)
Python Code: import nengo_spa as spa import nengo model = spa.Network() with model: # configure Nengo to just directly conpute things, rather than trying to implement the # network with neurons model.config[nengo.Ensemble].neuron_type = nengo.Direct() model.config[nengo.Connection].synapse = None # This defines the VSA to use, its dimensionality, and maintains the map from symbols to vectors # If `strict` is True, then you have to predefine all symbols; otherwise it'll randomly generate # new ones as you use them. The default VSA to use is HRR. vocab = spa.Vocabulary(64, strict=False) # Define the variables, and indicate they're all using the same VSA and vocabulary color = spa.State(vocab) shape = spa.State(vocab) combined = spa.State(vocab) query = spa.State(vocab) answer = spa.State(vocab) # Connect it up color * shape >> combined combined * ~query >> answer # Define the inputs # Note that it should be possible to infer what the vocabulary should be based on where we're # connecting it to, but Nengo doesn't do that at the moment, so we have to specify that explicitly # Also note that you can specify constants either as strings or via `spa.sym.X` spa.Transcode(spa.sym.BLUE, output_vocab=vocab) >> color spa.Transcode('CIRCLE', output_vocab=vocab) >> shape spa.Transcode(lambda t: spa.sym.CIRCLE if t<=5 else spa.sym.BLUE, output_vocab=vocab) >> query p_answer = nengo.Probe(answer.output) sim = nengo.Simulator(model, dt=1, optimize=False) sim.run(10) Explanation: VSA Syntax and Control Flow This notebook is meant as just a place to express my current thoughts on how to do large-scale VSA development. In particular, I'm interesting in finding a programming syntax that has the right level of abstraction and flexibility, and that syntax also needs to include the ability to control the flow of a VSA system. The vast majority of my current thoughts on this topic come from many discussions over the years at the Univesity of Waterloo Centre for Theoretical Neuroscience with many students, and especially with Jan Gosmann, who was also the primary developer for the nengo-spa package which instantiates many of these ideas. https://www.nengo.ai/nengo-spa/ A lot of the ideas below are Jan's and came out of some very long discussions over the years. One core requirement for me is for the system to be as agnostic as possible to different VSA approaches. Indeed, I want to be able to mix-and-match VSA approaches within the same model. For another core requirement, I want the resulting syntax to be compatible with a Python parser (i.e. I don't want to invent a whole new language -- I'm happy doing fun things with the Python AST if necessary, but sticking with Python-compatible syntax makes things a lot simpler and accessible). Basic Syntax We need to be able to refer to both variables and constants. By convention, I'm going to use all lower-case to be a variable, and all upper-case to be a constant. So, I might have a variable a that at one point in time contains the vector for DOG and another time contains the vector for CAT. For binding, we use * and for bundling we use +. If there is an inversion operation for the binding operator, we use ~ to indicate that. So we could have a variable memory that contains BLUE*CIRCLE+RED*SQUARE and if we wanted to find out what color the circle is, we could compute memory*~CIRCLE. Basic Variables When we define a variable, we tend to want to be able to configure things about it. For example, we might want to configure what VSA is being used, and what dictionary (i.e. 
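For intuition about why the unbinding step recovers BLUE, here is a plain-numpy sketch of the circular-convolution algebra that the default HRR vocabulary uses. This is not nengo_spa code, and the random vectors below are only stand-ins for real Vocabulary entries:
import numpy as np
def bind(a, b):
    # Circular convolution, computed in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
def approx_inverse(a):
    # The ~ operation for HRR vectors: keep element 0, reverse the rest.
    return np.concatenate(([a[0]], a[:0:-1]))
rng = np.random.default_rng(1)
D = 64
BLUE = rng.normal(0, 1/np.sqrt(D), D)
CIRCLE = rng.normal(0, 1/np.sqrt(D), D)
combined = bind(BLUE, CIRCLE)
answer = bind(combined, approx_inverse(CIRCLE))
# The result is much more similar to BLUE than to CIRCLE.
print(np.dot(answer, BLUE), np.dot(answer, CIRCLE))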
the mapping from symbols to vectors). In other words, we want this to be a strongly typed approach. So we'll need to be able to do something like a = Variable(...) and configure that variable on instantiation. Basic Control Flow We need some way to indicate that the result of some computation should be stored in a variable. One obvious way to do that would be to say something like memory = color*shape, and have that mean to store the result of binding whatever is in color and shape into the variable memory. However, that's not going to work in a language like Python, as that overrides whatever we'd defined memory to be when we instantiated it. So instead we need something like memory.set_to(color*shape), but that's rather ugly. Another option (and what we'll use here) is to take a hint from C++ and do color*shape &gt;&gt; memory. (We might also support memory &lt;&lt; color*shape, but I like forcing it to be the other way around so as to remind myself that it's not exactly the same as an assignment statement). A Simple Example The syntax as described above is close enough to what we are using in nengo_spa that we can put together a simple concrete example to see how this might work. To run this example, you will need to pip install nengo_spa. We have to do a bit of configuring of nengo to tell it not to try to implement everything in neurons, and not to include any synapse models between components. Also, the nengo_spa systems calls a variable a State. The basic model here is a simple binding and unbinding network. We'll bind together two inputs, store that vector, and then bind the result with the inverse of a query vector to get an answer. So if we've bound together BLUE and CIRCLE and we query it with CIRCLE, we should get BLUE out. We also need to be able to provide inputs and outputs from our system. In nengo_spa, this is a Transcode object, which acts like a variable but you can set its value either to be a constant or based on a function. While these can be used for both inputs and outputs, Nengo also provides a Probe to record a value over time while a model is run. We'll use that to record the answer from the network. End of explanation %matplotlib inline import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=(6,1.5), dpi=200) plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T)) plt.legend(vocab.keys()) plt.xlabel('time') plt.ylabel('similarity') plt.show() Explanation: Now we can plot the results. The Probe records the vector coming out of the answer variable, so in order to interpret that we can do the dot-product of that vector with the BLUE and CIRCLE vectors in the Vocabulary. End of explanation import nengo_gui.ipython nengo_gui.ipython.InlineGUI(model) Explanation: It works! When the query changes from CIRCLE to BLUE at t=5, we get the desired change in output. If you have nengo_gui installed (pip install nengo_gui) you can also draw the resulting network diagram. (Note: if you try this and get an error message below, follow the instructions in the error message to configure IPython Notebook to allow the inline user interface to run.) 
End of explanation import nengo_spa as spa import nengo model = spa.Network() with model: model.config[nengo.Ensemble].neuron_type = nengo.Direct() model.config[nengo.Connection].synapse = None vocab = spa.Vocabulary(64, strict=False) color = spa.State(vocab) shape = spa.State(vocab) query = spa.State(vocab) answer = spa.State(vocab) (color*shape)*~query >> answer spa.Transcode(spa.sym.BLUE, output_vocab=vocab) >> color spa.Transcode('CIRCLE', output_vocab=vocab) >> shape spa.Transcode(lambda t: spa.sym.CIRCLE if t<=5 else spa.sym.BLUE, output_vocab=vocab) >> query p_answer = nengo.Probe(answer.output) sim = nengo.Simulator(model, dt=1, optimize=False) sim.run(10) %matplotlib inline import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=(6,1.5), dpi=200) plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T)) plt.legend(vocab.keys()) plt.xlabel('time') plt.ylabel('similarity') plt.show() Explanation: Timesteps There is (at least) one surprising thing about the result above: The network produces the correct answer on the very first timestep!. How is that possible? Given the network diagram, we have six layers in this network (The initial Transcode providing the input; the shape and color variables; something to do the binding of those together; the combined variable, the unbinding; and the answer variable). The reason this is happening is that we have specified that no time at all is being spent on the connections between components. This is what setting synapse=None means in Nengo. It means that when I connect two components, the input to the second component will be whatever the output was from the first component on that same timestep. Exactly how this is handled when the network is run is, of course, hardware-dependent: some hardware might just not support this, or we can do fun tricks like having mulitple hardware timesteps correspond to one simulation timestep. As for why we would want to support something like this, the main reason is that it lets me specify my expectations of the time flow of the algorithm. In particular, I'm saying that I want the answer available on the very same timestep that the input is provided, and that's going to be important if I'm connecting this up to a larger system. I don't want to have to count layers in order to sort out the timing. Furthermore, having this timing expectiation be explicit in this way opens the door to a variety of optimizations available in the compilation process. For example, because the compiler knows that the State variables are just variables and don't do any processing themselves, that means the compiler is free to optimize them out. We don't actually need the combined variable; we can just directly do (color*shape)*~query &gt;&gt; answer. A good compiler should be able to automatically turn that into this (further optimizations are left as an exercise to the reader). End of explanation import nengo_spa as spa import nengo model = spa.Network() with model: model.config[nengo.Ensemble].neuron_type = nengo.Direct() model.config[nengo.Connection].synapse = 0 # this actually means one timestep! 
vocab = spa.Vocabulary(64, strict=False) color = spa.State(vocab) shape = spa.State(vocab) query = spa.State(vocab) answer = spa.State(vocab) (color*shape)*~query >> answer spa.Transcode(spa.sym.BLUE, output_vocab=vocab) >> color spa.Transcode('CIRCLE', output_vocab=vocab) >> shape spa.Transcode(lambda t: spa.sym.CIRCLE if t<=5 else spa.sym.BLUE, output_vocab=vocab) >> query p_answer = nengo.Probe(answer.output) sim = nengo.Simulator(model, dt=1, optimize=False) sim.run(10) %matplotlib inline import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=(6,1.5), dpi=200) plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T)) plt.legend(vocab.keys()) plt.xlabel('time') plt.ylabel('similarity') plt.show() Explanation: Of course, even if we do specify that we want a one-time-step delay on each connection, then the compiler would still be free to do this sort of optimization; it would just have to also add in some delays to make sure everything works out how the user expects. In Nengo, for historical reasons the way of specifying a one-time-step delay is the unintuitive process of setting synapse=0. Since that is a bit weird, there's also the alternative provided by nengolib (pip install nengolib) of explicitly using the discrete-time operator z and specifying synapse=z**(-1) (or whatever other delay you want). (Note: in general, Nengo allows for arbitrary continuous and discrete linear filters to be applied at any connection between components, so this is just a special case of that overall system). End of explanation import nengo_spa as spa import nengo model = spa.Network() with model: model.config[nengo.Ensemble].neuron_type = nengo.Direct() model.config[nengo.Connection].synapse = None vocab = spa.Vocabulary(64, strict=False) color = spa.State(vocab) shape = spa.State(vocab) memory = spa.State(vocab) query = spa.State(vocab) answer = spa.State(vocab) color * shape >> memory memory * ~query >> answer # make sure the connection from memory back to itself has a one-time-step delay model.config[nengo.Connection].synapse = 0 memory >> memory model.config[nengo.Connection].synapse = None # present two different inputs on the first two timesteps spa.Transcode(lambda t: 'BLUE' if t<=1 else ('RED' if t<=2 else '0'), output_vocab=vocab) >> color spa.Transcode(lambda t: 'CIRCLE' if t<=1 else ('SQUARE' if t<=2 else '0'), output_vocab=vocab) >> shape # present 4 queries, starting at t=5 def query_func(t): if 5<t<=6: return 'BLUE' if 6<t<=7: return 'CIRCLE' if 7<t<=8: return 'RED' if 8<t<=9: return 'SQUARE' return '0' spa.Transcode(query_func, output_vocab=vocab) >> query p_answer = nengo.Probe(answer.output) p_query = nengo.Probe(query.output) sim = nengo.Simulator(model, dt=1, optimize=False) sim.run(10) %matplotlib inline import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=(6,3), dpi=200) plt.subplot(2, 1, 1) plt.plot(sim.trange(), np.dot(sim.data[p_query], vocab.vectors.T)) plt.legend(vocab.keys(), loc='upper left') plt.ylabel('query') plt.subplot(2, 1, 2) plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T)) plt.legend(vocab.keys(), loc='upper left') plt.xlabel('time') plt.ylabel('answer') plt.show() Explanation: Now that we have one way of controlling the flow of time in our network, we can use this to start implementing algorithms. For example, let's turn our combined variable into a memory. That is, rather than having the variable go back to zero if the input is removed, let's have it remember its value. 
An easy way to do this is just to connect it back to itself. But, of course, when we do this we need to specify that there should be a one-time-step delay here (i.e. the input should be the output on the previous time-step). (If you try to make a recurrent loop like this all with synapse=None, the compiler will give you an error since that's impossible). If we get multiple inputs at the same time, I think the default thing that should happen would be to take the sum of the inputs (or, more generally, use whatever bundling operator is defined on your VSA). In this case, this means we can build up a memory by inputing multiple pairs of vectors over time, and then later query any of them. In the following case, we will input a BLUE CIRCLE and then a RED SQUARE, and the memory vector should end up at BLUE*CIRCLE+RED*SQUARE. End of explanation import nengo_spa as spa import nengo # define the parameters to use in different contexts no_delay = nengo.Config(nengo.Connection) no_delay[nengo.Connection].synapse = None single_step_delay = nengo.Config(nengo.Connection) single_step_delay[nengo.Connection].synapse = 0 model = spa.Network() with model: model.config[nengo.Ensemble].neuron_type = nengo.Direct() vocab = spa.Vocabulary(64, strict=False) color = spa.State(vocab) shape = spa.State(vocab) memory = spa.State(vocab) query = spa.State(vocab) answer = spa.State(vocab) with no_delay: color * shape >> memory memory * ~query >> answer with single_step_delay: memory >> memory with no_delay: spa.Transcode(lambda t: 'BLUE' if t<=1 else ('RED' if t<=2 else '0'), output_vocab=vocab) >> color spa.Transcode(lambda t: 'CIRCLE' if t<=1 else ('SQUARE' if t<=2 else '0'), output_vocab=vocab) >> shape def query_func(t): if 5<t<=6: return 'BLUE' if 6<t<=7: return 'CIRCLE' if 7<t<=8: return 'RED' if 8<t<=9: return 'SQUARE' return '0' with no_delay: spa.Transcode(query_func, output_vocab=vocab) >> query p_answer = nengo.Probe(answer.output) p_query = nengo.Probe(query.output) sim = nengo.Simulator(model, dt=1, optimize=False) sim.run(10) %matplotlib inline import matplotlib.pyplot as plt import numpy as np plt.figure(figsize=(6,3), dpi=200) plt.subplot(2, 1, 1) plt.plot(sim.trange(), np.dot(sim.data[p_query], vocab.vectors.T)) plt.legend(vocab.keys(), loc='upper left') plt.ylabel('query') plt.subplot(2, 1, 2) plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T)) plt.legend(vocab.keys(), loc='upper left') plt.xlabel('time') plt.ylabel('answer') plt.show() Explanation: Overall, I find this ability to control the flow of time to be extremely important when building up VSA-style algorithms. However, it is unclear to me what the best syntax for this sort of thing might be. The syntax shown above (i.e. setting the synapse parameter) is rather opaque and confusing to new readers. However, one other option that makes things a bit more clear would be to use a context manager, so you could do something like this: python with no_delay: color * shape &gt;&gt; memory memory * ~query &gt;&gt; answer with single_step_delay: memory &gt;&gt; memory Nengo also has support for this approach, although it doesn't quite have the semantics I think I'd want. In particular, it interprets the single_step_delay as being defined at the level of connections between components, rather than at the level of the overall statement. 
This means that if I did this: python with single_step_delay: color * shape &gt;&gt; memory I'd actually get a two-time-step delay -- one for taking color and shape and feeding them into the binding operation, and one for taking the output of the binding and feeding it to memory. I can get around this by doing the following: python with no_delay: temp = color * shape # note the assignment statement here! with single_step_delay: temp &gt;&gt; memory In any case, for the particular case we have for the memory system, this isn't a problem as the semantics fit with what we want: End of explanation import nengo_spa as spa import nengo model = spa.Network() with model: model.config[nengo.Ensemble].neuron_type = nengo.Direct() model.config[nengo.Connection].synapse = None vocab1 = spa.Vocabulary(64) vocab1.populate('DOG;CAT;HAT;CAR') # pre-populate the vocabulary so that we have generated vectors vocab2 = spa.Vocabulary(512) vocab2.populate('DOG;CAT;MOUSE;ELEPHANT') # pre-populate the vocabulary so that we have generated vectors a = spa.State(vocab1) # use different vocabularies for the two components b = spa.State(vocab2) def input_func(t): if 1<t<=2: return '0.7*DOG' if 2<t<=3: return 'CAT' if 3<t<=4: return 'HAT' if 4<t<=5: return 'CAR' if 5<t<=6: return '0.7*DOG+0.7*CAT' return '0' spa.Transcode(input_func, output_vocab=vocab1) >> a # translate from one to the other a.translate(vocab2) >> b p_in = nengo.Probe(a.output) p_out = nengo.Probe(b.output) sim = nengo.Simulator(model, dt=1, optimize=False) sim.run(10) plt.figure(figsize=(6,3), dpi=200) plt.subplot(2, 1, 1) plt.plot(sim.trange(), np.dot(sim.data[p_in], vocab1.vectors.T)) plt.legend(vocab1.keys(), loc='upper right') plt.ylabel('a') plt.subplot(2, 1, 2) plt.plot(sim.trange(), np.dot(sim.data[p_out], vocab2.vectors.T)) plt.legend(vocab2.keys(), loc='upper right') plt.xlabel('time') plt.ylabel('b') plt.show() Explanation: Another possibility that has been talked about, but not implemented yet, is to try something like this: python memory.delay(1) &gt;&gt; memory Or perhaps this, if people like discrete-time operators: python memory.z**(-1) &gt;&gt; memory Or perhaps this, in reference to timesteps: python memory.t[-1] &gt;&gt; memory Or maybe even this, although it might be easily confused with referring to the last value in the VSA vector: python memory[-1] &gt;&gt; memory Combining Vocabularies and VSAs When building up a large system, we generally don't want to use the same VSA everywhere. This might involve just changing dimensionality (maybe my vision system uses 1000-dimensional vectors while my planning and reasoning system uses 500-dimensional, for example). Or maybe I want entirely different VSA approaches in different parts of the model, such as using HRRs in one part and MAP in another and FHRRs elsewhere. This means we're going to need some way of translating between them. I think there's three different classes of processing that I might want to do here. First, I might want to do nothing at all. If the vectors are the same dimensionality, I can imagine there are situations where I just want to keep the vector the same and just interpret it differently in different parts of the system. However, this is probably a pretty rare thing to want to do, but I think it should be supported. Second, I might want to do a linear transform between the two VSAs. There are lots of different possible ways of converting, but a linear transformation is a pretty simple one. 
In particular, I tend to use a matrix built from the outer product of the vector representation of items in the two different VSAs (see below for an example). Third, I might want any one of a large number of non-linear operations to convert between Vocabularies. This could involve winner-take-all type circuits or any number of a variety of options. One example would be an Associative Memory -- i.e. a system that is just a Cleanup Memory but where the output is in some different space as the input. (Indeed, I often think of Cleanup Memories as a special case of an Associative Memory where the input and output vocabularies are the same). Since there are a large number of configuration options for the third option (there are many different types of non-linearities I might want there), as far as programming syntax goes I tend to think of the third option as introducing a new component to my system (kind of like a Variable, but one where I feed in input in one VSA and it comes out in a different VSA). For the first and second option, however, since they are simpler (nothing at all or a linear transform, respectively), I instead think it's clearer to handle that while passing data from one place to the next. For example, given the syntax above, if I wrote a &gt;&gt; b and both a and b used different Vocabularies, then I might want to automatically do this conversion without any syntax at all. This would be sort of like not requiring an explicit cast when converting a variable from one type to another (e.g. int d = (int)(a+b/c)). However, in practice I found that having this happen automatically just led to more problems as I'd often forget that different parts of my system were using different VSAs. Having some explicit notation at least indicates that some conversion is happening. For nengo_spa we went with a.translate(vocab) &gt;&gt; b, where the vocab argument tells you what VSA to translate to. The system then inspects whatever symbols are in vocab and in the Vocabulary used by a (specified when it was initialized) and build up the sum of the outer products of any symbols that are defined in both Vocabularies. If you want to do nothing at all, then the syntax is a.reinterpret(vocab) &gt;&gt; b Below are two examples of using this approach, one using the linear map and one using the associative memory. Linear Mapping Between VSAs In the following model we have two vocabularies, one containing three-letter words (CAT, DOG, HAT, CAR) and one containing animals (DOG, CAT, MOUSE, ELEPHANT). The first is 64-dimensional and the second is 512-dimensional, but they could also be the same dimensionality. If I try to directly connect a variable using the first vocabulary to one using the second vocabulary, one thing I might want is that if the first variable contains DOG, then that should drive the second variable to its representation of DOG. And, indeed, if the first contained DOG+CAT, then we would also get out DOG+CAT. One simple way to do this for many VSAs is to generate a matrix formed by summing together the outer products of the two different representations for each term. That is, we compute $CAT_1 \times CAT_2 + DOG_1 \times DOG_2$ and then use the resulting matrix as a linear transform that maps from one vocabulary to the next. It should be noted that, while this method is pretty simple, it does rely on the vectors being known at the time of construction of the model. That's why we do the vocab.populate command to make sure those are generated. 
End of explanation import nengo_spa as spa import nengo model = spa.Network() with model: model.config[nengo.Ensemble].neuron_type = nengo.Direct() model.config[nengo.Connection].synapse = None vocab1 = spa.Vocabulary(64) vocab1.populate('DOG;CAT;HAT;CAR') # pre-populate the vocabulary so that we have generated vectors vocab2 = spa.Vocabulary(512) vocab2.populate('DOG;CAT;MOUSE;ELEPHANT') # pre-populate the vocabulary so that we have generated vectors a = spa.State(vocab1) # use different vocabularies for the two components b = spa.State(vocab2) # am = spa.ThresholdingAssocMem(input_vocab=vocab1, output_vocab=vocab2, threshold=0.3, mapping=['DOG','CAT'], function=lambda x: x > 0.3) def input_func(t): if 1<t<=2: return '0.7*DOG' if 2<t<=3: return 'CAT' if 3<t<=4: return 'HAT' if 4<t<=5: return 'CAR' if 5<t<=6: return '0.7*DOG+0.7*CAT' return '0' spa.Transcode(input_func, output_vocab=vocab1) >> a # translate from one to the other a >> am am >> b p_in = nengo.Probe(a.output) p_out = nengo.Probe(b.output) sim = nengo.Simulator(model, dt=1, optimize=False) sim.run(10) plt.figure(figsize=(6,3), dpi=200) plt.subplot(2, 1, 1) plt.plot(sim.trange(), np.dot(sim.data[p_in], vocab1.vectors.T)) plt.legend(vocab1.keys(), loc='upper right') plt.ylabel('a') plt.subplot(2, 1, 2) plt.plot(sim.trange(), np.dot(sim.data[p_out], vocab2.vectors.T)) plt.legend(vocab2.keys(), loc='upper right') plt.xlabel('time') plt.ylabel('b') plt.show() Explanation: Non-Linear Mapping Between VSAs (sort of like a cleanup memory but using two different Vocabularies) End of explanation # Switch statements and/or ifmax match state: case 'DOG': 'PET' >> action case 'CAT': 'FEED' >> action Explanation: More advanced control flow Have a bunch of different actions and choose 1 based on some state similarity measure (i.e. do the action that is the closest match) (or maybe do weighted versions of each action based on the similarity? Many options) End of explanation
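As a rough prototype of that closest-match selection outside of any particular framework (the names and vectors here are illustrative only), scoring the state against each condition and taking the argmax, or a softmax weighting, captures both variants described above:
import numpy as np
rng = np.random.default_rng(2)
D = 64
conditions = {'DOG': rng.normal(0, 1/np.sqrt(D), D), 'CAT': rng.normal(0, 1/np.sqrt(D), D)}
actions = {'DOG': 'PET', 'CAT': 'FEED'}
state = conditions['DOG'] + 0.1 * rng.normal(0, 1/np.sqrt(D), D)   # a noisy DOG
scores = {name: float(np.dot(state, vec)) for name, vec in conditions.items()}
best = max(scores, key=scores.get)
print(scores, '->', actions[best])          # winner-take-all: run the best-matching action
weights = np.exp(list(scores.values()))
weights = weights / weights.sum()           # or blend the actions with softmax weights
print(dict(zip(actions.values(), weights)))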
12,693
Given the following text description, write Python code to implement the functionality described below step by step Description: Python for Science Python is free, it is open source, and it has a huge community. Python is one of the most popular and loved programming languages in the world! Many blogs come out every year listing the most popular programming languages. Python has been among the top choices for at least 5 years now. For example Step1: Let's review what happened there. We first loaded numpy, giving us the full power to use arrays. We created two arrays Step2: Exercise Step3: Exercise Step4: Exercise Step5: What's this from business about? matplotlib is a pretty big (and awesome!) library. All that we need is a subset of the library for creating 2D plots, so we ask for the pyplot module of the matplotlib library. Plotting the data is as easy as calling the function plot() from the module pyplot. Step6: But what if we'd like to get a title on this plot, or add labels to the axes? (We should always have labelled axes!). Also, we notice a long jump from the year 1960 to 1970
Python Code: import numpy # By the way: comments in code cells start with a hash. # here are two arrays, saved as variables x and y: x = numpy.array([1.0, 0.5, 2.5]) y = numpy.array([[ 1.0, 0.5, 2.5], [ 0.5, 1.1, 2.0]]) # The print function works on arrays: print(x) print(y) numpy.shape(y) numpy.shape(x) Explanation: Python for Science Python is free, it is open source, and it has a huge community. Python is one of the most popular and loved programming languages in the world! Many blogs come out every year listing the most popular programming languages. Python has been among the top choices for at least 5 years now. For example: The 7 Most In-Demand Programming Languages of 2018, by CodingDojo; or the post The Most In-Demand Programming Languages of 2018, on Third Republic. Python can be used for many things: managing databases, creating graphical user interfaces, making websites, and much more… including science. Because of the many uses, the world of Python includes many, many Libraries (you load the parts that you need, when you need them). In science, the two libraries that are king and queen of the world are: NumPy, and Matplotlib. Numpy NumPy is for working with data in the form of arrays (vectors, matrices). It has a myriad built-in functions or methods that work on arrays directly. To load the library into your current session of interactive Python, into a saved Python script, or into a Jupyter notebook, you use: python import numpy Tips: a one-dimensional array (vector) has the form: [1.0, 0.5, 2.5] a two-dimensional array (matrix) has the form: [[ 1.0, 0.5, 2.5], [ 0.5, 1.1, 2.0]] the elements in an array are numbered with an index that starts at 0 the colon notation: in any index position, a : means "all elements in this dimension" once numpy is loaded, its built-in functions are called like this: numpy.function(arg) (where arg is the function argument: arrays to operats on, and parameters) Try it! End of explanation x[0] Explanation: Let's review what happened there. We first loaded numpy, giving us the full power to use arrays. We created two arrays: x and y… then we print x and we print y. They look nice. Numpy has a built-in function to find out the "shape" of an array, which means: how many elements does this array have in each dimension? We find that y is a two-by-three array (it has two dimensions). What is the first element of x? We can use square brackets and the zero-index to find out: End of explanation y[0][0] Explanation: Exercise: Now, try it yourself. What is the first element of y? Right. The first element of y is a 3-wide array of numbers. If we want to access the first element of this now, we use: End of explanation #Load the data from local disk year, av_size = numpy.loadtxt(fname='data/statistic_id183648.csv', delimiter=',', skiprows=1, unpack=True) print(year) Explanation: Exercise: Try picking out different elements of the array y… We learned that: The square brackets allow us to pick out the elements of an array using an index: x[i] For a two-dimensional array, we can use two indices: y[i][j] All indices start at zero. This is super powerful! Matplotlib Matplotlib is for making all kinds of plots. To get an idea of the great variety of plots possibe, have a look at the online Gallery. You can see that Matplotlib itself is a pretty big library. 
We can load a portion of the library (called a module) that has the basic plotting funtions with: python from matplotlib import pyplot Once the pyplot module is loaded, its built-in functions are called like this: pyplot.function(arg) (where arg is the function argument). An example: size of households in the US Did you know that the size of households—that is, the number of people living in each household—has been steadily decreasing in the US and many other countries? This has perhaps surprising consequences. Even if population growth slows down, or stops altogether, the number of households keeps increasing at a fast rate. More households means more $CO_2$ emissions! This is bad for the planet. Get the data Here, we're assuming that you have all the files from this tutorial, or are working on the lesson after launching Binder. In that case you have a dataset in the data folder. To load the data into two arrays, named year and av-size, execute the following cell: End of explanation from matplotlib import pyplot %matplotlib inline Explanation: Exercise: Now print the variable av_size, correspondig to the average size of households (in numbers of people) for each year: Great! The next thing we want to do is make a plot of the changing size of households over the years. To do that, we need to load the Matplotlib module called pyplot: End of explanation pyplot.plot(year, av_size) Explanation: What's this from business about? matplotlib is a pretty big (and awesome!) library. All that we need is a subset of the library for creating 2D plots, so we ask for the pyplot module of the matplotlib library. Plotting the data is as easy as calling the function plot() from the module pyplot. End of explanation pyplot.plot(year, av_size, linestyle=':', marker='o') pyplot.title("Household size in the US, 1960–2016 \n", fontsize=16) pyplot.ylabel("Average number of people per household") Explanation: But what if we'd like to get a title on this plot, or add labels to the axes? (We should always have labelled axes!). Also, we notice a long jump from the year 1960 to 1970: let's add markers to the plot and change the line style to a dotted line. End of explanation
12,694
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below Step9: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token Step11: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step13: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step15: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below Step18: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders Step21: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) Step24: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. Step27: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. - Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) Step30: Build the Neural Network Apply the functions you implemented above to Step33: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements Step35: Neural Network Training Hyperparameters Tune the following parameters Step37: Build the Graph Build the graph using the neural network you implemented. Step39: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. 
Step41: lstm_layers = 1 batch_size = 256 rnn_size = 512 train_loss = 0.726 (200 epochs) lstm_layers = 2 batch_size = 256 rnn_size = 512 train_loss = 2.163 (200 epochs), 0.123 (500 epochs) Save Parameters Save seq_length and save_dir for generating a new TV script. Step43: Checkpoint Step46: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names Step49: Choose Word Implement the pick_word() function to select the next word using probabilities. Step51: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] Explanation: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation import numpy as np import problem_unittests as tests from collections import Counter def create_lookup_tables(text): Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) # TODO: Implement Function cnt = Counter() for word in text: cnt[word] += 1 vocab_to_int = {} int_to_vocab = {} for i, (word, count) in enumerate(cnt.most_common()): vocab_to_int[word] = i int_to_vocab[i] = word return vocab_to_int, int_to_vocab DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_create_lookup_tables(create_lookup_tables) Explanation: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab) End of explanation def token_lookup(): Generate a dict to turn punctuation into a token. 
:return: Tokenize dictionary where the key is the punctuation and the value is the token # TODO: Implement Function token_dict = {'.': '||period||', ',': '||comma||', '"': '||quotation_mark||', ';': '||semicolon||', '!': '||exclamation_mark||', '?': '||question_mark||', '(': '||left_parentheses||', ')': '||right_parentheses||', '--': '||dash||', '\n': '||return|'} return token_dict DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_tokenize(token_lookup) Explanation: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU End of explanation def get_inputs(): Create TF Placeholders for input, targets, and learning rate. 
:return: Tuple (input, targets, learning rate) # TODO: Implement Function input = tf.placeholder(tf.int32, shape=[None, None], name='input') targets = tf.placeholder(tf.int32, shape=[None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') return input, targets, learning_rate DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_inputs(get_inputs) Explanation: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following the tuple (Input, Targets, LearingRate) End of explanation lstm_layers = 2 def get_init_cell(batch_size, rnn_size): Create an RNN Cell and initialize it. :param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) # TODO: Implement Function lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) # TODO: The script generation script doesn't have the capability to set keep_prob to 1. #drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([lstm] * lstm_layers) initial_state = cell.zero_state(batch_size, tf.float32) initial_state = tf.identity(initial_state, name='initial_state') return cell, initial_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_init_cell(get_init_cell) Explanation: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) End of explanation def get_embed(input_data, vocab_size, embed_dim): Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. # TODO: Implement Function embedding = tf.Variable(tf.random_uniform([vocab_size, embed_dim], minval=-1, maxval=1)) embed = tf.nn.embedding_lookup(embedding, input_data) return embed DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_embed(get_embed) Explanation: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. End of explanation def build_rnn(cell, inputs): Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) # TODO: Implement Function outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) final_state = tf.identity(final_state, name='final_state') return outputs, final_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_build_rnn(build_rnn) Explanation: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. 
- Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) End of explanation embed_dim = 300 def build_nn(cell, rnn_size, input_data, vocab_size): Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :return: Tuple (Logits, FinalState) # TODO: Implement Function embed = get_embed(input_data, vocab_size, embed_dim) outputs, final_state = build_rnn(cell, embed) logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None) return logits, final_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_build_nn(build_nn) Explanation: Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState) End of explanation def get_batches(int_text, batch_size, seq_length): Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array # TODO: Implement Function batch = batch_size*seq_length n_batches = (len(int_text) - 1)//batch int_text = int_text[:n_batches*batch + 1] batches = np.zeros((n_batches, 2, batch_size, seq_length), dtype=np.int32) for i in range(0, n_batches): for j in range(0, batch_size): idx = (j*n_batches + i)*seq_length batches[i][0][j] = int_text[idx:idx+seq_length] batches[i][1][j] = int_text[idx+1:idx+seq_length+1] return batches DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_batches(get_batches) Explanation: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2 3], [ 7 8 9]], # Batch of targets [[ 2 3 4], [ 8 9 10]] ], # Second Batch [ # Batch of Input [[ 4 5 6], [10 11 12]], # Batch of targets [[ 5 6 7], [11 12 13]] ] ] ``` End of explanation # Number of Epochs num_epochs = 500 # Batch Size batch_size = 256 # RNN Size rnn_size = 512 # Sequence Length seq_length = 30 # Learning Rate learning_rate = 0.001 # Show stats for every n number of batches show_every_n_batches = 16 DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE save_dir = './save' Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set seq_length to the length of sequence. Set learning_rate to the learning rate. 
Set show_every_n_batches to the number of batches the neural network should print progress. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation DON'T MODIFY ANYTHING IN THIS CELL batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) Explanation: lstm_layers = 1 batch_size = 256 rnn_size = 512 train_loss = 0.726 (200 epochs) lstm_layers = 2 batch_size = 256 rnn_size = 512 train_loss = 2.163 (200 epochs), 0.123 (500 epochs) Save Parameters Save seq_length and save_dir for generating a new TV script. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() Explanation: Checkpoint End of explanation def get_tensors(loaded_graph): Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) # TODO: Implement Function input = loaded_graph.get_tensor_by_name('input:0') initial_state = loaded_graph.get_tensor_by_name('initial_state:0') final_state = loaded_graph.get_tensor_by_name('final_state:0') probabilities = loaded_graph.get_tensor_by_name('probs:0') return input, initial_state, final_state, probabilities DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_tensors(get_tensors) Explanation: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). 
Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) End of explanation def pick_word(probabilities, int_to_vocab): Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word # TODO: Implement Function vocab_size = len(int_to_vocab) return int_to_vocab[np.random.choice(vocab_size, 1, p=probabilities)[0]] DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_pick_word(pick_word) Explanation: Choose Word Implement the pick_word() function to select the next word using probabilities. End of explanation gen_length = 200 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'moe_szyslak' DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) Explanation: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. End of explanation
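As a quick standalone sanity check of the sampling idea behind pick_word, the snippet below (toy vocabulary and probabilities invented purely for illustration) shows that drawing from the predicted distribution, rather than always taking the argmax, is what keeps the generated script varied:
import numpy as np

# Toy inputs, invented for illustration only.
int_to_vocab = {0: 'homer_simpson', 1: 'moe_szyslak', 2: 'barney_gumble'}
probabilities = np.array([0.2, 0.7, 0.1])

def pick_word(probabilities, int_to_vocab):
    # Same logic as above: sample a word id in proportion to the network's output.
    vocab_size = len(int_to_vocab)
    return int_to_vocab[np.random.choice(vocab_size, 1, p=probabilities)[0]]

draws = [pick_word(probabilities, int_to_vocab) for _ in range(1000)]
print({word: draws.count(word) for word in int_to_vocab.values()})
# Expect roughly 200 / 700 / 100 draws; an argmax would emit 'moe_szyslak' every time.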
12,695
Given the following text description, write Python code to implement the functionality described below step by step Description: I've been thinking a lot about software achitecure lately. Not just thinking, because I wouldn't come up with these ideas on my own, but consuming a lot about it -- books, talks, slide decks, blog posts. And while thinking about all this, I've been hacking away at some projects in my spare time. And I noticed something, there's a lot of things in these projects that look a lot like this Step1: This is a bog standard user registration endpoint. We create a form, check if it's valid, shove that information on a user model and then into the database and redirect off. If it's not valid or if it wasn't submitted (the user just navigated to the page), we render out some HTML. It's all very basic, well trodden code. Besides, who wants to do registration again? It's boring. We want to do the interesting stuff. But there's some very real consequences to this code Step2: What's even the point of this? We're just testing if Mock works at this point. There's actual things we can do to make it more testable, but before delving into that, It hides logic If registering a user was solely about, "Fill this form out and we'll shove it into a database" there wouldn't be a blog post here. However, there is some logic hiding out here in the form Step3: When we call RegisterUserForm.validate_on_submit it also runs these two methods. However, I'm not of the opinion that the form should talk to the database at all, let alone run validation against database contents. So, let's write a little test harness that can prove that an existing user with a given username and email causes us to not register Step4: If these pass -- which they should, but you may have to install mock if you're not on Python 3 -- I think we should move the username and email validation into their own callables that are independently testable Step5: And then use these in the endpoint itself Step6: This is really hard to test, so instead of even attempting that -- being honest, I spent the better part of an hour attempting to test the actual endpoint and it was just a complete mess -- let's extract out the actual logic and place it into it's own callable Step7: Now we're beginning to see the fruits of our labors. These aren't the easiest functions to test, but there's less we need to mock out in order to test the actual logic we're after. Step8: Of course, we should also write tests for the controller. I'll leave that as an exercise. However, there's something very important we're learning from these tests. We have to mock.patch everything still. Our validators lean directly on the database, our user creation leans directly on the database, everything leans directly on the database. And I don't want to do that, we've found that it makes testing hard. We're also seeing if we need to add another registration restriction -- say we don't like people named Fred so we won't let anyone register with a username or email containing Fred in it -- we need to crack open the register_user function and add it directly. We can solve both of these problems. The Database Problem To address the database problem we need to realize something. We're not actually interested in the database, we're interested in the data it stores. And since we're interested in finding data rather than where it's stored at, why not stuff an interface in the way? Step9: Hmm...that's interesting. 
Since we'll end up depending on this instead of a concrete implementation, we can run our tests completely in memory and production on top of SQLAlchemy, Mongo, a foreign API, whatever. But we need to inject it into our validators instead of reaching out into the global namespace like we currently are. Step10: These validators are simple enough that closures work instead of full-fledged objects. The important part here is to maintain a consistent interface -- if we need to use classes all of a sudden, we need to define a __call__ on them to maintain this interface. We can also change our register callable to accept the repository as well Step11: Of course the tests break now, and that's okay. We made a very sweeping change to the architecture here. We need to go back through and alter the tests one by one, but instead of patching everything out we can do something better Step12: But to test that our validators function correctly in this context, we need to fake out find_by_email and find_by_username indpendently. This is a symptom of our code not being Open-Closed. The Open-Closed Problem Revisiting the other major issue from how the code is laid out right now is that it's not Open-Closed. If you're not familiar with the principle, Wikipedia says this Step13: Of course, our tests break again, so let's revisit the currently breaking one first Step14: We'll need to tweak the validation logic some to make up for the fact that we're passing the whole user object now Step15: The tests for these are pretty straight forward as well, so I'll omit them. But we need a way to stitch them together... Step16: And then hook it all up like this Step17: Our neglected Controller We've spent a lot of time looking at how to compartmentalize the registration logic and portion out its concerns. However, the controller itself needs some attention as well. When we last left, it looked like this Step18: But we can do beter than that. The problem here is that the logic is set in stone, nested flows of control. But mostly, I really like any excuse to use class based views.
Python Code: @app.route('/register', methods=['GET', 'POST']) def register(): form = RegisterUserForm() if form.validate_on_submit(): user = User() form.populate_obj(user) db.session.add(user) db.session.commit() return redirect('homepage') return render_template('register.html', form=form) Explanation: I've been thinking a lot about software achitecure lately. Not just thinking, because I wouldn't come up with these ideas on my own, but consuming a lot about it -- books, talks, slide decks, blog posts. And while thinking about all this, I've been hacking away at some projects in my spare time. And I noticed something, there's a lot of things in these projects that look a lot like this: End of explanation @mock.patch('myapp.views.RegisterUserForm') @mock.patch('myapp.views.db') @mock.patch('myapp.views.redirect') @mock.patch('myapp.views.url_for') @mock.patch('myapp.views.render_template') def test_register_new_user(render, url_for, redirect, db, form): # TODO: Write test assert True Explanation: This is a bog standard user registration endpoint. We create a form, check if it's valid, shove that information on a user model and then into the database and redirect off. If it's not valid or if it wasn't submitted (the user just navigated to the page), we render out some HTML. It's all very basic, well trodden code. Besides, who wants to do registration again? It's boring. We want to do the interesting stuff. But there's some very real consequences to this code: It's not testable Everything is wrapped up together, form validation, database stuff, rendering. Honestly, I'm not interested in testing if SQLAlchemy, WTForms of Jinja2 work -- they have their own tests. So testing this ends up looking like this: End of explanation class RegisterUserForm(Form): def validate_username(self, field): if User.query.filter(User.username == field.data).count(): raise ValidationError("Username in use already") def validate_email(self, field): if User.query.filter(User.email == field.data).count(): raise ValidationError("Email in use already") Explanation: What's even the point of this? We're just testing if Mock works at this point. There's actual things we can do to make it more testable, but before delving into that, It hides logic If registering a user was solely about, "Fill this form out and we'll shove it into a database" there wouldn't be a blog post here. However, there is some logic hiding out here in the form: End of explanation from myapp.forms import RegisterUserForm from myapp.models import User from collections import namedtuple from unittest import mock FakeData = namedtuple('User', ['username', 'email', 'password', 'confirm_password']) def test_existing_username_fails_validation(): test_data = FakeData('fred', '[email protected]', 'a', 'a') UserModel = mock.Mock() UserModel.query.filter.count.return_value = 1 form = RegisterUserForm(obj=test_data) with mock.patch('myapp.forms.User', UserModel): form.validate() assert form.errors['username'] == "Username in use already" def test_existing_email_fails_validation(): test_user = FakeUser('fred', '[email protected]', 'a', 'a') UserModel = mock.Mock() UserModel.query.filter.first.return_value = True form = RegisterUserForm(obj=test_user) with mock.patch('myapp.forms.User', UserModel): form.validate() assert form.errors['username'] == "Email in use already" Explanation: When we call RegisterUserForm.validate_on_submit it also runs these two methods. 
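For context, that behaviour is WTForms' inline-validator convention: any form method named validate_<fieldname> is run as part of form validation. A minimal sketch of the convention on its own (ExampleForm and its rule are invented, not part of the post's codebase):
from wtforms import Form, StringField
from wtforms.validators import ValidationError

class ExampleForm(Form):
    username = StringField('username')

    def validate_username(self, field):
        # Invoked automatically by form.validate() because of its name.
        if field.data == 'taken':
            raise ValidationError('Username in use already')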
However, I'm not of the opinion that the form should talk to the database at all, let alone run validation against database contents. So, let's write a little test harness that can prove that an existing user with a given username and email causes us to not register: End of explanation def is_username_free(username): return User.query.filter(User.username == username).count() == 0 def is_email_free(email): return User.query.filter(User.email == email).count() == 0 Explanation: If these pass -- which they should, but you may have to install mock if you're not on Python 3 -- I think we should move the username and email validation into their own callables that are independently testable: End of explanation @app.route('/register', methods=['GET', 'POST']) def register(): form = RegisterUserForm() if form.validate_on_submit(): if not is_username_free(form.username.data): form.errors['username'] = ['Username in use already'] return render_template('register.html', form=form) if not is_email_free(form.email.data): form.errors['email'] = ['Email in use already'] return render_template('register.html', form=form) user = User() form.populate_obj(user) db.session.add(user) db.session.commit() return redirect('homepage') return render_template('register.html', form=form) Explanation: And then use these in the endpoint itself: End of explanation class OurValidationError(Exception): def __init__(self, msg, field): self.msg = msg self.field = field def register_user(username, email, password): if not is_username_free(username): raise OurValidationError('Username in use already', 'username') if not is_email_free(email): raise OurValidationError('Email in use already', 'email') user = User(username=username, email=email, password=password) db.session.add(user) db.session.commit() @app.route('/register', methods=['GET', 'POST']) def register_user_view(): form = RegisterUserForm() if form.validate_on_submit(): try: register_user(form.username.data, form.email.data, form.password.data) except OurValidationError as e: form.errors[e.field] = [e.msg] return render_template('register.html', form=form) else: return redirect('homepage') return render_template('register.html', form=form) Explanation: This is really hard to test, so instead of even attempting that -- being honest, I spent the better part of an hour attempting to test the actual endpoint and it was just a complete mess -- let's extract out the actual logic and place it into it's own callable: End of explanation def test_duplicated_user_raises_error(): ChasteValidator = mock.Mock(return_value=False) with mock.patch('myapp.logic.is_username_free', ChasteValidator): with pytest.raises(OurValidationError) as excinfo: register_user('fred', '[email protected]', 'fredpassword') assert excinfo.value.msg == 'Username in use already' assert excinfo.value.field == 'username' def test_duplicated_user_raises_error(): ChasteValidator = mock.Mock(return_value=False) PromisciousValidator = mock.Mock(return_value=True) with mock.patch('myapp.logic.is_username_free', PromisciousValidator), mock.patch('myapp.logic.is_email_free', ChasteValidator): with pytest.raises(OurValidationError) as excinfo: register_user('fred', '[email protected]', 'fredpassword') assert excinfo.value.msg == 'Email in use already' assert excinfo.value.field == 'email' def test_register_user_happy_path(): PromisciousValidator = mock.Mock(return_value=True) MockDB = mock.Mock() with mock.patch('myapp.logic.is_username_free', PromisciousValidator), mock.patch('myapp.logic.is_email_free', 
ChasteValidator), mock.patch('myapp.logic.db', MockDB): register_user('fred', '[email protected]', 'freddpassword') assert MockDB.commit.call_count Explanation: Now we're beginning to see the fruits of our labors. These aren't the easiest functions to test, but there's less we need to mock out in order to test the actual logic we're after. End of explanation from abc import ABC, abstractmethod class AbstractUserRepository(ABC): @abstractmethod def find_by_username(self, username): pass @abstractmethod def find_by_email(self, email): pass @abstractmethod def persist(self, user): pass Explanation: Of course, we should also write tests for the controller. I'll leave that as an exercise. However, there's something very important we're learning from these tests. We have to mock.patch everything still. Our validators lean directly on the database, our user creation leans directly on the database, everything leans directly on the database. And I don't want to do that, we've found that it makes testing hard. We're also seeing if we need to add another registration restriction -- say we don't like people named Fred so we won't let anyone register with a username or email containing Fred in it -- we need to crack open the register_user function and add it directly. We can solve both of these problems. The Database Problem To address the database problem we need to realize something. We're not actually interested in the database, we're interested in the data it stores. And since we're interested in finding data rather than where it's stored at, why not stuff an interface in the way? End of explanation def is_username_free(user_repository): def is_username_free(username): return not user_repository.find_by_username(username) return is_username_free def is_email_free(user_repository): def is_email_free(email): return not user_repository.find_by_email(email) return is_email_free Explanation: Hmm...that's interesting. Since we'll end up depending on this instead of a concrete implementation, we can run our tests completely in memory and production on top of SQLAlchemy, Mongo, a foreign API, whatever. But we need to inject it into our validators instead of reaching out into the global namespace like we currently are. End of explanation def register_user(user_repository): email_checker = is_email_free(user_repository) username_checker = is_username_free(user_repository) def register_user(username, email, password): if not username_checker(username): raise OurValidationError('Username in use already', 'username') if not email_checker(email): raise OurValidationError('Email in use already', 'email') user = User(username=username, email=email, password=password) user_repository.persist(user) return register_user Explanation: These validators are simple enough that closures work instead of full-fledged objects. The important part here is to maintain a consistent interface -- if we need to use classes all of a sudden, we need to define a __call__ on them to maintain this interface. 
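To make that interface point concrete, here is a minimal sketch (the class is invented for illustration) of the same check written as a class that stays interchangeable with the closure by defining __call__:
class UsernameIsFree:
    def __init__(self, user_repository):
        self.user_repository = user_repository

    def __call__(self, username):
        # Same contract as is_username_free: truthy when the name is unclaimed.
        return not self.user_repository.find_by_username(username)
Because an instance is called exactly like the closure, the code that receives a validator never needs to know which style it was handed.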
We can also change our register callable to accept the repository as well: End of explanation def test_duplicated_email_causes_false(): fake_user_repository = mock.create_autospec(AbstractUserRepository) fake_user_repository.find_by_email.return_value = True checker = is_email_free(fake_user_repository) assert not checker('[email protected]') def test_duplicated_username_causes_false(): fake_user_repository = mock.create_autospec(AbstractUserRepository) fake_user_repository.find_by_username.return_value = True checker = is_username_free(fake_user_repository) assert not checker('fred') def test_register_user_happy_path(): fake_user_repository = mock.create_autospec(AbstractUserRepository) fake_user_repository.find_by_email.return_value = False fake_user_repository.find_by_username.return_value = False registrar = register_user(fake_user_repository) registrar('fred', '[email protected]', 'fredpassword') assert fake_user_repository.persist.call_count Explanation: Of course the tests break now, and that's okay. We made a very sweeping change to the architecture here. We need to go back through and alter the tests one by one, but instead of patching everything out we can do something better: Dependency Injection. End of explanation def register_user(user_repository, validator): def registrar(username, email, password): user = User(username, email, password) validator(user) user_repository.persist(user) return registrar Explanation: But to test that our validators function correctly in this context, we need to fake out find_by_email and find_by_username indpendently. This is a symptom of our code not being Open-Closed. The Open-Closed Problem Revisiting the other major issue from how the code is laid out right now is that it's not Open-Closed. If you're not familiar with the principle, Wikipedia says this: "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification" Or in a different way, "You should be able to change functionality without editing existing code." -- I believe I need to credit Sandi Metz with this, but I'm not sure. We've actually already used this idea by injecting the User Repository. In tests, we inject a fake or in memory repository, but in production it can be a SQLAlchemy implementation, or maybe wrap that up into a caching repository. We can do the same thing with the validators. 
End of explanation def test_register_user_happy_path(): fake_user_repository = mock.create_autospec(AbstractUserRepository) registrar = register_user(fake_user_repository, lambda user: None) registrar('fred', '[email protected]', 'fredpassword') assert fake_user_repository.persist.call_count def test_register_user_fails_validation(): fake_user_repository = mock.create_autospec(AbstractUserRepository) fake_validator = mock.Mock(side_effect=OurValidationError('username in use already', 'username')) registrar = register_user(fake_user_repository, fake_validator) try: registrar('fred', '[email protected]', 'fredpassword') except OurValidationError as e: assert e.msg == 'username in use already' assert e.field == 'username' else: assert False, "Did not Raise" Explanation: Of course, our tests break again, so let's revisit the currently breaking one first: End of explanation def validate_username(user_repoistory): def validator(user): if not user_repoistory.find_by_username(user.username): raise OurValidationError('Username in use already', 'username') return True return validator def validate_email(user_repoistory): def validator(user): if not user_repoistory.find_by_email(user.email): raise OurValidationError("Email in use already", 'email') return True return validator Explanation: We'll need to tweak the validation logic some to make up for the fact that we're passing the whole user object now: End of explanation def validate_many(*validators): def checker(input): return all(validator(input) for validator in validators) return checker Explanation: The tests for these are pretty straight forward as well, so I'll omit them. But we need a way to stitch them together... End of explanation validator = validate_username(validate_email(user_repository), validate_username(user_repository)) registrar = register_user(user_repository, validator) Explanation: And then hook it all up like this: End of explanation @app.route('/register', methods=['GET', 'POST']) def register_user_view(): form = RegisterUserForm() if form.validate_on_submit(): try: register_user(form.username.data, form.email.data, form.password.data) except OurValidationError as e: form.errors[e.field] = [e.msg] return render_template('register.html', form=form) else: return redirect('homepage') return render_template('register.html', form=form) Explanation: Our neglected Controller We've spent a lot of time looking at how to compartmentalize the registration logic and portion out its concerns. However, the controller itself needs some attention as well. When we last left, it looked like this: End of explanation class RegisterUser(MethodView): def __init__(self, form, registrar, template, redirect): self.form = form self.registrar = registrar self.template = template self.redirect = redirect def get(self): return self._render() def post(self): if self.form.validate_on_submit(): return self._register() else: return self._render() def _register(self): try: self.registrar(self.form.username.data, self.form.email.data, self.form.password.data) except OurValidationError as e: self._handle_error(e) self._render() else: return self._redirect() def _render(self): return render_template(self.template, self.form=form) def _redirect(self): return redirect(url_for(self.redirect)) def _handle_error(self, e): self.form.error[e.field] = [e.msg] Explanation: But we can do beter than that. The problem here is that the logic is set in stone, nested flows of control. But mostly, I really like any excuse to use class based views. End of explanation
12,696
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'awi', 'sandbox-1', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: AWI Source ID: SANDBOX-1 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:37 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
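For reference, a completed property cell follows the same pattern as the TODO cells above: select the property with DOC.set_id and then supply the value(s) with DOC.set_value. The snippet below is a purely hypothetical illustration (the chosen value is simply one of the valid choices listed for that property, not a statement about any real ocean model), and it assumes the same pre-initialised DOC object used throughout this notebook.
# HYPOTHETICAL EXAMPLE ONLY - illustrative placeholder, not a real model record
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
DOC.set_value("No-slip")  # for ENUM properties the value must match one of the listed valid choices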
12,697
Given the following text description, write Python code to implement the functionality described below step by step Description: Statistical Data Modeling Pandas, NumPy and SciPy provide the core functionality for building statistical models of our data. We use models to Step1: Estimation A recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data. In parametric inference, we specify a priori a suitable distribution, then choose the parameters that best fit the data. e.g. $\mu$ and $\sigma^2$ in the case of the normal distribution Step2: Fitting data to probability distributions We start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates. First, we need to define what we mean by ‘best fit’. There are two commonly used criteria Step3: The first step is recognizing what sort of distribution to fit our data to. A couple of observations Step4: Now, let's calculate the sample moments of interest, the means and variances by month Step5: We then use these moments to estimate $\alpha$ and $\beta$ for each month Step6: We can use the gamma.pdf function in scipy.stats.distributions to plot the distributions implied by the calculated alphas and betas. For example, here is January Step7: Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution Step8: Maximum Likelihood Maximum likelihood (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties. There is a ton of theory regarding ML. We will restrict ourselves to the mechanics here. Say we have some data $y = y_1,y_2,\ldots,y_n$ that is distributed according to some distribution Step9: The product $\prod_{i=1}^n Pr(y_i | \theta)$ gives us a measure of how likely it is to observe values $y_1,\ldots,y_n$ given the parameters $\theta$. Maximum likelihood fitting consists of choosing the appropriate function $l= Pr(Y|\theta)$ to maximize for a given set of observations. We call this function the likelihood function, because it is a measure of how likely the observations are if the model is true. Given these data, how likely is this model? In the above model, the data were drawn from a Poisson distribution with parameter $\lambda =5$. $$L(y|\lambda=5) = \frac{e^{-5} 5^y}{y!}$$ So, for any given value of $y$, we can calculate its likelihood Step10: We can plot the likelihood function for any value of the parameter(s) Step11: How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$. Step12: Why are we interested in the likelihood function? A reasonable estimate of the true, unknown value for the parameter is one which maximizes the likelihood function. So, inference is reduced to an optimization problem. Going back to the rainfall data, if we are using a gamma distribution we need to maximize Step13: Here is a graphical example of how Newton-Raphson converges on a solution, using an arbitrary function Step14: To apply the Newton-Raphson algorithm, we need a function that returns a vector containing the first and second derivatives of the function with respect to the variable of interest.
In our case, this is Step15: where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi and polygamma are complex functions of the Gamma function that result when you take first and second derivatives of that function. Step16: Time to optimize! Step17: And now plug this back into the solution for beta Step18: We can compare the fit of the estimates derived from MLE to those from the method of moments Step19: For some common distributions, SciPy includes methods for fitting via MLE Step20: This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits an odd 3-parameter version of the gamma distribution. Example Step21: We can construct a log likelihood for this function using the conditional form Step22: For this example, we will use another optimization algorithm, the Nelder-Mead simplex algorithm. It has a couple of advantages Step23: In general, simulating data is a terrific way of testing your model before using it with real data. Kernel density estimates In some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the distribution non-parametrically (i.e. making no assumptions about the form of the underlying distribution) using kernel density estimation. Step24: SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution Step25: Exercise
Python Code: %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set() Explanation: Statistical Data Modeling Pandas, NumPy and SciPy provide the core functionality for building statistical models of our data. We use models to: Concisely describe the components of our data Provide inference about underlying parameters that may have generated the data Make predictions about unobserved data, or expected future observations. This section of the tutorial illustrates how to use Python to build statistical models of low to moderate difficulty from scratch, and use them to extract estimates and associated measures of uncertainty. End of explanation x = np.array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604, 5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745, 1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 , 0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357, 1.20611533, 2.16594393, 5.56610242, 4.66479977, 2.3573932 ]) _ = plt.hist(x, bins=8) Explanation: Estimation An recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data. In parametric inference, we specify a priori a suitable distribution, then choose the parameters that best fit the data. e.g. $\mu$ and $\sigma^2$ in the case of the normal distribution End of explanation precip = pd.read_table("../data/nashville_precip.txt", index_col=0, na_values='NA', delim_whitespace=True) precip.head() _ = precip.hist(sharex=True, sharey=True, grid=False) plt.tight_layout() Explanation: Fitting data to probability distributions We start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates. First, we need to define what we mean by ‘best fit’. There are two commonly used criteria: Method of moments chooses the parameters so that the sample moments (typically the sample mean and variance) match the theoretical moments of our chosen distribution. Maximum likelihood chooses the parameters to maximize the likelihood, which measures how likely it is to observe our given sample. Discrete Random Variables $$X = {0,1}$$ $$Y = {\ldots,-2,-1,0,1,2,\ldots}$$ Probability Mass Function: For discrete $X$, $$Pr(X=x) = f(x|\theta)$$ e.g. Poisson distribution The Poisson distribution models unbounded counts: <div style="font-size: 150%;"> $$Pr(X=x)=\frac{e^{-\lambda}\lambda^x}{x!}$$ </div> $X={0,1,2,\ldots}$ $\lambda > 0$ $$E(X) = \text{Var}(X) = \lambda$$ Continuous Random Variables $$X \in [0,1]$$ $$Y \in (-\infty, \infty)$$ Probability Density Function: For continuous $X$, $$Pr(x \le X \le x + dx) = f(x|\theta)dx \, \text{ as } \, dx \rightarrow 0$$ e.g. normal distribution <div style="font-size: 150%;"> $$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$ </div> $X \in \mathbf{R}$ $\mu \in \mathbf{R}$ $\sigma>0$ $$\begin{align}E(X) &= \mu \cr \text{Var}(X) &= \sigma^2 \end{align}$$ Example: Nashville Precipitation The dataset nashville_precip.txt contains NOAA precipitation data for Nashville measured since 1871. The gamma distribution is often a good fit to aggregated rainfall data, and will be our candidate distribution in this case. End of explanation precip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True) Explanation: The first step is recognixing what sort of distribution to fit our data to. 
A couple of observations: The data are skewed, with a longer tail to the right than to the left The data are positive-valued, since they are measuring rainfall The data are continuous There are a few possible choices, but one suitable alternative is the gamma distribution: <div style="font-size: 150%;"> $$x \sim \text{Gamma}(\alpha, \beta) = \frac{\beta^{\alpha}x^{\alpha-1}e^{-\beta x}}{\Gamma(\alpha)}$$ </div> The method of moments simply assigns the empirical mean and variance to their theoretical counterparts, so that we can solve for the parameters. So, for the gamma distribution, the mean and variance are: <div style="font-size: 150%;"> $$ \hat{\mu} = \bar{X} = \alpha \beta $$ $$ \hat{\sigma}^2 = S^2 = \alpha \beta^2 $$ </div> So, if we solve for these parameters, we can use a gamma distribution to describe our data: <div style="font-size: 150%;"> $$ \alpha = \frac{\bar{X}^2}{S^2}, \, \beta = \frac{S^2}{\bar{X}} $$ </div> Let's deal with the missing value in the October data. Given what we are trying to do, it is most sensible to fill in the missing value with the average of the available values. We will learn more sophisticated methods for handling missing data later in the course. End of explanation precip_mean = precip.mean() precip_mean precip_var = precip.var() precip_var Explanation: Now, let's calculate the sample moments of interest, the means and variances by month: End of explanation alpha_mom = precip_mean ** 2 / precip_var beta_mom = precip_var / precip_mean alpha_mom, beta_mom Explanation: We then use these moments to estimate $\alpha$ and $\beta$ for each month: End of explanation from scipy.stats.distributions import gamma precip.Jan.hist(normed=True, bins=20) plt.plot(np.linspace(0, 10), gamma.pdf(np.linspace(0, 10), alpha_mom[0], beta_mom[0])) Explanation: We can use the gamma.pdf function in scipy.stats.distributions to plot the ditribtuions implied by the calculated alphas and betas. For example, here is January: End of explanation axs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True, bins=15, grid=False) for ax in axs.ravel(): # Get month m = ax.get_title() # Plot fitted distribution x = np.linspace(*ax.get_xlim()) ax.plot(x, gamma.pdf(x, alpha_mom[m], beta_mom[m])) # Annotate with parameter estimates label = 'alpha = {0:.2f}\nbeta = {1:.2f}'.format(alpha_mom[m], beta_mom[m]) ax.annotate(label, xy=(10, 0.2)) plt.tight_layout() Explanation: Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution: End of explanation y = np.random.poisson(5, size=100) plt.hist(y, bins=12, normed=True) plt.xlabel('y'); plt.ylabel('Pr(y)') Explanation: Maximum Likelihood Maximum likelihood (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties. There is a ton of theory regarding ML. We will restrict ourselves to the mechanics here. 
Say we have some data $y = y_1,y_2,\ldots,y_n$ that is distributed according to some distribution: <div style="font-size: 120%;"> $$Pr(Y_i=y_i | \theta)$$ </div> Here, for example, is a Poisson distribution that describes the distribution of some discrete variables, typically counts: End of explanation poisson_like = lambda x, lam: np.exp(-lam) * (lam**x) / (np.arange(x)+1).prod() lam = 6 value = 10 poisson_like(value, lam) np.sum(poisson_like(yi, lam) for yi in y) lam = 8 np.sum(poisson_like(yi, lam) for yi in y) Explanation: The product $\prod_{i=1}^n Pr(y_i | \theta)$ gives us a measure of how likely it is to observe values $y_1,\ldots,y_n$ given the parameters $\theta$. Maximum likelihood fitting consists of choosing the appropriate function $l= Pr(Y|\theta)$ to maximize for a given set of observations. We call this function the likelihood function, because it is a measure of how likely the observations are if the model is true. Given these data, how likely is this model? In the above model, the data were drawn from a Poisson distribution with parameter $\lambda =5$. $$L(y|\lambda=5) = \frac{e^{-5} 5^y}{y!}$$ So, for any given value of $y$, we can calculate its likelihood: End of explanation lambdas = np.linspace(0,15) x = 5 plt.plot(lambdas, [poisson_like(x, l) for l in lambdas]) plt.xlabel('$\lambda$') plt.ylabel('L($\lambda$|x={0})'.format(x)) Explanation: We can plot the likelihood function for any value of the parameter(s): End of explanation lam = 5 xvals = np.arange(15) plt.bar(xvals, [poisson_like(x, lam) for x in xvals]) plt.xlabel('x') plt.ylabel('Pr(X|$\lambda$=5)') Explanation: How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\lambda=5$. End of explanation from scipy.optimize import newton Explanation: Why are we interested in the likelihood function? A reasonable estimate of the true, unknown value for the parameter is one which maximizes the likelihood function. So, inference is reduced to an optimization problem. Going back to the rainfall data, if we are using a gamma distribution we need to maximize: $$\begin{align}l(\alpha,\beta) &= \sum_{i=1}^n \log[\beta^{\alpha} x^{\alpha-1} e^{-x/\beta}\Gamma(\alpha)^{-1}] \cr &= n[(\alpha-1)\overline{\log(x)} - \bar{x}\beta + \alpha\log(\beta) - \log\Gamma(\alpha)]\end{align}$$ (Its usually easier to work in the log scale) where $n = 2012 − 1871 = 141$ and the bar indicates an average over all i. We choose $\alpha$ and $\beta$ to maximize $l(\alpha,\beta)$. Notice $l$ is infinite if any $x$ is zero. We do not have any zeros, but we do have an NA value for one of the October data, which we dealt with above. Finding the MLE To find the maximum of any function, we typically take the derivative with respect to the variable to be maximized, set it to zero and solve for that variable. $$\frac{\partial l(\alpha,\beta)}{\partial \beta} = n\left(\frac{\alpha}{\beta} - \bar{x}\right) = 0$$ Which can be solved as $\beta = \alpha/\bar{x}$. However, plugging this into the derivative with respect to $\alpha$ yields: $$\frac{\partial l(\alpha,\beta)}{\partial \alpha} = \log(\alpha) + \overline{\log(x)} - \log(\bar{x}) - \frac{\Gamma(\alpha)'}{\Gamma(\alpha)} = 0$$ This has no closed form solution. We must use numerical optimization! 
Numerical optimization alogarithms take an initial "guess" at the solution, and iteratively improve the guess until it gets "close enough" to the answer. Here, we will use Newton-Raphson algorithm: <div style="font-size: 120%;"> $$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$ </div> Which is available to us via SciPy: End of explanation # some function func = lambda x: 3./(1 + 400*np.exp(-2*x)) - 1 xvals = np.linspace(0, 6) plt.plot(xvals, func(xvals)) plt.text(5.3, 2.1, '$f(x)$', fontsize=16) # zero line plt.plot([0,6], [0,0], 'k-') # value at step n plt.plot([4,4], [0,func(4)], 'k:') plt.text(4, -.2, '$x_n$', fontsize=16) # tangent line tanline = lambda x: -0.858 + 0.626*x plt.plot(xvals, tanline(xvals), 'r--') # point at step n+1 xprime = 0.858/0.626 plt.plot([xprime, xprime], [tanline(xprime), func(xprime)], 'k:') plt.text(xprime+.1, -.2, '$x_{n+1}$', fontsize=16) Explanation: Here is a graphical example of how Newtone-Raphson converges on a solution, using an arbitrary function: End of explanation from scipy.special import psi, polygamma dlgamma = lambda m, log_mean, mean_log: np.log(m) - psi(m) - log_mean + mean_log dl2gamma = lambda m, *args: 1./m - polygamma(1, m) Explanation: To apply the Newton-Raphson algorithm, we need a function that returns a vector containing the first and second derivatives of the function with respect to the variable of interest. In our case, this is: End of explanation # Calculate statistics log_mean = precip.mean().apply(np.log) mean_log = precip.apply(np.log).mean() Explanation: where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. psi and polygamma are complex functions of the Gamma function that result when you take first and second derivatives of that function. End of explanation # Alpha MLE for December alpha_mle = newton(dlgamma, 2, dl2gamma, args=(log_mean[-1], mean_log[-1])) alpha_mle Explanation: Time to optimize! End of explanation beta_mle = alpha_mle/precip.mean()[-1] beta_mle Explanation: And now plug this back into the solution for beta: <div style="font-size: 120%;"> $$ \beta = \frac{\alpha}{\bar{X}} $$ </div> End of explanation dec = precip.Dec dec.hist(normed=True, bins=10, grid=False) x = np.linspace(0, dec.max()) plt.plot(x, gamma.pdf(x, alpha_mom[-1], beta_mom[-1]), 'm-', label='Moment estimator') plt.plot(x, gamma.pdf(x, alpha_mle, beta_mle), 'r--', label='ML estimator') plt.legend() Explanation: We can compare the fit of the estimates derived from MLE to those from the method of moments: End of explanation from scipy.stats import gamma gamma.fit(precip.Dec) Explanation: For some common distributions, SciPy includes methods for fitting via MLE: End of explanation x = np.random.normal(size=10000) # Truncation point a = -1 # Resample until all points meet criterion x_small = x < a while x_small.sum(): x[x_small] = np.random.normal(size=x_small.sum()) x_small = x < a _ = plt.hist(x, bins=100) Explanation: This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits an odd 3-parameter version of the gamma distribution. Example: truncated distribution Suppose that we observe $Y$ truncated below at $a$ (where $a$ is known). 
If $X$ is the distribution of our observation, then: $$ P(X \le x) = P(Y \le x|Y \gt a) = \frac{P(a \lt Y \le x)}{P(Y \gt a)}$$ (so, $Y$ is the original variable and $X$ is the truncated variable) Then $X$ has the density: $$f_X(x) = \frac{f_Y(x)}{1-F_Y(a)} \, \text{for} \, x \gt a$$ Suppose $Y \sim N(\mu, \sigma^2)$ and $x_1,\ldots,x_n$ are independent observations of $X$. We can use maximum likelihood to find $\mu$ and $\sigma$. First, we can simulate a truncated distribution using a while statement to eliminate samples that are outside the support of the truncated distribution. End of explanation from scipy.stats.distributions import norm trunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) - np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum() Explanation: We can construct a log-likelihood for this density using the conditional form: $$f_X(x) = \frac{f_Y(x)}{1-F_Y(a)} \, \text{for} \, x \gt a$$ The denominator normalizes the truncated distribution so that it integrates to one. End of explanation from scipy.optimize import fmin fmin(trunc_norm, np.array([1,2]), args=(-1, x)) Explanation: For this example, we will use another optimization algorithm, the Nelder-Mead simplex algorithm. It has a couple of advantages: it does not require derivatives, and it can optimize (minimize) over a vector of parameters. SciPy implements this algorithm in its fmin function: End of explanation # Some random data y = np.random.random(15) * 10 y x = np.linspace(0, 10, 100) # Smoothing parameter s = 0.4 # Calculate the kernels kernels = np.transpose([norm.pdf(x, yi, s) for yi in y]) plt.plot(x, kernels, 'k:') plt.plot(x, kernels.sum(1)) plt.plot(y, np.zeros(len(y)), 'ro', ms=10) Explanation: In general, simulating data is a terrific way of testing your model before using it with real data. Kernel density estimates In some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the distribution non-parametrically (i.e. making no assumptions about the form of the underlying distribution) using kernel density estimation. End of explanation # Create a bi-modal distribution with a mixture of Normals. x1 = np.random.normal(0, 3, 50) x2 = np.random.normal(4, 1, 50) # Append by row x = np.r_[x1, x2] plt.hist(x, bins=8, normed=True) from scipy.stats import kde density = kde.gaussian_kde(x) xgrid = np.linspace(x.min(), x.max(), 100) plt.hist(x, bins=8, normed=True) plt.plot(xgrid, density(xgrid), 'r-') Explanation: SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution: End of explanation cdystonia = pd.read_csv("../data/cdystonia.csv") cdystonia[cdystonia.obs==6].hist(column='twstrs', by=cdystonia.treat, bins=8); # Write your answer here Explanation: Exercise: Cervical dystonia analysis Recall the cervical dystonia database, which is a clinical trial of botulinum toxin type B (BotB) for patients with cervical dystonia from nine U.S. sites. The response variable is measurements on the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment). One way to check the efficacy of the treatment is to compare the distribution of TWSTRS for control and treatment patients at the end of the study.
Use maximum likelihood to calculate the mean and variance of TWSTRS at week 16 for one of the treatments and the control group. Assume that the distribution of the twstrs variable is normal: $$f(x \mid \mu, \sigma^2) = \sqrt{\frac{1}{2\pi\sigma^2}} \exp\left\{ -\frac{1}{2} \frac{(x-\mu)^2}{\sigma^2} \right\}$$ HINT: the normal distribution PDF is available in the scipy.stats module. End of explanation
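One possible way to attack this exercise (a sketch only, not the notebook's official answer) is to reuse fmin from above on the negative normal log-likelihood. It assumes that obs == 6 corresponds to the week-16 visit, as in the histograms above, and it loops over whatever treatment labels are present rather than hard-coding them.

import numpy as np
from scipy.optimize import fmin
from scipy.stats.distributions import norm

# negative log-likelihood of a normal sample; theta = (mu, sigma)
normal_nloglike = lambda theta, x: -np.log(norm.pdf(x, theta[0], theta[1])).sum()

wk16 = cdystonia[cdystonia.obs == 6]
for group in wk16.treat.unique():
    values = wk16.twstrs[wk16.treat == group].dropna()
    mu, sigma = fmin(normal_nloglike, np.array([40., 10.]), args=(values,), disp=False)
    print('{}: mean={:.2f}, variance={:.2f}'.format(group, mu, sigma**2))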
12,698
Given the following text description, write Python code to implement the functionality described below step by step Description: Quick, Draw! GAN code based directly on Grant Beyleveld's, which is derived from Rowel Atienza's under MIT License data provided by Google under Creative Commons Attribution 4.0 license Select processing devices Step1: Load dependencies Step2: Load data NumPy bitmap files are here -- pick your own drawing category -- you don't have to pick apples Step3: Create discriminator network Step4: Create generator network Step5: Create adversarial network Step6: Train!
Python Code: # import os # os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # # os.environ["CUDA_VISIBLE_DEVICES"] = "" # os.environ["CUDA_VISIBLE_DEVICES"] = "1" Explanation: Quick, Draw! GAN code based directly on Grant Beyleveld's, which is derived from Rowel Atienza's under MIT License data provided by Google under Creative Commons Attribution 4.0 license Select processing devices End of explanation # for data input and output: import numpy as np import os # for deep learning: import keras from keras.models import Sequential, Model from keras.layers import Input, Dense, Conv2D, BatchNormalization, Dropout, Flatten from keras.layers import Activation, Reshape, Conv2DTranspose, UpSampling2D # new! from keras.optimizers import RMSprop # for plotting: import pandas as pd from matplotlib import pyplot as plt %matplotlib inline Explanation: Load dependencies End of explanation input_images = "../quickdraw_data/apple.npy" data = np.load(input_images) # 28x28 (sound familiar?) grayscale bitmap in numpy .npy format; images are centered data.shape data[4242] data = data/255 data = np.reshape(data,(data.shape[0],28,28,1)) # fourth dimension is color img_w,img_h = data.shape[1:3] data.shape data[4242] plt.imshow(data[4242,:,:,0], cmap='Greys') Explanation: Load data NumPy bitmap files are here -- pick your own drawing category -- you don't have to pick apples :) End of explanation def discriminator_builder(depth=64,p=0.4): # Define inputs inputs = Input((img_w,img_h,1)) # Convolutional layers conv1 = Conv2D(depth*1, 5, strides=2, padding='same', activation='relu')(inputs) conv1 = Dropout(p)(conv1) conv2 = Conv2D(depth*2, 5, strides=2, padding='same', activation='relu')(conv1) conv2 = Dropout(p)(conv2) conv3 = Conv2D(depth*4, 5, strides=2, padding='same', activation='relu')(conv2) conv3 = Dropout(p)(conv3) conv4 = Conv2D(depth*8, 5, strides=1, padding='same', activation='relu')(conv3) conv4 = Flatten()(Dropout(p)(conv4)) # Output layer output = Dense(1, activation='sigmoid')(conv4) # Model definition model = Model(inputs=inputs, outputs=output) model.summary() return model discriminator = discriminator_builder() discriminator.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=0.0008, decay=6e-8, clipvalue=1.0), metrics=['accuracy']) Explanation: Create discriminator network End of explanation def generator_builder(z_dim=100,depth=64,p=0.4): # Define inputs inputs = Input((z_dim,)) # First dense layer dense1 = Dense(7*7*64)(inputs) dense1 = BatchNormalization(momentum=0.9)(dense1) # default momentum for moving average is 0.99 dense1 = Activation(activation='relu')(dense1) dense1 = Reshape((7,7,64))(dense1) dense1 = Dropout(p)(dense1) # De-Convolutional layers conv1 = UpSampling2D()(dense1) conv1 = Conv2DTranspose(int(depth/2), kernel_size=5, padding='same', activation=None,)(conv1) conv1 = BatchNormalization(momentum=0.9)(conv1) conv1 = Activation(activation='relu')(conv1) conv2 = UpSampling2D()(conv1) conv2 = Conv2DTranspose(int(depth/4), kernel_size=5, padding='same', activation=None,)(conv2) conv2 = BatchNormalization(momentum=0.9)(conv2) conv2 = Activation(activation='relu')(conv2) conv3 = Conv2DTranspose(int(depth/8), kernel_size=5, padding='same', activation=None,)(conv2) conv3 = BatchNormalization(momentum=0.9)(conv3) conv3 = Activation(activation='relu')(conv3) # Output layer output = Conv2D(1, kernel_size=5, padding='same', activation='sigmoid')(conv3) # Model definition model = Model(inputs=inputs, outputs=output) model.summary() return model generator = generator_builder() 
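As a quick, optional sanity check (not in the original notebook), the untrained generator should already map 100-dimensional noise vectors to 28x28x1 images:

# the generator upsamples 7x7 -> 14x14 -> 28x28, so shapes should line up with the data
noise = np.random.uniform(-1.0, 1.0, size=[16, 100])
print(generator.predict(noise).shape)  # expected: (16, 28, 28, 1)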
Explanation: Create generator network End of explanation def adversarial_builder(z_dim=100): model = Sequential() model.add(generator) model.add(discriminator) model.compile(loss='binary_crossentropy', optimizer=RMSprop(lr=0.0004, decay=3e-8, clipvalue=1.0), metrics=['accuracy']) model.summary() return model adversarial_model = adversarial_builder() Explanation: Create adversarial network End of explanation def make_trainable(net, val): net.trainable = val for l in net.layers: l.trainable = val def train(epochs=2000,batch=128): d_metrics = [] a_metrics = [] running_d_loss = 0 running_d_acc = 0 running_a_loss = 0 running_a_acc = 0 for i in range(epochs): if i%100 == 0: print(i) real_imgs = np.reshape(data[np.random.choice(data.shape[0],batch,replace=False)],(batch,28,28,1)) fake_imgs = generator.predict(np.random.uniform(-1.0, 1.0, size=[batch, 100])) x = np.concatenate((real_imgs,fake_imgs)) y = np.ones([2*batch,1]) y[batch:,:] = 0 make_trainable(discriminator, True) d_metrics.append(discriminator.train_on_batch(x,y)) running_d_loss += d_metrics[-1][0] running_d_acc += d_metrics[-1][1] make_trainable(discriminator, False) noise = np.random.uniform(-1.0, 1.0, size=[batch, 100]) y = np.ones([batch,1]) a_metrics.append(adversarial_model.train_on_batch(noise,y)) running_a_loss += a_metrics[-1][0] running_a_acc += a_metrics[-1][1] if (i+1)%500 == 0: print('Epoch #{}'.format(i+1)) log_mesg = "%d: [D loss: %f, acc: %f]" % (i, running_d_loss/i, running_d_acc/i) log_mesg = "%s [A loss: %f, acc: %f]" % (log_mesg, running_a_loss/i, running_a_acc/i) print(log_mesg) noise = np.random.uniform(-1.0, 1.0, size=[16, 100]) gen_imgs = generator.predict(noise) plt.figure(figsize=(5,5)) for k in range(gen_imgs.shape[0]): plt.subplot(4, 4, k+1) plt.imshow(gen_imgs[k, :, :, 0], cmap='gray') plt.axis('off') plt.tight_layout() plt.show() return a_metrics, d_metrics a_metrics_complete, d_metrics_complete = train(epochs=3000) ax = pd.DataFrame( { 'Generator': [metric[0] for metric in a_metrics_complete], 'Discriminator': [metric[0] for metric in d_metrics_complete], } ).plot(title='Training Loss', logy=True) ax.set_xlabel("Epochs") ax.set_ylabel("Loss") ax = pd.DataFrame( { 'Generator': [metric[1] for metric in a_metrics_complete], 'Discriminator': [metric[1] for metric in d_metrics_complete], } ).plot(title='Training Accuracy') ax.set_xlabel("Epochs") ax.set_ylabel("Accuracy") Explanation: Train! End of explanation
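A possible follow-up once training finishes (not part of the original notebook; the output filename is only an example) is to keep the trained generator and draw fresh samples from it:

# save the trained generator and sketch some new "apples"
generator.save('quickdraw_apple_generator.h5')
samples = generator.predict(np.random.uniform(-1.0, 1.0, size=[16, 100]))
plt.imshow(samples[0, :, :, 0], cmap='Greys')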
12,699
Given the following text description, write Python code to implement the functionality described below step by step Description: The ipyrad.analysis tool kit Deren Eaton Install software All required software for this walkthrough is available on conda. Step1: Start an ipyparallel cluster In a separate terminal run the following command to start a cluster of engines. If working on a notebook running remotely, use the dashboard to open a new terminal. Step2: You should then be able to connect to the engines in your notebook Step3: Assemble a RAD data set The code here is to assemble the example empirical data set from the ipyrad tutorial. Step4: Minimal workflow Step5: Modify more parameters Step6: Assemble the data set You can run one or more steps just like in the CLI. Step7: Access assembly results You can easily access summary stats for the assembly as a data frame. Step8: Plot statistics Step9: Access result files You can also access the stats files for each step, and the output files for downstream analyses. Step10: ipyrad.analysis tools The ipyrad.analysis module includes many wrapper tools that can be used to efficiently run evolutionary analysis tools in a notebook. Step11: RAxML analysis Simply enter the location of the phylip file, which can be accessed from the .outfiles attribute of the Assembly object. You can also provide a name and output directory, and set many other optional parameters. Step12: Minimal workflow Step13: Modify parameters and other functions Step14: Access the tree files and plot Step15: introgression (abba-baba) analysis The baba object can be used to set up abba-baba tests, to calculate results, and to generate plots to visualize them. Minimal example, scroll down for details. Step16: Auto-generate tests Instead of writing out many tests explicitly, you can instead enter a rooted tree to the baba object and use this function to auto-generate four-taxon tests fitting the tree and constraints. Step17: Run all tests linked to a baba object Step18: Plot results Step19: Species tree inference by phylogenetic invariants The program tetrad follows the algorithm of SVDquartets by inferring all possible quartet trees from a large SNP alignment and uses the program quartet maxcut (Snir et al. 2012) to infer a species tree by quartet joining. Step20: Access result tetrad trees and draw Step21: Infer a species tree with BPP Step22: Set parameters and filters You can define all of the parameter settings that will be used in the BPP .ctl file by modifying the .params attributes. Similarly, you can modify which loci will be included in the analysis using the .filters attributes. Step23: Track running jobs Unlike some of the other ipyrad.analysis tools, the bpp object does not "block" while the jobs are running, meaning that after it sends jobs to run on the cluster you can continue to interact with the notebook. This is useful since BPP is not multi-threaded, so you will likely want to submit many different types of jobs. You can check on running jobs like below. Step24: Structure analyses Step25: Modify parameters settings Step26: Summarize results with CLUMPP
Python Code: # conda install -c ipyrad ipyrad structure clumpp bpp # conda install -c eaton-lab toytree toyplot # conda install -c bioconda raxml Explanation: The ipyrad.analysis tool kit Deren Eaton Install software All required software for this walkthrough is available on conda. End of explanation # ipcluster start --n=4 Explanation: Start an ipyparallel cluster In a separate terminal run the following command to start a cluster of engines. If working on a notebook running remotely, use the dashboard to open a new terminal. End of explanation ## connect to the cluster import ipyparallel as ipp ipyclient = ipp.Client() ## print number of engines print len(ipyclient), "connected engines" Explanation: You should then be able to connect to the engines in your notebook: End of explanation ## import ipyrad import ipyrad as ip Explanation: Assemble a RAD data set The code here is to assemble the example empirical data set from the ipyrad tutotial. End of explanation ## create an Assembly object data = ip.Assembly("simdata") ## set I/O paths for the data data.set_params("project_dir", "~/workshop") data.set_params("raw_fastq_path", "ipsimdata/rad_example_R1_.fastq.gz") data.set_params("barcodes_path", "ipsimdata/rad_example_barcodes.txt") ## run all steps of the Assembly data.run("1234567") Explanation: Minimal workflow: scroll down for details. End of explanation ## set params data.set_params("filter_adapters", 2) data.set_params("output_formats", "lpask") ## show params data.get_params() Explanation: Modify more parameters End of explanation ## run all steps of assembly data.run("1234567") Explanation: Assemble the data set You can run one or more steps just like in the CLI. End of explanation ## summary stats data.stats Explanation: Access assembly results You can easily access summary stats for the assembly as a data frame. End of explanation import toyplot ## plot barplot c, a, m = toyplot.bars( data.stats.hetero_est, height=250, width=500, ) ## style the axes a.x.ticks.locator = toyplot.locator.Explicit( locations=range(len(data.stats)), labels=data.stats.index) a.y.label.text = "Heterozygosity" a.y.ticks.show = True Explanation: Plot statistics End of explanation ## s2 stats file print data.stats_files.s2 ## the .loci file location print data.outfiles.loci Explanation: Access result files You can also access the stats files for each step, and the output files for downstream analyses. End of explanation ## import the toolkit import ipyrad.analysis as ipa Explanation: ipyrad.analysis tools The ipyrad.analysis module includes many wrapper tools that can be used to efficiently run evolutionary analysis tools in a notebook. End of explanation import ipyrad as ip import ipyparallel as ipp data = ip.load_json("/home/deren/workshop/simdata.json") ipyclient = ipp.Client() Explanation: RAxML analysis Simply enter the location of the phylip file, which can be accessed from the .outfiles attribute of the Assembly object. You can also provide a name and output directory, and set many other optional parameters. End of explanation ## create a raxml object s = ipa.raxml( name=data.name, phyfile=data.outfiles.phy, workdir="~/workshop/analysis-raxml"); ## run the analysis s.run() Explanation: Minimal workflow: scroll down for details. 
End of explanation ## modify params s.params.T = 4 s.params.N = 100 ## print the raxml command as a string print s.command ## overwrite existing result with this 'name' s.run(force=True) Explanation: Modify parameters and other functions End of explanation print s.trees import toytree tre = toytree.tree(s.trees.bipartitions) tre.root(wildcard='3') tre.draw( width=300, node_labels=tre.get_node_values("support"), node_size=20, ); Explanation: Access the tree files and plot End of explanation ## create a baba object b = ipa.baba(data=data.outfiles.loci) ## generate tests given the rooted tree b.tests = [ {"p4":["3L_0"], "p3":["2F_0"], "p2":["1D_0"], "p1":["1A_0"]}] ## run jobs distributed across the cluster b.run(ipyclient) b.results_table Explanation: introgression (abba-baba) analysis The baba object can be used to set up abba-baba tests, to calculate results, and to generate plots to visualize them. Minimal example, scroll down for details. End of explanation ## init baba object b = ipa.baba(data=data.outfiles.loci, newick=tre) ## generate all possible tests on this tree b.generate_tests_from_tree() ## set constraints on tests cdict = {"p4": ["3L_0"], "p3": ["2E_0", "2F_0"], "p2": ["1D_0"]} ## generate constrainted number of tests b.generate_tests_from_tree( constraint_dict=cdict, constraint_exact=False, ) Explanation: Auto-generate tests Instead of writing out many tests explicitly, you can instead enter a rooted tree to the baba object and use this function to auto-generate four-taxon test fitting the tree and constraints. End of explanation ## run the tests (in this case 4) linked to the baba object b.run(ipyclient) ## show results table b.results_table Explanation: Run all tests linked to a baba object End of explanation b.plot( height=350, pct_tree_x = 0.4, pct_tree_y = 0.2, ); ### Save the plot import toyplot.pdf canvas, axes, mark = b.plot(height=350, pct_tree_x=0.4, pct_tree_y=0.2) toyplot.pdf.render(canvas, "/home/deren/workshop/abba-baba.pdf") ## save the results table b.results_table.to_csv("~/workshop/abba-baba.csv", sep="\t") Explanation: Plot results End of explanation ## create a tetrad class object tet = ipa.tetrad( name=data.name, seqfile=data.outfiles.snpsphy, mapfile=data.outfiles.snpsmap, workdir="~/workshop/analysis-tetrad", nboots=100 ) ## run the analysis tet.run(ipyclient) Explanation: Species tree inference by phylogenetic invariants The program tetrad follows the algorithm of SVDquartets by inferring all possible quartet trees from a large SNP alignment and uses the program quartet maxcut (Snir et al. 2012) to infer a species tree by quartet joining. 
End of explanation tet.trees ## load unrooted result tree with toytree and draw tre = toytree.tree(tet.trees.cons) tre.draw( node_labels=tre.get_node_values("support"), node_size=20, ); Explanation: Access result tetrad trees and draw End of explanation # conda install bpp -c ipyrad ## setup: define how samples group into 'species' IMAP = { "1": ["1A_0", "1B_0", "1C_0"], "D": ["1D_0"], "2": ["2F_0", "2E_0", "2G_0"], "H": ["2H_0"], "3": ["3J_0", "3I_0", "3K_0"], "L": ["3L_0"], } ## setup: define a guidetree GUIDE = "(((1,D),(2,H)),(3,L));" ## init a bpp object bpp = ipa.bpp( locifile=data.outfiles.loci, imap=IMAP, guidetree=GUIDE, workdir="~/workshop/analysis-bpp" ); ## submit jobs to run on the cluster bpp.submit_bpp_jobs("A00", nreps=2, ipyclient=ipyclient) Explanation: Infer a species tree with BPP End of explanation ## set some parameters bpp.params.burnin = 1000 bpp.params.nsample = 5000 bpp.params.infer_sptree = 1 bpp.params.infer_delimit = 0 ## set some filters bpp.filters.maxloci = 200 bpp.filters.minsnps = 2 ## submit jobs to run on the cluster bpp.submit_bpp_jobs("A00", nreps=2, ipyclient=ipyclient) Explanation: Set parameters and filters You can define all of the parameter settings that will be used in the BPP .ctl file by modifying the .params attributes. Similarly, you can modify which loci will be included in the analysis using the .filters attributes. End of explanation ## a list of submitted jobs print bpp.asyncs ## a list of result files produced by jobs print bpp.files Explanation: Track running jobs Unlike some of the other ipyrad.analysis tools, the bpp object does not "block" while the jobs are running. Meaning that after it sends jobs to run on the cluster you can continue to interact with the notebook. This is useful since BPP is not multi-threaded, so you will likely want to submit many different types of jobs. You can check on running jobs like below. End of explanation import ipyrad as ip import ipyrad.analysis as ipa import ipyparallel as ipp data = ip.load_json("/home/deren/workshop/simdata.json") ipyclient = ipp.Client() # conda install structure -c ipyrad # conda install clumpp -c ipyrad ## create a structure class object s = ipa.structure( name=data.name, strfile=data.outfiles.str, mapfile=data.outfiles.snpsmap, workdir="~/workshop/analysis-structure", ); s.mainparams.burnin = 100 s.mainparams.numreps = 1000 ## submit jobs to run on the cluster for kpop in [2, 3, 4, 5]: s.submit_structure_jobs(kpop=kpop, nreps=5, ipyclient=ipyclient) Explanation: Structure analyses End of explanation s.mainparams.burnin = 10000 s.mainparams.numreps = 100000 s.extraparams.usepopinfo = 0 Explanation: Modify parameters settings End of explanation ## get results for a single K value s.get_clumpp_table(3) ## make a dict for all results tables = {} for kpop in [2, 3, 4, 5]: tables[kpop] = s.get_clumpp_table(kpop) Explanation: Summarize results with CLUMPP End of explanation
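As an optional last step (not in the original notebook), the ancestry proportions can be visualized as a stacked barplot. This sketch assumes that get_clumpp_table() returns a samples-by-K pandas DataFrame, as used above, and it uses plain pandas/matplotlib plotting rather than toyplot:

# stacked barplot of the K=3 ancestry proportions for each sample
table = tables[3]
ax = table.plot.barh(stacked=True, width=0.9, figsize=(6, 8), legend=False)
ax.set_xlabel("ancestry proportion")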